Sunday, January 23, 2011

The Economist: A special report on global leaders

"The few: In the information age, brainy people are rewarded with wealth and influence, says Robert Guest. What does this mean for everyone else?"

The Economist has a special report this week with relevance to the topic of this blog. It includes the following articles:
  • The rise and rise of the cognitive elite: Brains bring ever larger rewards
    In America, for example, in 1987 the top 1% of taxpayers received 12.3% of all pre-tax income. Twenty years later their share, at 23.5%, was nearly twice as large. The bottom half’s share fell from 15.6% to 12.2% over the same period.
  • The global campus: The best universities now have worldwide reach
    Global universities are “reshaping the world”, argues Ben Wildavsky, the author of “The Great Brain Race”. Because big problems often transcend borders, many ambitious students demand a global education. The number of people studying outside their home country jumped from below 2m in 2000 to 3.3m in 2008, according to the OECD.

    The most popular destination is the English-speaking world, led by America, which hosts 19% of the world’s mobile students. French and German universities are also popular, but more narrow in their allure.
  • They work for us: In democracies the elites serve the masses
    Today there are well over 400 billionaires in America alone, many of them in fine fettle and eager to embark on a second career. Such people are often workaholics and have no wish to retire.

    The charitable rich do their bit to soothe the social tensions that arise from growing inequality. Yet their work should be seen in perspective. Even in America voluntary transfers of wealth are dwarfed by public spending. Americans gave away $217 billion in 2009, estimates Mr Schervish. Government spending on health care and pensions was ten times that.

    By and large, global leaders change the world more by doing their day jobs than in their spare time. Even Mr Gates, who was widely reviled for his business activities, probably did more good by amassing his fortune than he is doing by giving it away. The computer revolution he helped to bring about transformed the way people handle information. Perhaps his foundation will spur some equally momentous change, but it seems unlikely.
The Information Revolution is based on technology with enormous potential for good: it is the basis for a greatly enhanced global information infrastructure. How that infrastructure develops and is used will determine who benefits from the technology, and how much. The global information infrastructure is clearly driving increases in productivity and wealth, but the special report shows that more and more of the benefits of that growth are being captured by a small portion of the world's population.

I fear that the increasing disparity between rich and poor within nations will be bad for society and will need to be redressed. More fundamentally, there are billions of people in desperate need of resources for their very survival, and policies that emphasize increased riches for the already very, very rich rather than survival for the poor seem immoral.

For the first time in centuries, the production of the rest of the world has surpassed that of Europe and America. It is not surprising that the ten percent of the world's people concentrated in a relatively small area would not permanently outproduce the other 90 percent occupying the rest of the world. On the other hand, the growing economic power of the East, based upon its growing knowledge power, will lead to major changes in international relations.

United States Still Leads in FDI inflows!

Friday, January 21, 2011

WP Covers my book club!

Michael Rosenwald's article on the troubles of book superstores in the face of Amazon's challenge and ebooks is featured on the first page of today's Washington Post. The article begins with a few paragraphs on last week's meeting of our history book club at the local Borders book store, a meeting which Mr. Rosenwald observed. As readers of this blog may know, I have been a member of the club for something like five or six years.

Let me begin by carping on a comment made "above the fold" of the WP:
"This group is challenged by e-mail," said Christian Minor, a physicist and longtime member.
I am a longtime member of the group, and I do not feel challenged by e-mail, except that I receive too much of it. I may be an old guy, but my blogs have had something like a million page views; I have something like 4,500 followers on Twitter, and I am heavily connected on social networking sites, especially LinkedIn.

I have put away my Kindle and I buy books from Borders face-to-face with the staff. I do so because I value their advice and assistance. I don't know about the executives who run the Borders chain, but I find the people who actually work in the stores to be friendly and knowledgeable about books. I realize that unless I and others keep buying paper books from the local Borders store, it won't be there in the future, and that would be a shame.

While the graph above suggests that Borders has lost market share, especially to Amazon, eight percent of the U.S. book market is nothing to sneeze at and can serve as a good base for restoring the chain's competitiveness. Part of the superstores' problem is that Borders used to do a lot of music business and lost much of it as CDs were replaced by other forms of music. I see that the stores are now diversifying into DVDs, book-related accessories, and even some toys. Borders also offers a variety of e-book readers in its stores and competes for online book sales.

Obviously the digital revolution is changing the market for lots of products, including books and book stores as well as the WP itself. It may be that book stores will suffer the fate of the merchants who once sold buggies (to be pulled by horses), with many going out of business and some becoming successful in auto sales. Even as fast food restaurants have become ubiquitous, supermarkets have become bigger and able to offer a larger variety of consumer goods; perhaps Borders will make an adjustment similar to that of the successful food store chains. I hope so.

In my last posting, based on a book I bought at Borders and expect to discuss at the next meeting of the history book club, I considered the impact of the Information Revolution on libraries. Clearly they too are adjusting to new conditions but I find it hard to believe that our society would be so foolish as to dispense with organized repositories for books and the service of professional librarians. I hope we will continue to enjoy both libraries and book stores, and benefit from the people who serve in them. I encourage you to patronize both your local Borders and your local library!

Thoughts on reading Matthew Battles' "Library"

Library: An Unquiet History

I have just finished reading Library: An Unquiet History by Matthew Battles. The author is a special collections librarian at Harvard, and has written a book that is part memoir, part "literature", and part historical account. Battles organizes his book chronologically, first discussing some examples of ancient libraries, then moving to the monastic libraries of the Middle Ages, the development of university libraries, the libraries of the 19th century, and ultimately the modern library. In the process he considers the evolving role of the librarian, the cataloging of books, and even library architecture and furnishings.
"Das war ein Vorspiel nur, dort wo man Bücher verbrennt, verbrennt man am Ende auch Menschen."
("That was but a prelude; where they burn books, they will ultimately burn people also.")
Heinrich Heine from his 1821 play Almansor
One of the themes of the book is the destruction of libraries and the burning of books. Sometimes, as (perhaps) in the burning of the library of Alexandria by Julius Caesar, libraries have been destroyed by accident. Sometimes, as (perhaps) in the shelling of the library of Louvain University by the German army in World War II, they have been destroyed by deliberate acts of anger. Whatever the cause, mankind has lost a huge cultural heritage to the burning of books and the destruction of their depositories.

I enjoyed this short book for the stories it tells, and for the way they are told. I warn those who would read this book not to expect a formal historical text, but rather an anecdotal set of accounts by a literate and literary librarian on the background of his profession.

I found myself thinking about some historical themes that the author does not address: for example, the role of books and collections of books in the major religions. Could a religious tradition persist over centuries or millennia, and spread to tens of millions of believers, without sacred books and an accumulation of texts? Judaism, Christianity, Islam, Buddhism and Hinduism all have their fundamental texts.

I found a riverine analogy interesting in thinking about the theme of this book. In the Middle Ages in Europe there was a trickle of books, copied by hand, serving a limited community of people who could read and write. Libraries were tiny, found in monasteries, and limited to Bibles and other religious texts. (Fortunately, Byzantium and Islam kept book culture alive and available to spark a revival in Europe.)

Technology, book collections, and the reading population co-evolved from a trickle to a flood of information. The technology for producing books evolved, and books were produced ever more cheaply in ever greater numbers. Libraries evolved until today the Library of Congress (the world's largest library) holds well over 100 million items in some 450 languages. The goal of "Education for All", agreed globally for two decades, even if far from accomplished, would have been unthinkable even a century ago, much less by the ancients, who saw literacy as an arcane ability to be shared only by a few scribes. Today a flood of book-borne information fills library shelves and slakes the thirst for knowledge of billions of readers.

We now see libraries of many sorts, from those serving educational facilities (from K-12 to graduate schools), to religious libraries, to those abroad supported by governments as part of their "public diplomacy", to corporate professional libraries, to community libraries. The Carnegie Corporation supported the creation of thousands of libraries in the early 20th century, and the Gates Foundation today supports development of libraries. Librarians are trained in library schools in our universities, and there is a global network of library associations.

Modern information and communications technology is again transforming our approach to literature. More books are now available in digital form on the World Wide Web than exist on paper at Harvard University (which has more books than any other university in the world). For more than a billion people, the WWW makes books by the million available at home. Our community libraries offer computer terminals connected to the Internet. And for much of the world, where libraries are few and far between, cybercafes and community computer centers are giving poor people access to a world of books.

Thursday, January 20, 2011

Bad Science

Science magazine has a piece in its Policy Forum on the science used to justify the use of torture on suspected terrorists. Bad science can be of two kinds:

  • Done badly, in the sense that it fails either to develop valid information or to communicate information accurately and precisely.
  • Immoral, especially if in the process of scientific investigation it violates human rights, such as the right of human subjects to be informed as to the nature of the research and with that information to consent to participate.
The science used by the Bush administration to justify torture was bad in both senses.

It seems to me that people who consent to be subjected to torture or severe physical or mental stress cannot be relied upon to respond in the same way as people who are tortured or stressed without their consent. So there is a basic problem with prospective studies of torture: they can hardly be anything but bad science.

Perhaps it might be possible to do good studies of victims of torture, for example longitudinal studies of the physical and mental aftereffects of torture. Unfortunately, there are still many people being tortured each year. I suspect that randomized trials of post-torture therapies would not only produce good science, but might help develop therapies that would benefit future victims.

Virtual meetings for peer review!

There is an interesting article in Science magazine about peer review. NSF had 19,000 scientists attend face-to-face peer review meetings last year. NIH had 17,000 scientists involved in peer review, but 20 percent of them participated via phone or video conferencing.

Of course, professional journals also depend on peer review, but they do not hold (and cannot afford) in-person meetings of the reviewers.

In my work over a long period of time, I found that peer reviewers would participate not only without pay, but without reimbursement of the costs of participation. Of course that was in the context of foreign assistance, but I found that reviewers enjoy the professional exchanges in peer review meetings and often actually learn in the process of sharing their knowledge.

I also came to the conclusion that reviewers frequently change their opinions in peer review meetings. At a minimum, such meetings draw out more information from the reviewers than written reviews alone, and they allow program staff to ask questions to resolve doubts and improve communication.
Starting this year, NIH has been testing a technology called Cisco TelePresence, a videoconferencing system used by the military and large corporations. Strategically placed screens, speakers, and microphones make it seem as though remote participants are sitting around the same table. “It is so convincing that people reach for coffee cups that are not really there,” says Scarpa. Although the system is expensive—NIH declined to cite a number because they are currently negotiating the price—Scarpa predicts that it will eventually reduce the cost of face-to-face meetings “by one-third.”

Although very few scientists currently have access to a Cisco TelePresence station, anyone can log in to Second Life with a laptop. Since March 2009, six grant-review panels have convened on NSF's island in Second Life, known as IISLand. “Realworld panelists are provided with some resources,” says Bainbridge. “So it was felt appropriate to provide them with the cost of a decent set of virtual clothes.” Once the scientists had created avatars, they each received 1000 Linden dollars—which cost NSF $4—to shop in Second Life's virtual stores. (They also received a $240 honorarium of real money per day.)

Aside from those virtual quirks, the format of the meetings followed a traditional schedule, and all of the work was completed on time. Bainbridge estimates that switching to virtual review can save as much as $10,000 per panel. NSF pays $3600 in rent per year to Linden Labs, the company that operates Second Life, he says, so “just one normal-sized panel pays for the island more than twice over.”
Sounds like a good idea to me!

The Patent Office has been working hard.

One wonders whether the Patent Office has maintained the quality of its examinations in the face of a major expansion in the number of patents granted. I suppose most patents don't result in commercial products, and thus are never litigated. However, unmerited patents that are litigated cost a lot.

Wednesday, January 19, 2011

More and safer biomedical research labs in Africa

The Economist has an article calling for better assurance that African laboratories working with human pathogens will not let them fall into the hands of bio-terrorists. The article, while pointing to a real problem, suggests that we should not be too concerned because "weaponizing" pathogens is difficult. I think the article is simply wrong: one need not weaponize a disease agent, but could simply infect people with highly communicable diseases as they board airliners and let nature take its course.

I would suggest that it is more important to strengthen biomedical research labs in Africa so that they can do a better job of fighting disease there. Infectious diseases in Africa kill millions of people per year. In strengthening those labs, one would of course want to assure that they are safe, both for the research personnel and for the labs' neighbors.

Teachers count: Let's make sure our kids get the best teachers possible!

Source: "Lessons learned: At last, America may change the way it trains, recruits and rewards teachers," The Economist, January 6th 2011

We should recruit our new teachers from among our best and best-educated young people. We should train them not only in the subject matter that they are to teach, but also in how to teach it well in the schools for which we are preparing them. We should evaluate teachers according to the best evidence we can develop as to how well they help the students in their charge to learn. We should reward good teaching well, both monetarily and with appropriate recognition. We should not tolerate poor teaching or bad teachers. We should continue to educate our teachers in the substance that they are to teach and in the best ways to teach it. If teachers need help to teach better, we should provide it. If, as is sometimes necessary, we need to make cuts in teaching staffs, we should do so in the way that best maintains the level of teaching for the children (while helping those who must be laid off as much as possible).

Did you know Myanmar has been very successful economically?

Source: The Economist

We read about the coercive, dictatorial government of Myanmar, but this table suggests that its growth has been very rapid over the last decade.

An interesting Gallup Poll Result

Source: "Very Conservative Americans: Leaders Should Stick to Beliefs: Other groups of Americans tilt more toward compromise," by Frank Newport, January 12, 2011

If you think that government has an important role in defense, promotion of economic growth, law enforcement, protection of the public health and other fields, then you should agree that it is more important that political leaders compromise to get things done rather than that they stick to their beliefs.

On the other hand, the question seems to me to create a false dichotomy. Certainly there are many areas in which most political leaders should be able to reach compromises that don't violate important beliefs of either party.

On U.S.-China Relations

I am no expert in this field, but some things seem obvious to me. Both the United States Government and the Government of China must be concerned with the welfare of their citizens, and both must seek to promote the economic growth of their economies in order to benefit their citizens. Each Government understands this fact about the other.

Each Government will insist on developing military capacity in order to assure its security. Both countries will seek to be increasingly involved in international trade to obtain products that they need or desire from abroad and to sell into foreign markets, and both will seek in the long run to have a balanced trade with the value of imports equal to the value of exports.

Neither the American people nor the Chinese people have any antagonism toward the other. While there is considerable ignorance in the United States about our history, those who know American history are happy that China and the United States were allied against Japan during World War II, are pleased that Americans sought to help China during the early 20th century, and are saddened by the economic imperialism of the West toward China in an earlier time.

In future decades, the two countries will inevitably find their interests coincide in some cases, don't affect each other in others, and are in opposition in still other situations. With good will, their Governments will find ways to cooperate where their interests coincide and will find peaceful means for dealing with opposition of interests. In some cases there will be a "you scratch my back and I will scratch yours" trade-off.

Both peoples and both Governments want to avoid war. The relations between the USSR and the United States were much worse during much of the second half of the 20th century than are those between China and the United States now. During the Cold War the United States and the USSR avoided armed conflict, and China and the United States should also be able to avoid armed conflict. It is important that they do so, and their Governments should be held accountable to assure peace and prosperity.

I assume that each Government is monitoring the opinion of the people of the other. I hope that other American bloggers will make public their support for peaceful and mutually prosperous relations between China and the United States.

"Inside the vaccine-autism scare"

Source: Sandra G. Boodman, The Washington Post, January 16, 2011

I quote:
Opposition to childhood vaccines simmered mostly on the fringes until 1998, when London gastroenterologist Andrew Wakefield co-authored a study in the British medical journal Lancet linking autism to the measles, mumps and rubella (MMR) shot. Although his research was immediately challenged, it was not until last year that the study was retracted and Wakefield stripped of his medical license. Two weeks ago the British Medical Journal published an investigative article on the study, as well as an editorial calling it "an elaborate fraud" based on falsified data. The influential journal reported that crucial details in the case histories of the dozen children included in Wakefield's report had been altered or were misrepresented.
I would suggest that the British justice system consider charges against Mr. Wakefield based on the impact his publication has had in convincing parents to withhold vital preventive immunization from their children.

Tuesday, January 18, 2011

What constitutes "serious" art?

We just bought a print at a local thrift shop. It is by J. Warren Cutler, a scientific illustrator who worked for the National Zoo for nearly three decades. He also did illustrations for a number of books, including children's books published by National Geographic. The print is one of a run of 300, signed by the artist and dated 1983. Relatively large, it looks like a pen drawing of two desert foxes hunting a kangaroo rat.

The print is quite beautiful. The overall design utilizes the area of the paper well. The two foxes are looking at the rodent which is near the entrance to its nest. Each of the animals is beautifully drawn, not only showing clearly the form of the head and body but conveying the tension of the muscles and the intensity of the moment before action.

As one would hope from a scientific illustrator for the nation's zoo, the three animals appear to be anatomically correct. On the other hand, the composition is about the moment just before the life and death of the kangaroo rat will be decided, rather than about illustrating the special features of the species involved for scientific review. There is great economy in the way line has been used to indicate the shape of the animals, and the lines themselves are often quite beautiful.

I am reminded a bit of Dürer's hare, in which the artist uses great skill to portray an animal subject. In the case of the Dürer there is no doubt that we are looking at art. Yet I suspect that Mr. Cutler's print shows up in a thrift shop rather than a major gallery because that kind of art is now out of fashion. Too bad, because the print combines great knowledge of the subject and great skill with a fine artistic sensibility.

The print was ridiculously cheap, and I have been unable to find examples of Cutler's prints on eBay or elsewhere on the Internet. His works seem to have gone out of fashion. Too bad, because this one is really quite meritorious!

I have been wondering about the ways museums and art galleries deal with art. The "great museums" such as the National Gallery or the Metropolitan Museum of Art seek to show the historical evolution of art through examples from influential artists and master works. On the other hand, smaller museums often seem to focus on the work of groups of artists who worked in the region, such as New England landscapes or the paintings of the Santa Fe school. There are specialized fields such as Native American art, cowboy art, and wildlife painting. And of course there is religious art, which seems to be acceptable if it is old enough but does not get into galleries if it is contemporary.

These institutions have the effect of forming public taste, but (fortunately I think) there are commercial galleries that will deal with any class of work that finds a market. Yet somehow the curators of the public, not-for-profit institutions seem to set the standard.

There seems now, as a result of the influence of these standard setters, to be a perception that if a work is done for some purpose in addition to the expression of the artist's aesthetic impulse it is not art, and that strongly representational works done in the last century and a half are somehow less worthy as art than more abstract or expressionistic works. Fortunately, the Wyeths seem to be an exception to this trend. In my opinion, the work of Warren Cutler might merit a similar exception.

Monday, January 17, 2011

Do cities have characteristic moods?

Orhan Pamuk in his book Istanbul: Memories and the City describes a mood as characteristic of his city, contrasting that relatively sad mood with the moods that have been described for other cities.

Now the New Scientist has an article suggesting that the immune system may affect the mind, and thus that one's history of infections (or lack of them) may have long-term effects on one's mental state, including mood. Perhaps the endemic and epidemic disease pattern of a city, by influencing the immunological status of its people, may tend to produce a mood characteristic of that city.

I suspect that there are other things that affect the mood of a city. For example, Geneva is overcast for months at a time, and people have observed that during those winter months the mood of the city changes. Similarly, people in one city may be genetically happier than those in another city. Who knows?

Incidentally, Nobel Literature Prize winner Pamuk has written a very nice book in Istanbul: Memories and the City.

Sunday, January 16, 2011

Some Thoughts about Evaluation Studies

I wonder whether we bring to evaluation all of the discipline that we have learned to use in project planning and management.

What are my credentials for writing this post? I am the "owner" of the Monitoring and Evaluation group on Zunia, and a member of the Monitoring and Evaluation Professionals group on LinkedIn. I have served on an evaluation team for a World Bank project and done two Project Completion Reports for World Bank projects; I have led evaluations of a couple of projects for USAID; and I have done an evaluation for infoDev. I also worked with a team on the evaluation of a large project supported by a public-private partnership. I have published on impact evaluation, and I have managed evaluations of programs for which I had oversight. Thus I have experience in a specific kind of evaluation: that of donor-funded international development projects.

Donor agencies spend considerable effort in the design of projects. Those with which I have worked utilize the logical framework approach which involves linking project inputs, outputs, purposes, and the larger objectives. The approach requires the designers to specify objectively verifiable indicators for inputs, outputs, purpose achievement and goal achievement. It also requires explicit statement of key assumptions that are made in the project design.

Evaluations, of course, when done well are carefully designed. Teams of evaluators are carefully selected. There is a plan and a budget. If peer review is involved, peers are carefully chosen. If, as is usually the case, interviews are involved, care is taken in the design of the interviews and development of the interview forms. There is planning as to how many project sites are to be visited, and when those visits are to take place. Data analysis is planned in advance, and there is clear understanding of how information gathered in the evaluation is to be related to the questions that the evaluation is to address.

Still, I wonder whether evaluation teams assume too much about the purposes of the evaluations that they undertake. Do they draw on the theory behind the logical framework to organize their own evaluation work?

Let me give an example about examining the assumptions made prior to an evaluation. I well recall a major evaluation I led of a program working in Africa, Asia and Latin America. The program staff was organized into three relatively independent divisions: Africa, Asia and Latin America. We in turn divided the evaluation team into three site visit teams of regional experts -- one to make site visits in Africa, another in Asia and the third in Latin America. Several months later we discovered that the reported quality of the program in Africa differed from that reported in Asia, and both differed from that reported in Latin America. We were never able to determine how much of the difference was due to the difficulty of working in each of the three regions, how much to random variation in the small baskets of project activities chosen in each region (either in the program or in the evaluation team's sampling), how much to differences in the project personnel working on the different continents, and how much to variation among the three evaluation site visit teams. Clearly we had made the (unjustified) assumption that the evaluators we chose to work in Africa, Asia and Latin America would all do the same thing and would not themselves introduce a source of variation into the results they reported. Had we documented that assumption up front, we might well have chosen a different evaluation method. The reporting of the evaluation's results turned out to be especially difficult, and perhaps not as useful as it might have been.

I wonder whether evaluation teams really ask what the purposes of an evaluation are. Evaluations sometimes seem devised simply to address whether the original project indicator benchmarks have been met. It seems to me that often more can be accomplished by setting very ambitious benchmarks that are unlikely to be met but which motivate the project personnel, rather than modest benchmarks that are likely to be exceeded. On the other hand, a project manager who is going to be held responsible for achieving the original benchmark is likely to opt for modest benchmarks rather than maximizing project achievement. It seems to me that evaluations should focus on what was accomplished versus what might have been accomplished.

I worry about the limits of rationality. Project planners have limited rationality, and I strongly believe that any project evaluation should carefully consider unplanned effects of the project, whether they be positive or negative. Evaluations therefore should begin with a review of what actually happened before assessing whether the project was sufficiently useful. Think about all the projects in Haiti that simply stopped when the earthquake killed a couple of hundred thousand people and halted most activity in the country. While seismologists may have predicted the quake, one could not have expected those in other fields to understand the risk, nor anyone to predict just how severe its impact would be.

Evaluators too have limited rationality, and will not adequately predict all the events and conditions that might influence their work. Similarly, they will probably not fully understand how the social, political, economic, cultural and physical environmental conditions influenced the project that they evaluate. It is important to try to make assessments of these conditions, but it is also important to recognize that the assessments will be limited in their success.

More fundamentally, what is the "real purpose" of the evaluation, as seen by the various stakeholders? Is it merely a bureaucratic requirement, to be accomplished at as low a cost in key resources as possible? There certainly are such evaluations.

I recall an occasion on which word came down that senior officials of my organization were searching for "reliable people" to conduct a series of impact evaluations. Were the selected people to be relied upon to provide information that would make the organization look good, or to make the senior officials look good? It was clear that the senior officials were not looking for the people most gifted at figuring out what was really happening and making it clear to others.

If the real purpose of the evaluation is to obtain actionable information for project and program managers, what kind of information do they want and need? What kind of information is likely to really be actionable? I wonder how many evaluation teams really address these questions.

The previous paragraphs have suggested that evaluations serve the interests of stakeholders, and indeed it is useful for evaluators to know about those stakeholders. But what about the entire community of people supporting and implementing donor assistance programs, the collaborators with those programs, and the people who should be the beneficiaries of those programs? Should not every evaluation add to the growing body of credible evidence as to what works and what does not?

In an international development program, evaluations are likely to cross cultural bounds. In those circumstances, the cultural assumptions of the evaluation team as it carries out the evaluation might best be made explicit in order that the evaluation can be fully understood. I don't recall ever reading an evaluation report that documented the assumptions of the evaluation team.

By what means is the evaluation itself to be evaluated? What are the verifiable indices of its quality? Of its accuracy? Of its precision? Of its value to decision makers?

How should the donor community allocate resources among evaluation studies? Should each development project have a specific portion of its budget allocated to monitoring and evaluation? Should there be an algorithm for the portion of the project budget devoted to M&E that recognizes economies of scale and the differing difficulty of evaluating different types of projects in different environments? Should agencies simply allocate a fixed amount of money to each project? The set of possible allocations of resources for M&E in a large donor organization such as the World Bank or USAID is extremely large. The probability that any one of these allocations chosen at random is optimal is very small. Leaving the designers of each project to decide independently how much to spend on that project's M&E is almost surely suboptimal.
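The claim that the space of possible allocations is extremely large is easy to make concrete with a back-of-envelope count. The sketch below uses the classic "stars and bars" formula; the figures of 100 budget units and 20 projects are my own illustrative assumptions, not drawn from any agency's portfolio:

```python
import math

def allocation_count(units, projects):
    """Number of ways to split an indivisible budget of `units` among
    `projects` (stars and bars: C(units + projects - 1, projects - 1))."""
    return math.comb(units + projects - 1, projects - 1)

# Even a toy portfolio -- 100 units of M&E money spread across 20 projects --
# admits an astronomical number of distinct allocations.
count = allocation_count(100, 20)
print(f"{count:.3e} possible allocations")
```

With numbers this large, no one could examine every allocation; some deliberate organizational strategy, rather than project-by-project improvisation, is the only plausible route toward a good one.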

Perhaps there should be at least a minimum investment in monitoring and evaluation for every project, if only to provide information for mid-course corrections and to keep people honest. Indeed, the intellectual rigor involved in defining goals and objectives, their objectively verifiable indicators, and the key assumptions is itself valuable.

On the other hand, it seems clear that proportionately more resources should be devoted to the evaluation of projects exploring new technologies, new institutional approaches, or other elements that may be prototypes for future projects than to "me too" projects that replicate many others using well-established approaches.

The essential point is that an organizational evaluation strategy should allocate resources among evaluations according to some careful thinking as to what information that organization needs to obtain to best manage its future activities. In allocating resources, money is certainly of concern, but so too are expertise and management attention.

Friday, January 14, 2011

A Thought on reading Battles' Library.

In his book, Library: An Unquiet History, Matthew Battles tells us that monastery libraries (the only ones that existed in Christian Europe at the time) would have at most a few hundred books, and those would have included multiple copies of the Bible and of books by St. Augustine. Monks were put to copying such books partly so that copies could be loaned to others for study and copying, but also so that the copyist would study the book he was copying.

By the early 18th century, libraries could be found in universities and available to secular scholars, and those libraries included many more books, including modern works as well as those of church elders and ancient Roman and Greek authors. Still, the number of books was tiny.

Today, Harvard University boasts the largest university book collection in the world with more than 14 million volumes, while the Library of Congress holds the largest general collection with more than 100 million volumes.

Google estimates that some 130 million books have been written and still exist, and that it has scanned some 13 million of them, making them available to be searched or read via the Internet.

In 1700 there was actually a debate as to whether the then-"modern" writers were comparable in worth to the classics. (We tend to think rather positively today about Shakespeare!) Today there is little doubt that among the more than a quarter of a million books published each year in the United States, there are some gems. Of course, one has to kiss a lot of frogs to find a prince, and one has to go through a lot of newly published books to find a really good one.

In the Middle Ages, a learned man might have read a few books, and would perhaps have studied a handful in great detail. I figure I have read perhaps two or three thousand books in my life, but I don't recall very much about any of them. On the other hand, I have looked up several things on the Internet even in the few minutes I have been drafting this post. I don't need to directly recall the detail of things I have read if I have technology that sufficiently augments my memory.

It is perhaps no wonder that the generation that has grown up linked to the World Wide Web, connected by desktop, laptop, and handheld devices, multitasks. Kids scan a lot more information than we did, knowing that they can retrieve more detail rapidly if they feel the need.

I recall how strange I found it, years ago when I lived in developing nations, that people spent so much time talking about where to find shops and facilities and where to buy things. Of course, I was used to living in a society with telephone directories and want ads in which I could look up that kind of information, while those aids were not available in Latin America at the time. If you have to devote time to discovering information you might need in the future and committing that information to memory, you don't have time and mental energy to devote to other mental tasks.

On the other hand, kids trying to sample the huge flow of information from the Internet may also have to give up other ways of thinking, such as analysis and savoring of tidbits of information. One wonders what would be the ideal balance!

I would point out that this post has so far dealt with countries already deeply involved in the information society. It has been suggested that very few books are published per year in Arabic, and that the portion of those books devoted to religion is greater than the corresponding portion of English-language books. I recall being shocked years ago in Egypt by how little reading of books was required of university students in that country. There has been a lot of writing on the digital divide, but there is also a print divide between people who read different languages. Of course, there are thousands of languages, and many have fewer books available than does Arabic. Unfortunately, the people speaking only these languages tend also to have less access to ICT; they do not easily find materials on the World Wide Web that they can read, nor do they have easy access to translation software that can make materials written in global languages available in their minority language.

Wednesday, January 12, 2011

Thoughts on reading 1848

My history book club met tonight to discuss 1848: Year of Revolution by Mike Rapport. The book tells the story of the revolutions that swept through much of Europe that year, only to fall to conservative reaction. Still, 1848 saw the end of serfdom in most of Europe and the introduction of "the social problem" (dealing with poverty) into European politics, as well as some spread of constitutional government, the rule of law, and other more liberal policies.

1848 was a year of great shortages of food (bad weather and the potato blight) and of governmental economic crises, in part due to the cost of food for starving people. The industrial revolution was sweeping the countries most involved in the revolutionary movements, and craft workers saw their livelihoods threatened. There had been more than half a century for the revolutions in the United States, France, and Latin America to influence political thought in Europe. I suspect that new economic classes had grown up that wanted to compete for power with the landed aristocracy and monarchy. I suspect too that there were more educated people, and that automated printing presses had greatly expanded the flow of information to the public. Then too, communications were improved as railroads were being driven through Europe, steamships were speeding ocean and river transportation, and the telegraph was emerging. These factors helped to synchronize revolution in several states, as well as the counter-revolutionary movements that occurred in response to the violence in the streets.

I came to the conclusion that the book would be better suited to a graduate seminar on modern European history, rather than to an informal book club. Most of our members found the book was too detailed, masking the big themes we were most interested in by masses of names of people and places, many of which we did not know well.

It may be trite, but I was struck by how much the map of the world has changed in 162 years. Germany and Italy have unified; the Ottoman and Austro-Hungarian Empires are gone, as are the British and French colonial empires, and scores of countries have formed from their former colonies. Russia was incorporated into the USSR and then became the Russian Federation. On the other hand, most of the countries of Europe have given up some sovereignty to the European Union, and there are many other common markets.

One of the themes of the book was the rise of nationalism in the 19th century, in the sense of groups with a degree of language and cultural homogeneity seeking to establish states with territory coterminous with the geographical distribution of the group. In 1848 the political unrest had roots in such aspirations by Italians, Germans, Magyars, Poles, etc.

The question was asked whether we would ever solve the problem of states and nations. It seems to be significant in India, China and Africa, as well as in Europe and other parts of Asia. The rate at which states have come and gone over the centuries suggests we may not have solved the problem yet.

I suppose federalism is a part of the solution, with some functions of government centralized and others delegated to smaller governmental units more responsive to local residents. We seem largely to have accepted constitutional government, and perhaps to a lesser extent representative government.

Still, there seems to be a lot of work left for society in figuring out the best institutions to accomplish the functions of "government". Even if we knew the best form of institutions for the world today, I suspect that the needs for government will change in the future.

Significance tests may be a bad way of studying small influences on outcomes

There is an article in The New York Times making a surprisingly sophisticated point on the interpretation of statistical information:
For decades, some statisticians have argued that the standard technique used to analyze data in much of social science and medicine overstates many study findings — often by a lot. As a result, these experts say, the literature is littered with positive findings that do not pan out: “effective” therapies that are no better than a placebo; slight biases that do not affect behavior; brain-imaging correlations that are meaningless.....

The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent......

“But if the true effect of what you are measuring is small,” said Andrew Gelman, a professor of statistics and political science at Columbia University, “then by necessity anything you discover is going to be an overestimate” of that effect.....

In the 1960s, a team of statisticians led by Leonard Savage at the University of Michigan showed that the classical approach could overstate the significance of the finding by a factor of 10 or more. By that time, a growing number of statisticians were developing methods based on the ideas of the 18th-century English mathematician Thomas Bayes.
The article also notes the value of looking at the odds ratio as an alternative to significance testing.
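Gelman's point about overestimation can be illustrated with a small simulation. In the sketch below (the effect size, sample size, and number of trials are arbitrary choices of mine, not figures from the article), we run many small two-group experiments with a genuinely small true effect and then look only at the "significant" results, as the published literature effectively does:

```python
import random
import statistics

random.seed(1)

def simulate(true_effect=0.1, n=25, trials=2000):
    """Run many two-group experiments with a small true effect and report
    the average estimated effect among results significant at the 5% level,
    plus the fraction of experiments that reached significance."""
    significant_estimates = []
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        # standard error of the difference between two independent means
        se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
        if abs(diff / se) > 1.96:  # the usual 5% significance cutoff
            significant_estimates.append(diff)
    return statistics.mean(significant_estimates), len(significant_estimates) / trials

mean_sig, power = simulate()
print(f"true effect: 0.10, mean 'significant' estimate: {mean_sig:.2f}, "
      f"fraction significant: {power:.1%}")
```

In runs like this, the average "significant" estimate comes out several times larger than the true effect of 0.1: when power is low, an estimate can only clear the significance bar by being a large overestimate (or occasionally by having the wrong sign).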

Think about areas such as the genetic and environmental effects on behavior. Assume that there are many interacting genes and environmental factors, each of which has a small impact on the probability of a certain kind of behavior, and which together may not completely determine that behavior. It would seem that it would be very hard to do convincing science on the impact of each of these factors.

Incidentally, when I was doing statistics in the 1960s there used to be a saying that the field of statistics was built on shifting sands until Savage came along and rebuilt it on empty space.

Kathryn Schulz: Being Wrong

The Poptech description of this presentation is:

Kathryn Schulz is an expert on being wrong. The journalist and author of “Being Wrong: Adventures in the Margins of Error,” says we make mistakes all the time. The trouble is that often times being wrong feels like being right. What’s more, we’re usually wrong about what it even means to make mistakes—and how it can lead to better ideas.
This is a very good presentation, worth your attention.

I would add that one can get external negative evidence about a belief not only from people but from objective facts. Indeed, she gives the example of someone who was convinced for years that he had been listening to a baseball game when he heard the announcer break in with the news of the Pearl Harbor attack on December 7, 1941 -- until he realized that they did not broadcast baseball games in December, but only in the summer.

The scientific method depends fundamentally on generating hypotheses that can be disproved by objective measurements. Think of Einstein waiting years for the measurement of the shift in the apparent position of stars close to the sun, predicted by his theory of relativity but not by Newton's. We should similarly make predictions that can be demonstrated to be false in order to test the credibility of our beliefs. Ideally the test should not be hugely destructive. (Think about learning that there were no weapons of mass destruction in Iraq only after we had invaded the country and destroyed its government and economy.)

I agree with Schulz that the way we feel about a belief that is wrong is usually the same as the way we feel about a belief that is right. Indeed, that is a good reason to believe probabilistically. Say to yourself, "I believe that is probably true." The next step is to quantify your degree of credence. It is more credible that the sun will rise tomorrow than that the price of bread will be the same tomorrow, although we can hope for both.

Schulz refers to the recognition by Thomas Kuhn that history has shown again and again that a scientific paradigm that everyone believed to be correct, was not correct and should be replaced by a better one. Over time, experiments accumulate with results conflicting with the predictions of the existing paradigm. Since people make mistakes all the time in planning, conducting and interpreting experiments, the normal response to the first such experiments showing anomalous results is to check and double check the experiment. However, when a number of anomalies accrue, some people begin to question the paradigm.

In that moment, the questioners still believe the paradigm to be right, but suspect that it may be wrong. In our everyday lives, we too can have increasingly strong suspicions that something we once believed may be better replaced by an alternative belief. Sometimes that happens quickly, as when you thought you had turned off your cell phone and it rings in your pocket. Sometimes it happens slowly, as when you find over a period of years that a job you once thought fit you perfectly, you now suspect would not be as good a fit as something else.
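This kind of belief revision can be given a probabilistic reading via Bayes' rule. The sketch below applies it to the cell-phone example; the numerical probabilities are purely illustrative assumptions of mine:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: revised probability of a belief after new evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Belief: "my phone is off." Prior credence: 0.95.
# Evidence: a ring from my pocket -- very unlikely if the phone is off
# (perhaps it was someone else's), very likely if it is actually on.
posterior = update(prior=0.95, p_evidence_if_true=0.01, p_evidence_if_false=0.9)
print(f"credence after the ring: {posterior:.2f}")
```

With these assumed numbers, one piece of surprising evidence drops the credence from 0.95 to about 0.17 in a single step, mirroring the quick revision Schulz describes; the slow, career-spanning revisions correspond to many small updates accumulating over years.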


"To embrace our own falibility is to embrace 'the permanent possibility of someone having a better idea'."
Kathryn Schulz herself quoting Richard Rorty
Philosophy and the Mirror of Nature

Quotation: Good guidance for political leaders today as in the past

The ideal chief is "a wise, dispassionate man (who) thinks much & thinks slowly, with great caution & deliberation, before he speaks his whole mind."
Sayenqueraghta, a Seneca chief
quoted in The Divided Ground: Indians, Settlers, and the Northern Borderland of the American Revolution by Alan Taylor

Tuesday, January 11, 2011

A new polio threat may be growing in the Congo

The world was close to the eradication of polio, which would have been only the second human endemic disease to be eradicated (after smallpox). Unfortunately, superstitious people in some remote areas objected to immunizations on the basis of a conspiracy theory, and some of the people infected in those areas went on the Hajj to Mecca and spread the disease to people from areas where polio had long been eliminated. Now there is a major effort to get back on top of the disease.

However, Science magazine reports:
Polio is a horrendous disease, but it is seldom fatal—except now. An explosive outbreak in the Republic of Congo is writing another chapter in the book on how this ancient scourge behaves.

Polio usually strikes children under age 5, paralyzing one in 200 of those infected and killing at most 5%, occasionally up to 10% in developing countries. The new outbreak tearing through this West African country has so far killed an estimated 42% of its victims, who, in another unusual twist, are mostly males between the ages of 15 and 25. Since it began in early October, the outbreak has paralyzed more than 476 people and killed at least 179, according to World Health Organization (WHO) estimates from early December, making this one of the largest and deadliest polio outbreaks in recent history.

Caught in the net Why dictators are going digital

The Economist this week has a review of The Net Delusion: The Dark Side of Internet Freedom by Evgeny Morozov. The book makes a point that I have been making for years (although probably better than I have). The information and communications technology revolution is producing a variety of very powerful technologies. While some people may appropriate the power of ICT to promote democracy, coercive governments are likely to appropriate it to suppress democratic movements and strengthen coercion.
In fact, authoritarian regimes can use the internet, as well as greater access to other kinds of media, such as television, to their advantage. Allowing East Germans to watch American soap operas on West German television, for example, seems to have acted as a form of pacification that actually reduced people’s interest in politics. Surveys found that East Germans with access to Western television were less likely to express dissatisfaction with the regime. As one East German dissident lamented, “the whole people could leave the country and move to the West as a man at 8pm, via television.”

Mr Morozov catalogues many similar examples of the internet being used with similarly pacifying consequences today, as authoritarian regimes make an implicit deal with their populations: help yourselves to pirated films, silly video clips and online pornography, but stay away from politics. “The internet”, Mr Morozov argues, “has provided so many cheap and easily available entertainment fixes to those living under authoritarianism that it has become considerably harder to get people to care about politics at all.”

Social networks offer a cheaper and easier way to identify dissidents than other, more traditional forms of surveillance. Despite talk of a “dictator’s dilemma”, censorship technology is sophisticated enough to block politically sensitive material without impeding economic activity, as China’s example shows. The internet can be used to spread propaganda very effectively, which is why Hugo Chávez is on Twitter. The web can also be effective in supporting the government line, or at least casting doubt on critics’ position (China has an army of pro-government bloggers). Indeed, under regimes where nobody believes the official media, pro-government propaganda spread via the internet is actually perceived by many to be more credible by comparison.

Think about the ability of governments to tap into land lines and cell phones and to use computers to screen traffic in order to identify people with opinions that they don't like and to track networks of people sharing such opinions.

In democratic countries, CCTV networks and high-powered informatics are used to prevent crime, but coercive governments may define thought they don't like as criminal and appropriate the technology for their own purposes.

The capital of one of the richest nations in the world is beset by an HIV epidemic that rivals those seen in some developing countries

Source: Regina McEnery, IAVI Report, Vol. 14 (6), Nov.-Dec. 2010

I quote:
The capital of one of the richest countries in the world has an HIV prevalence rate comparable to those in developing countries (Health Affairs 28, 1677, 2009). In 2007, an estimated 3% of the District of Columbia’s adult population was infected with HIV, a higher HIV prevalence rate than Rwanda, Angola, and Ethiopia, and just slightly lower than Nigeria and the Democratic Republic of Congo.

Other estimates suggest the HIV prevalence in Washington, D.C. may be even higher. A George Washington University (GWU) study based on data collected from December 2006 to October 2007 for the National HIV Behavioral Surveillance (NHBS)—a community-based study funded by the CDC and the District of Columbia Department of Health—estimates that the HIV prevalence rate in the district among a particular subset of heterosexuals at high risk for HIV infection is as high as 5.2% (AIDS 23, 1277, 2009). The study’s authors said this was “the first estimate of HIV and risk behaviors among urban, low income, and African Americans in the nation’s capital.” The HIV prevalence among women in this study was 6.3%, similar to the prevalence among women in Tanzania (7.0%) and Uganda (7.1%).

Sunday, January 09, 2011

Understanding Secession and the start of the Civil War

I have been wondering about the causes of the U.S. Civil War as we enter the 150th anniversary of its start.

What did the power elite think about slavery? These were the male property owners -- essentially an aristocracy of plantation owners. The large majority of them had been raised by slaves, many had slave mistresses, and many had children by those mistresses who were also slaves. Slaves were very deferential to them. Somehow I assume that the slaves in their households were regarded as something less than members of their nuclear families, but as closer than the local community. The station in life of such a man was graced by the respect not only of his wife, children, and slaves but also that of less affluent whites and of his aristocratic neighbors.

It is, I think, impossible to empathize with these people from our modern vantage. I think about Jane Austen's characters, also of a petty aristocracy, whose station in life was based on the land that they owned or that was owned by their immediate relatives. Reading Austen, one cannot help but be struck by how normal they found the institutions in which they lived, and how an Austen heroine and her family were so completely focused on making the right marriage, which would bring happiness and maintain her station in life.

I used Google NGram Viewer to trace the frequency of the word "honor" over the last two centuries; it shows that the term was used frequently in the run-up to the Civil War and has trailed off ever since.

The southern aristocracy were very concerned with honor. That is, they were very concerned with retaining the good opinion of themselves and their peers by living up to a code of honor. That code could demand that a man fight a duel, or attack a person of lesser status with cane or whip, if he perceived himself to be insulted. There was special sensitivity to perceived insults to one's wife or children. It would seem that an attack on the institution of slavery was seen also as a personal insult to one's honor, as it insulted the plantation community in which the person lived, as well as the plantation communities of his peers. It was not only a challenge to the economic basis of his income and wealth, but more fundamentally an insult to his relations with those around him, and indeed to his ancestors who had similarly lived in a slaveholding society.

Think of how the more conservative members of our modern society are responding to the proposal to change the institution of marriage to allow same-sex marriage. That change seems to be received by many as an insult to their own commitment to the marriage vows in their heterosexual marriages.

The Constitution had not only recognized the institution of slavery in the states, but had required states to recognize the property rights of the owners of fugitive slaves and return them to those owners. Southerners felt that the northern states were at fault in not living up to that constitutional obligation. Perhaps more to the point, the honor code prescribed a violent response to a perceived insult, and the anti-slavery people were perceived as insulting slaveholders and their communities very personally.

It has been estimated that 80 percent of southerners were pro-Union when Lincoln was elected. The popular support for secession increased after Lincoln called for troops in order to militarily take back federal property in the states that had seceded. Here I suppose our modern experience helps to understand southern attitudes. Think of the American response to Pearl Harbor or to 9/11. We are still warlike when we feel threatened by force.

I was wondering about Robert E. Lee, who famously anguished over the choice between staying with the Union and joining the Confederacy. He saw, I suppose, a duty to his state and his community in Virginia, but also to the Army in which he had served and to the Union to which he had pledged himself. In both cases, honor came into play. A soldier's honor is surely to follow his flag, to serve with his fellow soldiers, and to face enemy fire. Lee then would have been seeking to find a course between two competing codes of honor.

I suppose that it was easier for southerners to go to war in 1860, when they had no idea how terrible the war was to be. During the war, as defeat became more visibly possible and the cost of the war became more apparent, apparently some southerners began to rethink their commitment to the war.

Even if I cannot empathize with the southern power elite as they took their states to war, perhaps I can form an intellectual understanding of why doing so seemed appropriate to them. Perhaps, too, that exercise can help me form a similar intellectual understanding of why Al Qaeda and other groups with which I cannot empathize are committing us to the defense against terrorism.


I was wrong in a previous post. The book I was thinking of was Blur: How to Know What's True in the Age of Information Overload, which mentions the frequency with which society has faced what appeared to be a sudden overabundance of sources of information.

Saturday, January 08, 2011

Two quotations from José Ortega y Gasset

From "La Rebelion de las Masas" which translates to "The Revolt of the Masses:"

  • "There is no culture where there are no standards to which our fellow men can have recourse. There is not culture where there are no principles of legality to which to appeal. There is no culture where there is no acceptance of certain final intellectual positions to which a dispute may be referred... When all these things are lacking there is no culture; there is in the strictest sense of the word, barbarism... Barbarism is the absence of standards to which appeal can be made."
  • "To have an idea means believing one is in possession of the reasons for having it, and consequently means there is such a thing as reason, a world of intelligible truths. To have ideas, to form opinions, is identical with appealing to such authority, submitting oneself to it, accepting its code and decisions, and therefore believing that the highest form of intercommunication is the dialogue in which the reason for our ideas is discussed."

Are we in a temporary quandary about how to deal with the glut of information?

Someone, I am not sure who but it may have been Alex Wright, said that we are not the first society to face a glut of information, in the sense that suddenly much more information is available to one generation than had been available to its parents. The development of printing, of the motorized printing press, the telegraph, the telephone, radio, and television (?) may all have had that impact. It was suggested that one way of dealing with that glut is to select the information one wants to hear rather than the information one should want to hear -- to attend to the information that agrees with what one already believes rather than to attend to that which tends to change one's beliefs.

This seems to be a problem today. People often listen to the talk radio hosts with whom they agree rather than to the shows most likely to provide high-quality information. The cable news channels often get an audience by broadcasting to the converted rather than by broadcasting the most valid information about novel events that may have future importance to the world and the nation. On the Internet we link on social networking sites with our friends, presumably people with whom we generally agree, rather than with the most knowledgeable people. On Twitter, are Ashton Kutcher and Demi Moore, the most connected tweeters, really the most informative?

Tim Wu writes in his book, The Master Switch: The Rise and Fall of Information Empires, about the history of new information technologies. In each there seems to be a cycle that begins with a cacophony of expression. Take the telephone, in which the Bell system sought to monopolize business telephone services in large cities, especially in the eastern United States. Lots of local telephone services developed to serve rural markets that were of little interest to AT&T. Not only did these have party lines on which people could listen in on each other's conversations, but some provided community broadcasts of news, gossip and entertainment marked by a special signal (e.g. eight rings). Wu cites case after case in which ownership of the medium was centralized, as when AT&T monopolized the telephone system and NBC and CBS created networks of owned local radio stations. He leaves open the question of what will happen with the Internet.

The mass media (radio, television, newspapers, and newsreels), as they achieved market dominance, also tended to develop an ethical commitment to provide news of high quality. While the FCC provided some oversight of the broadcast media, there seems to have been a fairly serious effort in many media to provide high-quality information services. The BBC in the UK and public radio and television in the United States may have been leaders in this effort, but national newspapers such as the Times of London, the New York Times, and the Washington Post also developed reputations for covering the news broadly and well.

As Derek de Solla Price pointed out, the Internet is the first information infrastructure that makes point-to-point communication comparable in cost to one-to-many mass-media communication. I wonder whether, as a result, it will be harder to centralize control over the Internet. Of course, the fiber-optic cables over which the Internet's digital packets flow will tend to be under central control, but with net neutrality laws we may be able to ensure that many voices will be heard via the Internet.

One may hope that the many sources publishing information on the Internet will have the discipline to provide high-quality information. One may hope that the general population will develop information literacy, learning to choose among Internet sources on the quality of their information. Perhaps there will also be a role for civil society organizations in helping the public judge the quality of information, as the League of Women Voters helps voters judge the assertions of political candidates and their supporters. Perhaps too there will be a role for government in regulating to block the most egregious misinformers.

Elinor Ostrom points out that a commons can be sustainably productive, avoiding "the tragedy of the commons," if society develops workable institutions for governing that common property. Can we develop such institutions for the information commons of the Internet: institutions that allow good information to drive out bad, that give voice to many who are now voiceless, and that keep us from being drowned in spam, phishing, and hate?

Wednesday, January 05, 2011

The Information Revolution as seen in Google NGram Viewer

The word "telegraph" had a long run in English-language books, albeit always at a relatively low frequency.
"Telephone," of course, started later in the literature but rose faster and stayed higher. "Television" rose to a peak in frequency during World War II, then trailed off, but to a higher plateau than "radio." "Computer" peaked in the 1980s but has continued at quite a high plateau. "Internet" was still on the rise in 2000. Google NGram Viewer provides a means of quantifying our otherwise impressionistic view of the history of the Information Revolution.
I added the word "print" to go back to the pre-electronic era of the information revolution.
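The viewer's vertical axis is, in essence, a normalized count: a word's occurrences in a given year divided by the total number of words scanned for that year. A minimal sketch of that normalization, using invented numbers (none of these counts come from the actual Google Books corpus):

```python
# Sketch of the normalization behind an ngram frequency curve: a word's
# yearly count divided by the total tokens scanned for that year.
# All numbers here are invented for illustration, not real corpus data.

def relative_frequency(word_counts, total_counts):
    """Per-year share of the corpus, as the NGram Viewer plots it."""
    return {year: word_counts[year] / total_counts[year]
            for year in word_counts}

# Hypothetical yearly counts of "telephone" and of all tokens.
word_counts = {1900: 1_200, 1950: 9_000, 2000: 15_000}
total_counts = {1900: 100_000_000, 1950: 300_000_000, 2000: 500_000_000}

freqs = relative_frequency(word_counts, total_counts)
```

Because raw counts are divided by each year's corpus size, a word can appear more often in absolute terms while its plotted frequency falls, simply because far more books were published.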

Tuesday, January 04, 2011

Quotation: most books are bad!

Matthew Battles in his book, Library: An Unquiet History, writes:
Reading the library we quickly come to an obvious conclusion: most books are bad, very bad in fact. Worst of all, they're normal: they fail to rise above the confusions and contradictions of their times.
He writes that Harvard's libraries contain some 14 million books, and that the Library of Congress holds more than 100 million items, adding some 7,000 per day. I find it comforting that most of those books are bad and that I am not missing much by not reading them. I also find comfort in Google's making its Ngram Viewer available, so I can see trends in those bad books without reading them all, or even a representative sample.

The problem is that there is so much that I don't know, so many books that inform even if they are bad, and so many good books still unread!

Some interesting facts about US Philanthropy

  • 89 percent of American households contribute to charities or religious institutions
  • According to the Charity Navigator blog, total giving to charitable organizations was $303.74 billion in 2009 (about 2% of GDP). The US official aid budget is less than one-tenth of this (0.2% of GDP).
  • Religious institutions receive the most charitable contributions (33% of all donations) followed by the educational sector (13%)
  • Every time the Standard and Poor’s 500 stock index drops 100 points, charitable giving declines by $1.85 billion
  • Warren Buffett became the biggest philanthropist when he donated $31 billion to the Bill and Melinda Gates Foundation (initial value of the gift)
  • The U.S. is one of the few countries to allow givers a tax deduction for charitable donations

We could do a lot of good if we didn't spend so much on war!

I quote from The Cost of War Calculator via Zunia:
The Cost of a Single B-2 Stealth Bomber Is $1,000,000,000. This could provide 'Any One' of the following resources:

  • 2,564,102,564 Meals For Starving People.
  • 1,150,510 Clean Water Wells.
  • 31,446,541 Adult Cataract Operations. Restoring sight to the blind.
  • 53,504,548 Children supplied with school books for a whole year.
  • 1,000,000 Landmines removed from the ground.
  • 89,126,560 Water Filters. Poor families in places like Cambodia have no choice but to drink water full of bacteria and parasites. Water filters save lives by screening out small but deadly bugs.
  • 100,000,000,000 Chlorine Tablets to make water safe to drink.
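Each item in the list implies a unit cost: the $1 billion bomber price divided by the quantity quoted. The arithmetic below is my own, derived only from the figures in the list above:

```python
# Implied unit costs from the Cost of War Calculator list: $1 billion
# divided by the quantity quoted for each alternative use.
BOMBER_COST = 1_000_000_000  # dollars, as quoted for one B-2

quantities = {
    "meal": 2_564_102_564,
    "water well": 1_150_510,
    "cataract operation": 31_446_541,
    "school books (child-year)": 53_504_548,
    "landmine removed": 1_000_000,
    "water filter": 89_126_560,
    "chlorine tablet": 100_000_000_000,
}

unit_costs = {item: BOMBER_COST / qty for item, qty in quantities.items()}
for item, cost in sorted(unit_costs.items(), key=lambda kv: kv[1]):
    print(f"{item}: ${cost:,.2f}")
```

Working backward this way, the calculator is assuming roughly 39 cents per meal and $1,000 per landmine removed, which gives a sense of the cost estimates behind the comparison.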

Monday, January 03, 2011

Books Closed on the 111th Congress

I quote the following from Thomas Mann of Brookings:
The 111th Congress closed its books with a flurry of significant action during the post-election lame duck session, including a major food safety bill; huge tax cut and unemployment benefit extension; repeal of the Don’t Ask Don’t Tell policy on gays and lesbians in the military; ratification of the new START treaty with Russia; and approval of medical benefits for 9/11 rescue workers. This close was consistent with a two-year legislative record whose productivity ranks with the Congresses empowered by the landslide elections of FDR and LBJ in 1932 and 1964 respectively. The trio of mega bills enacted during 2009 and 2010—the initial economic stimulus, health reform, and financial regulation—have had or will have far-reaching effects on our economy and society. At least a dozen other significant pieces of legislation found their way into law. Among these notable accomplishments were bills on fair pay; student loans; new regulation of the credit card and tobacco industries; national service; stem cell research; land protection; and a major expansion of the FDA.

The 111th Congress and the first two years of the Obama administration, however, were beset by striking public skepticism of the value of these accomplishments, punctuated by the lowest rating of Congress by the public in polling history and the “shellacking” the president’s party took in the November midterm elections.
I suppose that, given the continuing high level of unemployment, the problems in housing (mortgages foreclosed or in danger of foreclosure, loss of housing values), the reduction in the value of people's savings, the reduction in government services, and the continuing wars with no apparent reduction in the threat of terrorism, it is not surprising that people are unhappy with government.

Blaming all these problems on the 111th Congress and the Obama administration makes about as much sense as blaming them for the cold weather we had before Christmas. The government, together with other governments, seems to have saved us from a global depression while making a lot of long-needed reforms. My guess is that the Tea Party folk will make things worse, and would make things much worse if they had more power in government. I surely hope the next election will be more supportive of Obama and the Democrats who have done so much good work over the past two years!

Saturday, January 01, 2011

The Great Escape: Nine Jews Who Fled Hitler and Changed the World

I just finished reading The Great Escape: Nine Jews Who Fled Hitler and Changed the World by Kati Marton. The book is primarily profiles of these people:
  • Leo Szilard, who is credited with conceiving of the nuclear chain reaction in 1933, who was central to the creation of the first nuclear reactor, who is perhaps the person most responsible for convincing the U.S. government to develop the atomic bomb during World War II, and who was an important peace activist, including as a founding member of the Pugwash Movement (which was eventually awarded the Nobel Peace Prize).
  • Eugene Wigner, Nobel Prize winning physicist who also played an important role in the development of the atom bomb by the United States.
  • Edward Teller, who played a role in the development of the atomic bomb, a central role in the development of the hydrogen bomb, and a key part in promoting the development of missile defense systems. He was very influential in atomic energy and national security policy for many years, and was a co-founder and director of the Lawrence Livermore Laboratory.
  • John von Neumann, one of the greatest mathematicians who ever lived, who also contributed to physics, statistics, and economics (game theory), who played a key role in the invention of the modern digital computer and pioneered the development of programs for its use, and who played an important role in the development of the atom bomb. He was among the first faculty of the Institute for Advanced Study in Princeton, a member of the Atomic Energy Commission, and a consultant to numerous governmental and business organizations.
  • Alexander Korda, a leading figure in the British film industry who founded London Films. His best known films were The Private Life of Henry VIII, The Four Feathers, and The Third Man. Marton describes his film That Hamilton Woman as effective in its effort to encourage the American public to support entry into World War II by sugar-coating the message in a strong and popular film.
  • Michael Curtiz, the director of 173 films, who also produced and acted in a few films. One of Hollywood's greatest film makers who is perhaps best known for Casablanca, Yankee Doodle Dandy, and Mildred Pierce.
  • Arthur Koestler, a prolific author best known for Darkness at Noon, a novel credited with being one of the most influential anti-Soviet books ever written.
  • Robert Capa, perhaps the greatest combat photographer who ever lived, who covered the Spanish Civil War, the Second Sino-Japanese War, World War II across Europe, the 1948 Arab-Israeli War, and the First Indochina War (in which he was killed). He was a cofounder of Magnum Photos, the first cooperative agency for worldwide freelance photographers.
  • Andre Kertesz, a photographer known for his groundbreaking contributions to photographic composition and the photo essay.
In an epilogue, Marton adds three more recent Hungarian emigres: Imre Kertesz, Nobel laureate author; Andy Grove, a founder and president of Intel; and George Soros, financier and philanthropist.

Capa's iconic Falling Soldier from the Spanish Civil War

In 1867 Hungary was granted a degree of autonomy within the Austro-Hungarian Empire, and a law was passed emancipating the Jews. Budapest, the new capital, grew from three towns into a city of a million people, and Jews streamed in from other regions, making up a fifth of the city's population by 1900. An exceptional educational system existed and was open to Jewish children. However, in the aftermath of World War I, in which Hungary had been on the losing side, economic conditions worsened, a proto-fascist regime took power, and anti-Semitism became general. Some of the more cosmopolitan Jews had the good sense or good luck to emigrate west to Austria, Germany, France, and England. As the Nazis rose to power in the 1930s, some moved on to the United States.

A central point of the book is the success its protagonists had and the huge contributions they made to the countries that offered them refuge. Of course, these were geniuses, and their gifts were nurtured by great training. Marton points out that in facing discrimination and surviving conflict they learned survival skills that many were able to transfer to their professional lives.

The background to the success of the protagonists in this book is the holocaust, and more generally the economic disaster of the 1930s, the rise of Communism and Fascism, and the devastation caused by the wars between democratic and authoritarian governments. The long dark period in Hungary after its short period in the sun was especially hard on those who lived in that country.

My parents both immigrated to the United States, albeit from Ireland and England, so they did not have to learn a new language as adults. (Indeed, my parents, my wife, my son and I have each lived in three countries during our lives.) I can identify with the emotional impact both of being an outsider in the culture in which one lives and of missing the place where one grew up, along with one's family and friends. My parents, though not nearly as eminent as Marton's subjects, were among the lucky ones, drawn more by the attraction of the place they moved to than driven by threats in the place they left. Marton's characters range from a happy immigrant to a dissatisfied, all-but-stateless refugee.

I grew up in Los Angeles, and I had some experience of the brilliance of the Jewish community in that city and of the brilliance of the European refugee community after World War II. As a child I knew people who knew and associated with Igor Stravinsky, Arnold Schoenberg, and Thomas Mann. The films by and starring European immigrants were an important part of my life. Still, Marton's book was a useful reminder of how much I personally owe these people.

I have long been aware of the contributions of Hungarian refugees, but the book made me wonder how many other nations have sent comparable numbers of their most brilliant people into exile. How much did Hungary and Europe lose by driving away their most productive citizens while killing millions of others?

The book is organized by epoch: pre-World War I, the interwar period, World War II, and the postwar period. In each period it follows the events in the lives of each of its characters. This works in part because these are such well-known figures. The organization brings out the driving force of world events on the lives of these Hungarian refugees.

Marton writes well, and this short book is an easy read, well worth the effort.

André Kertész, “Elizabeth and I, Paris”, 1931