I have previously posted elsewhere about how similar the failures in indigenous policy and development (particularly foreign aid) policy have been. Remarkably similar, indeed. They also show some distinct similarities to the more unfortunate effects of welfare provision. (By ‘welfare provision’ I do not mean the aged pension or health or education services; I am talking about income and other transfers to working-age-but-not-working people.)
That indigenous policy has a remarkably consistent dismal story of failure is its most striking feature. Whether it is Australian aborigines, Amerindians
or Lapps, the record of broken communities mired in violence and sexual, drug, alcohol and child abuse, entrenched poverty and lower life expectancies is dismally similar. [Though comparison between Australian aborigines and Amerindians is complicated by the much greater range of the latter in their pre-conquest ways of life--from foragers through horse-peoples to agrarians--and that some Amerindian communities have parlayed treaty rights into profitable assets, such as casinos, which gets them "out from under" the more problematic features of indigenous policy.] No matter how much money is thrown at the issue, the problems never seem to get much better [within communities primarily dependent on indigenous policy]. Indeed, in Australia, outback Aboriginal communities were often more dysfunctional by the time of the Commonwealth Intervention than they were in the 1960s and early 1970s. The difficulties for indigenous peoples tend to be notably worse for former foragers (i.e. hunter-gatherer cultures) than farming peoples, but the latter are not exactly shining beacons of success.
Development policy–specifically foreign aid–has much the same record of failure as indigenous policy for farming peoples. Not the level of disaster that indigenous policy has been for former foragers, but still remarkably little return for huge expenditures (in total sums over time).
Welfare recipients also show patterns of entrenched problems–broken homes, violence and crime, entrenched poverty. There is a cause-or-effect problem here, in that social dysfunction makes one more likely to qualify for welfare assistance. Still, being on welfare does not seem to do much to improve such patterns and there is some evidence it makes them worse. Though, in all this, one has to be careful not to mischaracterise behaviour which may turn out to be rational responses to the constraints of poverty (pdf) that have little to do with the above forms of public policy.
Finally, the worst of all worlds may be one where external authorities impose rules but can achieve only weak monitoring and sanctioning, so that cooperation is enforced without any internal norms developing. In a world of no external rules or monitoring, norms can evolve to support cooperation. But, in an in-between case, the mild degree of external monitoring discourages the formation of social norms, while also making it attractive for some players to deceive and defect and take the relatively low risk of being caught (Pp 147-8; Pp 12-3 in the pdf).
Clearly, welfarism, indigenous policy and foreign aid involve “weak monitoring and sanctioning” and a “mild degree of external monitoring”. But the epiphany went deeper than that.
Ostrom starts her paper by putting the problem of collective action in context. The problem being that, on the assumption that folk are rational egoists, then there is a massive free riding problem for collective action which should make cooperative action effectively impossible. Yet, both in the wider world and in experimental economics, we observe a wide range of cooperative behaviour.
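The free-riding logic can be made concrete with a toy linear public goods game. Everything below–the payoff function, the endowment of 20, the multiplier of 1.6 across 4 players–is my illustrative assumption, not a model from Ostrom's paper; the point is only that each player's marginal return on a contribution (1.6/4 = 0.4) is below one, so a rational egoist contributes nothing even though universal contribution leaves everyone better off.

```python
def payoff(own_contribution, total_contribution,
           endowment=20, multiplier=1.6, n_players=4):
    """Payoff = what you kept + your equal share of the multiplied pot."""
    return (endowment - own_contribution) + multiplier * total_contribution / n_players

# If the other three contribute fully, free riding still pays more:
all_cooperate = payoff(20, 80)   # 0 kept + 1.6*80/4 = 32.0
free_ride     = payoff(0, 60)    # 20 kept + 1.6*60/4 = 44.0

# Yet if everyone reasons this way, all are left with the bare endowment:
all_defect = payoff(0, 0)        # 20.0 < 32.0
print(all_cooperate, free_ride, all_defect)
```

Defection is the dominant strategy for an egoist whatever the others do, which is why observed cooperation in such games is the puzzle Ostrom sets out to explain.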
From the experimental and empirical evidence, Ostrom distills three basic types of human agents–rational egoists, conditional cooperators (presume cooperation and then respond to others' actions) and willing punishers (keen on sanctioning those who free ride). This is a similar framing to that of the founder of cliodynamics, Peter Turchin, who divides people into knaves (always self-interested), saints (always cooperate) and moralists (moral with punishment). Turchin characterises his moralists as conditional cooperators, but he draws out folk at the high end of the cooperation spectrum to form his saints category and puts those more willing to withdraw if cooperation is not forthcoming in with Ostrom's willing punishers: these are different ways of cutting up the same underlying patterns, they are inferring from the same general body of evidence. (I discuss Turchin's book War and Peace and War here; he also has a useful website.)
We homo sapiens did the key part of our evolving in small foraging bands. So, results from game theory experiments that:
Only the trustworthy type would survive in an evolutionary process with complete information. … Where a player’s type is common knowledge, rational egoists would not survive. Full and accurate information about all players’ types, however, is a very strong assumption and unlikely to be met in most real world settings (p.145; p.10 of the pdf)
suggest that there might have been strong evolutionary pressure for cooperators (saints and moralisers) and against rational egoists (knaves). More worrying for more complex social settings is that:
If there is no information about player types for a relatively large population, preferences will evolve so that only rational egoists survive (p.145; p.10 of the pdf).
This implies that, in a game where people know only their own payoffs and not the payoffs of others, they are more likely to behave like rational egoists. McCabe and Smith (1999) show that players tend to evolve towards the predicted, subgame perfect outcomes in experiments where they only have private information of their own payoffs and to cooperative outcomes when they have information about payoffs and the moves made by other players (n8, p.145; p.10 of the pdf).
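The effect of information about player types can be sketched with replicator-style dynamics in a toy trust game. All of the details here–the payoffs (an investment of 10 tripled in transit, with trustworthy trustees returning half), the signal mechanism, the function name–are my illustrative assumptions, not Ostrom's model. An investor invests only when a noisy signal says the trustee is trustworthy, so the signal's accuracy determines which trustee type earns more, and hence which type spreads:

```python
def evolve(signal_accuracy, generations=60, init_trustworthy=0.5):
    """Replicator-style dynamics for a toy trust game.

    Investors send 10 units (tripled in transit) only when a noisy signal
    says the trustee is trustworthy. A trustworthy trustee returns half of
    the 30 (keeping 15); an egoist keeps all 30. Each generation, a type's
    population share grows in proportion to its average payoff."""
    share = init_trustworthy
    for _ in range(generations):
        f_trust = 15 * signal_accuracy          # invested in when signal is right
        f_egoist = 30 * (1 - signal_accuracy)   # invested in when signal is wrong
        mean = share * f_trust + (1 - share) * f_egoist
        share = share * f_trust / mean
    return share

print(round(evolve(0.5), 3))   # random signal: egoists take over -> 0.0
print(round(evolve(0.8), 3))   # informative signal: trustworthy dominate -> 1.0
```

In this particular parameterisation the break-even signal accuracy is 2/3 rather than 1/2, so the numbers should not be read literally; the qualitative pattern–no usable information favours rational egoists, a sufficiently informative signal lets trustworthy types thrive–is what corresponds to the quoted results.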
Different social settings affect which norms evolve and some may favour rational egoism. While most people start off as cooperators (saints and moralisers):
… preferences based on these norms can be altered by bad experiences. … In this setting, the norms supporting cooperation and reciprocity were diminished, but not eliminated, by experience (p.146; p.11 of pdf).
Somewhat more reassuringly:
… if there is a noisy signal about a player’s type that is at least more accurate than random, trustworthy types will survive as a substantial proportion of the population. Noisy signals may result from seeing one another, face-to-face communication, and various mechanisms that humans have designed to monitor each other’s behaviour (p.145; p.10 of the pdf).
So talking is good.
The big, take-away point here is that social settings affect which norms evolve and public policy can create or influence social settings.
If we consider the evolution of norms, it is not surprising that people from foraging cultures do worse than people from farming cultures in modern industrial (and post-industrial) society. Foragers do not develop norms encouraging long time-horizons since there is little time-lapse between acquisition of food and its consumption. Farmers have to develop such norms, or they starve, since they live off delayed consumption. This is particularly so for farmers in colder climates (like, say, Northern Europe, Northern China and Japan) where surviving winter takes extra preparation.
In societies where asset accumulation is crucial (starting with one’s own human capital), a failure to develop longer time-horizons, and the associated and reinforcing norms, is going to be a bit of a problem. A recipe for social failure, indeed.
Nice middle class folk from farming-cum-industrial-cum-post-industrial societies may not “get” what a big deal this is, as they have so internalised longer time-horizons–and their supporting norms–that they do not “see” them, and so do not seriously consider their possible lack and the implications thereof. Consider yours, mine and ours. In a foraging culture, food storage is not much of an option. All food is fresh food and has to be consumed pretty rapidly. So sharing is simply what you do (within your not-very-big band). Particularly with “big” items such as meat. There is an awful lot of ours, not so much yours and mine.
This does not work with farming. You have to engage in an annual cycle of effort for considerably delayed payoffs where food is stored to get you through the year and production of sufficient seed grain is crucial. If one is expected to share everything, the free rider problems become enormous. A farming community based on continuation of forager sharing typically collapses in starvation. (This experiment has been run repeatedly; notably in the very early North American colonies that attempted Christian sharing of all food: this was a complete disaster until the new governor re-imposed the evolved farming norm of your land, your effort, your food. It also makes me wonder about the very high violence level of non-state farming communities, which might represent difficult transitions from foraging to farming norms.)
Forager sharing also does not work with, say, housing. Asset-use massively disconnected from asset-care is not good for asset preservation. And housing paid for by others does not encourage the development of asset-care norms. Are the notorious problems of indigenous public housing making more sense now?
But it gets worse. How do prospects for “human capital formation” (i.e. doing well at school) look if everything has to be shared, so quiet space to do homework rarely or never happens? Or anything which involves delayed pay-offs in cultures which have never had much cause to evolve such norms? Now add in free, no-effort-required, “sit down money” to the free, no-effort-required, housing. Not only is the evolution of norms appropriate to an industrial/post-industrial society frustrated, it is worse than that. The original pay-off-for-effort norms of foraging society are undermined too.
What would one predict from that? Massive social dysfunction, tending to get worse as time marches on. A tendency for norms to (probably rapidly) evolve to rational egoism with short time-horizons, since income and housing come without cooperation or effort.
What we have is massive norm failure. Public policy creating social settings where the evolution of norms appropriate to modern society is blocked and existing effort-and-cooperation norms are undermined or directed to profoundly parasitical behaviour.
Even more basically, prosperous lives in functioning communities with stable families and good life expectancies in decent polities are not the result of material things. They are results of patterns of behaviour, and the underlying cognitive framings, that produce the behaviours which create those things. Said behaviours, and supporting norms, do not just “happen” if income-and-material-things are provided. Dropping the physical consequences of such patterns of behaviour on people does not induce the patterns of behaviour. Worse, since provision of such things creates incentives, the resulting incentives can actively militate against beneficial norms and behaviours; they can actively undermine their development. This has been expressed as Reynolds’ Law:
Subsidizing the markers of status doesn’t produce the character traits that result in that status; it undermines them
The government decides to try to increase the middle class by subsidizing things that middle class people have: If middle-class people go to college and own homes, then surely if more people go to college and own homes, we’ll have more middle-class people. But homeownership and college aren’t causes of middle-class status, they’re markers for possessing the kinds of traits — self-discipline, the ability to defer gratification, etc. — that let you enter, and stay, in the middle class. Subsidizing the markers doesn’t produce the traits; if anything, it undermines them.
Referring to the matter as a problem of “character traits” may be great for conservative, middle class righteousness but it lacks social science support. Not to mention any sense of historical change or cultural evolution. Looking at the issue as one of which norms are likely to evolve, or not, and why, is much more productive and does have good social science behind it.
Ironically, talking in terms of “character traits” rather than norms actually tends to underestimate the damage that can be done by badly structured public policy. If it is a matter of character traits, then, at worst, poor public policy is subsidising poor traits and weakening the return from good ones. If, however, it is a matter of norms, then the potential exists for public policy to both block things getting better and make things worse through its effect on cognitive framings and the behaviour that flows therefrom. Hence some calls from African economists to stop foreign aid to Africa.
Welfare, indigenous and foreign aid policies are rather bedevilled by the politics of righteousness, largely due to being high-signalling-but-devalued-consequence policy areas. That is, in these areas good intentions have high salience but the policies are typically directed towards people with low participation in public debate. (Public policy generally has a problem with signalling–which is clear and simple–having greater public debate salience than consequences–which are often indirect and complex–but the imbalance is particularly intense in these issues; as it is in public policy areas affecting groups with diminished or no moral standing.) This makes welfare, indigenous and foreign aid excellent policies for signalling righteousness (or moral vanity, or conspicuous compassion) since the intentions of the policies are very public but their effects much less so. This creates an attractive simplicity: if you are in favour of (preferably) more spending on welfare, indigenous policy and foreign aid, you are a good (that is, righteous) person. If you criticise such policies as ineffectual or counterproductive you are a bad (that is, unrighteous) person. After all, you are threatening to take the conspicuously compassionate’s signalling toys away from them.
Or else, if you are playing to a different righteousness game, it is just subsidising laziness and other “bad character traits” and, as the late Jesse Helms famously dismissed foreign aid, pouring money “down a rat hole“.
But if we proceed on the basis that people actually matter, so the actual consequences matter, then we can ignore the squeals of conspicuous outrage (or other posturing) and move on.
A further complication is that welfare, indigenous policy and aid bureaucracies have little incentive to explore options that make them redundant. Worse, politicians can have an incentive to keep people welfare-dependent or otherwise limited in opportunities if improved circumstances would increase the likelihood they would vote for their opponents. Or, in the case of indigenous political entrepreneurs, lessen their appeal. Part of the so-called Curley effect (pdf), using taxes and redistributive policies to shape the electorate. The ALP and other centre-left political parties, for example, have an incentive to expand public housing that concentrates their voters, and put them in marginal seats, as a way of increasing their chances in such electorates.
Foreign aid is in an even worse situation than welfare or indigenous policy, given that the recipients are not even potential voters in the donating polities. So, one would expect fairly low (share of total) expenditures with little effective benefit to the alleged recipients, since the main political benefit domestically is the signalling one gets from the intentions, which actively militates against paying serious attention to any negative consequences. Any policy area where there is active pressure against facing failure is going to tend to produce a lot of it.
In other words, these are areas where there are sadly good grounds to have low expectations about policy effectiveness (defined as good for the recipients).
Including not even asking the right questions.
Evolved norms or imposed constraints
Cooperative behaviour (both active–doing things together–and passive–not blocking others) is crucial to achieving positive social outcomes. A society of rational egoists would be poor and nasty. (Positively Hobbesian indeed.)
In the paper on collective action and the evolution of social norms (pdf) first cited above, Elinor Ostrom pointed out that imposing outside rules turns out to be somewhat fraught in encouraging cooperation:
… experimental (as well as field) evidence has accumulated that externally imposed rules tend to “crowd out” endogenous cooperative behaviour … To the surprise of experimenters, a higher level of cooperation occurred in the control groups [that had not experienced imposed rules], especially for those who communicated on a face-to-face basis. The greater cooperation that had occurred due to the exogenously created incentive-compatible mechanism appeared to be transient. As the authors put it … the removal of the external mechanism “seemed to undermine subsequent cooperation and leave the control group worse off than in the control group who had played a regular … prisoner’s dilemma.”
Several other recent experimental studies have confirmed the notion that external rules and monitoring can crowd out cooperative behaviour. These studies typically find that a social norm, especially in a setting where there is communication between the parties, can work as well or nearly as well at generating cooperative behaviour as an externally imposed set of rules and system of monitoring and sanctioning. Moreover, norms seem to have a certain staying power in encouraging a growth of the desire for cooperative behaviour over time, while cooperation enforced by externally imposed rules can disappear very quickly (p.147; p.12 of the pdf).
Public policy, it’s difficult. Particularly when it is effectively attempting to shift people in a single generation through processes of social evolution that took millennia elsewhere. Europeans and East Asians are the fortunate heirs of said millennia of evolution, so are not in a great position to sneer at those who are not.
Even in the case of industrialisation in farming society, we are still talking about taking a generation to traverse what the originators took a couple of centuries to work through. Rather more centuries if we mean developing rule of law and responsible government. A point, btw, which also applies to migrants. So policies which undermine the development of productive norms by migrants and their children are particularly problematic, for both them and the host society.
Regarding indigenous policies, one of the more tragic stories I have been told is of a group of indigenous kids from Arnhem Land who were taken to Singapore; apparently the take-away point the organisers were after was that you did not have to be white to be successful. The kids came back to their broken communities and got straight into the glue-sniffing etc, because they were not white and so had no excuse. Realising how very different things can look to the person standing next to you is one of the hardest things in life, let alone public policy. Lots of folk pontificate about cultural difference; rather fewer seriously consider their implications.
Morality, custom and law
A point Ostrom makes in the above paper and elsewhere, supported by lots of field research (particularly in irrigation systems, where locally generated rules tend to manage water resources much more sustainably than those with externally imposed rules), is that the government advantage in rule provision and enforcement is much more limited than is generally realised. Particularly for common resources used by longstanding inter-actors. Worse, attempts to manage common resources–such as fisheries–have been sabotaged by central governments refusing to recognise the local rules and property rights that have evolved to manage the common resource. This has been particularly true of forests, local streams, grazing areas and inshore fisheries in the developing world, often out of a misplaced environmental concern and an inability to recognise the difference between open-access and common-property regimes. Ironically:
When resources that were previously controlled by local participants have been nationalized, state control has usually proved to be less effective and efficient than control by those directly affected, if not disastrous in its consequences (…). The harmful effects of nationalizing forests that had earlier been governed by local user groups have been well documented for Thailand (…), Niger (…), Nepal (…) and India (…). Similar results have occurred in regard to inshore fisheries taken over by the state or national agencies from local control by the inshore fishermen themselves (…).
In looking at norms, how they work, how they arise, what factors affect their evolution, we are in the area where morality shades into custom shades into law. Indeed, in much of the medieval period–particularly early in the medieval period–law basically meant the custom of the area (or “what we remember doing last time this came up”). Nor were laws simply territorial; if a matter came to trial, a traveller might well be asked which set of laws applied to them. We are so used to thinking of law as something that emanates from a central authority, we no longer recognise it when it emerges out of custom, such as local use regimes. (Alternatively, states are determined to defend their monopoly privileges.) Though, in the mid-C19th, the California Supreme Court had the sense to legally recognise what the gold miners themselves had worked out. Native title is also legal recognition of existing custom.
Both of which were the common law returning to its roots. Henry II's attempt to provide royal judges for a kingdom that had Anglo-Saxon law, Danelaw and Norman law, and regional variations thereof, is what led to the development of the common law. Send us all your local laws was the royal request, and his chancery distilled the common bits–hence the common law. Henry’s royal justice was competing with both the ecclesiastical courts enforcing canon law (which led to some Becket unpleasantness) and local manorial and baronial courts. His travelling judges were such a success that the only part of Magna Carta which called for more royal government was the clause insisting on more frequent visits by royal judges.
To call Henry II a law-giver is to underestimate his achievement. He oversaw the creation of a system for generating law; for taking custom, precedent and current experience and producing law. Appropriately, for centuries, the end of his reign was the beginning of time immemorial. About one-third of humanity now lives under full or part common law systems; a system which began operating in a knightly society now copes fine with the space age.
The barons were not so keen on the royal competition with their own courts, but they became quite keen on being able to sue other folk; provided, of course, that no royal judge could do anything to them unless they were found guilty by their peers. Possibly the most famous of the trade-offs in Magna Carta. (The list of trials of peers before the House of Lords makes for racy reading. A highlight being Earl Ferrers who pleaded his own defence on grounds of insanity; a paradoxical approach–he conducted his defence with sufficient ability as to fatally undermine it–that did not save him from the gallows for the murder of his steward. Allegedly, he was hanged with a silken rope, out of deference for his rank.)
That, in English law, only the holder of the title was noble–so all the rest of their family were legally commoners–affected both the laws and norms of England, since it gave the peerage a strong vested interest in how the law treated commoners. Not the case in, for example, France, where all members of a noble house were nobles and so lacked incentives to attend to the legal (or other) treatment of commoners. They did, however, have strong incentives to insist on their status and privileges, these being increasingly unanchored in genuine responsibilities. So a significant number of the peers of France ended up guillotined while the House of Lords is still with us. (It is even still inspiring emulators; it is a pity that those responsible for post-invasion Iraq and Afghanistan did not consider something similar to the Somaliland House of Elders, thereby anchoring their new governments in existing social structures.)
Norms matter; so the incentives that affect their evolution matter. Aid, indigenous and other welfare transfers typically do not generate productive norms. Worse, unearned transfers encourage rational egoism with short-time horizons since they provide payoffs without cooperation or effort. The failures of indigenous policy, of foreign aid, are typically norm failures. They will continue to be failures until they are structured to encourage the evolution of norms that encourage healthy and productive lives, families and communities. Successful lives, families, communities and economies are built on patterns of behaviour and supporting norms; so failure to attend to what does, or does not, encourage the evolution of such norms simply leads to failure. Including making things worse.
[Cross-posted at Skepticlawyer.]
 Speaking from personal experience, a narcissist–a rational egoist with added self-delusion–can be very much more revealing in email; presumably because they are only channelling themselves, they are not getting any audience feedback, so their-convenience-as-reality-principle gets freer rein. Companies may also want to consider whether private payment structures are such a good idea.