Saturday, May 24, 2014

Gerald M. Edelman, Nobel Prize-Winning Scientist, Dies at 84


Edelman in 2009

I was sad to read this - and I feel fortunate to have seen Dr. Edelman speak in a keynote address at the 2013 Evolution of Psychotherapy Conference in December. I have been very influenced by his theory of consciousness and his tendency toward an interdisciplinary approach to science.

Edelman was awarded the 1972 Nobel Prize in Physiology or Medicine (shared with Rodney Robert Porter) for work on the immune system - in which he mapped a key immunological structure — the antibody — that had previously been uncharted.

He later showed that the way the components of the immune system evolve over the life of the individual is analogous to the way the components of the brain evolve in a lifetime. He is also credited with discovery of the neural cell adhesion molecule (NCAM), which allows nerve cells to bind to one another (a kind of "cellular glue") and form the circuits of the nervous system.

His many books for a popular audience include The Remembered Present: A Biological Theory of Consciousness (1990), Bright Air, Brilliant Fire: On the Matter of the Mind (1992), A Universe of Consciousness: How Matter Becomes Imagination (2001, with Giulio Tononi), Wider Than the Sky: The Phenomenal Gift of Consciousness (2004), and Second Nature: Brain Science and Human Knowledge (2007). Dr. Edelman was the founder and director of The Neurosciences Institute and was on the scientific board of the World Knowledge Dialogue project.

Here is an obituary from The Washington Post:

Gerald M. Edelman, Nobel Prize-winning scientist, dies at 84

By Emily Langer, Published: May 22

Gerald M. Edelman, a Nobel Prize-winning scientist who was credited with unlocking mysteries of the immune and nervous systems and later ventured into ambitious studies of the human mind, died May 17 at his home in La Jolla, Calif. He was 84.

His son David Edelman confirmed the death and said his father had Parkinson’s disease.



(AP) - Dr. Edelman, shown here in 1972 at his laboratory at Rockefeller University, shared the Nobel Prize in physiology or medicine for his discoveries related to the chemical structure of antibodies.

Once an aspiring violinist, Dr. Edelman ultimately pursued a scientific career that spanned decades and defied categorization. His Nobel Prize in physiology or medicine, which he shared in 1972 with the British scientist Rodney R. Porter, recognized his discoveries related to the chemical structure of antibodies.

Antibodies are agents used by the immune system to attack bacteria, viruses and other intruders in the body. But Dr. Edelman did not consider himself an immunologist.

He later embraced neuroscience, and particularly the study of how the nervous system is constructed beginning in the embryonic stage.

He was credited with leading the seminal discovery of a sort of cellular glue, called the neural cell adhesion molecule (NCAM), which allows nerve cells to bind to one another and form the circuits of the nervous system. But he concluded that such biochemical discoveries, however significant, could not fully elucidate the workings of the brain.

Dr. Edelman was associated for many years with Rockefeller University in New York City, where he directed the Neurosciences Institute that today is located in La Jolla. He delved into questions on the vanguard of neuroscience, including the study of human consciousness, and developed a theory of brain function called neural Darwinism.

Some scientists regarded his later work as unverifiable or muddled. The late Francis H.C. Crick, co-discoverer of the double-helix structure of DNA, was said to have dismissed neural Darwinism as “neural Edelmanism.” Others admired Dr. Edelman for daring to broach one of the most vexing questions in science.

He “straddled . . . frontier fields in biology and biomedical science in the last century,” said Anthony-Samuel LaMantia, the director of the Washington-based George Washington Institute for Neuroscience, describing Dr. Edelman as “one of the major intellects in science.”

In his earliest noted work, Dr. Edelman essentially mapped a key immunological structure — the antibody — that had previously been uncharted. “Never before has a molecule approaching this complexity been deciphered,” the New York Times reported in 1969, when the extent of Dr. Edelman’s findings was announced.

Dr. Edelman also was credited with recasting scientific understanding of how antibodies operate. When an antigen, or foreign agent, enters the body, a healthy immune system produces antibodies to attack it. Before Dr. Edelman’s studies, many scientists accepted the notion that antibodies altered their characteristics in order to match the features of the antigen.

Dr. Edelman’s research, which relied on the laboratory examination of quickly reproducing cancer cells, revealed another system in place, said Patricia Maness, a former colleague of Dr. Edelman’s who today is a professor of biochemistry at the University of North Carolina at Chapel Hill.

Research led by Dr. Edelman showed that the body has a “repertoire” of antibody-producing cells, she said. When a foreign agent enters the body, the immune system recognizes the intruder and begins producing in quantity the antibody best equipped for the battle. It was a process of selection, Maness explained, not adaptation or instruction, as had previously been thought.

“There is an enormous storehouse of lymphoid cells which are locks but don’t know they are locks — like a character in a Pirandello play — until the key finds them,” Dr. Edelman once told an interviewer.

The announcement of the Nobel Prize credited Dr. Edelman and his co-recipient with having made “a break-through that immediately incited a fervent research activity the whole world over, in all fields of immunological science, yielding results of practical value for clinical diagnostics and therapy.”

Dr. Edelman led a similarly groundbreaking discovery in neuroscience. Before his work, scientists did not know with certainty how nerve cells combine to form the nervous system. Through his work, Dr. Edelman showed that nerve cells do not affix themselves to one another like Velcro, LaMantia explained.

Rather, two nerve cells connect when surface molecules — NCAMs — recognize each other, setting off a chemical reaction that links the cells and in time forms a system.

Later in his career, Dr. Edelman went beyond chemistry to develop his theory of brain function. He postulated that the brain is not like a computer, hard-wired for certain capacities, but rather is sculpted over time through experiences that strengthen neuronal connections.

Some scientists who seemingly should have been able to understand the theory said simply that they did not. Others regarded Dr. Edelman as a pioneer. Oliver Sacks, the noted neurologist and writer, credited him with having offered “the first truly global theory of mind and consciousness, the first biological theory of individuality and autonomy.”

Gerald Maurice Edelman was born July 1, 1929, in Queens. His father was a general physician in the era when doctors made house calls.

After pursuing classical music training, Dr. Edelman shifted to the sciences, receiving a bachelor’s degree in chemistry in 1950 from Ursinus College in Collegeville, Pa., and a medical degree in 1954 from the University of Pennsylvania.

Following service as an Army doctor in France — it was his F. Scott Fitz-Edelman period, he told the New Yorker magazine — he received a PhD in chemistry in 1960.

In addition to his other appointments, Dr. Edelman was a professor at the Scripps Research Institute in La Jolla. He wrote prolifically for academic audiences and general readers. His volumes included “Neural Darwinism: The Theory of Neuronal Group Selection” (1987), “Bright Air, Brilliant Fire: On the Matter of the Mind”(1992) and “Wider Than the Sky: The Phenomenal Gift of Consciousness” (2004).

Survivors include his wife of 64 years, Maxine Morrison Edelman of La Jolla; and three children, Eric Edelman of New York City, David Edelman of Bennington, Vt., and Judith Edelman of La Jolla.

“I know that people have tried to reduce human beings to machines,” Dr. Edelman once told the Times, seeking to explain the limits he saw in some prevailing notions of science, “but then they are not left with much that we consider truly human, are they?”

An Asymmetric Inhibition Model of Hemispheric Differences in Emotional Processing


Over the last few decades, there has been considerable research into the function of the prefrontal cortex, especially as it relates to emotional processing and affect regulation. Dr. Dan Siegel's work has emphasized this brain region in both attachment patterns and mindfulness practice, which led to his Mindsight approach to healing developmental traumas and attachment failures through a combination of mindfulness and self-compassion practices.

[For more on Mindsight, see Dr. Siegel's book, Mindsight: The New Science of Personal Transformation (2010)]

In this new "perspectives" article from Frontiers in Psychology: Cognition, Grimshaw and Carmel examine hemispheric differences in emotional processing by the prefrontal cortex. They propose an asymmetric model of activation and inhibition:
The asymmetric inhibition model proposes that right-lateralized executive control inhibits processing of positive or approach-related distractors, and left-lateralized control inhibits negative or withdrawal-related distractors. These complementary processes allow us to maintain and achieve current goals in the face of emotional distraction.
This is an interesting approach that may help us better understand how modalities such as EMDR (eye-movement desensitization and reprocessing), a form of bilateral stimulation, can reduce the emotional impact of trauma. This may also lead to ways to refine EMDR protocols to make them more effective for more people.

[NOTE: The image at the top of this post is from another, somewhat related Frontiers article, Hierarchical brain networks active in approach and avoidance goal pursuit (Frontiers in Human Neuroscience; 7:284, 2013, Jun 17).]


Full Citation:
Grimshaw, G.M. and Carmel, D. (2014, May 23). An asymmetric inhibition model of hemispheric differences in emotional processing. Frontiers in Psychology: Cognition; 5:489. doi: 10.3389/fpsyg.2014.00489

An asymmetric inhibition model of hemispheric differences in emotional processing

Gina M. Grimshaw [1] and David Carmel [2]
1. School of Psychology, Victoria University of Wellington, Wellington, New Zealand
2. Psychology Department, University of Edinburgh, Edinburgh, UK

Abstract

Two relatively independent lines of research have addressed the role of the prefrontal cortex in emotional processing. The first examines hemispheric asymmetries in frontal function; the second focuses on prefrontal interactions between cognition and emotion. We briefly review each perspective and highlight inconsistencies between them. We go on to describe an alternative model that integrates approaches by focusing on hemispheric asymmetry in inhibitory executive control processes. The asymmetric inhibition model proposes that right-lateralized executive control inhibits processing of positive or approach-related distractors, and left-lateralized control inhibits negative or withdrawal-related distractors. These complementary processes allow us to maintain and achieve current goals in the face of emotional distraction. We conclude with a research agenda that uses the model to generate novel experiments that will advance our understanding of both hemispheric asymmetries and cognition-emotion interactions.


Hemispheric Asymmetries in Emotional Processing


Prefrontal cortex (PFC) plays a critical role in emotion, but we are just starting to understand how complex interactions within the PFC give rise to emotional experience. One productive line of research examines hemispheric differences in emotional processing, focusing primarily on electroencephalography (EEG) studies of individual differences in frontal asymmetry as indexed by alpha oscillations. Alpha power has long been assumed to be negatively correlated with cortical activity (Pfurtscheller et al., 1996; Klimesch, 1999; Coan and Allen, 2004); this has led to the convention of describing left and right frontal activity as the inverse of left and right frontal alpha power. Commonly, frontal asymmetry is measured as a trait (usually in the resting state) and is associated with a number of clinical, personality, and emotional factors, sometimes collectively called affective style (Davidson, 1992, 1998; Wheeler et al., 1993). Relatively low left (compared to right) frontal activity is associated with withdrawal-related traits including depression and anxiety (Thibodeau et al., 2006), shy temperament (Fox et al., 1995), dispositional negative affect (Tomarken and Davidson, 1994), and poor regulation of negative emotions (Jackson et al., 2003). In contrast, relatively low right (compared to left) frontal activity is associated with approach-related traits including dispositional positive affect (Tomarken and Davidson, 1994), trait anger (Harmon-Jones and Allen, 1998), sensation-seeking (Santesso et al., 2008), and high reward sensitivity (Harmon-Jones and Allen, 1997; Pizzagalli et al., 2005).
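The frontal asymmetry trait described above is conventionally scored as the difference of log alpha powers at homologous left and right frontal electrodes. A minimal sketch of such a computation, assuming plain NumPy/SciPy; the sampling rate, band limits, and the synthetic two-channel demo are illustrative, not from the article:

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(signal, fs, band=(8.0, 13.0)):
    """Estimate alpha-band power via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Rectangular integration over the alpha band
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

def frontal_alpha_asymmetry(left, right, fs):
    """ln(right alpha power) - ln(left alpha power).

    Under the inverse alpha/activity assumption, a positive score
    reflects relatively greater LEFT frontal activity (less left alpha).
    """
    return np.log(alpha_band_power(right, fs)) - np.log(alpha_band_power(left, fs))

# Synthetic demo: a stronger 10 Hz rhythm on the left channel implies
# relatively lower left activity, hence a negative asymmetry score.
fs = 256
t = np.arange(0, 10, 1 / fs)
left_f3 = 2.0 * np.sin(2 * np.pi * 10 * t)   # strong left alpha
right_f4 = 0.5 * np.sin(2 * np.pi * 10 * t)  # weak right alpha
score = frontal_alpha_asymmetry(left_f3, right_f4, fs)
print(score < 0)  # stronger left alpha -> negative score
```

The log transform makes the score a symmetric ratio measure, so it does not depend on overall signal amplitude, only on the relative alpha power of the two channels.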

Frontal asymmetry does not, in general, correlate with current mood state, but with vulnerability or propensity to experience a particular state. For example, relatively low left frontal activity is observed in remitted depression (Henriques and Davidson, 1990; Gotlib et al., 1998), in the infants of depressed mothers (Field and Diego, 2008), and in those with genetic or familial risk of the disorder (Bismark et al., 2010; Feng et al., 2012). It also predicts future depression in healthy individuals (Nusslock et al., 2011). The predictive strength of frontal asymmetry led Davidson (1992, 1998) to propose that it reflects a diathesis – a characteristic way of processing emotional information which, when combined with sufficient stress, leads to disorder.

Two models have tried to capture the fundamental difference between hemispheres. The valence hypothesis (Tomarken et al., 1992; Heller, 1993; Heller et al., 1998; Berntson et al., 2011) grounds emotional asymmetry in affect, and associates left frontal cortex with positive emotion and right frontal cortex with negative emotion. The alternative motivational direction hypothesis (Harmon-Jones and Allen, 1997; Sutton and Davidson, 1997; Harmon-Jones, 2003) grounds emotional asymmetry in action, and associates left frontal areas with motivation to approach, and right frontal areas with motivation to withdraw. These models have sparked decades of research and produced a catalog of traits, behaviors, and biomarkers that are correlated with different patterns of asymmetry (for reviews, see Coan and Allen, 2004; Harmon-Jones et al., 2010; Rutherford and Lindell, 2011).

We see two limitations with both models. The first is that they are premised on the assumption that there is a fundamental frontal asymmetry that should explain all findings. Given the diverse functions of prefrontal cortex and the complex nature of emotional processing, that assumption seems unlikely to hold (see also Miller et al., 2013). It is useful here to consider a potential analogy with language asymmetries, which exist at the levels of phonology, syntax, semantics, and prosody, each subserved by separate neural systems. Although there are overarching principles of hemispheric organization for language, the asymmetries themselves are at least partially dissociable. A second limitation is that both models are largely descriptive. Neither specifies the mechanisms that are lateralized, or explains how they give rise to either emotion or motivation. We again see precedent established in language research, where progress was made when researchers focused on hemispheric asymmetries in the component processes of language instead of global language function. In this perspective, we draw on emerging understanding of cognition-emotion interactions within prefrontal cortex to propose the asymmetric inhibition model, which focuses on asymmetries in executive control mechanisms that allow us to control our emotions so that we can meet current goals.


Cognition-Emotion Interactions in Prefrontal Cortex


The past decade has seen much progress in describing the complex interplay among brain networks that subserve emotion (for reviews, see Lindquist et al., 2012; Ochsner et al., 2012; Pessoa, 2013). To summarize, the generation of an emotional response begins with subcortical structures (including amygdala and ventral striatum) that are sensitive to the presence of behaviourally relevant stimuli. These structures modulate attention to the stimulus (Padmala et al., 2010; Pourtois et al., 2013), and activate a sequence of physiological responses that prepare us to approach or withdraw (Lang and Bradley, 2010). Orbito-frontal cortex (OFC) receives input from subcortical structures and sensory cortex, and computes emotional appraisal, tagging the stimulus as either punishment or reward in the context of one’s current needs (Rolls, 2004; Kringelbach, 2005). Anterior insula (AI) integrates this information with afferent projections from the body, giving rise to emotional awareness or feeling (Craig, 2009; Gu et al., 2013). Ventro-medial PFC (vmPFC) is closely associated with emotional experience and evaluation of emotional relevance for the self (Ochsner et al., 2004).

Lateral regions of PFC, together with anterior cingulate cortex (ACC), have traditionally been linked to cognitive functions, but contemporary models include these as core aspects of emotional processing (Gray et al., 2002; Ochsner and Gross, 2005; Pessoa, 2008, 2013; Dolcos et al., 2011). Ventro-lateral regions (vlPFC) support response selection and inhibition, and are part of the bottom–up ventral attention network that orients attention to behaviourally-relevant (including emotional) stimuli (Corbetta and Shulman, 2002; Viviani, 2013). Dorso-lateral regions (dlPFC) are involved in processes that provide top–down cognitive control, including working memory and the executive functions of updating, shifting, and inhibition (Kane and Engle, 2002; Miyake and Friedman, 2012). They are also part of the top–down dorsal attention network that directs attention in goal-relevant ways (Corbetta and Shulman, 2002; Vossel et al., 2014). Both dlPFC and vlPFC are active during forms of emotion regulation that are cognitively mediated, including cognitive reappraisal (Ochsner et al., 2012), and attentional control over emotional distraction (Bishop et al., 2004; Hester and Garavan, 2009). Sometimes dorsal and ventral regions act reciprocally, reflecting a trade-off between the ventral emotion system and the dorsal executive system (Dolcos and McCarthy, 2006; Dolcos et al., 2011; Iordan et al., 2013). However, the regions sometimes act in concert, as during cognitive reappraisal (Ochsner et al., 2012) and attentional control (e.g., Bishop et al., 2004). The exact pattern of interaction may depend on task demands and the ways in which emotional distractors compete with goal-relevant information for executive control (Pessoa, 2013). 
Generally, increased activation in dlPFC is associated with decreased activation in amygdala and ventral striatum (Beauregard et al., 2001; Davidson, 2002; Bishop et al., 2004; Ochsner et al., 2012), although these regions are not directly connected (Ray and Zald, 2012). Rather, dlPFC likely achieves its regulatory effects either via connections to vlPFC (Wager et al., 2008), or indirectly through control of attentional and semantic processes (Banich, 2009; Banich et al., 2009) that alter how emotional stimuli are perceived and interpreted (Ochsner et al., 2012; Vossel et al., 2014).

Hemispheric asymmetry does not figure prominently in current theories of prefrontal function in emotion. One reason might be methodological; most data come from fMRI studies that are rarely designed to assess asymmetry. When asymmetries are reported, they are often incidental to the experimental design and based on findings of significant activation in one hemisphere but not the other. However, to determine if the hemispheres differ from each other it is necessary to directly compare activation in homologous regions (Jansen et al., 2006). Such analyses are common in studies of language asymmetries (e.g., Jansen et al., 2006; Cai et al., 2013), but rare in studies of emotion. A second issue is that there are far more studies of negative than positive emotional processing, meaning that meta-analyses are dominated by negative studies (e.g., Phan et al., 2002; Wager et al., 2003; Ochsner et al., 2012) and individual studies rarely include both positive and negative stimuli. Unless both valences are represented, it is impossible to determine whether any observed hemispheric differences are related to valence or to emotional processes more generally.

Even given these caveats, there is little compelling evidence for asymmetries related to the generation of emotional experience. Amygdala activity is asymmetric; however, the asymmetry is related to stimulus properties, with the left more active for verbal and the right for visual representations (Costafreda et al., 2008; McMenamin and Marsolek, 2013). OFC is organized along a lateral gradient, with rewards represented in medial areas and punishers in lateral areas (Kringelbach, 2005), but again with no reliable hemispheric asymmetries related to either valence or motivational direction. Studies in which emotions are induced show bilateral activation of medial PFC regardless of valence (Phan et al., 2002; Wager et al., 2003). Multivoxel pattern analysis (e.g., Kassam et al., 2013; Kragel and LaBar, 2014), shows that there are distinct patterns of activity associated with positive and negative emotional experience, but these are broadly and bilaterally distributed across ventro-medial and orbito-frontal regions. There is, however, some evidence for asymmetries in the cognitive control of emotion associated with lateral PFC (Wager et al., 2003; Ochsner et al., 2012). We return to this below.


The Asymmetric Inhibition Model


The absence of consistent asymmetries in fMRI studies stands in contrast to robust findings of emotion-related asymmetries in EEG studies. How can we reconcile these findings? We start with an important observation – that EEG asymmetries are seen in alpha power. The assumption underlying all EEG asymmetry research is that alpha is inversely correlated with cortical activity. Therefore, asymmetric alpha levels are taken to reflect greater cortical activity in the hemisphere with lower alpha (Coan and Allen, 2004). This assumption is overly simplistic and does not reflect current knowledge of either the differentiation of prefrontal networks or the functional role of alpha oscillations. Few studies of EEG asymmetry use source localisation procedures, but those that have done so localize alpha asymmetries to dlPFC (Pizzagalli et al., 2005; Koslov et al., 2011). More generally, studies that measure simultaneous EEG and resting state fMRI find alpha to be inversely correlated with activity in the dorsal fronto-parietal network that coordinates activity between dlPFC and posterior parietal cortex (Laufs et al., 2003; Mantini et al., 2007) and plays an important role in the top–down executive control of attention (Corbetta and Shulman, 2002), primarily through modulations of sensory processing (for review, see Vossel et al., 2014). Functionally, alpha oscillations play a key role in attentional control and gating of perceptual awareness (Hanslmayr et al., 2011; Mazaheri et al., 2013).

The strong association between alpha and the fronto-parietal network leads us to propose that EEG asymmetries reflect the integrity of executive control mechanisms that inhibit interference from irrelevant emotional distractors. Executive control holds goal-relevant information in working memory in order to prioritize attention to relevant (over irrelevant) information (Desimone and Duncan, 1995; Kane and Engle, 2002; Lavie, 2005). Emotional stimuli are strong competitors for processing resources – this is adaptive, because they have such high behavioral relevance. But sometimes success depends on our ability to ignore the emotional stimulus and get on with the task at hand. With the Asymmetric Inhibition Model, we propose that mechanisms in left dlPFC inhibit negative distractors, and those in right dlPFC inhibit positive distractors. As we detail below, the model both accounts for much existing data and yields specific, testable predictions about how manipulations of executive control should affect hemispheric asymmetry.


Existing Evidence for the Model


Our goal here is not to systematically review all research on emotional asymmetry (see comprehensive reviews by Coan and Allen, 2004; Harmon-Jones et al., 2010; Rutherford and Lindell, 2011). Rather, we provide examples to demonstrate that many existing asymmetries can be interpreted in terms of executive control. In the clinical literature, for example, trait EEG asymmetries predict vulnerability to several emotional disorders that are also characterized by difficulties with executive control. Those that are associated with relatively low left frontal activity (such as depression and anxious arousal) entail difficulty in disengaging attention from negative information (Eysenck et al., 2007; Cisler and Koster, 2010; De Raedt and Koster, 2010; Gotlib and Joormann, 2010). Poor self-regulation and addiction, both associated with relatively low right frontal activity, entail difficulty in inhibiting positive distractions (Bechara, 2005; Garavan and Hester, 2007; Goldstein and Volkow, 2011).

In experimental contexts, the model predicts that EEG asymmetries should be correlated with ability to control emotional distractions. Although most EEG studies focus on personality traits or emotional responses, a few recent studies have tested relationships between trait asymmetry and attention. In all studies, emotional faces were used as cues, but the facial expressions themselves were task-irrelevant. In a spatial cueing task, people with low left frontal activity showed difficulty disengaging from angry (but not happy) faces (Miskovic and Schmidt, 2010). In our own lab (Grimshaw et al., under review) we found similar results using a dot-probe task, which can be used to indicate the capture of attention by an emotional stimulus. Participants with low left frontal activity had difficulty shifting attention away from angry (but not happy) faces, but those with high left frontal activity were unaffected by the faces. Pérez-Edgar et al. (2013) had participants perform the same dot-probe task after an emotional stressor. Those who responded to the stress by increasing left frontal activity showed no attentional biases in the dot-probe task, but those who failed to do so showed biases to angry (but not happy) faces. All these studies are consistent with the idea that left frontal activity, as measured in EEG, reflects the ability to recruit executive control processes that inhibit negative distractions when they are contrary to current goals.

Neuroimaging studies provide some evidence consistent with the model, if we are mindful of the caveats identified in Section ”Cognition-Emotion Interactions in Prefrontal Cortex”. We focus on studies in which the emotional stimulus or dimension is task-irrelevant and must be ignored (e.g., emotional Stroop, irrelevant emotional flankers). These tasks consistently produce greater activation for emotional than neutral distractors in dlPFC, and often in vlPFC. Compton et al. (2003) found increased activation in left dlPFC during presentation of negative words in an emotional Stroop task. Failure to recruit left dlPFC in the face of negative distraction has been associated with depression (Engels et al., 2010; Herrington et al., 2010), anxiety (Bishop et al., 2004), trait negative affect (Crocker et al., 2012) and schizotypy (Mohanty et al., 2005). Positive stimuli (including erotica, foods, and addiction-related cues) can also tax executive control processes (Pourtois et al., 2013). Control over positive distractions is commonly associated with activity in right vlPFC (Beauregard et al., 2001; Hester and Garavan, 2009; Meyer et al., 2011) and sometimes in right dlPFC (Beauregard et al., 2001).

Across these EEG and neuroimaging studies, there is stronger support for left lateralization in the inhibition of negative stimuli than right lateralization in the inhibition of positive stimuli, even in studies that used both positive and negative stimuli (e.g., Compton et al., 2003; Pérez-Edgar et al., 2013). This is problematic for our model, because support depends critically on the hemisphere by valence interaction. One possible explanation for this imbalance is that most studies of emotional distraction have used emotional faces or words as stimuli. Although these stimuli can be matched on subjective ratings of arousal, negative words and faces typically produce more behavioral interference than positive stimuli (Pratto and John, 1991; Horstmann et al., 2006), suggesting that they are more taxing for executive control systems. A better test of the model would use positive and negative stimuli such as pictures of scenes, which have equivalent potential to attract and hold attention (e.g., Schimmack, 2005; Vogt et al., 2008). Consistent with this speculation, the studies that associate inhibition of positive distraction with right lateral PFC all use emotional pictures as stimuli.

As correlational methods, EEG and fMRI cannot establish causal relationships between neural activity and function. However, brain stimulation methods, including transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) can directly alter neural function and so establish causality. In clinical research, activation of left dlPFC with both TMS and tDCS is effective in the treatment of depression (Kalu et al., 2012). Consistent with the asymmetric inhibition model, treatment appears not to alter mood directly, but to improve executive control so that patients are better able to control negative biases (Moser et al., 2002). Conversely, right-sided stimulation affects motivation to approach positive stimuli. For example, activation of right dlPFC leads to reductions in both craving (Boggio et al., 2008; Fregni et al., 2008) and risky decision-making (Fecteau et al., 2007).


An Agenda for Future Research


We are not the first to suggest that emotional asymmetries reflect inhibitory processes (see Terzian, 1964; Jackson et al., 2003; Davidson, 2004; Coan et al., 2006, for explicit statements about asymmetries in inhibitory or regulatory functions). We extend this tradition by specifying a neurologically and cognitively plausible mechanism through which hemispheric differences in emotional processing might emerge. The asymmetric inhibition model draws on our increasingly sophisticated understanding of prefrontal function. In doing so, it not only provides an explanation of many existing findings, but also suggests new experimental approaches that will move our conceptualization of emotional asymmetry beyond its current descriptive level.

The model argues for a shift in focus from the study of emotion per se toward the study of executive processes that are subserved by lateral PFC and the dorsal fronto-parietal network. Experiments should draw on the rich literature in cognitive psychology that has identified ways to target specific components of executive control. A simple but useful paradigm involves use of irrelevant distractors (e.g., Forster and Lavie, 2008). The “goal” is an emotionally neutral task, such as finding a target letter in a display that is flanked by irrelevant distractor images, which can be either emotional or neutral. One can then manipulate the availability of executive control in order to assess its role in inhibition. For example, increasing working memory load decreases the availability of executive control and its ability to inhibit irrelevant distractors (Lavie et al., 2004; Hester and Garavan, 2005; Carmel et al., 2012). Conversely, motivational manipulations enhance relevance of the goal and increase ability to inhibit distractors (Pessoa, 2009; Hu et al., 2013). These paradigms can be used in combination with fMRI and EEG recordings to determine whether positive and negative distractions are controlled by dissociable mechanisms, and whether those are differentially lateralized.
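The irrelevant-distractor paradigm with a working-memory load manipulation described above implies a fully crossed factorial design. A minimal sketch of a trial-list generator for such an experiment; the factor names, cell counts, and the helper `build_trial_list` are illustrative assumptions, not part of any published protocol:

```python
import itertools
import random

# Hypothetical factors: a neutral letter-search task under low or high
# working-memory load, flanked by an irrelevant distractor image whose
# valence varies across trials.
LOADS = ["low", "high"]
VALENCES = ["negative", "neutral", "positive"]

def build_trial_list(reps_per_cell=20, seed=0):
    """Return a fully crossed, shuffled 2 (load) x 3 (valence) trial list."""
    cells = itertools.product(LOADS, VALENCES)
    trials = [
        {"load": load, "distractor_valence": valence}
        for load, valence in cells
        for _ in range(reps_per_cell)
    ]
    random.Random(seed).shuffle(trials)  # fixed seed for reproducibility
    return trials

trials = build_trial_list()
print(len(trials))  # 6 cells x 20 repetitions = 120 trials
```

Crossing load with distractor valence in this way is what lets the key prediction be tested: an interaction in which high load selectively impairs inhibition of one valence depending on which hemisphere's control resources are taxed.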

Because of inherent limitations in EEG and fMRI approaches, stimulation studies using TMS and tDCS are important for establishing causal relationships between prefrontal function and emotional inhibition. Brain stimulation may be particularly useful in hemispheric asymmetry studies, because it provides access to higher order frontal processes that are not as amenable to experimental manipulations (such as lateralized perceptual input) that have been used to study asymmetries in other domains. The asymmetric inhibition model makes specific predictions about the effects of lateralized stimulation on inhibition: activation of left dlPFC should improve the ability to inhibit negative (but not positive) distractions; activation of right dlPFC should improve the ability to inhibit positive (but not negative) distractions.

The asymmetric inhibition model differs from other accounts of emotional asymmetry in two ways. First, it does not associate an entire hemisphere with a specific emotional or motivational state; rather, it focuses on one asymmetry in a single mechanism, allowing it to generate specific and testable predictions. Second, the model turns conventional wisdom on its head, associating left PFC with the inhibition of withdrawal (instead of the support of approach), and right PFC with the inhibition of approach (instead of the support of withdrawal). The model is therefore consistent with current work on cognition-emotion interactions that emphasizes the role of lateral PFC in inhibitory executive control. Although we have shown here the value of incorporating cognition-emotion interactions into models of hemispheric asymmetry, we also think that models of cognition-emotion interaction would benefit from more careful consideration of hemispheric differences. Integration of the two perspectives should yield a richer understanding of emotional processes.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Our research was supported by a grant from the Royal Society of New Zealand Marsden Fund. We thank Laura Kranz for assistance with manuscript preparation.


Vape Pens and Budder - Marijuana's Drift from Plant to Ultra-Potent Drug

Users of marijuana have long known and sometimes favored a marijuana extract called hashish (hash for short). It tends to be cleaner and more potent than the plant - the best marijuana buds average 25-30% THC, while hash can be in the neighborhood of 45-55%. [1]

Apparently these already highly potent options are not enough (for reference, the best marijuana in the 1960s and 1970s averaged around 3-8% THC). New butane extraction methods are creating a drug that can be as high as 99% cannabinoids (of which 80-90% is THC), often known as budder.
"The top Budder sample was 99.6% pure," Dr Paul Hornby [a chemist and plant analyst] explained, "which means if you had an ounce of it, only a tiny fraction of a gram would be anything other than cannabinoids. We also tested Budder for toxins, solvents, molds, diseases, heavy metals and other contaminants. There were none. It's essentially just pure cannabinoids. I've tested a lot of cannabis materials, but this is the most impressive."
Hornby's tests also found Budder contains 80 to 90% of its cannabinoids as THC. It contains much smaller percentages of two other cannabinoids: cannabidiol and cannabinol. Of these two, cannabidiol (CBD) is most important because it has medicinal effects and moderates the stimulative effects of THC. [1]
This seems to create a drug with a much higher chance of adverse effects. Cannabidiol (CBD), which has no psychotropic effects by itself [2], attenuates, or reduces [3], the higher anxiety levels caused by THC alone [4]. Consequently, the plant material used to create budder (or other extract forms, including the more mainstream use of "dab" with vape pens) will greatly impact the type of high the extract creates. Cannabis sativa has a much higher THC:CBD ratio and causes more of a "high," including the stimulation of hunger and a more energetic feeling. On the other hand, Cannabis indica has a higher CBD:THC ratio, producing more of a "stoned" or meditative feeling [5].

The mainstream media seems not to be aware of "budder" at this point, but the lower quality extracts (often produced at home by amateur chemists - two words which should never go together) have begun to register with the media over the last year or two.

In December, 2013, The Daily Beast ran an article called "Hey Buddy, Wanna Dab? Inside The Mainstream Explosion of Cannabis Concentrates," which examined the rise of dab and the lack of purity in most street products (along with info on how to know if it's a clean product or not).

In March, 2014, Mother Jones ran a more in-depth article (reproduced below) on how these new extracts may impact legalization efforts around the country. Below that article is another from Slate, from February, 2014.

References
  1. Brady, P. (2005, Jan 19). "Beautiful budder". Cannabis Culture Magazine.
  2. Ahrens, J., Demir, R., Leuwer, M., et al. (2009). The nonpsychotropic cannabinoid cannabidiol modulates and directly activates alpha-1 and alpha-1-Beta glycine receptor function. Pharmacology 83 (4): 217–222. doi:10.1159/000201556. PMID 19204413.
  3. Zuardi, A.W., Shirakawa, I., Finkelfarb, E., Karniol, I.G. (1982). Action of cannabidiol on the anxiety and other effects produced by Δ9-THC in normal subjects. Psychopharmacology 76 (3): 245–50. doi:10.1007/BF00432554. PMID 6285406.
  4. Fusar-Poli, P., Crippa, J.A., Bhattacharyya, S., Borgwardt, S.J., Allen, P., Martin-Santos, R., et al. (2009). Distinct Effects of Δ9-Tetrahydrocannabinol and Cannabidiol on Neural Activation During Emotional Processing. Archives of General Psychiatry 66 (1): 95–105. doi:10.1001/archgenpsychiatry.2008.519. PMID 19124693.
  5. Holtzman, A.L. (2011, Mar 28). Cannabis Indica vs Sativa: A response to Continued cannabis use and risk of incidence and persistence of psychotic symptoms: 10 year follow-up cohort study. British Medical Journal, 342:d738. doi: http://dx.doi.org/10.1136/bmj.d738
* * * * * 

How Vape Pens Could Threaten the Pot Legalization Movement

Not everyone is going to welcome an innovation that facilitates getting high in public places—like high school hallways.

—By Josh Harkinson | Thu Mar. 20, 2014



One of many models of vape pens that can be used to discreetly smoke marijuana concentrates. [SIK-photo]/Flickr
Last year, I joined some parents from my son's preschool for their semiregular "Dad's Night Out." We were at a crowded bar in Oakland, and somehow it emerged that I'd done some stories about marijuana. A dad immediately asked if I'd written about hash oil. Within a few minutes (for the sake of journalism, of course), I was trying a hit of nearly odorless vapor from what looked like a miniature flashlight. A single puff, and I was too high to order a second beer.

It might be an understatement to say that marijuana concentrates smoked from so-called vape pens—the pot version of e-cigarettes—accomplish for stoners what flasks full of moonshine do for lushes: Portable, discreet, and fantastically potent, they're revolutionizing the logistics of getting high, and minimizing the risk of discovery. Stories abound of people using vape pens to blaze away undetected at baseball games, city council meetings, kids' soccer matches, and, of most concern to parents and educators, high schools. Even if pot brownies have been around forever, this is probably not what your average Colorado or Washington voter had in mind when they cast a ballot to legalize recreational marijuana.

The concentrates typically used in vape pens are made by extracting THC from pot with water ("bubble hash"), transferring it into butter ("budder"), or refining it into what's known as butane hash oil (BHO, or "errrl," since stoners need a slang term for everything pot-related). From there, it can be refined further into a wax or an amber-like solid ("shatter"). These products are up to three times stronger than the most mind-bending buds. In short, it ain't your father's schwag, and its snowballing popularity among young people is reshaping the culture of the pot scene: One customarily smokes (or "dabs") BHO from specially designed bongs known as "oil rigs," and not at the designated hour of 4:20, but rather at 7:10—which, in case you're wondering, is "OIL" upside down and backwards.

"Baking Bad," the headline of a recent Slate piece on the concentrates scene, aptly sums up how the trend could become a PR nightmare for the legalization movement. As the name implies, making butane hash oil involves extracting THC from cannabis using butane—you know, lighter fluid. The growing rash of butane lab fires and explosions could suggest that potheads are going the way of meth tweakers. And when BHO is improperly made, it can be tainted with toxins.

But perhaps the biggest emerging concern with concentrates is how they may enable minors to abuse pot. Though many high schoolers use vape pens to inhale candy-flavored oils that don't contain psychoactive substances, a study by the Centers for Disease Control and Prevention found that 10 percent had used the devices in 2012 to consume nicotine concentrates (i.e., they'd tried "e-cigarettes"), double the number from the previous year—and that number is likely an underestimate. Emily Anne McDonald, an anthropologist at the University of California-San Francisco, told me her interviews with teens and young adults in New York suggest that the use of vape pens for pot is gaining steam—"especially for getting around the rules and smoking marijuana in places that are more public." She's currently applying for a grant to study the use of pot-concentrate vape pens by young people in Colorado.

Not surprisingly, some cities and states that allow medical marijuana don't look kindly on concentrates. In July, an appeals court in Michigan, where pot is legal for medical use and decriminalized for recreational use in many cities, ruled that concentrates aren't allowed under the state's medical marijuana law. In 2012, the Department of Public Health in pot-friendly San Francisco asked the city's dispensaries to stop carrying concentrates. (It later reversed itself in the face of a backlash.) A recently introduced California bill supported by law enforcement interests would revise its medical pot rules to ban pot concentrates statewide.

The rising popularity of BHO "certainly is a safety issue," acknowledges Bill Panzer, a member of the board of directors of the California chapter of the National Organization for the Reform of Marijuana Laws (NORML). Yet Panzer doesn't see prohibition as the solution. "You can either tell people to stop using concentrates, which they won't," he says, "or you can say, 'Let's regulate it and make sure it's done safely.'"

After some fierce debates, lawmakers in Colorado and Washington have ultimately decided to permit and regulate concentrates. Colorado requires anyone who makes BHO to operate out of a facility that is separate from a grow operation and that has been certified by an industrial hygienist or professional engineer. Washington state's Legislature last week passed a bill allowing state-licensed pot shops to sell concentrates, as long as the amount sold to any one customer doesn't exceed seven grams. But there are plenty of do-it-yourself recipes online.

Although more states may decide to regulate the production and sale of concentrates (see our maps of the pot regulation landscape), they'll have a much harder time preventing people from toking from vape pens on the sly. NORML's Panzer isn't worried. He brings up the example of an obnoxiously drunk baseball fan who sat next to his son at a recent Oakland A's game. "I have never seen anybody on weed doing that," he says. "Anytime you are replacing alcohol with cannabis, that's positive."

* * * * *

Here is another article, this one from Slate:

Baking Bad

How dabbing—smoking potent, highly processed hash oil—could blow up Colorado’s legalization experiment.

By Sam Kamin and Joel Warner
February 5, 2014


Darkside shatter dab, made by TC Labs for Natural Remedies in Denver. Courtesy of Ry Prichard/CannabisEncyclopedia.com

Brad Melshenker, owner of the Boulder, Colo.-based 710 Labs, knows his operation, with its extensive ventilation systems, industrial hygienist–approved extraction machine, vacuum ovens, and workers wearing respirator masks, looks like something out of a marijuana version of Breaking Bad. It’s why he calls his lab manager, Wade Sanders, “Walter,” after the show’s protagonist, Walter White.

And like the famously pure and powerful blue meth White cooked up on Breaking Bad, the product produced by 710 Labs’ fancy equipment is extremely concentrated, powerful, and coveted: butane-extracted hash oil (BHO). The lab’s finished BHO might not look like much—a thin, hard, and shiny brown slab, like peanut brittle without the peanuts—but when a piece of this “shatter,” as it’s called, is placed on the nail of a specially designed pipe that’s been superheated by a blowtorch, it vaporizes and delivers a direct hit of 70 to 90 percent THC, three times the potency of the strongest marijuana strains. As Melshenker puts it, if smoking regular pot is like drinking a beer, “dabbing,” as this process is known, is a shot of hard liquor. Vice calls the result, “The smoothest slow-motion smack in the face of clean, serene stonedness that you’ve ever experienced.” Rolling Stone reports, “Your head spins, your eyes get fluttery, a few beads of sweat surface on your forehead and, suddenly, you're cosmically baked.” Some pot aficionados vow to never smoke the old way again.


Gucci Earwax, a butane extraction, made by Mahatma Extreme Concentrates for Karmaceuticals in Denver. It won the first-place medical concentrate trophy at the High Times 2013 Denver U.S. Cannabis Cup. Courtesy of Ry Prichard / CannabisEncyclopedia.com

Hash, in other words, is no longer just a way to make use of leftover marijuana trim. It’s now becoming the main attraction. (Butane isn’t the only way to extract hash oil from marijuana, either; some concentrate-makers use carbon dioxide– or water-based extraction methods.) At Greenest Green, Melshenker’s Boulder dispensary, the inventory used to be 60 percent marijuana flower, 30 percent BHO, and 10 percent edibles. Now it’s the opposite: 60 percent BHO, 30 percent flower, and 10 percent edibles. And roughly 40 dispensaries statewide contract with 710 Labs to turn their marijuana into shatter or “budder,” a gloopier version. (Because of delays in Boulder’s regulation process, 710 Labs won’t be able to produce recreational BHO until Feb. 17.)

Hash oil is even fueling its own subculture. Forget 4:20; “dab heads” or “oil kids” light up at 7:10. (Turn the digits upside down and you have “OIL.”) Connoisseurs sport specially designed blowtorches and incredibly pricey “oil rig” pipes; a top-of-the-line rig from Melshenker’s Faulty Pelican glass company sets you back $14,000. There’s even dab gear, made by companies like Grassroots.

“There’s a whole industry here,” says Melshenker, whose business card doubles as a stainless-steel dabber, the tool used to apply BHO to an oil rig’s superheated nail.

Colorado’s thriving dabbing scene could just be one more bit of proof that the state is becoming a global mecca for marijuana. After all, the state’s legalized marijuana experiment has so far been an unqualified success. Despite the surprisingly limited number of recreational pot shops that opened their doors on Jan. 1—and the hefty crowds waiting in line to patronize them—the state hasn’t experienced widespread product shortages or weed prices high enough to trigger an Uber-style backlash. Yes, there was that story about 37 deadly marijuana overdoses on the first day of sales, but it turned out to be an obvious hoax. The few pundits who’ve complained about Colorado’s legalized pot, like David Brooks and Nancy Grace, have found their arguments blasted full of holes, not to mention lambasted on Saturday Night Live. The Justice Department is looking into ways to help banks play nice with marijuana businesses—a very serious problem—and even President Obama in a recent New Yorker profile conceded it’s important for the experiment to go forward.

Soon enough, then, Colorado’s small-scale experiment should spread far and wide, with controversial drug laws getting the boot, millions of clandestine tokers coming out of the closet, and governments reaping the benefits in taxes and fees. That is unless something goes terribly wrong, derailing the whole legalization movement.

Such a gloomy outcome isn’t out of the question. The only reason that Colorado is enjoying fame as the first place to legalize pot is a combination of fortunate timing, plucky advocates, forward-thinking lawmakers, and a remarkable lack of snafus. Colorado’s 2012 legalization attempt very well could have foundered if the effort hadn’t enjoyed remarkably positive media coverage. Considering the precipitous rise of the state’s medical marijuana industry and lawmakers’ keen efforts to moderate it, all it could have taken was the right bad headline—a high-profile crime or a boneheaded political move—to set the endeavor back considerably. Recall that alcohol prohibition was built on the temperance movement’s carefully crafted tales of woe and violence. As Salvation Army Commander Evangeline Booth once put it:
Drink has drained more blood …
Dishonored more womanhood,
Broken more hearts,
Blasted more lives,
Driven more to suicide, and
Dug more graves than any other poisoned scourge that ever swept its death-dealing waves across the world.

Mixed shatter slab by TC Labs. The product is broken prior to packaging to fit into the 1 gram or less packaging requirements. Courtesy of Ry Prichard / CannabisEncyclopedia.com

In Colorado, however, there have been very few sordid marijuana tales that could be used to demonize the drug—so far. Weed-fueled horror stories could still emerge in the state—and with the world watching, such calamities could have an international impact. So what are the biggest potential risks? A major concern is diversion, taking Colorado’s legal pot and offloading it to the black market or selling it out of state. While Colorado has established an extensive tracking system to prevent this from happening, there will always be tourists trying to take home a pot-infused souvenir. Beyond diversion, there’s the menace of crime—not just the threat of burglaries and organized crime in a largely cash-based industry, but also the distant possibility of banks or other financial institutions getting slapped with federal money laundering charges if they accept any of that free-flowing marijuana cash. Finally, there’s the prospective collateral damage, such as kids accidentally eating pot brownies—something that’s already in the news—or a violent pot-related car crash.

If any of these calamities do occur, Colorado’s red-hot dabbing scene could in fact be the source of the problem. Dabbing certainly appears on the surface to be dangerous: Kids are freebasing marijuana! It looks like they’re smoking crack! But it’s important to remember that there’s no evidence that it’s possible to overdose on pot. (Compared to, say, acetaminophen, overdoses of which killed more than 1,500 Americans during the past decade.) So you can smoke the strongest dab imaginable—or even, if you’re a showboat, smoke 50 dabs in a row—and science says it won’t kill you. It will just get you really, really high.


Mars OG ISO dab, an isopropyl alcohol extraction made by Pink House Labs in Denver. Courtesy of Ry Prichard/CannabisEncyclopedia.com

But just because something won’t poison you the way alcohol can doesn’t mean it can’t lead you to do something stupid enough to kill you. And there seem to be enough disconcerting variables associated with dabbing culture—a production process laden with volatile chemicals; a highly concentrated, easily transportable final product; and incredibly stoned kids with blowtorches—that it is only a matter of time until somebody in the scene does something very stupid and possibly fatal.

Yes, dabbing might not be as inherently dangerous as, say, a bar full of binge-drinkers. But it’s important to remember that recreational marijuana isn’t necessarily replacing alcohol use—it’s just adding a new legal vice to the options people already have. While some researchers predict legalized marijuana will decrease alcohol use, others predict it could lead to “heavy drinking” and “carnage on our highways.” So will folks really reach for a dabbing pipe instead of a shot glass—or will they reach for both?

Questions like this have led California and Washington to outlaw the production of smokeable marijuana concentrates. Colorado, however, has gone the opposite route: In November it released a draft of proposed concentrate production rules, positioning itself to become the only place in the world where marijuana concentrate production is both legal and regulated. The idea is to police the blooming subculture, to stay on top of it, so it ends up more akin to tattooing than meth. “If we outlaw concentrates, people will make them in their basements and blow themselves up,” says Norton Arbelaez, co-owner of the Denver dispensary RiverRock Wellness, which operates a concentrate production facility. But just because a concentrate extraction system is certified by a third-party industrial hygienist, as will likely be required by Colorado’s concentrate rules, doesn’t mean that system can’t still accidentally blow up.

It makes sense that Colorado is at the vanguard of legalized dabbing. It’s made a habit of taking risks when it comes to marijuana. Colorado can’t regulate away the chance that dabbing or some other marijuana-related endeavor will lead to a spectacular accident, either industrial or personal. But so far its legalization effort has taken pains to thoughtfully minimize such risks—and so far, it’s working.

~ Sam Kamin is professor and director of the Constitutional Rights and Remedies program at the University of Denver Sturm College of Law.

~ Joel Warner is a former Westword staff writer.

Friday, May 23, 2014

The Self is Not an Illusion - Philosopher Mary Midgley at The RSA


British philosopher Mary Midgley stopped by The RSA recently to talk about her new book, Are you an illusion? The book addresses what she sees as a disconnect between our experience of having a self and the neuroscience that suggests we only think we have a self.

The Self is Not an Illusion

22nd May 2014

Listen to the audio
(full recording including audience Q&A)

RSA Replay is now a featured playlist on our Youtube channel; it is the full recording of the event, including audience Q&A.

Are we our brains?

For the last 50 years, the idea of the self has dramatically fallen out of favour. The incredible discoveries of neuroscience have prompted us to largely dispense with our gut instincts about our subjective selves, and in their place many of us have adopted the materialistic ‘we are our brains’ thesis.

But is the self really an elaborate illusion created by our brain cells and processes, and what do we have to sacrifice in order to hold that view? How do our subjective experiences and thoughts contribute to our selfhood, and is there an inherent contradiction at the heart of a physical answer to a moral problem?

Britain’s leading moral philosopher Mary Midgley visits the RSA to investigate the breach between our understanding of our sense of ‘self’ and today's scientific orthodoxy, which claims the self to be nothing more than an elaborate illusion.

In conversation with Rob Newman, writer, political activist and comedian.

Books
Are you an illusion? by Mary Midgley (Acumen, 2014)

Marco Stier - Normative Preconditions for the Assessment of Mental Disorder - And a Commentary by Bettina Schoene-Seifert


From the open access journal, Frontiers in Psychology: Theoretical and Philosophical Psychology, these two articles represent a conversation on how best to understand and define mental illness.

In the first article, Marco Stier lays out an extremely comprehensive model of how we define mental illness, addressing its physical foundations; its normative structures (rationality, morality, harm and distress, culture); and the roles of causes, diagnosis, experience, evaluation, and routes of explanation. In essence, he argues for a non-reductionist model.

In the second article, Bettina Schoene-Seifert offers a commentary on Stier's piece, arguing that anti-reductionism should come with some caveats.

Full Citation: 
Stier, M. (2013, Sep 9). Normative preconditions for the assessment of mental disorder. Frontiers in Psychology: Theoretical and Philosophical Psychology; 4:611. doi: 10.3389/fpsyg.2013.00611

Normative preconditions for the assessment of mental disorder

Marco Stier
Institute for Medical Ethics, History and Philosophy of Medicine, University of Muenster, Muenster, Germany
The debate about the relevance of values for the concept of a mental disorder has quite a long history. In the light of newer insights into neuroscience and molecular biology it is necessary to re-evaluate this issue. Since the medical model in previous decades was more of a confession than evidence based, one could assume that it is—due to scientific progress—currently becoming the one and only bedrock of psychiatry. This article argues that this would be a misapprehension of the normative constitution of the assessment of human behavior. The claim made here is twofold: First, whether something is a mental disease can only be determined on the mental level. This is so because we can only call behavior deviant by comparing it to non-deviant behavior, i.e., by using norms regarding behavior. Second, from this it follows that psychiatric disorders cannot be completely reduced to the physical level even if mental processes and states as such might be completely reducible to brain functions.

Introduction

In the course of the “molecular turn” (Rudnick, 2002) in psychiatry, researchers purport to “provide more objective diagnoses” (Akil et al., 2010, p. 1581) with the help of biological markers. Our traditional diagnoses, they claim, are not only unhelpful but actually a handicap for causal research (Holsboer, 2010, p. 1308). This is why “psychiatric disorders should be reclassified as disorders of the (central) nervous system” (White et al., 2012, p. 1). Even the neurosciences seem to have lost their leading position and appear to have been diminished to merely heuristic value, since the “real” discoveries are to be expected on the molecular level (Bickle, 2006). While the adherents of the disease (or medical) model of mental disorder purport that psychiatry is at least as value free as all the other sciences, critics claim that psychiatry rests on norms and values over and above those present in, say, physics or chemistry, since it deals with the mental, i.e., the experiences, emotions, and behaviors of persons, and therefore always includes norms in respect to these phenomena.

It would be trivial to claim that even the criteria for something being a brain defect rest on norms and that, hence, the criteria for a mental disorder cannot be norm-independent either because they rest upon brain defects. The claim made here is twofold: First, whether something is a mental disorder can only be determined on the mental level. This is so because we can only call a behavior deviant by comparing it to non-deviant behavior, i.e., by using norms regarding behavior, which simply are not applicable to neurons. The brain alone cannot give us the evidence necessary. Second, from this it follows that psychiatric disorders cannot be completely reduced to the physical level, be it neuronal or molecular. The classification of something as a mental disorder cannot even in principle be free of values and norms and can be “objective” only insofar as norms and values can be seen as objective. This is the case even if mental processes and states might—in principle as well—be completely reducible to brain functions. Hence, for the sake of the argument I will take the latter for granted: there is no behavior or experience, I assume, that does not come from the brain, and there is nothing in the mental realm that could not be reduced to the brain's processes. Nonetheless, whether a certain kind of behavior or experience should be seen as disordered is not reducible to the brain's functions.

Thomas Szasz once stated: “It is not by accident that, in all the psychiatric literature, there is not a single account of voices that command a schizophrenic to be especially kind to his wife” and he continued, “[t]his is because being kind to one's wife is not the sort of behavior to which we want to assign a causal (psychiatric) explanation” (Szasz, 2001, p. 300). Even if we are not devoted adherents of Szasz, this quote should give us pause. There seems to be something peculiar about behavior that is beyond purely physical explanation because the difference between, say, acting kindly and unkindly can hardly be grasped in physical, non-normative terms.

In this paper I neither intend to offer another definition of mental disorder nor do I claim an incompleteness of some sort of neuroscience. Above all, I want to stress at the very beginning that I do not doubt the existence of mental disorders. If you have ever seen a deeply depressed person, or a schizophrenic desperately asserting his responsibility for the destruction of the WTC twin towers, you will not have any doubt about the existence of mental disorders. All I want to show is that mental disorders cannot be determined in a purely physical way.

In the following section I will explain my claim that psychiatric diseases are irreducible to the brain even if the mental as such may in principle be reducible. In the main part of the paper I will first show that psychiatry is embedded in several normative frames of reference, and then refer to five particularly relevant normative dimensions of psychiatry. These are the concept of rationality, moral assumptions, the notions of harm and distress, several cultural norms and influences, and finally the relevance of—equally normative—routes of explanation.

The Physical Foundation of the Mental

There is no behavior that does not arise from the brain. Neither is there something like a Cartesian soul, nor is there full-fledged mental causation. How can one nonetheless regard mental disorders as irreducible to neurobiology? Doesn't this look like wanting to have one's cake and eat it too? It might, at first glance, but things are not that simple.

If biological psychiatry was nothing but an ideology, as some authors claim (Cohen, 1993; Berger, 2001; McLaren, 2010), one would just have to show the irreducibility on this level. But we do not need to make such a principled assumption.

Let's assume every single aspect of our mental and behavioral life could be explained in purely physical terms. In this case it could not only be shown that our brains, together with our genetic endowment, are responsible for the way we are, but also how this happens, and which mechanisms are involved in producing this or that kind of thought or behavior. Let's further suppose the neurosciences could even explain the so-called phenomenal qualities—the “what it is like” to see red or to be depressed. Since what we call “mental disorder” is without doubt part of people's mental and behavioral lives, it would be explicable in purely physical terms as well. So it seems. To give an example: It would be possible to explain which of the brain's functions and properties make a person feel “depressed.” To make the claim even stronger, let's take for granted that environmental influences, too, are explicable mechanistically and that “[e]xploring the mechanisms of gene-environment interactions for depression is not substantially different from understanding how environmental toxins contribute to cancer or how diet influences cardiovascular diseases” as Thomas Insel and Remi Quirion assume (Insel and Quirion, 2005, p. 2221). Would we be able to determine what a mental disorder is by physical means alone? We wouldn't.

This is because no behavior or inner feeling carries a sticker that reads “I'm a disorder!” We have to write those stickers ourselves and attach them to certain feelings and behaviors. Matthew Broome and Paolo Fusar-Poli are entirely right when they write:
“It is by observing how the person behaves with respect to her beliefs, and by witnessing such behavior in the process of the giving and asking of reasons that one suspects delusions, not in viewing a brain scan or a genetic sequence. In other words, the diagnosis of delusions is based on the observation of behavior that violates accepted norms (e.g., of rationality for belief reports).” (Broome and Fusar-Poli, 2012, p. 598)
In short, whether something is a mental disorder has to be evaluated, not discovered. This seems to be a purely Szaszian account, but it is not. According to Szasz, mental disorders are evaluated on a normative basis and not, as is the case with physical diseases, discovered on the basis of functional or structural lesions. Psychiatric diagnoses “are driven by non-medical, that is, economic, personal, legal, political, or social considerations and incentives” (Szasz, 1994, p. 37). Up to this point I agree with Szasz. But while he claims that mental illnesses cannot be treated by medical means for this reason, I neither maintain this, nor do I dispute their existence. His argument seems to run like this: (i) only medically discoverable conditions can be treated medically; (ii) mental illness is not medically discovered but normatively evaluated; therefore (iii) mental illness cannot be treated medically. The argument fails because premise (i) is problematic. If we reformulate it as “only physically based conditions can be treated medically,” the problem becomes obvious: Szasz confounds the epistemological and the ontological side of the issue. All that can be inferred from the fact that mental illness is evaluated and not discovered is—at best—that there are no natural kinds of mental illness. We draw the line between normal and allegedly deviant behavior somewhat arbitrarily. But the question of how we can and should categorize forms (and norms) of behavior is different in kind from the further question of whether mental disorders exist. The first is an epistemological question, the second an ontological one. Moreover, it is obvious that we can even “treat” completely normal behavior; psychological enhancement provides the best evidence for this. This follows not least from the assumption that no behavior or experience can exist without a brain producing it. Change the brain and you change the mind3.

While Szasz asserts mental illness does not exist because of its evaluative nature, my weaker claim is that it will never be possible to determine in a purely physical way which of the countless variants of behavior and thinking are disorders, even if we might discover all the physical causes of each and every thought and form of behavior one day. Hence, the irreducibility of mental disorders is not due to the mind-brain problem. But where exactly does the irreducibility come from? In the following section I will give an outline of the main normative aspects that prevent mental disorders from being explained purely physically.

Normative Bedrocks of Mental Disease

Stating that everything is normative insofar as we have to decide what kind of evidence we want to count as proof for something or what we are willing to accept as an explanation in science would be trivial. It would not be very shocking to claim that, e.g., neuroscientists have to use normative concepts such as the “correct functioning” of certain brain areas. Nearly everything in the world—including psychiatry—is normative in this sense. A much more provocative claim is that psychiatry is guided by social, moral, cultural and other norms. If this is true, and if it is also true that these kinds of norms are relative to time and place, then psychiatry cannot claim to know what a mental disease is “in itself,” where normality ends and mental disorder begins. Again, if the boundary between normality and mental disorder is a social construction such that the question of whether a certain kind of behavior is a disorder can only be judged against the background of this very convention, then the “disorderness” of a condition cannot be found on—and hence not be reduced to—the neuronal level. Psychiatry would have to admit that it serves—to a certain degree at least—not only the needs of patients but those of society as well.

Normative Frames of Reference

Judgments of psychiatric disorder always need a background of psychiatric order without which no diagnoses could be made. A relatively easy way of finding such a background or “frame of reference” is to take a set of diagnostic criteria and turn them (back) into behavioral imperatives. Leising and colleagues have made the normative assumptions inherent in the DSM-IV criteria for personality disorders (PDs) visible in this way (Leising et al., 2009). To give just one example: on the basis of criterion one of Borderline PD and criteria seven and eight of Dependent PD they formulated the underlying norm “be able to tolerate real and imagined separation4.” If a person is unable to conform to this and other social standards, she may be a candidate for a PD. It may be objected that this refers only to some single criteria, while in the case of, e.g., Borderline PD seven out of nine criteria have to be met. This is true, of course. But what about the normativity of the other criteria? What do “unstable and intense interpersonal relationships” (DSM-IV-TR, 301.83, 2) or an “unstable self-image” (DSM-IV-TR, 301.83, 3) mean?

A principled objection against the normativity assumption could go like this: The current diagnostic manuals are indeed deeply misguided, but once we have found the real and appropriate criteria for psychiatric disorders, we will get rid of the normativity problem. But again, on the basis of what background or reference frame will such an ideal manual function? Since it is always experience and behavior that have to be judged as pathological, we will always have to draw on “average people” to tell apart mental and/or behavioral deviance on the one hand and “normality” on the other.

In particular, four such normative frames of reference can be distinguished (cf. Leising et al., 2009 for the following)5.
(1) The personal values of a given diagnostician: In the absence of a strong theoretical foundation it is more likely than not that the criteria follow the values and worldview of those who establish them.
(2) Cultural expectations: Diagnoses might not primarily refer to the person but to the mismatch between her patterns of culturally primed behavior and the expectations of her current social environment. For instance, the western-style behavior of a girl in rural Turkey may make her a candidate for a PD diagnosis. Conversely, rural Turkish behavior patterns may be seen as an indicator of a psychiatric disorder in the west.
(3) Generalized assumptions about human nature: While it may be possible to determine something like “normal functioning” of the body, e.g., in respect to heart, liver, or the hormonal system, it is quite difficult, if not impossible, to find universal human mental and behavioral patterns. Even if there is a species-typical behavioral setup, it is questionable whether the thresholds to pathological behavior and thinking similarly follow species-typical patterns6.
(4) Harm and disturbance: What constitutes harm for one person does not need to constitute harm for another. In particular, the thresholds to harm and the kinds of issues that are regarded as harmful differ from one culture to another. Therefore, harmfulness is always judged against the background of varying, contingent frameworks.
While these frames of reference are situated on a more general level, Sadler and Fulford have identified seven normative judgments that are “nested” within the individual diagnostic act (Sadler and Fulford, 2006, p. 171 f.). These concern:
(i) a match of the criterion's semantic content against the patient's phenomenal clinical presentation;
(ii) a judgment by the examiner about the appropriate approach to the solicitation of relevant data from a patient;
(iii) an examiner judgment about the prevailing sociocultural norms relevant to a particular criterion;
(iv) an appraisal of the patient's performance (behavior, interview discourse) relevant to said sociocultural norms;
(v) a comparison between the patient's performance and the specific sociocultural norms in determining whether the patient's performance substantively deviates from them;
(vi) the determination of whether such deviance is substantive enough, qualitatively (e.g., idiosyncratic deviance, as in “bizarre delusions”) or quantitatively (e.g., as in “excessive” need for reassurance in dependent PD), to constitute psychopathology; and, finally,
(vii) a judgment about whether the criterion-driven behavior and experience is disvalued or for the worse.
Beyond consulting the respective diagnostic manual, the diagnostician in a clinical setting cannot but make a whole range of normative judgments in individual cases. It is in principle impossible to eliminate this normative aspect of the task, even if the underlying biological mechanisms of a particular behavior or experience were completely known.

In the following I will discuss five normative dimensions that are present in psychiatry to varying degrees. The first is “rationality,” the role of which is somewhat underestimated in the discussion of the normative preconditions of psychiatry (section Rationality); the second refers to the special case of PDs, which seem to be particularly dependent on moral expectations (section Morality); third, there is the problematic notion of “harm and distress” already mentioned above (section Harm and Distress); fourth, we have to ask to what extent the concept of psychiatric disorder is relative to different cultural backgrounds (section Culture); the fifth normative dimension pertains to the relativity of scientific explanatory routes, which are no less normative in character (section Routes of Explanation).

Rationality

Even though “irrationality” and corresponding terms are not explicitly mentioned as criteria in the current versions of the DSM or ICD, Marie Crowe has pointed out that the DSM contains several features with which a person's perception of reality must be consistent in order for that person to count as rational. These include notions such as “impairment in reality testing,” “magical thinking,” “suspects, without sufficient basis, that others are exploiting, harming or deceiving him or her,” or “worry about everyday, routine life circumstances” (Crowe, 2000, p. 75). Yet this does not tell us what kind of reality is at stake.

There are several concepts of rationality (Bunge, 2007), two7 of which are of particular interest in psychiatry: the first is theoretical or linguistic in nature (logical rationality), while the second is practical in the sense of means-end rationality (practical rationality). When someone concludes from (i) human beings are mortal, and (ii) Socrates is a human being, that (iii) Socrates is immortal, his theoretical rationality has failed. If mental disorder could be characterized by a lack of theoretical rationality, things would be quite easy. Unfortunately, this is not the case. A couple of years ago a study showed schizophrenic people to be even more theoretically rational than average persons (Owen et al., 2007). Practical rationality, on the other hand, comes in degrees and is not always judged by the same standards. If a person, convinced by advertising that a certain kind of caffeinated drink makes you popular, henceforth consumes it for this reason, we would probably attest to a lack of practical rationality. If someone seeks a cure for cancer in prayer, this would be (at least in the eyes of many) a grave lack of practical rationality, too. Now think of a person who washes her hands every 10 min in order not to catch an infection: is this still imprudence or already pathology? There are, of course, other forms of practical non-rationality which leave hardly any doubt that something must be wrong with a person. But we have to set the cut-off ourselves, and there is no way of doing this other than somewhat arbitrarily.

The problem already begins with the assessment of capacity and competence to make treatment choices. While it could be argued that there is an objective way of assessing patients' capacity by testing their cognitive abilities to understand, retain, and weigh up information, it is often overlooked that in clinical practice this is accompanied by a number of inherently normative judgments (Banner, 2012). Hence, it is not only the capacity of the patient that can be called into doubt, but also the way she makes use of it. And this aspect, the way of using information, cannot be assessed other than on normative grounds. One of the best-known examples in this regard is anorexia nervosa, where patients usually completely understand the relevant information and consequences but nevertheless make choices that other people would regard as problematic (see, e.g., Craigie, 2011).

The assessment of rationality in people's choices is normative in two respects. First, it is not always a precondition for recognizing the autonomy of a person; in some circumstances it is, in some it is not. Let's call this the “Switching-Standard-Thesis” (SST). Second, and connected to the first, the threshold beyond which a certain kind of irrational behavior can be seen as pathologic varies considerably. Call this the “Switching-Threshold-Thesis” (STT).

The switching-standard-thesis

According to SST, the standard of rationality to which a person is expected to conform is the higher, the more she is suspected of having a psychiatric condition. As long as someone is regarded as “normal,” her decisions may be completely unreasonable in the eyes of others. As Lord Donaldson pointed out in an often-quoted decision, the “right of choice is not limited to decisions which others might regard as sensible. It exists notwithstanding that the reasons for making the choice are rational, irrational, unknown or even non-existent” (Re “T.”, 1992). In a similar vein, Craig Edwards underscores that if someone ruins his reputation due to mental illness he may end up having to undergo involuntary psychiatric treatment, but if he does so without mental problems, it is his own business and he will not experience (strong) interventions (Edwards, 2009). While ordinary people are allowed to make irrational decisions even in highly important matters without being deemed incompetent (just think of decisions regarding the termination of treatment), patients with a suspected mental problem are at greater risk of being judged incompetent because of the very same “irrationality” (Banner, 2012). It is, therefore, a matter of normative choice, not of objective judgment, whether rationality is regarded as a component of mental health. Whether to examine someone's rationality further is itself usually decided on normative grounds: if a mental disorder is suspected, we do; otherwise we don't. Irrationality is not the indicator of a mental problem. The dependency relationship runs the other way round: a suspected mental disorder is the reason why we take a closer look at someone's rationality and possibly regard as irrational and incompetent a decision that we otherwise would have accepted as competent.

The switching-threshold-thesis

Here the question is not whether someone's rationality should be subjected to deeper scrutiny, but whether irrational behavior should be seen as indicating a mental problem. We all constantly behave irrationally in everyday life. It therefore has to be decided whether the irrationality of a person should count as part of a mental problem. Edwards lists a whole series of conditions such as greed, jealousy, hatred, or racial prejudice that impair our rationality and that “are sometimes considered to negatively impact our well-being and that fall outside of our ability to control as rational agents, yet are not usually considered mental illnesses” (Edwards, 2009, p. 80). The threshold of rationality beyond which someone is seen as having a psychiatric disorder varies.

Both cases look very similar, and they indeed point to the same problem from different angles. According to SST, a mental disorder is diagnosed first, and subsequently a standard of rationality is applied that is higher than in everyday life. According to STT, irrational behavior that is judged to be normal on the background of one framework may be seen as indicating a mental disorder in other cases. The assessment of rationality is deeply normative.

Morality

I should stress once more that my claim is not that all psychiatric disorders are moral in kind. What I do claim is, nevertheless, that many conditions—or conditions in many circumstances—at least involve (morally) normative elements and thus cannot be purely value free, non-normative (objective) medical kinds. The moral side of ascriptions of psychiatric disorders is most obvious in Cluster B PDs. Louis Charland uses two arguments to show this (Charland, 2006): The “argument from identification” and the “argument from treatment.” According to the first one, Cluster B disorders are identified in the DSM through explicit moral terms and notions such as “lying,” “lack of empathy,” or “conning others.” It would be hard to explain why a condition that is defined this way should not be moral in nature. His second point is only partly an argument on its own since it relies on the validity of the first one. What he has in mind seems to be that there is an important difference between, say, ceasing to be depressed on the one hand and ceasing to be a liar on the other. The difference is that the first case can be seen as a cure while the second case is “tantamount to a moral conversion” (Charland, 2006, p. 122).

Possible counterarguments to this account are not hard to find. First, one could argue that it is not the morally questionable behavior as such that defines the disorder but the respective person's inability to change it, her unresponsiveness to reasons. Even if this sounds plausible, on closer inspection it becomes obvious that an immutability criterion of this kind only makes sense in connection with a presupposed moral judgment. There is hardly any person in the world who can change her character traits from one moment, or one week, to the next. Character traits which we would not even think of as pathologic can be as “hardwired” as a full-fledged “PD.” Think of a particularly polite and attentive man who has become this way through his genetic endowment and parental upbringing. Every morning he tells himself to be a bit more selfish—but he just can't help it. He cannot change his style of behavior, but hardly anybody would suspect a psychiatric problem here. Neither character traits nor dysfunctions can be overcome simply by choosing to do so. Second, the availability of therapeutic help or treatment, which could be seen as a distinguishing factor, is not a good candidate criterion either. Edwards emphasizes this pointedly when he states that the “need for, or availability of, treatment does not make something an illness any more than plastic surgery makes a crooked nose an illness” (Edwards, 2009, p. 81). Third, neither are character and dysfunction discernible through underlying causes, since wicked behavior is just as much due to internal biological influences and external environmental conditions as mental disorder is. With the appropriate chemicals (or even brainwashing methods) you can “treat” grandma's joy, little Johnny's nosiness, or Martha's politeness as effectively as Bill's full-fledged depression.

Edwards, who regards the concept of psychiatric disorder as morally based, recognizes this very tension. His way out is a catalogue of five criteria, each of which is necessary but not sufficient, together with the assumption that there is genuine moral truth in the world. His criteria, formulated as questions, are the following: (a) Is the condition harmful for the person who has it? (b) Is there any reason for legitimizing the condition as a character trait that one can choose to develop or maintain? (c) Is the condition one that can be discouraged through the inculcation of appropriate moral values during childhood? (d) Will applying moral responsibility to the condition help to uphold broader moral values in one's ethical system? (e) Can one have insight into the condition's effect upon oneself and, if so, how difficult is it to take an active role in seeking treatment for oneself? (Edwards, 2009, p. 83 f.) As one can see, all five questions can help only if they have answers that are not themselves contestable and/or relative to society, culture, and underlying moral creeds. With his reference to ethical truths, Edwards may at least avoid the lurking diagnostic arbitrariness, even if that makes psychiatric diagnostics no less moral. Those, however, who do not believe in objective moral truths are still lost in the wilderness of psychiatric relativity.

In a strictly religious society being an atheist may be seen as a dysfunction of personhood; when our western societies still were (regarded as) strictly heterosexual, homosexuality was regarded as dysfunctional and, hence, a mental disorder; since productivity is highly valued in our busy and buzzing western societies, lack of productivity has become a part of the definition of mental disorders (Crowe, 2000, p. 73).

Harm and Distress

One could assume that harm is not a normative concept: if a person suffers, she suffers, period. In the context of psychiatric diagnosis, however, things are more complicated. A first crucial point illuminating the normativity of harm has been emphasized by Fulford (2002). We simply don't notice the value-ladenness of physical harm because most people regard, say, a broken leg as something bad and painful. Values that are shared by most people tend to hide behind their very commonness. When it comes to mental suffering, our values diverge to a certain degree. Hence, it is not that bodily diseases are value-free whereas psychiatric disorders are value-laden. Both rest on normative assumptions; in one field we simply share them, in the other we don't. As Fulford writes:
“Thus, the criteria for good and bad heart functioning, for example, paralleling ‘good strawberries,’ are largely settled and agreed upon, and this is true by and large of all the areas with which (acute) bodily medicine is primarily concerned. By contrast, however, the areas with which psychiatry is primarily concerned—emotion, desire, belief, motivation, sexuality and so forth—are all areas in which our values, paralleling ‘good pictures,’ are highly diverse.” (Fulford, 2011, p. 3 f.)
The most prominent author to have included the concept of harm in his theory of disorder is probably Wakefield. According to his “harmful dysfunction analysis” (Wakefield, 1992), we first have a function of a certain mechanism, which turns into a dysfunction if the mechanism does not properly perform the tasks it was designed for by evolution; if this dysfunction is furthermore harmful for the respective person, it becomes a disorder. It is therefore not enough to establish a (physical or mental) mechanism's dysfunction, since there are lots of dysfunctions that are not seen as disorders8. On the other hand, we all experience many harmful things in life without regarding them as mental disorders. Harm, Wakefield rightly assumes, is a value concept because it is relative to cultural assumptions. While this is plausible, turning Wakefield's idea upside down is plausible, too: it may well be that we first disvalue a condition as harmful and only then search—and find—a mechanism of some sort that has a dysfunction of some sort. This reversal could only be ruled out if we could look into God's (or evolution's) model kit.

But there are even more normative aspects in the notion of harm. First, the harm criterion leaves open who is to judge whether a person feels sufficient harm and distress and whether it is pathologic in character. It is one thing to subjectively feel harm and distress, quite another to judge whether that distress is pathologic and, if it is recognized as potentially pathologic, what degree someone's suffering must reach in order to warrant a psychiatric diagnosis. Second, particularly in the case of Cluster B PDs it is often the social environment, i.e., other people, who experience harm due to the “patient's” condition while he himself feels fine. A successful, narcissistic person will probably feel no distress at all while the people around him may suffer considerably. Third, harm can also arise indirectly from one's acts and with a temporal delay. If someone in a manic phase makes highly risky and imprudent transactions, (a) the “harm” will be indirect, because it is not the condition itself that is harmful or distressing but its consequences that may cause harm; (b) the harm caused may initially represent a problem not for the person in question but for his spouse or children; and (c) whether a risky and imprudent financial transaction or its consequences should be seen as harmful is clearly nothing we can read off a diagnostic manual. Financial losses are to be judged economically, not medically. Even if the person later deeply regrets what she has done, it remains unclear what degree of regret warrants a psychiatric diagnosis.

Culture

One of the most widely discussed issues in the philosophy of psychiatry is the impact of cultural variation on the concept of psychiatric disorder. Do different cultures give rise to special forms of disorder experience? Are there mental disorders that are due to particular socio-cultural frameworks? These and other questions have long been disputed. One tradition, the “emic” approach, takes cultural particularities into account. In contrast, the “etic” approach tries to explain human behavior independently of culture-specific features and to find general, universal traits (for a more detailed explanation of the terms see Morris et al., 1999). Even though human nature has some universal characteristics, there are underlying culture-relative assumptions that make the etic approach inappropriate for psychiatry.

The various normative elements implicit in the assessment of psychiatric disorder overlap, and much of what has been said above about the concept of harm, moral frameworks, and even the question of rationality could have its place in this section as well. Therefore, what I am going to do in this section is only to highlight the various cultural dimensions of psychiatry. These are assumptions and mechanisms regarding the causes of mental disorder, the impact of culture on diagnosis, specific differences in the individual experience of mental disorder, and last but not least the evaluation of behavior from the third-person perspective.

Causes

Culture or the character of a given society seems to influence the development and understanding of psychic problems both directly and indirectly; indirectly through the norms and social expectations the individual has to follow, directly through the expected ways of behavior which determine deviance. In an interesting article Catherine Caldwell-Harris and Ayse Ayçiçegi formulated a “personality-cultural clash hypothesis” according to which there is a correlation between personality-style, cultural character and mental health (Caldwell-Harris and Ayçiçegi, 2006). They state that “[p]ersonality traits associated with psychopathology will be most frequent in allocentrics living in an individualist society, and in idiocentrics living in a collectivist society.” In collectivist societies where strict rules of social behavior have to be followed and social harmony is highly valued, people with an idiocentric (extremely individualistic) personality tend to have poorer mental health with high scores in paranoid, schizoid, narcissistic, borderline, and antisocial PDs. In individualistic societies, by contrast, a distinct allocentric (extremely collectivist) personality is positively correlated with social anxiety, depression, obsessive-compulsive disorder, and dependent personality. In addition to this indirect influence on mental disorder, there is a more direct influence, too. This can best be illustrated by Wakefield's account of cultural relativity:
“Whereas social phobia is a real disorder in which people can sometimes not engage in the most routine social interaction, current criteria allow diagnosis when someone is, say, intensely anxious about public speaking in front of strangers. […] This diagnosis seems potentially an expression of American society's high need for people who can engage in occupations that require communicating to large groups.” (Wakefield, 2007, p. 154)
In sum, the respective cultural setup not only has an indirect influence on mental health; it also tends to dictate the boundary between the normal and the deviant on the basis of the expected values and virtues of its members. In this respect the impact of society on the concept of mental disorder is clearly normative. Whether the indirect influence, i.e., the personality-cultural clash, turns out to be directly normative under the surface after all remains an issue for further scrutiny.

Diagnosis

Culturally specific views on psychiatric problems are harder to detect in our era of mass migration and globalization than in earlier times with more stable national and cultural boundaries. Nonetheless, important cultural differences regarding mental disorders remain, to which I can only allude in the following. What is more, the culturally formed experience of psychic problems is to be considered not only on the patient's side but also on that of the practitioner, as Laurence Kirmayer points out (Kirmayer, 2001). This was also shown some years ago by a study comparing the diagnostic patterns of American and Japanese clinicians (Tseng et al., 1992).

Three points regarding psychiatric diagnoses should be stressed here. Firstly, many mental disorders do indeed “exist” in the sense that they are modes of experiencing oneself and the world which are extraordinarily burdensome. Secondly, experience and behavior can only be understood against the background of other people's behavior and experience. Social phobia, for instance, presupposes a social surrounding not only because it is the very object of the phobia but also because it constitutes the basis of comparison against which a person assesses her own experiences. Thirdly, since there are “real” disorders on the one hand and dynamic social expectations on the other, it follows that the boundary between average and deviant behavior cannot but be normative. This is not just due to epistemological limits: those boundaries simply do not exist by nature. What are psychiatrists to do when they need a boundary that does not exist? They have to define it themselves (with the help of their social community) and put up a sign that reads “Attention, you are leaving the normal sector!” Seen in this light, it is hardly surprising that prevalence rates for, e.g., social anxiety disorder appear to vary extremely across cultures, ranging from 0.2% in China and 7.9% in the US to 44.2% in rural areas of Udmurtia, a constituent republic of the Russian Federation (Hofmann et al., 2010, p. 118). Even if this spectrum is primarily due to differences in case-finding methods and there is in actual fact no “real difference in major psychiatric disorders across cultures and societies,” as Andrew Cheng assumes (Cheng, 2001), it nevertheless mirrors all the problems and dependencies of psychiatric diagnosis and, hence, the impact of cultural and other norms and values on it.

Experience


Are psychological problems all the same around the world? If they are, science may be in a position to explain them on a purely molecular level one day. Two very common examples suffice at this point to illustrate that this is a vain hope. First, it is well known—even though hotly debated—that depression in Asian societies is experienced by the persons affected more as bodily malaise. The western counterpart of this “somatization” is sometimes called a “psychologization” (cf. Kirmayer, 2001). The Vietnamese language, for example, does not even have words for psychiatry, schizophrenia, and depression (Phan and Silove, 1997). A similarly striking cultural difference can be found in the case of social anxiety. While in the western cultural sphere this is connected with the fear of being harmed or offended, in Japan and Korea people fear harming or offending others (taijin kyofusho). Admittedly, taijin kyofusho is—along with other culture-specific disorders—at least mentioned in the DSM as well as in the ICD, but whether it is the same social anxiety disorder as in the western world, perhaps a culture-specific expression of it, or a disorder in its own right, is still under debate (cf. Hofmann et al., 2010). If two psychological problems that are experienced quite differently by patients in different cultures are explained by one and the same molecular configuration, does this not amount to a Procrustean bed into which diagnoses are forced? Both expressions of social anxiety arise from and are judged by social norms.

Evaluation

As repeatedly mentioned in this article, whether a certain kind of behavior or experience counts as deviant and (potentially) as a psychological problem is often (even though not always) due to specific socio-cultural expectations. Somebody who is “dynamic” in one cultural region may be regarded as offensive in another. Remember the above-mentioned western girl in rural Turkey (or the other way round). Here, expectations of rationality, morality, harm, and harming combine into a normative framework against whose background behavior is assessed and disorders are diagnosed. That does not mean there are no culturally and normatively independent mental disorders at all. But it would nevertheless be a fallacy to deduce from their undisputed existence the thesis that norms play no significant role in the assessment of mental disorder.

Routes of Explanation

Three levels of observation are of particular relevance in psychiatry. These levels exist in other areas as well, but when it comes to mental health and the concept of mental disorder, they have particularly far-reaching implications. These are the explanatory level, the phenomenal level, and the interventional level. One might use “reflection” instead of “observation,” but since “reflection” is in some sense too ambitious a word, associated with deep scrutiny and deliberation, “observation” is the more appropriate term, as will become clear in the following.

Let's begin with the explanatory level. Here we find all the traditional models of explanation such as the psychoanalytical (Freud), the sane reaction model (Laing), the labeling model (Rosenhan), the problems-of-living account (Szasz), the biopsychosocial model (Engel), or the currently dominant medical model. It will make a considerable difference whether you claim with Szasz that mental diseases just do not exist, assume with Rosenhan that it is largely a matter of labels, or search for purely biological causes. Each of these models of mental disorder constitutes a basic explanatory norm, since there just is no higher level of objectivity from which we could assess the validity of one explanatory account or another. Admittedly, we can (and do) use the effectiveness of an explanation and its respective therapies as a criterion, but whether psychopharmacological means are the most effective ones is open to debate even today. Hence, everything depends on questions of the philosophy of science, ontology, causality and—on an even deeper level—on the question of what constitutes an explanation.

On the phenomenal level, what kind of behavior or experience indicates a mental disorder depends on all the factors discussed above. The phenomenal level is in itself independent of a particular mode of (causal) explanation. Often it is just a matter of tradition or even intuition. The important aspect is that pathologic behavioral deviance is assessed through its “being different.”

On the interventional level mental disorders are seen in the perspective of therapy, i.e., a successful cure is already part of the explanation of a particular disease.

The routes of explanation come into play when we ask where to start in order to understand the nature of mental disorders. It is an interesting phenomenon that we may come to quite different results depending on where we start. If we begin at the explanatory level, psychiatric disorders may disappear if we are followers of Szasz, or turn out to be purely physical if we adhere to the medical model. In the first case mental disorders cease to be, in the second they cease to be mental. In the first case we do not need a therapy, in the second the therapy will probably be a pharmacological one. We will get similar “start-dependent” results with the psychoanalytical or the biopsychosocial model. What is important here is that what we assume on the explanatory level determines what we believe on the other levels.

The same holds true for the other routes. If we start on the level of interventions and make use of pharmacological therapies, we will probably come to the conclusion that psychiatric disorders are indeed something physical. In this case we are even in danger of getting ourselves into a circle: Why are pharmacological therapies indicated? Because psychiatric disorders are brain defects. How can we know that psychiatric disorders are brain defects? We can conclude this from the effects of our pharmacological therapies (cf. Valenstein, 1998, p. 222). To give a third and last example: If we believe some behavior to be strange and pathologic, we will surely find a cause for it at the explanatory level. So we have come full circle: Remember the quote from Szasz at the beginning, that “being kind to one's wife is not the sort of behavior to which we want to assign a causal (psychiatric) explanation.”

Epilogue

The fact that our understanding of mental disorders is guided by several kinds of norms does not mean that these disorders do not exist. More precisely, on the one hand there is psychological suffering whose existence, relevance, and “realness” can hardly be doubted. On the other hand there are several cases of mental “disorder” which clearly rest on direct and indirect, open and covert normative assumptions. This has at least two consequences. First, psychiatric disorders are not simply “out there” and are not to be understood as objectively discoverable entities that can always be separated from each other. The boundaries between normal and non-normal behavior, and those between one disease category and another, are fluid. Second, because of the normative nature of psychiatry, mental disorders cannot be completely reduced to neuronal or molecular processes. Again, more precisely: a mental state as such may well be reducible to the brain, but determining whether this very mental state is (part of) a disorder is not something the brain sciences can do. Something will always be lost in translation.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1 ^In the following I will use “mental,” “psychiatric,” and “psychological” disorder interchangeably. Likewise the term “behavior” is used as a placeholder that stands for “experience, emotion, and behavior.”
2 ^Imagine a neurologist tapping with her finger on your brain scan and telling you “Oh, look, you were quite depressed last week.”
3 ^Paquette and colleagues put it the other way round: “Change the mind and you change the brain” (Paquette et al., 2003). This is, of course, true as well, but not because of some spooky sort of mental causation, but rather because changing the mind just means changing the brain.
4 ^The original DSM-IV criteria are: 301.83 (1), “frantic efforts to avoid real or imagined abandonment”; 301.6 (7), “urgently seeks another relationship as a source of care and support when a close relationship ends”; 301.6 (8), “is unrealistically preoccupied with fears of being left to take care of himself or herself.”
5 ^The following four frames of reference are oriented toward those of Leising et al. (2009) but are not completely identical to them.
6 ^This holds notwithstanding the assumption of a set of ubiquitous virtues (courage, justice, humanity, temperance, wisdom, and transcendence) shared in all cultures (Dahlsgaard et al., 2005).
7 ^Bunge distinguishes seven concepts of rationality: conceptual, logical, methodological, epistemological, ontological, valuational, and practical rationality (Bunge, 2007, p. 117 f.).
8 ^I am only mentioning Wakefield's concept of “dysfunction” here without having room for a discussion.



* * * * * *

Full Citation: 
Schoene-Seifert, B. (2014, Apr 29). Antireductionisms with regard to mental disorders: some caveats. A commentary on Marco Stier. Frontiers in Psychology: Theoretical and Philosophical Psychology; 5:350. doi: 10.3389/fpsyg.2014.00350

Antireductionisms with regard to mental disorders: some caveats. A commentary on Marco Stier

Bettina Schoene-Seifert
  • Institute for Ethics, History and Theory of Medicine, University of Münster, Münster, Germany

Introduction

With his article “Normative preconditions for the assessment of mental disorder,” Stier (2013) presents a thought-provoking piece of work, and I agree with many of his conclusions. This is certainly true of Stier's main thesis that the demarcation line between mental health and mental disorder cannot plausibly be drawn on the level of neurobiology alone but is in need of additional value judgments. However, I think that this specific “antireductionist claim” holds true in somatic medicine as well. Hence, the “medical model,” rightly understood, seems to be fully appropriate for assessing mental disorder. Moreover, I suggest being very restrictive in discussing the concept of psychiatric disease in the language of reductionism, since this might, contrary to Stier's own intentions, easily be misunderstood as grist to the mill of methodological antireductionism in psychiatry.

Setting the Stage


Making use of Ayala's (1974) influential differentiation between reductionism (and the corresponding debates) on the levels of metaphysics (ontology), epistemology, and methodology, reductionist concerns vis-à-vis psychiatry primarily refer to the last level. As a practical science, psychiatry is mainly concerned with methods or strategies of preventing, alleviating, or curing mental disorders. These strategies in turn are interrelated with methods of properly explaining and diagnosing such disorders. In contrast, whatever psychiatrists or their critics hold on the level of ontology or epistemology seems relevant to psychiatric work (only) in so far as it determines outlooks on methodology—especially in interacting with patients and in treating their disorders. When it comes to the latter, matters of causation play the crucial role. And here, I urge, one should distinguish between two questions: (i) how mental dysfunctions (e.g., delusions, depression, mania, decrease in cognitive functions, etc.) are, or can at all be, “caused” by brain dysfunction; (ii) how the relevant systemic brain dysfunction is caused by neurobiological processes on lower levels—e.g., on the levels of circuits, cells, or genes.

The first question points to the central and perennial problem of the mind-brain debate and from there cuts through all of psychiatry (so also Kendler, 2008, p. 9). For these problems and questions, it ultimately does not matter whether we talk about healthy or disordered minds and brains. I do not know whether psychiatry might make a genuine contribution to solving these problems. Likewise, we most often do not know what proponents or critics of biological psychiatry hold in these matters. Beyond the shared views that the “mental realm,” disordered or healthy, is (a) both very real and very important to ourselves and (b) brain-based, there exist many conflicting views and intuitions. Key problems seem to be the questions of mental causation, agent causality, and free will. In this paper, Stier does not address them in their own right, but he suggests assuming full explicability of the mental in “purely physical terms” (p. 2).

The second question lies at the bottom of what mainstream neuroscience, and in fact life science in general, is doing today. Here, scientists successfully pursue reductionist strategies to account for certain biological phenomena by explaining them on a relatively lower level (circuits, nerve cells, synaptic spaces) and by isolating them from as many relevant background conditions as seems fruitful1. Here again, Stier is ready to accept—if only for the argument's sake—“that environmental influences, too, are explicable mechanistically” (p. 2). Making such (non-eliminative) reductionist assumptions on both questions, he rightly emphasizes that the truth of his “anti-reductionist claim” regarding the notion of mental disease does not depend on metaphysical or methodological anti-reductionism with regard to the mental.

A Partially Normative Concept of (Mental) Disease


Stier holds that “mental disorders cannot be completely reduced to neuronal or molecular processes” (p. 1). His justification for this “anti-reductionist claim” is the above-stated “main thesis,” which holds that in the field of neuropsychiatric disorders the borderline between health and disease is value-laden. Unable to argue for this in any detail, I wholeheartedly agree with the view that the concept of mental disorder is partly normative. Being mentally diseased means (or should mean) being in one or another dysfunctional and unwelcome mental state that thus should ideally be prevented or treated. Imprecise as these stipulated evaluative criteria and their originators are, I also agree with the view that individual and social value judgments cannot be read off from mere neurobiological facts2. We principally cannot tell, from the neurobiological facts alone, whether some functional neurobiological state corresponds to a mental disease or not. Rather, we can only do so within a partly evaluative background frame.

However, do these insights not hold true for diseases in general, for disorders within and outside psychiatry? For so-called somatic disorders, this might not always be as obvious as in the realm of psychiatric diseases. Take an infection that, if untreated, would rapidly lead to death without any other adverse symptoms. One might argue that premature death is a purely descriptive term independent of its being unwelcome to most people. But the same could be said about neuro-psychiatric disorders that lead to permanent coma or benign delusions. Even where single disorders, in the mental as well as in the non-mental sphere, seem explicable without recourse to values, the gist of the whole concept of disease refers to unwelcome malfunctioning (including the functions of living or being conscious) and can be traced, I think, in each of its sub-types. Unable to argue further in favor of a partly normative concept of disease on this occasion, let me at least emphasize that this is one of the standard views (often referred to as partial “constructivism”) in the contested field of theories of health and disease (see Murphy, 2008). The current tendency to blur or to give up the distinction between psychiatry and neurology could, by the way, be seen as yet another indicator of the non-exceptionalist status of mental disorders (see Perring, 2010).

The Medical Model


Stier refers to the “medical model” (MM) without giving a complete explicit definition. In the literature, MM is indeed a commonly used paradigm; it is seen, however, to allow for “minimal and strong interpretations” (Murphy, 2010, pp. 3–13). Stier's understanding of MM comes in pieces. On a purely descriptive level, it is said to stand in competition with psychoanalytical and other explanations of mental disorders (p. 7) and to draw a substantive parallel between body-environment interaction in the genesis of cancer and brain-environment interaction in the genesis of depression (p. 2). Critically, MM is accused of inadequately explaining psychiatric disorders: “psychiatric disorders [… ] may turn out to be purely physical if we adhere to the medical model [… ] and cease to be mental” (p. 8). But why should this be the case?

One possible answer could be MM's alleged tie to a value-neutral concept of disease. However, this is not only contested by many and with good reasons (see above), but also by Stier himself. He clearly admits that “[… ] it is not that bodily diseases are value-free whereas psychiatric disorders are value-laden. Both rest on normative assumptions.” But then he continues: “In one field we simply share them, in the other we don't” (p. 5). Both observations of the last sentence seem questionable: Quite a number of “bodily” conditions are contested with regard to their “diseasedness”—e.g., limited reproductive or sexual functions, moderately decreased hearing, or moderately diminished memory capacities in “normal aging.” Arguably, it is normative aspects that will determine demarcations. In any case, MM does not seem committed either to value neutrality in the concept of disease, or to the indisputability of the underlying values. On the other hand, value dissent in the psychiatric domain is by no means ubiquitous. After all, delusions, anxiety disorders, depression, or addiction do not appear very attractive, whether from inside or from outside.

Hence, contrary to Stier, MM should in my eyes be properly understood as rightly holding a thoroughgoing non-exceptionalist view toward the explicability of psychiatric disorders. This view indeed seems to be the mainstream position in neuroscience. It implies optimism with regard to neuroscientific contributions to diagnostic and therapeutic progress in psychiatry. But, again, it implies neither viewing the concept of psychiatric disorder as value-independent nor viewing the mental realm as eliminable by neurobiological approaches.

Psychiatric Diagnostics


Suppose you diagnose an individual patient with certain symptoms as suffering from mental disorder Z. In an idealized nutshell this presupposes: (1) a multi-dimensional demarcation between mental sanity and mental diseasedness, where those symptoms indicate disorder; (2) a taxonomy of specific psychiatric diseases, one of them called Z; (3) valid indicators and tests for Z; (4) positive testing for indicators of Z in the concrete patient. Each of these steps has its problems. But only (1) seems value-dependent in the way described by Stier, i.e., relative to human flourishing and human interests. With regard to (2) there is malleability and ongoing change in both the bodily and the psychiatric dimension of medical practice: fine-tuning and re-tuning according to some symptoms or other, to locations, or to (assumed) underlying causal paths. The main values that govern nosology are, I think, coherence and therapeutic success. (3) is, again, an ongoing process according to medical evidence, having repercussions on (2) and being governed by the very same values of coherence and therapeutic effectiveness. Finally, diagnosing a given patient should involve testing her according to the best available parameters, with the results of presuppositions (1) to (3) in the background. Hence, in psychiatry, a patient showing up with certain behavioral symptoms could conceivably be tested for neurobiological indicators, resulting in the diagnosis Z—without losing sight of the mental. Determining a mental disorder in this way is not guilty of any problematic reductionist credo.

The Inner Life of Psychiatric Patients


Granting potential causal relevance to a multitude of external influences, psychiatrists would finally be ill advised to look at brain function in isolation rather than in context. But turning external effects—e.g., psychologically stressful life events—into background conditions of pathogenesis does not imply neglecting their causal role. Nor does it imply ignoring the importance of preventing such adverse factors in the first place, or excluding psychotherapy from the agenda of psychiatry. Likewise, nothing in a methodologically reductionist approach to psychiatric research compels scientists or doctors to ignore or belittle the enormous importance of patients' conscious experiences. If such unfortunate “practical reductionisms” nevertheless occur, they can legitimately be neither ennobled nor criticized as a consequence of biological psychiatry.

From all we know and foresee, detailed knowledge about one's inner mental life needs first-person experience or, as a weak approximate substitute, third-person encounter. Listening to psychiatric patients' directly or indirectly describing their subjective experiences thus seems irreplaceable for assessing the subjective impact of mental disease as well as for an understanding interaction with patients. Nevertheless, using neurobiological tools for diagnosing and monitoring treatment might in principle be possible and helpful.

Summing Up

Stier holds that the classification of certain mental states as disorders is value-dependent and therefore cannot be read off from neurobiology. Contra Stier, however, this plausible view does in no regard discredit the medical model (MM) as “the one and only bedrock of psychiatry” (p. 1). Rather, MM is uncommitted to a naturalist theory of disease. As Stier himself admits, values can be seen as indispensable also in demarcating bodily diseasedness. Some of these diseases and values might be as contested as in psychiatry. MM's upshot is a non-exceptionalist view on the explicability of psychiatric disorders—and subsequently on their diagnostic and therapeutic in-principle accessibility on a biological level.

Finally, framing and selling a partially constructivist position regarding (mental) disease as an anti-reductionist view is both unusual and misleading. Affirming such constructivism should not be conflated with the common and problematic objections that blame biologically oriented psychiatry as metaphysically or methodologically reductionist. Yet another distinct problem might be an unfortunate practical neglect of patients' inner life within modern psychiatry. Such “practical reductionism” can and should be defeated within a neurobiologically oriented psychiatry.

Conflict of Interest Statement


The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. ^See Kaiser (2011) for a diligent analysis.
2. ^See Barker and Kitcher (2014), p. 70ff. for a defense of value invention.

References

Ayala, F. J. (1974). “Introduction,” in Studies in the Philosophy of Biology, eds Ayala, F. J., and T. Dobzhansky (Berkeley, CA: University of California Press), vii–xvi.
Barker, G., and Kitcher, P. (2014). Philosophy of Science: a New Introduction. New York, NY: Oxford University Press.
Kaiser, M. I. (2011). The limits of reductionism in life sciences. Hist. Philos. Life Sci. 33, 453–476.

Kendler, K. S. (2008). “Introduction: why does psychiatry need philosophy?” in Philosophical Issues in Psychiatry: Explanation, Phenomenology, and Nosology, eds K. S. Kendler and J. Parnas (Baltimore, MD: Johns Hopkins University Press), 1–16.
Murphy, D. (2008). “Concepts of disease and health,” in The Stanford Encyclopedia of Philosophy, ed E. N. Zalta. Available online at: http://plato.stanford.edu/entries/health-disease/ (Accessed March 25, 2014).
Murphy, D. (2010). “Philosophy of Psychiatry,” in The Stanford Encyclopedia of Philosophy, ed E. N. Zalta. Available online at: http://plato.stanford.edu/entries/psychiatry/ (Accessed March 25, 2014).
Perring, C. (2010). “Mental illness,” in The Stanford Encyclopedia of Philosophy, ed E. N. Zalta. Available online at: http://plato.stanford.edu/entries/mental-illness/ (Accessed March 25, 2014).
Stier, M. (2013). Normative preconditions for the assessment of mental disorder. Front. Psychol. 4:611. doi: 10.3389/fpsyg.2013.00611