Tuesday, 9 December 2014

Compare and contrast views about life after death in Hinduism and Buddhism

Hinduism and Buddhism are perhaps two of the oldest religions the world has seen.  But although their names might mislead us into thinking that they are two completely different religious systems, they do in fact share common ground.

Just as Judaic beliefs set the foundation for the subsequent development of early Christian thought, so too did ancient Hindu ideas and beliefs influence the later philosophy and thought of early Buddhism.  However, when it comes to the concept of an afterlife, the similarities between the two philosophies end.
This essay will compare and contrast views about life after death in two closely related bodies of Hindu scripture, the Vedas and the Upanishads, and in the Buddha’s own doctrine.  The first part of the essay will discuss Hinduism, while the second part will compare and contrast it with Buddhism.  It will conclude that although they share some fundamental similarities, they differ heavily on the exact nature of what life after death might mean.


Hindu ideas concerning life after death generally come from an ancient body of texts encompassing a range of hymns and rituals.  This scripture is called the Vedas[1], and its source is said to be the gods themselves.  A more likely theory, however, is that Hindu doctrine originated in Aryan culture[2].

Much of the Hindu belief in the afterlife, as described by Vedic scripture, can be said to be influenced by the caste system.  A caste is a system of social stratification in which individuals are born into a certain class.  Either they are born into a socially perceived low caste, such as the Shudra, a caste of service workers, or a higher one, such as the Brahmins, a caste of priests and teachers.  Once born into a caste, an individual’s movement within society is restricted to the chores and responsibilities of that caste.  Movement out of one caste and into another therefore becomes almost impossible.[*]

It can be said that the caste system has influenced Hindu thought on the afterlife in the way the Vedas offer a way out for those born into a lower caste.  Adhering to the rituals and offerings stipulated by the text provides one with a good chance of entering heaven – the world of the fathers.  If one does not keep up with the offerings, then not only does this jeopardize one’s chance of entry into this heavenly realm, it also jeopardizes the souls of the already departed.  According to scripture, one’s duty is to appease both the gods and the souls of the dead by providing a constant stream of offerings to them at specific times and events.  This ensures that those in the afterlife remain happy and content in the world of the fathers, while also appeasing the gods and preventing them from inflicting the living with disease and natural disasters.  The Vedic view of the afterlife is therefore of a permanent place where one’s soul resides after death.  Life here on earth is temporary, a place where one must make the correct preparations for one’s own death, while both appeasing the gods and ensuring that one’s ancestors remain in heaven.  Such offerings imply that the value of the afterlife is measured by a material standard.  This can be seen in the description of the world of the fathers as a place of fine foods and wine, and in the fact that the living provide offerings to the recently deceased to aid their journey into the afterlife.

Whereas following rituals and making offerings according to the Vedas appears a means-to-an-end approach to salvation, the Upanishads offer a more philosophical interpretation of the afterlife.  They introduce the concept of reincarnation: the idea that at death one is reborn as another human, or an animal[†], in an endless cycle of birth and death.  Our behaviour and action (karma) in this life, whether we act well or badly toward other human beings or animals, determines the life we take on next.  The aim, however, is to break free from this eternal chain of birth and death, and the only way of doing so is by coming, through meditation, to an inner realization that our soul, atman, is part of a bigger universal reality, Brahman, the Absolute, or pure consciousness[3].  Once one recognizes this, one is then led to the further understanding that “the self is indeed Brahman”[4].  It comes to pass that the self is indeed the creator – “he is the maker of everything”; “he is the world itself”[5].  The later texts go on to say that although “a person consists of desires, the man who does not desire goes to Brahman”[6].  Hence the afterlife is not a heavenly realm like the world of the fathers; in fact, it is not a realm at all, but a state in which one merges with an absolute mind[‡] after death.

Comparison and contrast with the Buddhism of the Buddha

In many ways, Buddhism can be described as a protest philosophy.  Its founder, Siddhartha Gautama, was not only born and raised within the Hindu traditions, which would have exposed him to the religious and philosophical sides of both Vedic and Upanishadic scripture; he was also nurtured and educated within an aristocratic and powerful family.  Such exposure to Hindu texts, coupled with his own experiences of living a rich lifestyle and his observations of the ‘real’ world, would have a significant effect on his rebellious thought and ideas.

In many ways then, the spirit of the times played a large part in the development of Gautama’s religious beliefs.  Yet it is his own personal experiences that influenced his beliefs about the world – especially those concerning the notion of an afterlife.  Taking on the Hindu ideas of desire and reincarnation from the scriptures, Gautama too saw life as an eternal cycle of birth, death and rebirth inside the world.  As long as one is reincarnated, one is forever trapped inside a life of constant struggle and suffering, since the world is essentially a place of suffering.  But whereas salvation is granted by way of a heavenly realm according to the Vedas, and, according to the Upanishads, by way of coming to the realization that one is part of an Absolute mind in God, Gautama instead contends that this cycle of suffering ceases when one reaches a higher state of consciousness called Nirvana, or enlightenment, by cutting out desire.  This state of consciousness is not a realm of residing gods like the world of the fathers - a realm to which our soul journeys after death.  Neither is it a place, or part of anything at all.  For Gautama, and subsequently for later Buddhist traditions, especially the Japanese school of Zen, the concept of an afterlife as stipulated by both the Vedic and Upanishadic texts is an inherent illusion.  Since the process of reincarnation shifts being from one life form to another, the notion that there is a soul that belongs to us, and has the ability to merge with God or travel to heaven, is a misconception.  Such a misconception leads us falsely to believe that there exists a me, in the form of a self or otherwise, that survives death and travels to a heavenly afterlife.

It can also be said, then, that the notion of even possessing a soul as something that is mine is itself an inherent illusion.  Although Gautama does not fully endorse the view that there is no reason to suppose the self survives death, his belief that every state of existence is temporary – as stipulated by his doctrine of impermanence – leads him to the belief that even if there is a heaven or God, such a place must also be temporary.  These ideas stand in stark contrast to both the Vedic and Upanishadic texts, which imply that the soul is permanent and contains a ‘me’ with the ability to journey to a heavenly realm.

The notion that Nirvana is a higher form of consciousness achieved through the realization that cutting out desire ceases suffering is very similar to the Upanishadic claim that one becomes Brahman when one realizes that one is part of an absolute consciousness.  But for Buddhists, enlightenment comes when one learns to detach oneself from the world of suffering, not through any metaphysical notion that we are part of something cosmic.  It is, then, a form of personal release from the world’s evils.  Whereas the Vedic texts stipulate that the world of the fathers is a release from the world, Buddhists take a self-conscious approach to release.  Life should not be lived in accordance with what scripture tells us to do.  Since Buddhists see their philosophy as a realistic doctrine – the world being a place of suffering and desire, as well as a place in which all living things share a temporary existence – it makes no sense to live one’s life believing that after death one could be sharing a seat in heaven with the gods.  Salvation does not come from the prospect of giving offerings to the dead or to the gods, nor do Buddhists think that our actions on earth aid our own, or our ancestors’, journey into the afterlife.  This is mainly because Buddhists reject any form of materialism on the ground of impermanence.

The Hindus’ concern for their ancestors’ souls marks another difference between the Hindu and Buddhist views of life after death.  That the afterlife is seen as a cosmic union, or as a place in which the living must tend to their dead ancestors, implies a strong communal bond between the living and the dead.  Buddhism, however, is a very individual philosophy.  Although Buddhists remain sensitive to those who have passed away, they are not primarily concerned with them.  It is the task of Buddhists to develop a state of mind that operates independently.  That means coming to a firm understanding that our own existence is an issue for itself, an existence that needs to be tamed, or controlled, until detachment from one’s self is achieved.

In conclusion then, although Buddhism borrows certain ideas from Hinduism, there remain some fundamental differences that make Buddhist belief in the afterlife different from that of Hinduism.  Firstly, the possibility of an afterlife in the form of heaven, or of a state of realization that we are part of a supreme cosmic union, is rejected on the grounds that everything is temporary and hence an illusion.  Secondly, the concept of the self is temporary, and hence does not contain a ‘me’ that might survive my death.  Thirdly, our actions in this life do not influence those who have already passed on, or aid them in the afterlife, since this would be to endorse an afterlife parallel to a materialistic realm – something the Buddhists wholeheartedly deny.  It seems, then, that whereas the Hindus believe in some kind of ‘realm’ that the soul enters or joins after death, the Buddhists reject any such realm, yet seem to accept that an afterlife of sorts exists as a higher state of consciousness.  But whether we can interpret this as ‘life after death’ remains debatable.


[*] One exception is by marriage.
[†] Or, in some traditions, a tree or plant.
[‡] Perhaps this philosophy influenced Hegel’s work The Phenomenology of Spirit, in which Hegel holds that human civilization is progressing toward the realization that we are part of an Absolute mind.

[1] Moreman, Christopher, Beyond the Threshold: Afterlife Beliefs and Experiences in World Religions (Plymouth: Rowman & Littlefield Publishers, Inc, 2010), p. 97
[2] ibid. p.98
[3] Swami Muni Narayana Prasad, Karma and Reincarnation (New Delhi: D. K Printworld, 1994), p.22
[4] Feibleman, J., Understanding Oriental Philosophy (New York: First Meridian Printing, 1984), p. 15
[5] ibid. p.15
[6] ibid. p.15

What do theories of psychological development need to explain?

In her book Theories of Developmental Psychology, Miller notes that an ideal theory of psychological development aims at forming a coherent story from the onset of infancy to old age (Miller, 2002).  In order to form such a story, Miller argues that four key questions influence how we ‘build’ theories of human development: the question concerning the basic nature of humans; whether development is a qualitative or quantitative process; how nature and nurture combine to drive development; and finally the nature of what develops itself.  This essay will compare Freud’s theory of psycho-sexual development with evolutionary theory, analyse how both theories attempt to explain psychological development in light of Miller’s four points, and conclude with a brief evaluation of them before suggesting what they need to explain further.


Freud’s central thesis of human development hinges on what he called ‘a sort of economics of nervous energy’ (Jones, 1953).  For Freud, much of the developmental process is largely determined by the desires of the libido, the energy of Eros, or sexual instinct, which constitutes part of an inherent biological drive that cries out to our psyche for immediate satisfaction.  These inner desires form the largest part of the unconscious mind, known as the Id, the darkest, ‘inaccessible part of our personality’ (Freud, 1933a).  Since young infants have not fully developed a conscious appreciation of the world, through maturity they learn to deal with their sexual energies by controlling the Id through parental training and the subsequent development of the ego.  Freud proposed a stage theory of development to show how, at various stages in a human’s life, sexual energy is directed to a certain part of the body (the oral, anal, phallic and genital zones), and reasoned that human development depends on how we resolve conflict within these areas.  For example, Freud proposed that libidinal energy first invests itself during infancy in the oral erogenous zone (Miller, 2011) and claimed that fixations can occur if the preferred object, such as the nipple, is absent or withdrawn early, which can result in detrimental development.  How infants learn to deal with conflicts such as these therefore determines the basis of the personality expressed later through the unconscious.

Freud’s theory attempts to explain development through a series of disturbances hinged on the notion of sexual energy that targets different body parts.  Personality development will occur whether or not we have successfully passed each conflict as determined by its corresponding stage, so we can only develop in ‘degrees’ of detriment.  There is no such thing as ‘normal’ development since, according to Freud, the unconscious mind retains the ability to leak out repressed thoughts and feelings from the past even if each stage of development has been successfully passed.  Problems only occur when we have not learned to handle these conflicts sufficiently, resulting in anxiety-related problems, stress, and of course Freud’s favourite, neurotic and compulsive disorders.  Thus Freud sees the human being as an organism that spends its lifetime trying to balance conflicting unconscious sexual drives present from an early age.

Freud’s assumption that development is stage-like implies that human development is predominantly qualitative, since the nature of sexual energy changes from location to location within the body.  At the same time, the notion that infants come to learn to control these impulses implies that, on Freud’s conception, the ego and superego strengthen over time.  This indicates a quantitative change.  So Freud’s theory explains the nature of development as both qualitative and quantitative.

Freud’s position on the question of nature versus nurture is very much in line with modern-day theories; namely, that there exists a complex interaction between our biological predispositions and the environment.  This interaction is expressed in the notion of biological sexual energy being tamed by external factors, predominantly parental rules.  The result of this interaction is expressed through degrees of anxiety-related behaviour such as neurosis and obsessive-compulsive disorder.  Though Freud held that interaction takes place between biology and the environment, his theory does little to explain the intricate mechanisms of how and why this interaction takes place.  This is no fault of Freud per se, since he took no real scientific interest in this area.

The question concerning the exact nature of what develops in Freud’s theory is largely a question about the ‘mind’.  Freud argued that over time humans develop the Id, ego and superego (in that order) to tackle the inherent sexual conflicts driven by the Id and the Oedipus complex (the infant’s desire for its mother).  Thus what primarily develops, according to Freud, is one’s emotional states and their associated thought patterns (Miller, 2011).


The ethological view of human development, like Freud’s theory, emphasizes the importance of genetic predispositions; unlike Freud’s theory, however, it takes into consideration that such genetic predispositions form the basis of behaviours that are the product of thousands of years of evolution.  As a result, certain behavioural traits observed in humans, as well as in many other animal species, are selected because such behaviours are assumed to promote that species’ chances of survival.  In terms of development, then, the theory aims to explain how behavioural traits, such as the attachment process between infant and mother, carry an intrinsic value that not only sets the groundwork for healthy development later, but also increases the chances of survival and hence the reproduction of the genes.  In light of the ethological approach, attachment theory (Bowlby, 1969) suggests that infants are wired, or pre-programmed, to seek out a secure attachment to their primary carer for protection, food and comfort.  The mother, in theory at least, is also programmed to respond to the child’s needs, which results in a harmonious attachment between infant and mother that aids future development and promotes the infant’s chances of survival.

How the theory attempts to explain the basic nature of human beings depends largely on which aspect of evolutionary theory one approaches.  For example, if one takes Bowlby’s idea that humans are animals that constantly seek a parent or mate, then this view implies that humans are an organismic entity (similar to Freud’s theory).  Lorenz, on the other hand, saw the basic nature of humans as more of an automatic response to stimuli, as seen in his study of imprinting with goslings.  Despite these contrasting views, any evolutionary theory of human development must take into account the importance of our ancestral past.  Since what we are is the result of thousands or even millions of years of constant adaptation to our environment, one can conclude that the basic nature of humans is no different from the basic nature of other animals.

Whereas Freud postulated that human development rests on the successful completion of stages, evolutionary theory rejects any form of stage development.  But this does not imply that development is not a qualitative process, since one can argue that from our ancestral past to modern-day humans, behaviours have certainly changed.  Whether this change is continuous in the sense of making progress, however, is impossible to answer.  Regardless, behavioural changes of one kind or another appear to have occurred, for better or worse.

As with Freud’s theory, evolutionary theory also explains behavioural development in terms of complex interactions between genetic predispositions and environmental stimuli.  What appears to be a complex issue, however, is how the theory explains such an intricate interaction.  Is it the case that environmental stimuli somehow select and dictate how genetic mutations occur?  Or does the theory place emphasis on the genotype as the entity that predominantly uses the environment to create such change?  One might conclude that behavioural changes constitute an equal share of the variance between genotype and environment, but the theory would still need to explain in detail how this share of the variance explains subsequent development.  However this interaction takes place, the result seems to indicate that change in both behaviour and thought has taken place over thousands of years.

There is great difficulty in determining what exactly develops according to evolutionary theory.  On the one hand, one can suggest that it is the genotype, with its propensity for interacting with the environment, that is the thing that develops.  This would give us a very general view of development, since it would account for all species (Miller, 2002).  On the other hand, it would explain next to nothing when it comes to species-specific development.  In which case the question posed is: what develops in terms of the human species alone?  This is a hard question to answer, since the theory would need to take into account important individual differences among humans of all societies and cultural backgrounds.  Perhaps a rough answer is to suggest that what develops is not reducible to a single entity or thing, but is rather a whole range of things encompassing our evolutionary past, our current environment, and our genetic predispositions.

To conclude, both theories share a number of similarities – for example, an emphasis on genetic predispositions and the view that the relation between nature and nurture is one of interaction.  Indeed, given Freud’s theory of biological drives and sex, one can argue that he too borrowed heavily from evolutionary theory.  However, if one asks what Freud’s theory needs to explain, the answer is that it needs a proper explanation of how and why our so-called primitive thoughts clash with social norms and parental teachings, and indeed why this leads to obsessive, neurotic, or anxious behaviour.  What evolutionary theory needs to explain, on the other hand, is how and why such small, intricate steps over thousands of years generate behavioural change in the first place.  One answer suggests an interaction between nature and nurture; but, as previously mentioned, the theory would then need to explain coherently the degree of variance between our genotypes and the environment.


Bowlby, J. (1969). Attachment and Loss. Vol. I: Attachment. London: Hogarth

Freud, S. (1933). New Introductory Lectures on Psychoanalysis (Penguin Freud Library 2), pp. 105-6

Jones, E. (1953). The life and work of Sigmund Freud. Vol I, the formative years and the great discoveries 1856-1900. New York: Basic books

Miller, P. (2002).  Theories of developmental psychology. New York: Worth Publishers

Monday, 8 December 2014

What are the salient personality trait models and how do they contribute to the understanding of individual differences in personality and psychopathology?

Personality, like many areas in psychology, has been discussed as far back as the ancient Greeks.  Most notably, it was Hippocrates who devised the very first theory of personality by assuming that fluids within the body were responsible for differences in behaviour; for example, levels of phlegm indicated degrees of calmness.
This theory, although incomplete, has provided the foundations of modern research into personality. Contemporary personality trait models discussed in this paper are Costa & McCrae’s (1991) Five Factor model and Eysenck’s (1992) Giant Three model. 
This paper will first define personality and personality traits before describing each model.  This description will identify each trait, and discuss how it is defined and how it was initially discovered.  It will then attempt, as far as feasible, to describe their differences and evaluate their strengths and weaknesses in a manner which analyses their contribution to understanding individual differences and psychopathology.  Fundamentally, this paper will argue in favour of the Five Factor Model, but will also contend that it is a limited model for describing the totality of personality.

What is personality?  There is a general consensus that personality may be defined as the sum total of behavioural and mental characteristics that persist over time (Colman 2006).  A personality trait is therefore a single dimension of personality among a finite group of dimensions.  For example, Neuroticism is a single dimension, or trait, that describes how emotional an individual is.
A personality trait model is a psychological model that attempts to amalgamate all traits into a universal system of personality.

The first personality model is Eysenck’s (1992) Giant Three model.  This model states that there are three basic traits that everyone can be classified under.  These are Extraversion (E), Neuroticism (N) and Psychoticism (P). Extraversion determines how the individual relates to the outside world; for example, whether they are outgoing or social.  Neuroticism determines emotional stability; for example, how sad or embarrassed individuals are.  Lastly, Psychoticism reflects a dimension that stretches from psychological normality to psychotic disorders such as schizophrenia (Haslam, 2007).
Much of Eysenck’s model relies heavily on a biological approach to explaining personality, with genetic research (Plomin 1986) and cross-cultural twin studies (Eaves et al 1989; Martin and Jardine 1986) indicating that levels of E, N and P have a biological foundation, implying that individual differences are partly predetermined.  If such traits are biologically imprinted in our genes, it makes sense, Eysenck thinks, to assume that they are basic dimensions of personality.

The second model of personality is the Five Factor Model (Costa and McCrae 1991).  This model describes five basic dimensions of personality: Openness (O), Conscientiousness (C), Extraversion (E), Agreeableness (A) and Neuroticism (N).
Openness reflects how open the individuals are to new experiences, which also indicates how intellectually curious they are.  Conscientiousness reflects how they approach certain tasks: for example, how motivated or organized they are.  This could be a very good indication of how one might approach deadlines at work or school.  Agreeableness reflects interpersonal skills: for example, how warm and cooperative they are.  Scores here can determine their willingness to help.  Extraversion is defined in the same way that Eysenck’s model defines it, but Neuroticism is defined as emotional stability and personal adjustment (Costa and McCrae 1992): that is, how well an individual adjusts to certain situations.
Most of the traits identified by Costa and McCrae were initially found using the lexical hypothesis: the idea that words in the language, such as kind and considerate, can identify personality traits.  Initially, some 4,500 trait terms were identified in Allport and Odbert’s work; with the use of factor analysis, these were reduced to sixteen factors (Cattell 1970) and then, more recently, to five.
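This reduction from thousands of trait terms to a handful of factors can be illustrated in miniature.  The sketch below is purely hypothetical: it generates fake self-ratings on a dozen invented trait terms from two underlying latent factors, then applies the core of factor analysis, an eigenvalue analysis of the correlation matrix with the Kaiser criterion (eigenvalue > 1), rather than the specific methods and rotations used in the actual lexical studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 people rate themselves on 12 invented trait
# adjectives.  The ratings are generated from just 2 latent factors,
# mimicking how thousands of lexical trait terms can reduce to a few
# underlying dimensions.
n_people, n_terms, n_latent = 1000, 12, 2
loadings = rng.normal(size=(n_terms, n_latent))   # how each term loads on each factor
factors = rng.normal(size=(n_people, n_latent))   # each person's latent factor scores
ratings = factors @ loadings.T + 0.3 * rng.normal(size=(n_people, n_terms))

# Factor extraction in miniature: eigenvalues of the correlation matrix.
# The Kaiser criterion retains factors whose eigenvalue exceeds 1.
corr = np.corrcoef(ratings, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int(np.sum(eigenvalues > 1))

print(n_factors)  # far fewer retained factors than the 12 observed terms
```

The point of the sketch is only that correlated surface terms collapse onto a much smaller set of dimensions, which is the statistical logic behind moving from 4,500 terms to sixteen factors and then to five.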

There is constant debate as to which model most accurately identifies a trait as basic.  The main disagreement centres on the factor P.  Eysenck claims that A and C are merely components of P, whereas Costa and McCrae believe P to be a combination of low A and low C (Costa & McCrae 1992c), having noted that psychoticism tended to be found in individuals low in both.  It is very difficult to argue that A and C are not important factors in understanding personality and individual differences; the difference of opinion is over whether researchers consider P, or A and C, to be the basic factors.  Eysenck’s model appears the more parsimonious and, hence, the more scientific model for describing the basic factors of personality, although P remains its weakest link (Bishop 1977; Block 1977a, 1977b).  One difficulty is perhaps P’s obscure nature and how it relates to other factors of personality (Zuckerman 1989; Claridge 1981).  Splitting P into A and C, as Costa and McCrae do, allows a wider scope to be addressed.
That said, however, both models can contribute only a little towards our understanding of P.  This is mainly because the DSM-IV takes a medical approach to diagnosing psychotic abnormalities such as schizophrenia or depression, essentially because such abnormalities have an underlying biological foundation that needs to be taken into consideration.  For example, there is a growing consensus that schizophrenia is a genetic disorder, while other categories of P, such as OCD, stress and depression, are heavily influenced by dopamine and other chemical imbalances in the brain.  Such strong biological evidence indicates that personality models cannot explain such abnormalities per se; without medical examination it is impossible to make accurate diagnoses.  But the Five Factor Model has an advantage over Eysenck’s Giant Three in that it can measure psychotic traits by correlating A and C, something Eysenck’s model cannot do.  Doing so allows researchers to assess psychotic criteria in a variety of contexts; for example, to examine prison inmates’ attitudes and tendencies in order to review their eligibility for release, or to assess psychiatric clients as part of a general check-up or an admissions procedure (Holden & Troister 2009).
In this respect the model can contribute to our understanding of psychopathology and help us to make important, life-changing decisions.
Further studies have now shown that even personality disorder can be predicted from the Five Factor Model in clinical samples (Reynolds & Clark 2001; Ball et al 1997; Blais 1997).  Additionally, Miller et al (2001) found evidence for personality disorder as an extreme variant of the common dimensions of the Five Factor Model, a claim previously hypothesized by Widiger & Lynam (1998).
If we are to take these results as conclusive, the evidence suggests that the Five Factor Model, combined with other forms of testing, is rapidly becoming a more accurate model for identifying psychopathological disorders and individual differences while making positive contributions in the field.
However, there is a problem.  The Five Factor Model does not tell us why some people might be more psychotic than others; all it tells us is that certain factors, such as A and C, and at times N, contribute to psychopathological behaviour.  This is why the field of biology can, potentially, provide a deeper insight into the nature of P as a whole.  The advantage of the model, though, is that it allows us to see potential triggers for a variety of psychological abnormalities as well as individual differences.  Here a diathesis-stress model has been suggested, which might, in the near future, be able to explain the link between biological predispositions and environmental stressors via the Five Factor Model or variants of it.

In order to assess both psychopathological and individual differences effectively, both models adopt the questionnaire as a form of self-assessment. 
Eysenck’s model adopts the Eysenck Personality Questionnaire (EPQ), while Costa and McCrae adopt the Revised NEO Personality Inventory (NEO PI-R).

The EPQ includes one hundred yes/no questions and also a lie scale to detect discrepancies between answers.  There is evidence that this questionnaire produced reliable results for two of the three traits, E and N (Francis et al 2006), but unreliable results when measuring P; commentators have also noted this unreliability (Maltby 2007).  The indication is that if such results vary there might be discrepancies in Eysenck’s definition and measurement of P, which is partly a matter of interpretation; for example, Eysenck believed that geniuses and psychotics share a divergent style of thinking when trying to find solutions to problems (Eysenck 1995), whereas others, such as Maslow and Rogers, disagree, maintaining that such thinking is the result of optimal health (Simonton 1994).  Such difficulties of interpretation, coupled with a string of unreliable results, mean that the inaccuracy in measuring P could well detract from the EPQ’s contribution to our general understanding of P.

Another weakness of the EPQ, including its revised editions (EPQR-S), is that yes/no questions give respondents little scope to answer appropriately.  Consider the item “I consider myself talkative, entertaining, and the life and soul of a party”.  With only ‘yes’ or ‘no’ as possible answers, how can one respond accurately?  It is possible to answer ‘yes’ to the first part of the item – I consider myself talkative and entertaining – while not considering myself “the life and soul of a party”.  The same can be said of the item “I am miserable at times although I cannot really explain the reason for my misery”: sometimes we do know the reason, at other times we do not.  It is therefore difficult to say, with any degree of accuracy, how well the EPQ contributes to our understanding of individual differences, since participants’ responses are limited solely to agreement or disagreement.

The NEO PI-R questionnaire differs significantly from the EPQ.  The most fundamental differences are that it contains no yes/no questions and that it measures five traits rather than three.
Furthermore, the NEO PI-R adopts a rating scale from one to five, signifying varying degrees of intensity in a participant’s feelings toward a particular item.  This gives participants much wider scope in answering than the EPQ; because of this, it is possible to analyse the extent to which participants are open or neurotic, allowing researchers to assess individual differences between participants more accurately than the EPQ does.
Firstly, therefore, the NEO PI-R is a more accurate form of self-assessment than the EPQ, with the capacity to increase our understanding of both individual differences and psychopathology, since it allows researchers to see how factors vary between individuals.  Secondly, as more self-assessments are carried out with the NEO PI-R, emerging patterns can determine which factors are most associated with psychopathology and which with personality; a measurement of N, for example, might tell us something about particular individuals and their susceptibility to depression.  The NEO PI-R therefore provides a better indication of individual differences in psychopathology than the EPQ does.
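As a rough illustration of why a differential rating scale yields graded scores where yes/no items cannot, the following sketch scores a short Likert-style scale.  The items, the reverse-keying and the numbers are invented for illustration and are not taken from the actual NEO PI-R.

```python
# Minimal sketch of Likert-style scoring (all item content invented):
# each item is rated 1-5; reverse-keyed items are flipped, then summed.
def score_scale(ratings, reverse_keyed=()):
    """Sum a list of 1-5 ratings, reversing items whose index is in reverse_keyed."""
    total = 0
    for i, rating in enumerate(ratings):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must lie between 1 and 5")
        total += (6 - rating) if i in reverse_keyed else rating
    return total

# Two hypothetical scorings of the same four answers:
print(score_scale([5, 4, 4, 5]))                        # 18 of a possible 20
print(score_scale([5, 4, 4, 5], reverse_keyed={1, 3}))  # 12 once items 1 and 3 are reversed
```

A respondent can thus land anywhere on a numeric range rather than at one of two poles, which is what allows degrees of, say, openness or neuroticism to be compared between individuals.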

Indeed, although the Five Factor Model is generally accepted among psychologists, this is not to say it is a perfect theory of personality.
Some argue that the model omits potentially important factors such as honesty and humility; cross-cultural studies have suggested evidence for these factors, which, it is argued, should therefore be included in the Five Factor Model (Ashton et al 2004).  Larsen and Buss (2002), meanwhile, argued that attractiveness should also be an important feature of the model.  This has led some researchers to devise a seven-factor model of personality (Tellegen 1993).  Although such inclusions are a welcome improvement, the Seven Factor Model at present lacks supporting evidence in comparison with the Five Factor Model.

To summarize, the Five Factor Model is a more accurate indicator of individual differences in psychopathology for the following reasons.  Firstly, the ambiguity in the factor P is dealt with: breaking P down into A and C allows a wider scope of measurement which, in turn, provides a better understanding of which factors contribute to certain types of behaviour.
Secondly, the general adoption of the NEO PI-R, with its differential rating scale, provides a more accurate way of determining individual differences in psychopathology.  Indeed, its accuracy of assessment has been such that the Five Factor Model has been useful in describing pathological personality conditions (Goldberg 1993): in one study the NEO PI-R significantly predicted 12 out of 13 personality disorders (Reynolds and Clark 2001).

Such strong evidence is an indication that the Five Factor Model, although incomplete, makes a significant contribution to our general understanding of individual differences in personality and psychopathology.

Explain Hume’s idea of causation: if causation is just in the mind, why does it seem to function in the real world?

This paper deals with Hume’s idea of causation and its function in the real world.  Section I will give a brief summary of Hume’s idea of causation.  In Section II, I will analyse how this concept of causation functions in the real world, where ‘real world’ means a world that exists independently of the mind.

I will argue that if we accept that causation is just in the mind, it functions somewhat dualistically; that is, causation can only seem to function with the aid of this world.  However, because ‘seems to function’ does not certify that causation functions consistently, this paper will conclude that there is a semantic, and perhaps epistemological, problem with the word ‘seem’, and that, faced with the initial question, causation can only ever seem to function.

Hume’s idea of causation is an attack on the traditional metaphysician’s account that causes are necessarily connected to their effects.  Necessity implies a certain power behind causal events; that is to say, should A cause B, there is a necessary connection between them.  But Hume is not happy with this idea.

For Hume, there are no a priori knowledge claims about causation: we simply cannot experience necessity within events.  Instead we can only base our claims on empirical knowledge; that is, experience of the world around us.  There is nothing in the world that we can imagine or experience without the aid of the external world: ‘ideas are images of our impressions’ (T 1.I.23).

Our ideas can only be derived from the world around us and are copied into our minds.  How can one know the taste of a pineapple without actually tasting one (T 1.I 40)?  This is a powerful argument by Hume – we simply cannot.

So, by rejecting the idea that a power – a necessary connection – exists behind objects, all we experience are the objects unfolding before our senses in what seems to be constant conjunction.  We observe the succession of objects, but we cannot observe their power or energy; we simply cannot see fire ‘causing’ wood to turn to charcoal.  All we see are two separate events: the fire, followed by the change in the wood’s appearance when the two come into contact.  But this connection is not something we come to know from a single observation.  It is only on multiple observations of fire coming into contact with wood that we learn by ‘custom and habit’ (E 5) that there is a connection between them.  So every time we observe fire and wood, we make the connection that fire causes wood to burn.  The cause, then, is in the mind:

‘the idea of the one determines the mind to form the idea of the other, and the impression of the one to form a more lively idea of the other’ (T 1.III.14)

It is through objects regularly displaying a pattern of events that we are trained, the more we become acquainted with those events, to infer a connection between them, and hence a cause.  It is on the basis of past instances of observing such patterns that we presume the same connection to hold the next time we observe similar objects.  Hence Hume defines cause as a form of regularity[1]:

‘an object, followed by another, and where all the objects similar to the first, are followed by objects similar to the second’. (E 9)


If causation resides in the mind, why does it seem to function in the real world?  Hume’s definition of cause is twofold.  Firstly, it implies the world functions in ways that appear regular; for example, we observe that a rapidly moving car hitting a person standing in the street knocks them down.  Secondly, our mind forms a connection between these two events, and we assume that in the future all rapidly moving cars will knock people down.  Causation then functions somewhat dualistically; that is, we form the belief that A causes B by looking at the regularities around us.  In this sense, without the external world functioning the way it seems to, we would not be able to apply a causal relationship.

But Hume’s idea of causation is problematic for a number of reasons.  Firstly, it implies that events which appear regular are causally linked, because ‘it determines the mind’ to form the idea that they are necessarily connected.  However, there are cases where events appear regular but are not causally linked at all.  Thomas Reid’s point that day follows night and night follows day is a classic example: the two are regular in their conjunction in space and time, but day does not cause night, nor night day (Reid, 1788).  Certain events in the world can therefore mislead us into believing one to be the cause of the other; in such cases our idea of causation is dysfunctional when applied to the real world, because the events are not causally linked[2].  Yet Hume’s definition tells us that should we observe regularity in constant conjunction, we should treat the connection as causal.

Secondly, if we learn by ‘custom and habit’ that similar events in the past have produced similar effects, then we can only infer a certain probability that the same effect will occur in future instances.  Grey clouds in the past are associated with the fall of rain, but this is not to say that rain will fall every time we see a grey cloud.  So if causation functions in the real world, it does so through our propensity to experience something as most likely to happen; we can only assume, to a degree, that because something has occurred in the past it will follow in the future.  Our minds are basic machines compared to nature as a whole: we assume, subconsciously, that the chances of certain things occurring are probabilistic, and we innocently take a leap of faith when we come to judge the real world – though in many everyday cases we are correct in our inferences.  In a sense, then, causation functions relatively blindly in the real world, for we can never prove with certainty that a similar cause will produce a similar effect; but, paradoxically, it happens to function with a good deal of accuracy.

On reflection of these points, causation seems to function in the real world as a form of scientific explanation.  Regularities that appear in the real world provide, to a certain extent at least, rules of nature which only the mind can conceive and interpret as laws.  These laws can then be formulated into equations of some kind in order to explain the cause of something.  For example, Newton’s second law of motion, F = ma, states that the force acting on an object is equal to its mass multiplied by its acceleration[3].  The crux here is that the force cannot be obtained without initial observations of mass and acceleration; multiplying the mass of the object by its acceleration gives us the force for most, if not all, objects in motion.  But it is only the mind that provides an explanation of the relationship between force and its constituent parts (mass and acceleration).
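To make the point concrete, here is a worked instance of the law just cited (the numbers are invented purely for illustration):

```latex
F = ma, \qquad \text{e.g. } m = 2\,\mathrm{kg},\quad a = 3\,\mathrm{m/s^2}
\;\Rightarrow\; F = 2 \times 3 = 6\,\mathrm{N}
```

The observations supply the mass and the acceleration; on the Humean reading sketched above, it is the mind that frames the law connecting them.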

It is in this sense, then, that, should causation reside in the mind, it functions because it attempts to explain how nature operates; likewise, it frames laws that can back up our claims with evidence from the real world.

But the question then is: do we really invent these laws, or do we discover them?  When we say we are discovering laws, we are saying that we find them ‘inside’ nature; the rules that describe events taking place are intrinsically embedded within her, waiting for our minds to come across them.  However, should causation dwell only in the mind, its function must be to create laws based on nature.  It is because only the mind can ‘glue’ events together, to a certain degree of accuracy, that its function is justified.  Causation then seems to work because we infer physical laws, initially interpreted from the phenomena of nature, that are more or less accurate in explaining why certain events happen.

A key problem impinging on this issue, then, is a semantic one: in the initial question, what exactly is meant by the word “seem”?  In the context of causation the word means that causation appears to function in most cases, but there are times when it does not; Reid’s argument about the regularity of day and night is one example.  Furthermore, with regard to science and explanation, the laws we apply to nature only ‘seem’ to function at this point in time, until they are refuted or improved upon.  In a sense, then, we are trapped inside a world of seeming.

In analysing this point it could be argued that the question – “if causation is just in the mind, why does it seem to function in the real world?” – is misguided or misplaced, since it is precisely because causation “seems” to function in the real world that Hume maintains it to be in the mind.


Strawson, G. (1989), The Secret Connexion: Causation, Realism, and David Hume, Oxford: Clarendon Press

Hume, D. (2007), A Treatise of Human Nature, ed. D.F. Norton and M.J. Norton, New York: Oxford University Press

Hume, D. (1977), An Enquiry Concerning Human Understanding, ed. E. Steinberg, 2nd edition, Indianapolis: Hackett Publishing Company, Inc.

Rosenberg, A. (2005), Philosophy of Science: A Contemporary Introduction, 2nd edition, London: Routledge

Reid, T., Essays on the Active Powers of Man, in Beanblossom and Lehrer (1983); cited in M.J. Loux (2006), Metaphysics: A Contemporary Introduction, 3rd edition, Oxon: Routledge

[1] However, some philosophers, such as G. Strawson (1989), disagree that Hume advocates a regularity theory.
[2] It could be argued, however, that day and night are not entirely separate events; the connection between them is due to the sunrise, which is only something we observe as ‘rising’ – the real “cause” is the motion of the earth.  Day is just the appearance of light given by the sun, and night is its absence; day and night are conditioned by the sun’s light.  Therefore day causing night is not entirely false, as both are controlled by the earth’s rotation and the sun.
[3] A similar argument is provided by Rosenberg (2005), chapter 2, p. 26.

What are the nature, causes and treatments of Schizophrenia?

Schizophrenia is a complex illness: wide in its nature, debatable in its causes and varied in its symptoms.

In terms of its nature, psychologists attempt to explain its symptomatology, its distribution across the population (male or female, rich or poor), and its age of onset.  In terms of causes, proposed factors are either environmental (such as expressed emotion in families) or biological (such as genetic heritability).  Lastly, in terms of treatments, there is debate as to whether psychological treatment methods are more effective than the traditional pharmacological ones (drugs).

This paper will look at some of the arguments in each category, assess their strengths and weaknesses, and conclude, firstly, that no single underlying explanation accounts for the nature and treatment of the disorder, and secondly, that there is strong evidence suggesting that Schizophrenia is a genetic disorder.

There are two groups of symptoms generally identified when diagnosing Schizophrenia: positive and negative.  Positive symptoms entail the presence of something unusual, not observed in people with a normal psychological disposition, such as hallucinations, either visual or auditory (Sartorius et al 1974).  Other positive symptoms include delusions, disorganised thought processes and, in some cases, catatonic behaviour (the holding of a strange, statue-like position for hours).

Negative symptoms, on the other hand, correspond to the absence of something normal; a common observation is flat affect, a reduction or absence of appropriate facial expression such as smiling.  Other negative symptoms range from the absence of social skills to the absence of life skills in general, making relationships and jobs difficult to sustain.

For an official psychiatric diagnosis, DSM-IV-TR criteria require two or more positive or negative symptoms, together with six months of disturbance.  However, it is important to note that symptoms vary amongst individuals, often making diagnosis difficult.

Psychologists have come to a general consensus that the onset of Schizophrenia occurs at approximately adolescence; however, research has found symptoms in infants as young as three years of age (Russell et al 1989), which may imply a genetic significance in its etiology.  Findings in this area are difficult to interpret, though, because Schizophrenia in early childhood is very rare (Burd and Kerbeshian, 1987), so they cannot be accurately validated.

Further studies have found Schizophrenia to affect approximately 1% of the general population, or about 1 in 125 (although this is not the case in all countries).  Other studies have found that males are more prone to Schizophrenia than females (Hambrecht et al 1993), leading to research in predominantly male environments, with other work focusing on the biological aspects of gender.

Psychologists and psychiatrists have therefore endeavoured to find its primary cause.  There are no definite answers but, broadly speaking, proposed causes can be grouped into environmental and biological factors.

Environmental factors focus on the individual’s world as a possible cause of Schizophrenia.  Such explanations have led researchers to study parental attachment, emotions and communication styles amongst family members.  Gordon Parker, for example, used a simple questionnaire about patients’ mothers and fathers and found that people with Schizophrenia whose parents’ style was perceived as neglectful or affectionless tended to have high relapse rates.

Another area psychologists have been concerned with is the amount of expressed emotion (EE) and the quality of communication in families with a relative who has Schizophrenia.  Broadly speaking, high expressed emotion and poor inter-familial communication have been related to higher relapse rates (Brown et al 1972).  In support of this, Norton et al (1982) designed a longitudinal study assessing EE in 52 patients; by studying voice tone and negative content in communication, they found that high EE was very accurate in predicting relapse.  Furthermore, Doane et al (1981) discovered that parents with a pathological style of communication and high levels of communication deviance produced schizotypal behaviour in their offspring.

There are, however, some difficulties with these studies.  Firstly, both samples contained people who already had Schizophrenia; the studies therefore do not explicitly explain cause, but rather the effect of living in stressful environments.  Secondly, there is evidence to suggest that high EE does not necessarily contribute to Schizophrenia (Parker et al 1988), implying individual differences in reactions to stress.

In addition to these environmental factors, there appears to be stronger evidence from a biological perspective.  Biological factors attempt to define Schizophrenia as having an underlying physiological cause.  For example, observations made during the menopause suggest that an imbalance of certain chemicals can influence the symptoms of Schizophrenia (Mitchell 1974).  Other studies have focused on the brain and noticed differences in ventricle size, with people with Schizophrenia tending to have larger ventricles (Nopoulos, Flaum & Andreasen, 1997), while others have noted excessive dopamine levels.  However, the etiology of these factors remains controversial: a diathesis-stress model may explain them, but, on the other hand, there may be innate biological predispositions.  This leads on to a very heavily researched area in the etiology of Schizophrenia: genetic inheritance.

The most effective way of testing for genetic predispositions has been to compare identical and non-identical twins.  In a large study, Cannon et al (1998) used the Finnish National Population Register, compared 9562 like-sexed twin pairs, and found an 83% concordance rate for Schizophrenia amongst identical (monozygotic) twins but only 17% for non-identical (dizygotic) twins.  In his book Schizophrenia Genesis, Gottesman (1991) compared four twin studies across Europe between 1963 and 1987 and found an average concordance rate of 48% amongst monozygotic twins, in comparison to 17% for dizygotic pairs.  Both studies add weight to the argument that Schizophrenia is influenced by genetic factors.  However, psychologists point out that because concordance rates are not 100%, other explanations, such as environmental factors, must also be responsible.  Alternatively, others may point out that Cannon’s participants came from a population more prone to Schizophrenia; as he himself rightly noted, prevalence rates in Finland are 1.3% as opposed to 1% in the UK.
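For readers unfamiliar with the statistic, a pairwise concordance rate of the kind reported in these twin studies can be computed as follows; the data below are invented for illustration, not taken from Cannon or Gottesman.

```python
# Illustrative sketch: pairwise concordance among twin pairs (toy data).
def concordance_rate(pairs):
    """Of the twin pairs with at least one affected member, return the
    fraction in which both members are affected (pairwise concordance)."""
    at_risk = [(a, b) for a, b in pairs if a or b]
    both_affected = sum(1 for a, b in at_risk if a and b)
    return both_affected / len(at_risk)

# Toy data: (twin1_affected, twin2_affected) for six hypothetical pairs
mz_pairs = [(True, True), (True, True), (True, False),
            (True, True), (False, True), (False, False)]
print(concordance_rate(mz_pairs))  # 3 of 5 at-risk pairs -> 0.6
```

A higher rate for monozygotic than for dizygotic pairs, computed in this way, is what the twin studies take as evidence of genetic influence.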

Despite this, Kety (1988) argued that we have more reason to believe that genetic factors better explain the etiology of Schizophrenia than environmental ones.  His justification is based on his national study in Denmark of people with Schizophrenia who had been adopted away from their relatives.  He reasoned that if these people went on to develop Schizophrenia, this would strongly imply a genetic cause.  His findings proved significant: 5 cases of Schizophrenia were found compared to 0 in the control group (p=0.3).  More significant still was the finding of 11% latent Schizophrenia compared to 0.9% in controls (p=.0004).

In light of these explanations, there are a variety of treatments available.  Again, these can be grouped into two main categories: pharmacological and psychological.

The most commonly used pharmacological treatments are antipsychotic drugs such as Clozapine.  These drugs can help alleviate the negative symptoms of the disorder, though with mixed results, and their effects appear to be very particular.  Studies have shown that different antipsychotic drugs have different effects: Claus et al (1992), for example, found Risperidone to produce a statistically significant improvement in overall psychopathology in comparison to other drugs, whereas Peuskens (1995) did not find any statistical significance.  Olanzapine, on the other hand, was found to be effective in reducing positive symptoms only, failing to reduce the negative symptoms.

Drugs, however, are not the only option.  A more invasive approach passes a short electric current through the patient’s brain.  This process is known as ECT, and it is used as a last resort.  Given the invasiveness of the treatment, Brandon et al (1985) argue against its use, as no significant evidence has been found that it reduces the symptoms of Schizophrenia, leaving the method risky as well as dangerous.

The other main type of treatment is psychological.  These treatments attempt to treat the behaviour of someone suffering from Schizophrenia.  One such approach is known as Social Skills Training, which provides the individual with therapy aimed at improving life skills and reducing relapses.  More than 40 studies have found improvements in these areas, and one study found that combining this training with education of those close to the person with Schizophrenia produced a zero relapse rate in the first year of treatment (Corrigan 1991).

Herz et al (2000), on the other hand, believe that psychological treatments are more effective than pharmacological ones, finding that 22% of patients receiving psychological treatments were re-hospitalized in comparison to 34% of patients receiving pharmacological ones.

The agreements and disagreements surrounding Schizophrenia imply its complex nature.  Firstly, there is no single symptom that psychiatrists can rely on in diagnosis; rather, there is a range of symptoms, complicated by the fact that they vary between patients.  Secondly, there is no specific drug that alleviates the symptoms of Schizophrenia for everyone; some work well for some people but not for others, which has led to a number of treatments being available.  Thirdly, in terms of etiology, environmental explanations do not explain the causes of Schizophrenia, only its effect on relapse rates.  There is, however, now a general consensus that Schizophrenia is a predominantly genetic disorder, as is evident in twin and adoption studies.  Although singling out the gene or group of genes responsible remains difficult, relatively recent research has found links between certain chromosomes and schizotypal behaviour (Bassett et al 1988).  Once such genes have been located, more effective treatments can be engineered in the hope of developing a cure and making Schizophrenia a thing of the past.


Bassett (1988), In Ming T. Tsuang (2001), Epidemiology in Neurobiological Research, The British Journal of Psychiatry, 178: 518-524

Brandon, S. (1985), Leicester ECT trial: Results in schizophrenia. In Nathan, P.E., Gorman, J.M. (Eds),
A guide to treatments that work, New York: Oxford University Press (p. 172)

Brown et al (1972), Influence of the family life on the course of schizophrenic disorders: A replication.
In Walker, E, Schizophrenia A life-course Developmental Perspective (pp.231)

Burd and Kerbeshian (1987) A North Dakota prevalence model for Schizophrenia presenting in childhood.  In Walker, E, Schizophrenia A life-course Developmental Perspective (pp.96)

Claus et al (1992), Risperidone versus haloperidol in the treatment of chronic schizophrenic inpatients:
A multicentre double-blind comparative study. In Nathan, P.E., Gorman, J.M. (Eds), A guide to
treatments that work, New York: Oxford University Press (p. 172)

Corrigan (1991), Social skills training in adult psychiatric populations: A meta-analysis. In Nathan, P.E.,
Gorman, J.M. (Eds), A guide to treatments that work, New York: Oxford University Press (p. 200)

Doane, J.A. et al (1981), Parental Communication Deviance and Affective Style, Arch Gen Psychiatry, 38, 679-685

Gottesman, I.I. (1991), Schizophrenia Genesis: The Origins of Madness, USA: W.H. Freeman

Hambrecht et al. (1993). Evidence for gender bias in epidemiological studies of Schizophrenia, Schizophrenia Res, 8, 223-231.

Herz, M. et al (2000), A program for relapse prevention in Schizophrenia, Arch Gen Psychiatry, 57, 277-283

Kety, S. (1988). Schizophrenic Illness in the Families of Schizophrenic Adoptees: Findings From the Danish National Sample, Schizophrenia Bulletin, 14-2, 217-221

Mitchell, A.R.K., (1974) Schizophrenia The meanings of madness. New York: Tapling Publishing Co., Inc.

Nathan, P.E., Gorman, J.M. (Eds), A guide to treatments that work, New York: Oxford University Press

Nopoulos, Flaum & Andreasen, 1997, Sex differences in brain morphology in schizophrenia. In Kring,
A., Davidson, G., Neale, J., Johnson, S. (2007). Abnormal Psychology. USA: Jay O’Callaghan (pp.365)

Norton, Pritchett, J, D.S.W. (1982) Expressed Emotion, affective style, voice tone and communication  deviance as predictors of offspring Schizophrenia Spectrum Disorders (Abstract Only)

Parker et al (1988), Parental “expressed emotion” as a predictor of Schizophrenic relapse. In Walker, E,
Schizophrenia A life-course Developmental Perspective (pp.231)

Peuskens, J. (1995), Risperidone in the treatment of patients with chronic Schizophrenia: A multinational,
multi-centre, double-blind, parallel group study versus haloperidol. In Nathan, P.E., Gorman,
J.M. (Eds), A guide to treatments that work, New York: Oxford University Press (p. 172)

Sartorius et al (1974), The International Pilot Study of Schizophrenia. In Kring, A., Davidson, G., Neale,
J., Johnson, S. (2007). Abnormal Psychology. USA: Jay O’Callaghan (pp. 351-352)

Tyrone D. Cannon et al., (1998). The Genetic Epidemiology of Schizophrenia in a Finnish Twin Cohort, Arch Gen Psychiatry, 55, 67-74

Why Wittgenstein’s theory of how language works might call into question the reality of the self.

For the vast majority of people, questioning the reality of the self is an absurdity.  The question almost always yields a common-sense answer, such as that the self is my body, my physical existence; others may speak of a mental self, a metaphysical self that nests within one’s brain.  Common answers such as these lead us down a dark alley, towards a picture of the self as some mysterious form of mental cognition – a process of thinking, or a mental swirl of energy that determines our volitions.

Interpretations such as these go back to the time of Descartes and his idea of dualism, which states that things in the world are either of a physical kind or a mental kind.  For Descartes, his Cogito ergo sum (I think, therefore I am) is an argument for the existence of a thinking self, a self which he cannot doubt, since this would mean doubting his own internal thoughts – and doubting one’s own thoughts is an impossibility, given that we are by nature thinking beings.  That he cannot doubt this self is a declaration that the mental “self” can exist independently of the body since, for Descartes, we could be deceived by an evil demon into believing we possess such a body.

The problem Wittgenstein had with the self was one of language: what do we mean when we talk of the self, and what words do we use to refer to it?  For Wittgenstein, language has particular functions in particular situations; to take language out of a specific context and apply it to other situations leads to confusion and misunderstanding, and thus creates what Wittgenstein believes are philosophical problems.

The first part of this essay will explain how language works according to Wittgenstein.  The second part will explain his ideas about the self, followed by an evaluation in which I will argue against the idea that the self is a nothing.


For Wittgenstein, every word in our language is like a chess piece: it has an individual function and plays a specific rôle.  In the game of tennis, for example, there are words that can only be used within the context of tennis: ‘love’ can only mean a score of zero and nothing else.  A speaker can only use such words when he understands the context in which they are used; that is, when he understands the game being played.  Likewise, other people can only understand the language used in tennis once they understand the game of tennis.

The notion that words have particular uses in certain contexts is known as a language game.  To understand the meaning of words, we must examine how they are used in different situations.  This can also apply to aspects of non-verbal language.  For example, in western societies, the wearing of black during a funeral procession conveys the notion of death. 

Central to Wittgenstein’s thought on language is the idea that language is used within a form of life (PI §23) or, more specifically, a culture.  In order to understand a language one must understand the life within which it is used; to understand Chinese as fluently as a native speaker, for example, one needs to be fully familiar with Chinese life and culture.  Wittgenstein supports this thought with the analogy of a lion: “If a lion could talk, we could not understand him” (PI §223).  Even if a lion, cat or any other animal could speak our language, we would fail to understand them because we are not cognizant of the world they inhabit.

In a nutshell, for Wittgenstein, language is formed through interactions with the environment and determined by the society we inhabit and the culture it adopts.  To find the meaning of language is to look at its various uses.  But how do we learn language in the first place?  Wittgenstein shows us that before we learn the use of a word we must first see its use in action.  Words such as “this” and “that” are learned through the action of pointing at objects, where the pointing reveals the meaning of the word.  For example, the expression “that chair”, followed by pointing towards the chair, forms the connection between the word “that” and the object at which the finger is pointing.  Of course, gestures such as these are all part of the language game we play and can vary across cultures.  But this is not to say that the language of such gestures and words exists outside of us; on the contrary, it is manifested within ourselves through our response to the external environment.  Therefore, for Wittgenstein at least, language can only be used to describe things in the world and not to explain them because, for him, there is no essence lurking behind the world.
“Since everything lies open to view there is nothing to explain. For what is hidden, for example, is of no interest to us.” (PI §126)


Before questioning the reality of the self in the light of Wittgenstein’s theory of how language works, it is helpful to explain what the reality of the self means.  For many people, the reality of the self is a kind of personal identity.  This was the view taken by the 17th-century philosopher John Locke.  For Locke, one criterion for personal identity was psychological continuity; that is to say, the same person, or self, can exist at different times, as our memories of the past demonstrate (Essay II.xxvii.9–12).  But a notable problem with this idea springs from the fact that we cannot remember everything; for example, we cannot fully recall our early months and years as a baby.  Nevertheless, this does not necessarily imply that when our memory fails us our personal identity, or the reality of ourselves, ceases to exist.

What do we actually mean by personal identity? The use of such a term seems to imply that there is something specific inside us, a glue that is stuck to us from the moment we are born to the moment we die, and that no matter what physical characteristics we possess, whether it be our height or hair colour, this personal ‘thing’ remains the same and enables us to refer to ourselves as something that exists, whether it be mind or body.

For Wittgenstein, to think of a person as a metaphysical entity is a mistake, and philosophy’s job is to dispel the confusions which befog the concept of person or self.  According to Wittgenstein we use many words to describe ourselves; for example, we can describe ourselves as psychological or physical beings.  But there are many definitions of these terms, and, strictly speaking, it would be wrong to attach a necessary or sufficient condition to define the self as something objective or subjective, because no matter how many terms we can think of, none of them really explains what our “self” is.

We must acknowledge that when we wish to refer to ourselves the most common word we use is the first person pronoun ‘I’.  But what exactly does this ‘I’ refer to?  Wittgenstein claims that it doesn’t refer to anything at all; it doesn’t point to anything within the mind which can deservedly be considered an object that is fundamentally me.  Furthermore, the word ‘I’ neither describes nor explains anything about me, for when we use it in discourse with other people we use it as if it named an object that possesses psychological states, without ourselves being aware of this.  Even someone with total memory loss would find that their inability to remember events in no way interferes with their ability to use the word ‘I’.  This informal use of the first person pronoun implies that the word ‘I’ is unable to describe a state, or states, of internal mechanisms.  For Wittgenstein, then, the problem of the self is a problem that arises from the language we use.  We think of the self as an object like a table or a chair, and so come to regard the self as a type of object in its own right.  Yet the first person pronoun ‘I’ not only fails to refer to an object or entity of some kind, but is also misguidedly used in parallel with words such as ‘this’ and ‘that’.  These words are used to refer to something objective; for example, ‘this chair’ or ‘that lamp shade’.  But ‘I’ cannot be used in this way at all; in fact, ‘I’ has a meaning similar to that of ‘it’ in the phrase “it’s going to rain”[1].  ‘I’, then, is a human construction made from language that is used to refer to the self, but this self doesn’t actually exist; our language creates it.

If we accept the validity of Wittgenstein’s theory about how language works, then the problem of trying to discover our self should not really be a problem at all.  If the meaning of words is found through their uses, which, in turn, are shaped by society, culture, history and the like, then we can only use words that refer to the self by using them as we do in everyday life.  But ‘I’ is used as a subject term (The Blue and Brown Books, pp. 66–70), and should the subject try to look inside its own mind, it will see nothing.  Our mental states are unobservable by us; we can never see inside ourselves and reveal the content of the mind.  Authentic knowledge of ourselves is therefore impossible, since to have such knowledge requires a process of verification[2].

Words can neither describe nor explain our inner experiences and sensations; they do not name private objects within us, the reason being that the words we use in the everyday sense are primarily public in nature; for Wittgenstein, therefore, there is no private language.  To take a well known example of his, to say “I am in pain” is a linguistic way of expressing a sensation of pain, but I cannot know that I am actually in pain independently of my physical experiences.  In addition, and perhaps more controversially, Wittgenstein declares that even thinking of ourselves as self-conscious is fallacious.  There is no such thing as thinking about being self-conscious, because it is only through the use of language that we come to think of consciousness in the first place[3]; this is to say that we can only express our thoughts and feelings through the medium of language, but, essentially, these inner experiences and sensations remain independent of the words we use to express them.  Consider yourself in pain: the language we use to express this pain is public, while the mental cognition of being in pain is private; therefore we are using two different language games to express ‘pain’, one inner and the other outer, and the outer expression doesn’t reveal the actual inner state of being in pain.

Even the word ‘consciousness’ itself doesn’t refer to anything within us, because we cannot use it to describe or explain our psychological states of mind, which we can only express through language games.  In this sense, then, the reality of consciousness is fundamentally a nothing; that is, a “no-thing” for Wittgenstein.

Wittgenstein thus delivers a blow to Descartes’ cogito.  To say “I think, therefore I am” is wrong primarily because self-consciousness cannot be located as an entity, or entities, within our body.  Secondly, the use of the first person pronoun ‘I’ does not refer to any object within us, and so it cannot refer to our self as the conscious being to which Descartes’ cogito refers.  Lastly, all language is public in nature, not private.  Descartes visualized the self as something personal, working from the inner to the outer, but Wittgenstein reversed this centuries-old model, working from the outer to the inner.

A problem arises when we try to define a relationship between the mind and behaviour.  Indeed, it is easy to read Wittgenstein as a behaviourist when he says: “An ‘inner process’ stands in need of outward criteria” (PI §580).  It is easy to read him in this way because of his belief that the ‘I’ is not an object (NB p. 80)[4].  But essentially Wittgenstein is saying that without the external world in the first place, we would not be able to describe our mental states at all.  Instead, the position Wittgenstein takes is one opposed both to behaviourism and to a metaphysical conception of the mind (indeed, he has been described by some as a negativist[5]).  The problem with behaviourism is that it maintains, for example, that utterances of pain and other emotions are descriptions of a particular behaviour.  But they are not actually descriptions of anything; it is simply the case that when we are in pain we express it in a form of language (a language game).  However, there are some instances when we do not express pain through our behaviour even though we are in pain.  This reveals the phenomenon of subjectivity; that is, an ability to withhold the expression of pain that can only come from our mental side.  Wittgenstein’s conclusion is that “The I, the I is what is deeply mysterious” (NB p. 80).

Wittgenstein doesn’t provide us with a fundamental answer to the question of the self, but instead he provides us with the following analogy: 

“Think of a picture of a landscape, an imaginary landscape with a house in it. – Someone asks ‘Whose house is that?’ – The answer, by the way, might be ‘It belongs to the farmer who is sitting on the bench in front of it’.  But then he cannot for example enter his house.” (PI §398)

Wittgenstein is saying that the self doesn’t exist.  We can imagine a landscape that “might” belong to the farmer who happens to be sitting on the bench in front of the house, but there is no real self that owns such things.  Likewise, we can imagine the farmer walking up to the house and entering it, but in reality he can’t.  This seems to imply that at least the phenomenon of subjectivity cannot be denied.  But objectively, the self is a nothing.


Wittgenstein’s theory of a self that doesn’t exist is problematic.  Although language has a multitude of uses within specific contexts, this doesn’t change the fact that thinking must precede the development of language in the first place.  This is to say that although the first person pronoun ‘I’ is a human construction, it is no doubt used as a prima facie case; that is, ‘I’ is, at first appearance, a referring expression[6].  But if it is a referring expression, then the only thing it can refer to is the ego which initially created it.  If we accept this view, it is difficult to see how we can come to doubt the reality of the self as a person who conceives of the world and takes part in it as a participant of some kind.

The ‘I’, then, must refer to a form of identity, and one that is not solely private; that is, an identity in which “I” has both a personal and public use which enables me and others to interact.  Although it is true that the language we use to express our inner states is conditioned by society and by the language games we play, and consequently casts the ‘self’ as something non-personal, it seems irrefutable that there must be something within us that has the capacity to express itself in the first place.  It is surely some sort of personal self which initially enables us to express ourselves using language. If we reflect on our evolutionary past when language was not as developed as it is today, it seems obvious that an actual thought process must initially have been taking place within the self, thereby strongly suggesting that the latter is detectable through reason. 

Indeed, how would it be possible to refer to myself without using the first person pronoun ‘I’?  If I want to describe my characteristics to others, it is inevitable that I will use ‘I’ as a starting point; and not only does it refer to me, a third person (e.g. a listener) would naturally understand the word ‘I’ to refer to the person speaking.  Elizabeth Anscombe, a student and friend of Wittgenstein, defended his position in her paper ‘The First Person’: she argued that because ‘I’ can still be used by individuals in extreme sensory deprivation, if ‘I’ referred at all it could only refer to something bodiless, a conclusion Wittgenstein would surely have viewed as illusory.

Nevertheless, since in ordinary discourse we make frequent use of the first person pronoun ‘I’, there must be something to which the ‘I’ does in fact refer[7], irrespective of whether we are suffering from extreme sensory deprivation or, as mentioned earlier, complete memory loss.  Although it may be the case that the self can only be expressed using language that is public, and that language itself can only express inner feelings and sensations without being able to describe and explain these inner states, it seems difficult to avoid the conclusion that, at the very least, the self exists as an appearance of some kind.  Overall, therefore, I would argue that to doubt the reality of the self, as Wittgenstein’s theory of language has done, is both impractical and inherently flawed.


L. Wittgenstein (1953), Philosophical Investigations, trans. G.E.M. Anscombe, ed. G.E.M. Anscombe and R. Rhees, Oxford: Blackwell

G.E.M. Anscombe, ‘The First Person’, in H.-J. Glock (2001), Wittgenstein: A Critical Reader, Oxford: Blackwell, p. 243

H.-J. Glock (2001), Wittgenstein: A Critical Reader, Oxford: Blackwell, pp. 224–246

H. Sluga, ‘“Whose house is that?” Wittgenstein on the self’, in H. Sluga and D.G. Stern (eds.) (1996), The Cambridge Companion to Wittgenstein, Cambridge: Cambridge University Press, pp. 320–354

L. Wittgenstein, Notebooks 1914–1916 (1961), ed. G.H. von Wright and G.E.M. Anscombe, Oxford: Blackwell, p. 80

J. Locke (1996), An Essay Concerning Human Understanding, USA: Hackett Publishing, Book II, chapter xxvii

J.R. Searle (2004), Mind: A Brief Introduction, Oxford: Oxford University Press, chapter 11

P.F. Strawson (1959), Individuals, London: Methuen, chapter 3

L. Wittgenstein (1958), The Blue and Brown Books, Oxford: Blackwell, pp. 66–70



[1] This example is taken from Searle (2004).
[2] This is controversial for Wittgenstein, as it assumes we take an empirical stance on the question of the ‘mind’.  See David Bakhurst, ‘Wittgenstein and I’, in H.-J. Glock (2001), Wittgenstein: A Critical Reader, Oxford: Blackwell, p. 238.
[3] Wittgenstein’s thought here is controversial.  Surely thinking must precede the evolution of language?
[4] NB refers to the Notebooks 1914–1916 (published 1961).
[5] This view has been endorsed by Hans Sluga (1996).
[6] A similar view is expressed in Bakhurst’s chapter on Wittgenstein and ‘I’, in Glock (2001).
[7] Strawson takes this argument further and argues that the ‘I’ refers, at the very least, to a person with mental powers; see Strawson (1959).