Biological Psychology and the Field of Individual Differences

Critically assess the methods used in the field of biological psychology and the field of individual differences.

In pursuit of the objective of critically assessing the methods used in the field of biological psychology and the field of individual differences, the focus here will be on experimental methods. It is important to note that, for the purposes of this essay, biological psychology will be broadly defined as the study of the brain–behaviour relationship.

Thirkettle and Stenner (2016:14) define an experiment as “a set-up for measurement that allows the testing of a hypothesis”. Experimental methods therefore entail establishing experimental hypotheses which predict the relationship between variables of interest. A variable, as defined by Minium, King and Bear (1993:23), is a characteristic which may take different values. The complex interactions among mental processes, together with the continuous influence of both biological (genetic) and environmental factors, make it difficult to measure psychological constructs accurately. Well-designed experimental approaches arguably make it easier to test and identify the effects and mechanisms of relevant psychological variables (e.g. mental processes and psychological constructs).
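To make this logic concrete, the minimal sketch below (written in Python; the condition labels, effect sizes and sample sizes are invented purely for illustration and are not taken from any study cited here) shows how an experimental hypothesis about the relationship between an independent and a dependent variable can be tested statistically.

    # Purely illustrative sketch of hypothesis testing in a two-condition experiment.
    # All numbers and labels below are hypothetical, not taken from any cited study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)

    # Independent variable: condition (control vs. a hypothetical manipulation).
    # Dependent variable: a recall-accuracy score, simulated as normally distributed.
    control = rng.normal(loc=0.80, scale=0.10, size=30)       # no manipulation
    manipulated = rng.normal(loc=0.70, scale=0.10, size=30)   # manipulation applied

    # Experimental hypothesis: the manipulation changes mean recall accuracy.
    # An independent-samples t-test estimates how likely the observed difference
    # would be if the null hypothesis (no difference between conditions) were true.
    t_statistic, p_value = stats.ttest_ind(control, manipulated)
    print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")

The value of the experimental set-up is that it reduces the question to a comparison between controlled conditions, which is what makes the hypothesis testable in the first place.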

The discussion below identifies the strengths and weaknesses of the experimental methods used in each of these fields.

Internal and ecological validity of psychological experiments: evidence from the field of biological psychology.

Harrison, Ness and Pike (2016) present several experiments that have contributed to our understanding of how memory functions. The authors describe the contribution made by Loftus and colleagues to the understanding of false memory. After being presented with an account of an event, participants were given misinformation which, as subsequent testing showed, altered their original memory. It was through the experimental method that researchers were able to identify the ‘misinformation effect’ and the factors that enhance it (e.g. leading questions – Loftus and Palmer, 1974; Loftus and Greene, 1980) or reduce it (e.g. providing misleading information that lacks credibility – Walther and Blank, 2004; warning participants about the inconsistency of the information – Lindsay and Johnson, 1989; Meade and Roediger, 2002). Identifying the ‘misinformation effect’ as one process through which false memories are formed, however, falls far short of explaining that process. The experiments of Loftus and colleagues, and those that followed in search of deeper insight, shed light on several aspects of memory functioning, but they do not provide a comprehensive picture. Recent research contradicts the initial assumption that, as a consequence of the ‘misinformation effect’, the original memory would be lost: Oeberst and Blank (2012) showed that the effect could be reversed. Ongoing debate in the field makes it evident that there is no definitive answer as to why the effect occurs or what implications it has for the original memory (Harrison, Ness and Pike, 2016). The evolution of this area of research alone demonstrates that, as research progresses, previous results may be reinterpreted to fit the context created by newer findings. It may therefore be said that the experimental method allows researchers in the field of biological psychology to advance knowledge about small segments of mental processes at a time.

The experimental method makes it possible for scientists to test experimental hypotheses accurately within a controlled environment. While the controlled environment is what enhances a study’s internal validity, it is also what reduces its ecological validity. Internal validity refers to the extent to which an experiment is sufficiently well designed to reduce the interference of confounding variables (uncontrolled factors that co-vary with the independent variable) and so yield clearly interpretable results (Goodwin, 2010). Ecological validity describes “the extent to which a study reflects naturally occurring or everyday situations” (Harrison, Ness and Pike, 2016: 116). To achieve high internal validity, experiments will most likely need to sacrifice some ecological validity. In biological psychology this trade-off is illustrated, for instance, by the experiment conducted by Anzures et al. (2014). The researchers recruited 94 Caucasian participants aged five to ten years for a study testing the ‘other-race effect’ (ORE). The participants were shown images of Caucasian and Chinese faces, and their ability to recognise a previously seen face within a subsequent pair of faces was then assessed (Harrison, Ness and Pike, 2016).

Harrison, Ness and Pike (2016) outline several differences between the conditions to which participants were exposed in the laboratory and those they would experience in a real-life context. These differences help to explain the reduced ecological validity of the experiment described above, and they apply to other experiments as well. The authors mention the following aspects:

(1) the artificial setting – an experiment will seek to control the effect of environmental factors, whereas in a real-world context the participant will be exposed to the interference of multiple such factors.

(2) the artificial stimuli – experiments often use easily accessible stimuli which may closely approximate natural stimuli, yet the remaining discrepancy can still matter. In Anzures et al.’s (2014) experiment, participants were shown static 2D representations of human faces, whereas in real life they would encounter moving, 3D faces.

(3) the artificial task – recognising faces within a controlled environment such as an experiment entails offering participants a finite number of choices; in Anzures et al.’s (2014) experiment there were only two options to choose from, whereas in a real-life context an individual would face a substantially larger set of possibilities.

(4) the artificial time span – as with the number of options available, the time frame for completing the task differs markedly between the experimental and the real-life context. In Anzures et al.’s (2014) experiment, participants faced short, predictable task sequences, whereas in real life the sequencing would be unpredictable and considerably longer.

(5) explicit versus implicit memory – a real-life context draws on implicit memory to a greater extent than an experimental one; when an experiment explicitly presents participants with the task they are to complete, they are prompted to rely on explicit memory.

(6) consequentiality and motivation – motivation for remembering faces in real life differs from the motivation fostered within an experimental condition, because the consequences of recognising or failing to recognise a face differ between the two contexts.

Validity and reliability in psychological experiments: evidence from the field of individual differences.

Rigorous scientific research in psychology needs to address not only validity but also reliability. Validity, according to MacLean (2016:259), indicates “whether a measure is measuring what it is meant to measure”, whereas reliability is defined as “the consistency or dependability of a psychological measure” (2016:258). Intelligence, a psychological concept, is not as straightforward and easily measurable as length, a physical one; yet both fields of research have had to devise reliable measurement tools for their concepts (i.e. IQ tests for intelligence and the metre for length).
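As an illustration of reliability understood as consistency (a minimal sketch, not drawn from the sources above; the questionnaire and all scores are hypothetical), test–retest reliability is commonly estimated as the correlation between scores obtained from the same people on two occasions:

    # Purely illustrative sketch: estimating test-retest reliability as the
    # correlation between two administrations of a hypothetical questionnaire.
    # All scores are invented for the example, not taken from any cited study.
    from scipy import stats

    # Hypothetical scores for ten participants, measured twice a few weeks apart.
    time_1 = [12, 18, 25, 9, 30, 22, 15, 27, 11, 20]
    time_2 = [14, 17, 27, 10, 28, 21, 16, 25, 13, 22]

    # A high Pearson correlation indicates that the measure ranks people
    # consistently across occasions, i.e. good test-retest reliability.
    r, p_value = stats.pearsonr(time_1, time_2)
    print(f"test-retest reliability: r = {r:.2f}")

A physical instrument such as a ruler would be expected to show near-perfect consistency of this kind, which is part of what makes psychological measures harder to establish than physical ones.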

Testing for individual differences in a specific characteristic requires researchers first to ensure that they are using a reliable and valid measurement tool. Depending on how they are defined and operationalised, concepts in psychology can be assessed using multiple alternative measurement tools. Research on creativity illustrates this. Researchers have proposed multiple means of assessing creativity: divergent thinking tests (Torrance Tests of Creative Thinking – Torrance, 1987), remote associates tests (RAT – Mednick, 1968), self-report measures (Creative Achievement Questionnaire – Carson et al., 2005; Creative Behaviour Inventory – Hocevar, 1980), and the consensual assessment technique (Amabile, 1996).

Experiments conducted by Gino and Ariely (2012) indicated a potential link between creativity and dishonesty. When further exploring this link, Gino and Wiltermuth (2014) used several creativity assessment tools and, consistent with Gino and Ariely (2012), found that participants who had had the chance to cheat obtained higher creativity scores. As with the experiments on false memories, the experimental method made it possible to identify a link between two constructs, but not to explain the process underpinning it. It is easy to see why the multitude of measurement tools available for assessing creativity could have been a problem: different results obtained with different tools would have made the findings difficult to interpret. The fact that the experimenters used several assessment tools for creativity and obtained similar results lends support to the reliability of these measurement instruments.

Conclusions

There are both strengths and weaknesses associated with the experimental methods used in the field of biological psychology and that of individual differences. The main strength of experiments is that they allow psychology researchers to draw accurate conclusions from experimental hypotheses tested in controlled environments, and to reduce the complexity of the phenomena they study to a manageable range of variables. The accuracy of their conclusions also depends on the reliability and validity of the measurement tools they use within these contexts. The main drawback of experiments is that they limit the confidence with which researchers can extrapolate their findings to real-life contexts. The current state of knowledge in both biological psychology and individual differences rests on previous research that has tested and elaborated hypotheses. Judging by the history of research in these fields, currently accepted results are likely to remain valid only until new findings provide a novel context that prompts a new perspective and, potentially, a reinterpretation of them.

References

Amabile, T.M. (1996) Creativity in Context: Update to ‘The Social Psychology of Creativity’, Boulder, CO, Westview Press.

Anzures, G., Kelly, D.J., Pascalis, O., Quinn, P.C., Slater, A.M., de Vivies, X. and Lee, K. (2014) ‘Own- and other-race face identity recognition in children: the effects of pose and feature composition’, Developmental Psychology, vol. 50, no. 2, pp. 469–81.

Carson, S.H., Peterson, J.B. and Higgins, D.M. (2005) ‘Reliability, validity, and factor structure of the creative achievement questionnaire’, Creativity Research Journal, vol. 17, no. 1, pp. 37–50.

Gino, F. and Ariely, D. (2012) ‘The dark side of creativity: original thinkers can be more dishonest’, Journal of Personality and Social Psychology, vol. 102, no. 3, pp. 445–59.

Gino, F. and Wiltermuth, S.S. (2014) ‘Evil genius? How dishonesty can lead to greater creativity’, Psychological Science, vol. 25, no. 4, pp. 973–81.

Goodwin, J. (2010) Research in Psychology: Methods and Design, John Wiley & Sons.

Harrison, G., Ness, H. and Pike, G. (2016) ‘Memory in the real world’, in Ness, H., Kaye, H. and Stenner, P. (eds) Investigating Psychology 3, The Open University, pp. 101–152.

Hocevar, D. (1980) ‘Intelligence, divergent thinking, and creativity’, Intelligence, vol. 4, no. 1, pp. 25–40.

Lindsay, D.S. and Johnson, M.K. (1989) ‘The eyewitness suggestibility effect and memory for source’, Memory and Cognition, vol. 17, pp. 349–58.

Loftus, E.F. and Greene, E. (1980) ‘Warning: even memory for faces may be contagious’, Law and Human Behavior, vol. 4, pp. 323–34.

Loftus, E.F. and Palmer, J.C. (1974) ‘Reconstruction of automobile destruction: an example of the interaction between language and memory’, Journal of Verbal Learning and Verbal Behavior, vol. 13, pp. 585–9.

MacLean, R. (2016) ‘Measuring differences in people: creativity and personality’, in Ness, H., Kaye, H. and Stenner, P. (eds) Investigating Psychology 3, The Open University, pp. 251–298.

Meade, M.L. and Roediger, H.L. (2002) ‘Explorations in the social contagion of memory’, Memory and Cognition, vol. 30, no. 7, pp. 995–1009.

Mednick, S.A. (1968) ‘The Remote Associates Test’, Journal of Creative Behavior, vol. 2, no. 3, pp. 213–14.

Minium, E., King, B. and Bear, G. (1993) Statistical Reasoning in Psychology and Education. New York/ Chichester/ Brisbane/ Toronto/ Singapore: John Wiley & Sons.

Oeberst, A. and Blank, H. (2012) ‘Undoing suggestive influence on memory: the reversibility of the eyewitness misinformation effect’, Cognition, vol. 125, pp. 141–59.

Thirkettle, M. and Stenner, P. (2016) ‘Introduction: critical, creative and credible’, in Ness, H., Kaye, H. and Stenner, P. (eds) Investigating Psychology 3, The Open University, pp. 1–48.

Torrance, E.P. (1987) Guidelines for Administration and Scoring/Comments on Using the Torrance Tests of Creative Thinking, Bensenville, IL, Scholastic Testing Service.

Walther, E. and Blank, H. (2004) ‘Decision processes in the misinformation paradigm: the role of uncertainty, metacognition and social influence’, Psychologische Rundschau, vol. 55, pp. 72–81.
