Punch and Judy

Punch and Judy. Children’s illustration (1880).  Public domain.

1.  Introduction

2.  The Origin of Mr Punch

3.  Pulcinella

4.  Analogues of Mr Punch

5.  Mammet and Myth

6.  The Story and its Characters

7.  The Punch and Judy Show

1.  Introduction

Punch and Judy are the prime husband and wife protagonists in numerous traditional puppet shows in Britain. For over 200 years Mr Punch and his wife Judy have been a popular form of public and private entertainment. In the performances Punch treats his wife Judy and their baby boy very badly, whilst thinking he can get away with doing anything he likes. Mr Punch’s wife, as will be shown, was originally called Joan. The origin of the modern Punch and Judy show is to be found in the Italian hero and heroine Pulcinella and Joan. Like his Italian forebears Mr Punch appears as a notorious boaster as well as a coward. He has a duality, a contradictory combination of cruelty and merriment that provokes both fear and amusement (Fowler, 2012).

Mr Punch (1860).  George Cruikshank.

There is no single fixed or final story concerning Punch and Judy, because “…the drama developed as a succession of incidents which the audience could join or leave at anytime, and much of the show was impromptu.” (Frazer, 1970). The stories and performances varied from puppeteer to puppeteer over time. In Britain Punch and Judy is a glove puppet show originating in marionette plays based on Pulcinella, the impudent hunchback of the Italian commedia dell’arte (Crystal, 2004). Punch’s characteristic feature is his self-styled sense of mirth during his escapades, the origin of the term ‘pleased as Punch’, an expression that reflects his self-satisfaction with his rascally behaviour (Evans, 1978). Another lasting feature is his catchphrase ‘That’s the way to do it’, while his other cry, ‘Shallabalah’, is recorded in The Old Curiosity Shop (Dickens, 1912).

Punch and Judy.  A Victorian illustration.  Public domain.

2.  The Origin of Mr Punch

Mr Punch is a phenomenon with a complicated ancestry. He has been construed as a ‘Lord of Misrule’ who “…follows a long line of low tricksters from Pan to Loki to Puck.” (Fowler, 2012). In this context he is a manifestation of figures derived from ancient mythology. As Pan, the god of flocks, forests and pastures in the Arcadia of ancient Greek legend, he may be a symbol of lust and fertility. As Loki, the Norse deity of strife, he may symbolise an evil, satanic spirit. In another respect Mr Punch may be the mischievous Puck, himself originally a demon, the sprite of popular folklore.

Punch and Judy show, Weymouth.  Public domain.

In England the first known appearance of Mr Punch was on the 9th of May 1662. It was during the early years of the 18th century, in the puppet theatres of Bath and London, that Mr Punch was at his pinnacle. In appearance he wore the jester’s motley of many colours with a tasselled hat, resembling a clown just as clowns look like puppets (Fowler, 2012).

Court Jester.  W. M. Chase.

Mr Punch is known for his iconic appearance: the hooked nose and curved, jutting chin, the slapstick he carries, and his contrived squeaking voice. His long nose “…signifies his lechery, as does the stick.” (Fowler, 2012). Mr Punch adopted many comical masks and poses, of which the popular image is the well known “…humpbacked, hook-nosed, large jawed mime with whom we are so familiar.” (Welsford, 1973). The puppeteer operates Mr Punch with his right hand, his antagonist always being the left-handed figure. His props, apart from his ‘slap-stick’ for beating people, also included a drum, a model gallows, a string of sausages and a sheep’s bell (Fowler, 2012).

3.  Pulcinella

Mr Punch is derived from the Italian clown called Pulcinella, who in the commedia dell’arte posed as a dim-witted servant. Pulcinella, like Pedrolino and Pierrot, was a ‘fool’ or ‘clown’ of the kind seen as “…subnormal men who please by exhibition of stupidity and insensibility.” (Welsford, 1973). The Pulcinella of Italy was originally played by a live actor before he became the puppet featured in the performances of travelling showmen.

The Italian Pulcinella, who became Punchinello in England, was ‘born’ in 1649 and is believed to be related to Don Juan (Fowler, 2012). Pulcinella always appeared wearing a black half-mask and white apparel, carrying a wooden spoon and sometimes macaroni. His stage attire was that of the Neapolitan clown: pantaloons, a conical hat and a wide, loose blouse, together with the characteristic hooked nose. In temperament Pulcinella was crafty as well as mean, tending towards viciousness. His primary defence was stupidity and the pretence of not knowing what was transpiring around him.

Pulcinella

A central character in the commedia dell’arte, Pulcinella was known as Polichinelle in France and, via Punchinello, eventually became Mr Punch in England. By 1650 Pulcinella had been transformed into the comical, clown-like and witty figure of the French Polichinelle. The characteristic long hooked nose of these figures derived from the Italian word pulcino, meaning ‘chicken’; the character was also known as Cucurucu because of his cock-feather adornment. Nonetheless the derivation of Mr Punch is uncertain, though it is probably from the Latin pullicenus for chicken, and thence Pulcino the ‘cock type’ (Welsford, 1973), or pulliceno meaning ‘turkey cock’ (Fowler, 2012).

Pulcinella’s varied roles included an old and miserly bachelor, a married man, occasionally an aged master, and a young valet. In common with the character Scaramouche he could be cynical and shyly reserved. His enduring trait was his ability, in common with Mr Punch, to turn the tables on his adversaries. He could appear either stupid or clever, though the “…stupidity of Pulcinella was always for his own evil purposes.” (Welsford, 1973).

4.  Analogues of Mr Punch

The character of Mr Punch has a number of analogues apart from his Italian forebears. He is paralleled by the German puppet called Hanswurst or Kasperl, as well as by Guignol in France and Petrushka in Russia. Mr Punch became very popular in France, particularly in Paris, and by the end of the 18th century he was performing in the American colonies.

Costume design by A. Benois for Nijinsky as Petrushka.

The popular and comic German figure called Hanswurst appeared in impromptu comedy shows. First appearing in 1519, he was seen during the 16th and 17th centuries as a carnival character and rural buffoon at touring theatres. He appeared as a seemingly stupid yet cunning and doltish fool, his demeanour that of a merry, self-indulgent, enterprising but cowardly individual.

The puppet Hanswurst.  Public domain.

Similar to Mr Punch, Guignol and Pulcinella is the German marionette Kasperl or Kasperle, known in Munich in 1855. To the Swiss he was Chaschperli and in Bavaria he was called Kaschberl. A traditional puppet figure with origins in the 17th century, he was popular in Germany, Austria, and German Switzerland.

The puppet Kasperle.  Public domain.

In France he was known as Ponche, an imaginary and unhistorical reference to Pontius Pilate (Evans, 1978), appearing as a marionette character in mystery plays. Italian clowns introduced the wily rascal Pulcinella, wearing different attire and motley, to France in the 16th century. By the middle of the 17th century he had therefore been transformed from an actor into a marionette, in which role “…he continued to appear as the very embodiment of the comic aspects of the street-life of Naples.” (Welsford, 1973).

In the Paris of the 1890s, cabarets, marionette shows and puppet plays notorious for their violence, murders, macabre and gruesome events and ghosts (Evans, 1978) were transferred from the theatre to the mobile street booths of Montmartre (Crystal, 2004). The main performer in these French puppet shows from the 18th century was Guignol. The theatre, or the series of performances, became known as the Grand Guignol, whilst Guignol himself performed regularly in a wooden booth outside the Louvre (Fowler, 2012).

Guignol.  Public domain.

In the Italian commedia dell’arte, Scaramouche or Scaramuccia, whose name translates as ‘skirmish’, was a black-masked and rascally clown who appeared in black Spanish-style attire. Scaramouche became a stock character in the farces of the 17th century. In common with the clown Grimaldi he combined affected language with a sly and conceited demeanour. Like Pulcinella he could appear either stupid or clever, and he eventually became incorporated as an iconic puppet character in Punch and Judy shows.

Scaramouche

Joseph Grimaldi (1778-1837) was a London-born English dancer, comedian and actor who expanded, and made his own, the role of the clown in the harlequinade. Born into a family of comic stage performers and dancers, Grimaldi developed the roles of Pantaloon and Harlequin. The character he created, which became a dominant feature of the London stage, especially the Theatre Royal, Drury Lane, Sadler’s Wells and Covent Garden, was known as ‘Joey the Clown’. His white-faced, pierrot-style make-up reflected the iconic appearance of the traditional clown.

Joseph Grimaldi.  Public domain.

Harlequin, or Arlequin, originated with Arlecchino, a stock character of Italian comedy alongside Pantaloon and Scaramouche.

Arlecchino.  Public domain.

Originally a demon or hobgoblin of the Middle Ages, he became the mischievous fellow or buffoon of British and French pantomime (Evans, 1978). His name is thought to derive from the Latin Herculinus, meaning ‘little Hercules’.

Harlequin in his traditional garb.  Public domain.

Harlequin was a masked character wearing parti-coloured tights and carrying a slapstick or ‘batte’, originally a magic wand with which the devil changed the scenery of the performance. His persona, which rivalled that of the clown or pierrot, became adapted to the later comedie-bourgeois and the opera-comique (Benet, 1973).

Pierrot by Watteau.  Public domain.

5.  Mammet and Myth

Mammet was a word for a puppet or an idol during Elizabethan times. Mammet, also maumet or mommet, is derived from Anglo-Norman mauhoumet or mahumet, that is, from Mahomet or Muhammad, used as a generic designation of any false god (Evans, 1978). The word mammet is thus an obsolete reference dating from the 13th to 17th centuries. As a term for a puppet, lifeless doll, or even scarecrow, it dates from the 15th century. Hence idol or false-god worship is mammetry or idolatry.

The ‘clown’ was originally Momus, the Greek god of ridicule, son of Nyx (Night), who was always railing and carping at everything. In a similar vein Harlequin was originally the mythic Greek god Hermes or his Roman counterpart Mercury. The clowns of ancient Rome, called Maccus and Bucco, appeared as fooling and greedy dolts who can be regarded as ancestral to the later clown figure (Welsford, 1973). In one respect all puppets “…have pagan histories.” (Fowler, 2012), as their sprite, demon and hobgoblin origins attest. The modern Mr Punch developed out of his pagan roots to become puppetry’s “…paterfamilias much addicted to beating his wife and throwing his baby out of the window.” (Welsford, 1973).

6.  The Story and its Characters

The characters in a Punch and Judy show were not immutable but resembled instead those found in folk and fairy tales and soap operas, now recognised as “…certain iconic figures.” (Fowler, 2012). Traditional and original characters included the Devil or Mephisto, the nastiest character of all, who is eventually roasted in the show on a spit, and Mr Punch’s mistress Pretty Polly. The typical cast that incurred the wrath of Mr Punch comprised Judy and their baby, the officious constable or parish beadle, a hungry crocodile or alligator, the Doctor whose eyes are taken out by Mr Punch, the skeleton, and Joey the Clown (based on the real-life clown Joseph Grimaldi).

Additional characters included Jack Ketch, the generic hangman who, in the show, is tricked by Mr Punch into hanging himself. The notorious real executioner Jack Ketch, who died around 1686, had become associated with Punch and Judy in the 17th century. Other occasional characters included Toby the Dog, Hector the Horse, a publican, Scaramouche, a tradesman and Death himself, along with later inclusions such as Jim Crow (a black servant), a minstrel, a blind man, a monkey, boxers, and a distinguished foreigner.

The original or traditional tale of Punch and Judy is one in which Punch kills his baby son because he cries, and beats his wife to death because she hits him. The outrageous conduct of Mr Punch results from his conflicts with Judy and the infant child, and both victims of his fit of rage are thrown out of the window and into the street (Evans, 1978). Despite what amounts to a double murder, the performance is still presented as a comedy. The humour of the violence derives in part from Judy’s own violence towards Mr Punch, and thus it becomes a “…morality play about the absence of morality…” (Fowler, 2012). Arrested and imprisoned by the parish beadle, Mr Punch contrives to make his escape. The story contains allegorical elements in his triumphs over adversity and over ennui, the boredom and lack of interest portrayed by the dog wearing a ruff. Thus the Doctor symbolises disease, Death is eventually beaten to death, and the Devil himself is outwitted (Evans, 1978).

7.  The Punch and Judy Show

The Punch and Judy glove puppet show developed in Britain out of the commedia dell’arte marionette shows based on Pulcinella, the impudent hunchback of Italian comedy. It was after the Restoration that Punch arrived in England, in flesh and blood and in puppet form. The puppets were traditionally made from poplar or birch wood. In 18th century England the character of Mr Punch developed into a more obviously heartless and sensual figure than either Pulcinella or Polichinelle. At this juncture Mr Punch’s wife became known as Judy rather than Joan.

The Punch and Judy glove puppet shows were typically presented within narrow and portable booths. Originally the performances took place in or outside inns and taverns, in marquees and tents at fairs such as St Bartholomew’s and Mayfair, or in empty halls. The shows were episodic by intention because “…the story is a conceptual reality, not a set text; the means of telling it therefore are always variable.” (Leach, 1985), and so were very much ‘come and go’ events.

Punch at Glasgow Green Fair (1825).  William Heath.  Public domain.

During the late 18th and early 19th centuries the characteristic feature of the travelling puppet booths was their red and white stripes, and many were to be seen countrywide on seaside beaches. Especially gaudy were the booths typical of the late Victorian era. The shows offered a series of encounters full of anarchic clowning, jokes, spirited comedy and songs, of which Charles Dickens wrote that “…the street Punch is one of those extravagant reliefs from the realities of life…” (Dickens, 1849), a reflection of the fact that in the 19th century the only entertainment available to the populace at large was performed in the street. The tradition of Punch and Judy, with its open-air booths and travelling puppeteers, still survives from its Victorian heyday (Crystal, 2004).

 Punch’s Puppet Show (1795).  Isaac Cruikshank.  Public domain.

The original Punch and Judy shows were intended for, and performed before, adult audiences. It was only at a later date, during Victorian times, that the shows evolved into the present form of children’s entertainment.

Punch or Mayday (1829).  Benjamin Haydon.  Public domain.

Mr Punch outwits many of the story’s characters in a series of episodic encounters, his exploits usually taking the form of violent altercations between him and his antagonist. The plot is “…like a story compiled in a parlour game of consequences…the show should, indeed, not be regarded as a story at all but a succession of encounters.” (Speaight, 1970), with Mr Punch defeating his opponents with anarchic vigour (Crystal, 2004).

The other side of the comedy was the tragic clown. This is exemplified by Canio in Leoncavallo’s opera of 1892, Pagliacci, whose famous aria ‘Vesti la giubba’, or ‘On with the Motley’, is traditionally sung by a pierrot-garbed clown figure.

Poster for Pagliacci of 1892.

Charlie Chaplin in Limelight (1952).

In the storyline about Mr Punch, which is not a fixed set-piece of routines, he finds himself in conflict with the Devil, supernatural forces and ghosts, not to mention the forces of law, order and retribution. However, in spite of the violence of its content, the performance of Punch and Judy can be seen “…as quite harmless in its influence, and as an outrageous joke which no one in existence would think of regarding as an incentive to any kind of action or as a model for any kind of conduct.” (Dickens, 1849).

References and Sources Consulted

Benet, W. R.  (1973).  The Reader’s Encyclopaedia.  London.

Collier, J. P.  (1929).  Punch and Judy: A Short History.  Dover Books.

Crystal, D.  (2004).  The Penguin Encyclopaedia. London.

Dickens, C.  (1849).  Letter to Mary Tyler, 6.11.1849.  The Letters of Charles Dickens, Volume V (1847-1849).

Dickens, C.  (1912). The Old Curiosity Shop.  T. Nelson & Sons, London.

Evans, I.  (1978).  Brewer’s Dictionary of Phrase and Fable.  London.

Fowler, C.  (2012).  Bryant and May and the Memory of Blood.  Bantam, London.

Frazer, P. (1970). Punch and Judy.  B. T. Batsford Ltd.

Leach, R.  (1985).  Punch and Judy Show. University of Georgia Press.

Speaight, G.  (1955).  Punch and Judy: A History. Plays Inc.

Stead, J. P.  (1950).  Mr Punch.  Evans Brothers Ltd.

Welsford, E.  (1973). The Fool: his social and literary history.  Faber & Faber, London.

Race, Class and Intelligence

 

Part 1.  Anthropology and Race.

1.  Human Diversity.

2.  Heredity and the Inheritance of Complex Characters.

3.  Mental Capacities and Natural Selection.

4.  Genetic Determination of Human Behaviour.

5.  Genetic and Cultural Changes.

6.  Superficiality of Physical Traits

7.  Environmental Determinants of Brain Function.

8.  Race and Intellectual Capacity.

Part 2.  Intelligence and Scientific Racism.

 9.   Nature and Nurture.

10. Human Achievement – the Nature of its Nurture.

11. Intelligence and Intelligence Tests.

12. Tests, Class and Education.

13. Tests and Ethnic Groups.

14. The Jensenist Heresy.

15. Jensen and Education.

16. Jensen and his British Critics.

17. Jensen and his British Support: Eysenck.

18. Scientific Racism and its Role in Society.

Introduction

This article has been written as a positive contribution to the continuing controversy in certain scientific and social circles concerning the inheritance of ‘intelligence’ and its alleged uneven distribution in and between populations due to racial, ethnic, and class origins. The following pages contain therefore a critique of the theories of Professors Jensen, Eysenck, and others in this country and the USA. My contention is that these individuals and other hereditarian elements are reviving racist concepts and bolstering them by reactionary pseudo-scientific theses concerning intelligence. It is the theme of this article that the concept of ‘scientific racism’ has been therefore introduced into research and knowledge under the thin guise of objectivity in order to perpetuate the inequalities and privileges prevalent in class divided capitalist society.

It is important therefore to see the dangers of such theories to the educational opportunities of the working class as a whole, as well as the various ethnic groups that comprise it. In this vein we can view educability as a species characteristic of Mankind, which confers ‘upon him the unique position which he holds in the animal kingdom. Its acquisition freed him from the constraint of a limited range of biologically predetermined responses. He became capable of acting in a more or less regulative manner upon his physical environment instead of being largely regulated by it. Man’s suppleness, plasticity, and most of all, ability to profit by experience and education are unique. No other species is comparable in its capacity to acquire new behaviour patterns and discard old ones in consequence of training’ (Montague, 1957).

Anthropologists have given the name ‘scientific racism’ to the assertion that the various groups of the human species are not equal in terms of their inherited characteristics. This reactionary view pays particular attention to the allegation that Negroes are innately less ‘intelligent’ than Whites. Previous attempts to substantiate this outlook have sunk without trace, one in particular being a book by A. Shuey (Shuey, 1966). Recent efforts to re-animate scientific racist views with regard to intelligence are connected with A. R. Jensen (1969) and H. J. Eysenck. Professor Jensen wrote in 1972 that the ‘reaction against admitting the existence of a genetic complement in intelligence is an adverse reflection on the psychological make-up of the protestors. It is time to give up the egalitarian ideal in education and work towards an educational pluralism (my italics) that will allow greater self-fulfilment for individuals at all intelligence levels.’ (Jensen, 1972). In 1971 H. J. Eysenck, the well known Black Paper pundit, published a popular and polemical book (Eysenck, 1971) which was an enraged cry setting out specifically to justify Jensen’s views on race and intelligence. In this peculiar book Eysenck on the one hand repeatedly appeals to ‘science’ and the ‘facts’ as he and Jensen wish to see them, while on the other hand he ‘directs a flow of unbridled and irrational abuse at all those who take a different view’ (Simon, 1971).

This critique of the views of Jensen and Eysenck seeks to analyse the fundamental propositions upon which the erroneous outlook of ‘scientific racism’ is based. Furthermore, not only are the claims of scientific racism politically and sociologically dangerous, but they are hardly new, some of them having circulated for hundreds of years or more.

Part One

Anthropology and Race

 As far as the majority of anthropologists are concerned there are no inferior or superior ‘races’. One of the basic propositions of ‘scientific’ racism is that the human species is divided into a number of sub-species or races which differ from one another in a number of genetically determined characters. In scientific terms can we classify men into rigidly defined races or sub-species? What do biologists mean by the term ‘race’? In essence ‘race’ when applied to the study of living organisms means that populations exist within a species that are distinguishable from one another due to their possessing certain distinctive hereditary traits. Many anthropologists ardently deny that such a descriptive term can be of relevance in the classification of mankind. Certainly such a definition in the light of modern knowledge would imply that almost every small population, or even individuals, would constitute a ‘race’.

1.  Human Diversity

Anthropologists have attempted to achieve a definition of ‘races’ in terms of physical characteristics, many of which are quite superficial. A whole series of such racial classifications has been attempted over the years, but the majority of anthropologists recognise the existence of only a few such ‘races’. For example, the 18th-century anthropologist Blumenbach divided mankind into five races based on skin colour: white Caucasian; yellow Mongolian; red Amerindian; black Ethiopian; and brown Malayan. There have been many alternative schemes employing various physical traits, including hair form and colour; skin colour; eye form and colour; head and nasal shape; stature; and more recently, blood groups and biochemical and physiological properties. Other anthropologists and geneticists firmly deny the existence of race, or deny its applicability to human populations.

Most anthropologists now regard the term ‘race’ in the human sense to mean a population which differs from others merely in the relative frequency of specific hereditary characteristics. In this respect such a human group is preferably called an ‘ethnic group’ by many workers in the field. All present-day members of the human world population are members of a single species — that of Homo sapiens. All human beings are potentially capable of inter-marriage, inter-breeding and the production of fertile offspring. Marriage or breeding is however restricted by a number of factors, including distance (propinquity), geographical barriers, and a great number of social factors including class, caste, culture, and sanctions of varying types and severity. It is due to these restrictions that the species has become divided into a number of populations which to varying degrees have become reproductively isolated from one another.

The belief now exists that all efforts to create racial classifications are unscientific, untenable, futile, academic and time-wasting pursuits. To the geneticist the problem of ‘race’ is only a ‘part of a much wider enquiry into the nature and evolutionary origins of human variation.’ (Boyce, 1968). Hence we can firmly say that skin colour differences and other superficial physical characteristics of various populations of the human species provide, for some, a convenient means of classifying people into groups, giving a scientific gloss to discrimination and persecution in order to maintain and enhance exploitation by profit-seeking minorities.

2.  Heredity and the Inheritance of Complex Characters

Heredity refers to the transmission of characteristics from parents to offspring; the hereditary factors contained in a haploid set of chromosomes (half the total number, as found in a single sperm or ovum nucleus) are thought to consist of between five and ten million genes (Bodmer, 1972). Intelligence is an example of the inheritance of a complex character. The nature of intelligence is multifactorial and complicated, with its expression dependent upon the combination of the effects of the environment and the products of a large number of genes. In terms of genetic theory ‘the number of genes actually functioning in human development runs into hundreds of thousands; and that any particular trait is determined by the chance association of a considerable number of them, so that what they eventually contribute to the adult personality is determined not by the genes alone but by the associated conditions of development.’ (Lewis).

Those characteristics that are determined by the joint action of many genes in relation to the conditions of the environment (stature and intelligence are two examples) are termed quantitative characters because they are measured on a continuous scale. Such hereditary traits are more susceptible to environmental influence than are the polymorphisms (distinct kinds of trait within a species that occur in fairly constant proportions within a breeding population). It is because the contribution of individual genes cannot always be recognised that complex statistical analyses are needed to elucidate the relative contributions of the environment and heredity to the expression of a characteristic. This aspect of development is often referred to as the nature-nurture, or specificity versus plasticity, controversy and will be considered later. It is also here that we can mention a further basic proposition of scientific racism, which states that the character ‘intelligence’ is largely a genetically determined attribute that can furthermore be measured and quantified by the tests grouped under the general term psychometry.
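
The kind of statistical partition involved can be sketched in the conventional notation of quantitative genetics (a minimal illustration using standard symbols, not a formulation drawn from the authors discussed in this article):

% Observed (phenotypic) variance partitioned into genetic, environmental
% and gene-environment interaction components.
\[ V_P = V_G + V_E + V_{G \times E} \]
% Broad-sense heritability: the proportion of the observed variance in a given
% population, under its particular range of environments, associated with
% genetic differences. It is a property of that population and environment,
% not a fixed property of the trait itself.
\[ H^2 = \frac{V_G}{V_P} \]

On this notation, the later argument about the ‘80% heredity’ figure is an argument about the value of such a ratio in one population under one range of environments, and nothing more.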

3.  Mental Capacities and Natural Selection

In order to understand the distribution of such an ill-defined concept as ‘intelligence’ in human populations it is necessary to consider the process of natural selection and its bearing on the mental capacities of mankind. Populations of individuals constitute common gene pools which are the starting points in the study of human variation.

The inherited characteristics which vary in populations can be divided into two types. Firstly, those traits showing a smooth and continuous variation from one extreme to another, and which are the result of the interaction of many genes with the environment in the course of development. Hence the appearance of many anatomical, morphological, physiological and behavioural characteristics in human populations. Secondly, there are those traits that exhibit a discrete rather than continuous variation. These characteristics are due to differences in single pairs of genes and are responsible for many of the serological and biochemical traits (such as the blood groups) which are more or less independent of the environment.

As we have already outlined, many early anthropologists made a study of the physical characteristics in humans that show continuous variation (e.g. Blumenbach and skin colour, hair colour, and differences in stature, weight and body shape), but in addition there are now many physiological traits under study (e.g. basal metabolic rate, capacity of the lungs, sensitivity to disease or environmental changes, and the age of menarche). The study of discontinuous variation in humans began in 1900 with Landsteiner’s work on the blood groups. Since then genetics as a field of study has identified many traits that exhibit considerable variation from one population to another.

Human populations have been described by geneticists in terms of the average values shown by various traits that exhibit continuous variation, and in terms of gene frequencies for those traits showing discontinuous or discrete variation. The pattern of similarities and differences that emerges is not a random occurrence. Genetic variation (and therefore diversity) is better known in the human species than in any other. All characteristics, whether physical, physiological, mental or morbid, show heritable variation as well as environmentally determined differences. We see from studies of human populations that they are clustered and differentiated on a continental basis. But what is of importance and also particular interest in this respect is why and how different groups possessing different traits evolved as they did.

But we must bear in mind that many traits that vary among populations are in fact those showing continuous variation, and are therefore susceptible to environmental influences during an individual’s lifetime. Disease and nutrition can lead to marked changes in both physiology and morphology.

Gene frequencies in populations change through four major mechanisms: mutation, natural selection, genetic drift and hybridization. The processes operating to bring about changes in the gene frequency of populations are therefore of several types, not all operating at the same time or with the same effect. Firstly, there are those processes which either introduce genes into populations or remove them, as seen in mutation and in gene flow via migration. Secondly, there are those changes in gene frequencies that occur through what is termed random sampling; genetic drift and the founder effect are two examples.

Mutation is the inception of a heritable variation, by chemical change at a gene locus, producing a mutant gene which may give rise to a mutant character. It is the only source of new genes. Mutation is a random process, occurring at a regular but low rate, and is in no way related to the demands of the environment or to any enhancement of the individual’s capacities. It is selection, not mutation, that is the most powerful directional influence on genetic variation: selection is the process of evolutionary adaptation whereby environmental agencies either favour a trait or eliminate it, thereby favouring or eliminating genotypes (the genetic constitution of an organism, as distinct from the phenotype or set of manifest traits) according to their fitness, their ability to adapt.
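
The contrast between undirected mutation and directional selection can be made concrete with the textbook single-locus model (a standard illustrative sketch, not taken from the sources cited in this article):

% One generation of selection at a single locus with alleles A and a,
% allele frequencies p and q = 1 - p, and genotype fitnesses w_AA, w_Aa, w_aa.
% Mean fitness of the population:
\[ \bar{w} = p^2 w_{AA} + 2pq\,w_{Aa} + q^2 w_{aa} \]
% Frequency of allele A in the next generation: the A-bearing genotypes,
% weighted by their fitnesses.
\[ p' = \frac{p^2 w_{AA} + pq\,w_{Aa}}{\bar{w}} \]
% Mutation, by contrast, merely supplies new variants at a low rate per locus
% per generation, with no regard to whether or not they are advantageous.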

Hybridization resulting from the interbreeding of two populations with different frequencies of alleles (genes that occupy the same relative position or locus on homologous chromosomes) will create a new population with a new gene frequency. Genetic drift is the least important way in which gene frequencies may change in a population. Drift means that chance itself acts as a factor which will determine the presence in one small group of particular genes which will spread to descendants of the group.

Therefore, if differences between individuals result in these individuals making a differential contribution to the generations that will succeed them, then we can say that the composition of the population will change. In other words selection will be operating on that group or population. Any hereditary characteristics therefore that confer an advantage of this type on their possessors are described as adaptive. The interaction of these processes has thus set the pattern of human diversity — and this is why it becomes important to determine the relative contributions of these factors to such a variable as human ‘intelligence’.

There are many variations between populations that can be traced to selection that has taken place in the wide variety of natural environments that have been inhabited by members of the human species throughout its history. Selection has operated particularly through variations and alterations in such factors as disease, nutrition and climate. It was on a continental scale therefore, in response to environmental conditions, that in all likelihood the various and diverse human characteristics developed. The intrinsic relationship between this diversity and the environment indicates that any classification of the species into discrete, arbitrary races is bound to be artificial.

Biological adaptation takes two forms. Firstly, genetic specialisation, with genetically controlled fixity of traits. Secondly, the ability to respond to a range of environmental situations, which is achieved by the evolution of traits that are favourable in those situations; this latter process is known as genetically determined plasticity. Human adjustment is achieved within and through a social environment, which is complex and undergoing rapid changes. This necessitates immediate adaptations that occur primarily in social relations, social practices, and in the mental rather than the physical realm.

In terms of the inheritance of intelligence we can analyse the relative contributions and interactions of specificity and plasticity. A brain that was rigidly specified would be completely determined by a set of genetic instructions, by a code. This code is carried in the molecules of deoxyribonucleic acid that an individual receives from both his/her parents at conception. If this were the case then the individual would be predictable from the moment of birth. He or she would be pre-coded and ‘this specificity would then be expressed developmentally by the growth of the brain — more or less independently of interaction with the environment.’ (Rose). However, such is not the case, because the author of that analysis goes on to elaborate that there is a measure of genetic specification and a measure of environmental plasticity. The problem is also to determine the relative contribution of each, bearing in mind that the mental capacities of man cannot be ‘reduced’ to the mere reflection of the molecules comprising the brain structure. It is just as much a fallacy to insist that the ideas of men are ‘nothing but’ the arrangement of molecules as it is to insist that ‘intelligence’ is ‘nothing but’ the unfolding of innate propensity.

Natural selection has resulted in the uniquely social environments which are characteristic of the fully evolved human species. True, the human species may diversify in many of its characteristics, and because these variations are influenced by many thousands of genes, human individuals will always differ. However, such differences as exist are greater within groups than between them. If human beings differ in their superficial physical characteristics, why should it follow that they must also differ in their mental capacities? The truth of the matter is that conditions in human societies throughout their existence have never been rigid enough to allow the selective breeding of types genetically adapted to particular statuses or forms of social organisation.

4.  Genetic Determination of Human Behaviour.

To date no genuine scientific endeavour has provided any data or basis for believing that the various groups of the human species differ in their inherited capacity for emotional and intellectual development. In so far as the genetic determination of human behaviour is concerned there is no scientific evidence to support conclusions that the cultural differences between various groups have their basis in heredity. We cannot conclude that because some peoples do not possess the technology of other groups that there is an innate reason for the discrepancy. If we wish to determine such differences and the reasons for their existence we must seek answers in social development and not in human chromosomes.

The primary determinant for the diversification of human patterns of behaviour is not heredity, but the cultural and social development and experiences each group has achieved — the mental and moral activity of the individuals comprising the group being conditioned in a social process of training and education, in their particular environment. Men as such make themselves, not as they please, but in relation to their surroundings, continually moulding and remoulding their environment, and forging new connections not only in their minds and between themselves but between themselves and their external reality.

5.  Genetic and Cultural Changes.

In the man-culture relationship over the last million years (or even the period up to about one hundred thousand years ago), it can be safely assumed that there was some biological factor in human evolution. However, during the last fifty thousand (or even one hundred thousand) years there is no evidence of any appreciable advance in mental ability. It has further been stated that the view assuming the psychic unity, or even uniformity, of mankind is now probably pivotal in the working philosophy of the majority of anthropologists, sociologists, and some biologists (Dobzhansky, 1970). It is maintained that biological evolution has run its course, that the genetic basis of culture has been established and is now a matter of evolutionary history. The only reasonable assumption for the study of variation is that the genetic basis of culture is uniform in its distribution.

There is no evidence that any mental selection processes are in operation among the diverse ethnic groups of mankind. Certainly none that could have acted differentially to produce different types of mentality. It is significant that to date it has been impossible to demonstrate any genetically determined mental variations between ethnic groups.

6.  Superficiality of Physical Traits

To develop the argument it is valuable to point out that the physiological differences between ‘races’ are almost entirely limited to surface characteristics such as skin colour and facial features.

In spite of statements made to the contrary, there are no microscopic or macroscopic variations that allow the expert to distinguish individuals of one ethnic group from another, apart from the visible characteristics which lead to the social definition of ‘race’. And yet we get a statement alleging the opposite from a prominent geneticist reviewing Eysenck’s polemical work on race and intelligence! Thus we read: ‘Intelligence can be studied by genetic and cytological, bio-chemical and anatomical methods as well as by those of pure psychology. Evolutionary, historical and experimental kinds of evidence connect with all these other methods and help to put the differences between human races and classes into a convincing picture.’ (Darlington, 1971). This same pundit has also, on his own account, elsewhere attempted to rewrite almost all of human history in terms of genetics (Darlington, 1969).

The view has been expressed that there is every reason to believe that in certain areas of the human nervous system education can establish new connections. It is generally accepted that the nervous system of adult human beings possesses a neuronal arrangement the general outline of which is genetic — but many of the details of which are determined by the life experience of the individual, thus:- ‘In man and to a lesser extent in other animals the nervous system continues to develop long after birth. This post natal development is influenced by the experience of the individual and is more or less individual in pattern… the neurons which make up the nervous system of an adult man are therefore arranged in a system the larger outlines of which follow an hereditary pattern, but many of the details of which have been shaped by the experiences of the individual’ (Ranson, 1959).

The support for this view stresses that the material bases of the brain structures which eventually function as the ‘mind’ are largely inherited in the same manner as other bodily organs — but in man this nervous system continues to develop long after birth and, as a result, is considerably influenced by the environment and the individual’s life. Hence ‘no matter what the quality of the genetic potential for intelligence may be in an individual, the expression of these potentials will be significantly influenced by his total environment’ (Rose).

7.  Environmental Determinants of Brain Function

The brain, in essentials, is the organ which coordinates and integrates all nervous activity. As we have established, it performs these functions to a large extent according to the educational opportunities available to it. This educational pattern, as we have noted previously, is always culturally and socially (and therefore class) determined and conditioned. An individual is therefore capable of behaving in accordance with the type and extent of the social and cultural life which he has experienced. The information that he or she will have coordinated and integrated within their nervous system will be in accordance with the individual’s experience and opportunity.

It has been established that certain basic brain mechanisms are genetically determined, and it would be unscientific to deny this. It would be just as foolish to deny the effects of environmental factors in the critical period following birth. This is an important point to stress when we consider the developmental stages of brain growth during pre-natal and post-natal life. Malnutrition in this period can lead to permanent deficits in brain structure, with parallel deficiencies in the interrelationships between brain cells. Deprivation of a less extreme type can lead to functional deficiency and therefore to forms of disturbed behaviour. It is now known that, in common with other genetic potentials, the ‘development of intelligence is perhaps more than any other trait dependent on the kind of environmental stimulation to which it is exposed’ (Montague, 1974).

It must be stressed therefore that damage can be done to the genetic potential for intelligence by malnutrition. In human beings the childhood patterns of each generation influence the manner in which it later rears its children. It follows that the environmental factors affecting one generation can exert an effect on a succeeding generation. These are known as transgenerational effects, and they can be quite substantial without the involvement of any genetically hereditary factors whatsoever (Rose). The effects of environment have been shown undeniably to result in a whole series of definite changes in brain structure and function.

When considering environmental determinants of behaviour in society we are studying a type of brain ‘damage’ that is sociogenic in origin. These disorders, which to a ‘significant extent’ are ‘due to social conditions resulting from an environment impoverished in the elements necessary for the maintenance of health’, have thus been termed sociogenic brain damage (Montague, 1974). That behaviour and performance in individual children can be altered by their environment is illuminated by the fact that quite subtle changes can lead to marked effects.

The period of childhood is of paramount importance and has to be borne in mind when assessing such a delicate component of the total human personality as intelligence. The expression of sociogenic damage can be seen in deficiencies not only of intelligence, but also of motivation and learning ability. It has also been established that weight and height are lower than normal in the children of malnourished mothers. In answer to the hereditarian propagandists we can say that their ‘physicalistic or biogenic bias seems to have been largely responsible for the failure to recognise the role played by social conditions in the causation of physical and behavioural deficiencies’ (Montague, 1974).

8.  Race and Intellectual Capacity

Every human being is born with a certain genetic potential and set of hereditary characteristics. This potential is dependent upon the individual’s familial genes, and not on race. Whether or not a person is allowed or able to develop their potential depends on many factors that are social, economic and political. The realisation of this potential is especially susceptible to the educational, social and cultural opportunities available to an individual, to a class, to an ethnic group.

A fluctuating concept such as intelligence (which is closely tied up with learning) cannot be reduced to the same sort of genetic explanation that would suffice for eye colour or blood groups. Those mental features that can be measured in humans and their respective groups appear to be differential characteristics that depend upon more than the nervous system. Alleged mental differences between various groups of people appear less considerable than the definite variation within groups. Because mental functions are so dependent upon social, cultural, and environmental factors, we are in no position to make judgements about any genetic similarity (or dissimilarity) between different groups. This is especially so since social and cultural conditions between groups are not comparable. In other words, no statement concerning the intellectual capacity of an individual or group of individuals is of any value if it is not accompanied by specific information about the conditions of the social and cultural environment in which the particular ‘intelligence’ in question developed. Any attempt, statistically or otherwise, to calculate what is due to heredity (or nature, as it is so one-sidedly called) and what to environment (or nurture) is misconceived since, as we have seen, even the physical and neural potentialities of an individual’s genetic pattern are already a product of past social and cultural conditions.

No discussion of so-called racial mental traits can be countenanced that ignores the consideration of all the relevant social variables. It is precisely these social and cultural factors that constitute the most important aspect in the creation of mental differences between (and within) groups. We can conclude that cultural achievements represent the outcome of social and historical experience — not reducible or separable from the expression of biological potentiality.

Part Two

 Intelligence and Scientific Racism

In this part it is intended to discuss the notion of ‘intelligence’ in relation to its development, in terms of the nature-nurture controversy, and to examine its ‘testing’ by psychometric methods. We will examine the nature of scientific racism and its modern application by Jensen, Eysenck, and others. However, the fundamental premise is that scientific racism is not an isolated academic phenomenon; it is woven into the fabric of bourgeois ideology, being championed by the apologists of class divided capitalist society.

9.  Nature and Nurture

The question as to whether nature or nurture, heredity or environment, specificity or plasticity, is more important in the shaping of man’s characteristics is misleading. The basic premise that one or the other is more important is fallacious. In respect of most human traits, variation is the result as much of the environment as of hereditary constitution. For those characters described as continuous we have come to the valid conclusion that genotype and environment are equally important; both are indispensable. Where exists the organism without genes? Where exists the organism without an environment? The expression of such traits as ‘intelligence’, health, and temperament is determined by the interpenetration of the respective genotypes with their environments. The question as to the respective roles of genotype and environment has nevertheless been raised in certain scientific circles. To what extent, the question is often posed, are differences due to genotypic, and to what extent to environmental, causes? Further, what part of an observed variance in a given trait in a given population is due to diversity of genotypes, and what part to diversity of environment? Diversity of environment, we have already concluded, includes the educational opportunities dictated by the cultural, social, and class structure of the society in question. The apparent dichotomy of hereditary and environmental factors is a false one because any trait possesses both genetic and environmental components in varying degrees.

We have a particular responsibility to establish the relative contributions of specificity and plasticity to ‘intelligence’. Whether the 80% heredity, 20% environment formulation of Jensen and Eysenck is the correct one we shall analyse later. The fundamental answer of this critique is that scientific racism is completely erroneous, and deliberately and irresponsibly deceptive.

The nature-nurture debate has to be stated correctly. The answer depends on which differences, which features, characters, traits, are under consideration (or attack?). For example — blood groups (polymorphisms), or ‘intelligence’. Two aspects of the problem appear, and it thus becomes obvious to any serious student of the matter that the genetic and environmental variables may be quite different for different characters, as they are when we compare blood groups and intelligence. Hereditary components operate decisively in an individual to establish his or her blood group, but environmental factors operate for such a character as language. With regard to intelligence we are faced with an interactive process in the formation of the mental capacities possessed by an individual.

We must once again stress the importance of social and cultural variables in the determination of the complexities of such a phenomenon as multifactorial inheritance. The relative weights of hereditary and social variables are not constant. They change in both space and time, and in doing so they forbid us to assume the existence in man of any population fixity of traits. Most human differences, as with many individual variations within species, lie between the extremes of rigid genetic and primarily environmental causation. It is therefore extremely difficult to estimate the relative contributions of the hereditary and environmental components to traits observed in organisms.

Environment is responsible for a part of the observed variance in stature. If, for example, people lived in a homogeneous environment then we could conclude that any diversity in their height was due to genetics; but if they possessed hereditary homogeneity then we could invoke environmental factors to explain variation in stature. This problem has made itself apparent in studies of the earlier maturation and increased height of today’s children, as well as in other researches in human development. Such questions as have been posed are: what is the relation of genes and environment to increased stature, or to the earlier onset of menarche? And what is the role of genetic potential in relation to better nutrition for so much less clearly conceived a trait as intelligence, which is in any case far more multifactorial and individual in character? There can be no simple answer or single solution to the specificity versus plasticity controversy. In order for the results of studies to have scientific validity in respect of time and place, objectivity demands that each trait be studied separately, and that no unnecessary or unwarranted extrapolation be made from one group to another when populations are the object of scrutiny.
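
The stature example can be made concrete with purely hypothetical figures (illustrative numbers only, not data from any study cited here): the same genetic variance yields quite different heritabilities depending on how variable the environment is.

% Broad-sense heritability for a continuous trait such as stature.
\[ H^2 = \frac{V_G}{V_G + V_E} \]
% Hypothetical uniform environment:  V_G = 40, V_E = 10  =>  H^2 = 40/50  = 0.8
% Hypothetical varied environment:   V_G = 40, V_E = 60  =>  H^2 = 40/100 = 0.4
% The figure changes although the genes of the population have not changed at all,
% which is why a heritability estimated in one population and environment cannot
% simply be carried over to another.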

10. Human Achievement — the Nature of its Nurture

Animals, unlike humans, have their ‘achievements’ systematised or consolidated in the form of changes in their biological organisation, on an organic basis — hence their genetic and developmental direction is to a great extent specialised, in contrast to the relatively unspecialised nature of the evolved human species (Homo sapiens). Man has his development consolidated by his material culture, by his social heredity. He possesses material objects, tools, commodities — products of his labour. He transmits down the ages and over distances the results of his social creativity via the means of language, science, and knowledge. In this we arrive at that feature of human development which bears not only on the human species but which helps to differentiate the species from other animals — education. On this level education, not genes, becomes the key to the appreciation and understanding of human life and its development.

In the millennia since the evolution of Homo sapiens we have been affected and guided by social laws, enumerated as a result of the relationship with, and struggle against, nature. Mankind has not only determined many of the laws of nature; he has, as a result of his working activities, arrived at the determination of a number of the social laws of nurture. Through such a complex development, and in consideration of the natural and social laws of their history, men have arrived at that higher aspect of human uniqueness — mind. Thousands of years of social development and change, labour, culture (and that common characteristic, intelligence) have produced more than millions of years of biological evolution. Historical processes (when compared to the age-long genetic) are rapid, accelerating, and quite out of proportion to the slower tempo of the evolutionary changes of characteristics in animal populations. Race is a biological and genetic concept which has a meaning within narrow definitive limits — where fixity of traits confers advantage. The mark of man is plasticity — man, who changes himself, could never do so if restricted by the straitjacket of genetic rigidity. If he were, we would not even be here to debate the issue — let alone work towards the society where racism is as historical, and as dead, as its first protagonists.

We can summarise therefore with the note that heredity and environment cannot simply be reduced to ‘nothing but’ a set of quantitative variables, models, paradigms, or definitive ‘tables of data’. We must not slip into the metaphysical pitfall of quantification for the sake of quantification, which inevitably leads to vulgar empiricism, mechanical determinism or narrow biologism. The realities of both history and genetic constitution are complex. However, their relative merits and truths are unlikely to be discovered by any dogmatic scientific racism, the function of which is not to determine or publicise the truth, the factual, or the scientific, but to justify the inevitabilities, inequalities, and injustices of the bourgeois status quo!

11. Intelligence and Intelligence Tests

No discussion about alleged variations in mental capacities, or the ‘tests’ which claim to measure them, is of any use without an analysis of what ‘intelligence’ is, and also is not. It has been said that ‘intelligence’ is ‘the ability of one man to understand more than another, and to understand it more quickly. It is therefore a matter of life and death. It concerns how the young live and the old survive.’ (Darlington, 1969). To most people, however, the term implies a functional capacity that is revealed during a course of action. To others ‘intelligence’ means a quality of ‘mind’, a kind of essence that is inherited by a child at birth. For the hereditarian point of view the classic definition describes ‘intelligence’ as ‘inborn, all-round intellectual ability. It is inherited, or at least innate, not due to teaching or training; it is intellectual, not emotional or moral, and remains uninfluenced by industry or zeal; it is general, not specific, i.e. it is not limited to any particular kind of work, but enters into all we do or say or think. Of all our mental qualities, it is the most far-reaching; fortunately it can be measured with accuracy and ease.’ (Burt, 1933). We would no doubt find complete agreement from Jensen and Eysenck that ‘intelligence’ is as defined above, nothing more than an innate general cognitive ability. But it will be noted that the definition amounts to nothing more specific than general cognition, a generality dressed up as a common essence of all-embracing importance.

Yet amongst many psychologists and workers in the field there exists no generally or mutually acceptable definition of intelligence, nor for that matter a consensus on how even to measure it accurately or correctly. Whatever psychometric tests measure, and they exclude functional intelligence, appears to be just the ability to do the tests. Whatever is measured is not something that can be readily stated in a precise and definitive manner. It is an enormous demand of faith to accept such a proposition, especially as something so vague yet reputedly innate and measurable as ‘intelligence’ depends on a ‘test’ which can be taught and learned. Not only has the practice of testing been used to reinforce the unfair system of streaming in education since the 1930s, it is now extended to the divisive tactics of racial bigotry and the propaganda of nascent fascism.

Intelligence tests isolate the individual from all of his established and supportive social relationships, thereby divorcing him from a ‘real-life’ situation. Under such conditions the tester proceeds to present the ‘victim’ with a series of symbols and items of a restricted nature which he has to manipulate. Such a situation is artificial and staged, bearing no similarity or comparison with the normal everyday processes of the individual’s life and environment. Questions asked during the test attempt to exclude (or succeed in excluding) any emotional answers or reactions to questions or situations. Yet in real life situations the emotional responses and feelings of an individual play an essential and important part in the totality of his or her response. What possible value can there be in a series of mechanistic tests that have been deliberately elaborated in order to achieve a remoteness, a detachment, from real life? If we conclude that ‘intelligence’ is only that which can be determined by an ‘intelligence test’, and therefore an ill-defined subjective construct, we can also conclude that psychometry lacks objectivity.

An important subjective aspect of the ‘intelligence test’ is that imparted by the participation and personal judgement of the ‘tester’. After all, who selects the questions? More especially, who determines the ‘correct’ answers? It is the subjective opinion (and ideology? class background? political persuasion?) of the tester or compiler of the test that determines the items selected for a test or sub-test. In the opinion of the tester the answering of the questions or puzzles must appear to involve the exercise of intellect. It has been stated that if you ‘choose to call a test an “intelligence” test, then it is a natural assumption to suppose you are measuring intelligence. Starting from such an assumption it is easy, using the arrogance of statistics, to reach conclusions like those of Professor Eysenck.’ (Bono, 1971).

‘Intelligence’ tests do not measure mathematical or mechanical ability, and neither do they take into consideration such variables as moral outlook, imagination, emotional development and stability, initiative, or even the non-establishment of rapport between tested and tester. Furthermore, we have already established that mental capacities or ‘intelligence’ (IQ) can be affected by malnutrition, illness and even accident (cerebral injury etc). The critical factor, the crucial variable in the entire set of questions, models or tests, is the cultural, environmental or social one. Psychometric tests or IQ tests are culturally biased. Whether this is intentional is another matter, but in all events and in all serious analysis it is unavoidable. All tests to varying degrees are influenced and determined by the cultural milieu (and therefore class outlook) of their compiler, and the needs of those who require the results! In other words the successful completion of, or performance in, a test depends on the participation of the individual in the same cultural background as the test or tester.

One factor militating against the tests is that they incorporate ‘convergent thinking’, so that there is only one correct answer. Such a deficit means that tests are quite unable to investigate ‘divergent thought’. With ‘thinking exercises’ which are not IQ tests, however, it has been found that results do not correspond with those from intelligence tests. Thus a person who can see all the answers to a problem, or an unobvious answer, ‘scores badly on IQ tests, but can do well in thinking ability (TA) since thinking requires concept richness and fluidity rather than fixity.’ (Bono, 1971). It is evident, then, that IQ tests do not measure general cognitive ability. They measure only a particular mode of thinking favoured by those who invent them.

The cultural and social milieu of intelligence testing contains within it the methods and manners of socialisation and education, so that culture-biased tests will favour individuals from the particular culture or level of society concerned. Any evidence of this is ignored by Eysenck, especially if it demonstrates the fact that we have to learn to learn.

12. Tests, Class and Education

The fundamental point at issue in capitalist society is that ‘intelligence’ as a quantifiable attribute is in reality a class-conditioned capacity. In a class society psychometric testing has a valuable role to play in the provision of streaming in schools. The purpose of intelligence tests therefore is to help provide and justify different ‘types’ of school for ‘levels’ of education. Tests therefore function as a means of selection, of division of children into streams according to their alleged innate ability. Yet when we analyse this system further we find that ‘streams’ or classes in the various types of school only appear to vindicate the hereditarian argument. This is not in fact the case. The entire edifice of streaming is artificial; it has been deliberately created; it is not a true reflection of the mental capacities of the children as a whole, and neither is it a true reflection of ‘intelligence’ with regard to class or ethnic group. Why, at first glance, does it appear that the children of ‘upper’ or ‘middle’ class parents obtain higher IQ results than working class children? We have already established that the tests involve a great amount of acquired knowledge for their successful completion. It is at this point that the tests militate against working class children. We are not saying, however, that working class children in fact know less than children from other strata.

No, we are saying that they have not had, in the majority of cases, the opportunity to obtain the type of knowledge that the tests are testing. Even with the so-called culture free (and therefore fair?) tests the end results depend upon the level of education of the individual. The ‘fair’ tests include non-verbal exercises and symbol manipulation — but still, even reaction to and understanding of symbols is socially conditioned and educationally determined.

We have also established that there is a gap in the educational attainment of working class children compared with ‘higher’ class children. Furthermore a similar gap exists between the attainments of black children and white children. The reasons are the same — although additional cultural and historical factors are also in operation. The reason is to be found in the level and nature of educational opportunity, which is decided by class politics. As intelligence tests are accredited with some validity by the attempt to compare them with educational attainment we can see that they are class bound. Just as there is a close relationship between the results of tests and educational achievement there is also a close association between the results and social class. The questions set in the tests favour not the working class but the middle class child.

The questions, and nothing but the questions, determine (as well as define) the type of ‘intelligence’ being tested or measured. Yet intelligence as measured by tests is far from pure intellect or complete mental capacity, not to mention thinking ability as discussed previously. It is time that we accepted that ‘IQ tests test mainly the ability to do IQ tests and stop using the word “intelligence” in connection with them.’ (Bono, 1971). IQ tests as class-bound exercises can only determine what is in reality a class element deriving from educational opportunity and achievement, and expressed in the answers to loaded questions in the tests. Intelligence, as measured by the tests, is therefore a class-conditioned attribute. The fallacious concept arises that working class and minority ethnic group children are innately less intelligent than middle class or upper class children, but this error is based upon a class-orientated and mistakenly hereditarian point of view. IQ tests are part of the armoury of educational policies saturated with bourgeois ideology and orientated to the requirements of capitalist society.

We have demonstrated so far ‘that IQ is not, and could not be, a measure of cognitive abilities abstracted from all social and motivational factors. In as much as IQ tests measure anything, they measure the likelihood of educational and social success in a particular society. This is not to deny that cognitive abilities do contribute to success, but rather to claim that it is impossible to consider such abilities in isolation from their social determination and expression. The assumption on the part of intelligence test constructors that this is possible, combined with their pre-occupation with the technical details of test construction, has given the concept of IQ a quite spurious aura of scientific respectability.’ (Ryan, 1972).

13. Tests and Ethnic Groups

There exist four major reasons why comparisons of IQ test results from different groups are notoriously unreliable. These are: schooling; language; motivation; and socio-economic background. Intelligence tests fail to take into consideration cultural differences when applied to different ethnic groups. As we know, psychometric tests are not only class biased, they are also culture biased. Anthropological and ethnological studies exist to prove that no meaningful results can be obtained from the comparison of test results obtained from different cultural groups. In other words, different IQ achievements within the USA or within Britain reflect the particular discrimination, cultural and class positions of Black, or Irish, and other sections of the population defined by racial ideology and practice. The fact that the tests cannot be extended to the populations of other societies and their cultures proves that they measure the divisions created within the societies in which the tests were devised.

In so far as schooling is concerned, the score achieved in an IQ test is influenced by the length and quality of that schooling, as well as by such factors as overcrowding, the adequacy of facilities and teachers, and the level of motivation. Language presents a problem, due to so many of the tests being based upon verbal ability, and it is compounded by any linguistic difficulties of minority, ethnic and immigrant groups. In so far as motivation is concerned we have to recognise that not all individuals are equally interested in performing IQ tests, especially when the test situation can be complicated by diffidence, caution, mistrust, and outright alienation by testers’ attitudes. The provocation of anxiety or resistance, or the arousal of suspicion, can all contribute to a deterioration in test performance.

It has been found in studies of black pupils in the USA that the very under-privileged status of black pupils in the ghetto schools has led to the development of harmful teaching attitudes. Such attitudes have developed into declarations that such black children are un-teachable. In such a climate the children are then classified as inferior, which in turn has a severely limiting effect on the effectiveness of subject matter in the creation of self-esteem and self-confidence. In Britain criticism has been justifiably levelled at factors operating against the educational achievements of West Indian children in certain schools. For the majority of immigrant children educational achievements are as for the indigenous children, but only where conditions permit such attainment. It is postulated here that, owing to cultural differences between the indigenous groups and the immigrants (or more precisely the children of immigrants, who are in fact born here), far too many black children are being classified unfairly as educationally subnormal. Not only are they being classified erroneously as a sub-species, they are being described as sub-cultural, when in reality they form a genuine sub-culture within the given population as a whole.

In the majority of cases they in fact share a common socio-economic background with the white working class (who themselves possess subcultures that are regional and stratum orientated). In such a situation where cultural deprivation of both ethnic groups and white working class is so pervasive and enduring within society is it any wonder that children from such circumstances have little concern for education and intellectual pursuits?

With regard to the educational attainments of West Indian children in Britain it has been stated that ‘the dismal failure of successive British capitalist governments to take into consideration the special needs of West Indian pupils underlies the precipitous situation obtaining in schools with West Indian children… guiltless of their own demise, [they] are underachieving in many schools because their special needs are being detrimentally neglected by the Authorities.’ (Cambridge, 1972). The only conclusion was the one drawn: that West Indian children are forced into the lower ability streams through no fault of their own. In terms of opportunity we have not yet sunk to the level of the fascist-minded education system that prevails in South Africa, but one wonders how much the Jensens and Eysencks of this world sympathise with the view that the white South African’s duty ‘to the native is to Christianise him and help him on culturally.  Native education should be based on the principles of trusteeship, non-equality and segregation; its aim should be to inculcate the White man’s view of life…’ (South Africa, 1948).

In consideration of the resurgence of racism and racialism in Britain represented by ‘Powellism’ and the activities also of the National Front and other fascist groups, can we further understand the reason for the appearance of the works of Jensen and company? In view of the polarisation of attitudes to race in both this country and in the USA we can indeed claim a relationship between the views of the scientific racists and the political climate.

A constant danger to be reckoned with is the possibility of the spread of the ideas of the scientific racists. In the controversy there is an imperative need to prevent the permeation of the bourgeois mentality, to combat the notion that IQ is a real and quantifiable part of the human intellect. We can equate a ‘high’ IQ with the bourgeois doctrine of ‘selfish individualism’, with the bourgeois determination to ‘get ahead’ — all of which constitutes much of the ideology of capitalist education policies. It becomes increasingly clear that the idea that man is ‘nothing but’ a machine fits very neatly into the mechanomorphic view that all the working class is fit for is repetitive ‘robot’ production work.

In relation to class power we can see that the theories of the innate and immutable determination of ‘intelligence’ are part of the outlook of scientific racism, which is itself part of the ideology of the bourgeoisie. In other words scientific racism is part and parcel of the total means whereby the ruling class maintains its power over the working class, the dictatorship of the bourgeoisie. The question of race and intelligence, the issue of class and intelligence: neither is a mere academic issue. Neither is simply a right wing versus left wing political controversy. It is not a simple issue of black versus white, Labour versus Conservative, or liberal versus radical. It is part of the whole struggle between the bourgeoisie and the proletariat. It devolves upon the fundamental issue: in whose hands shall lie economic and political power?

Yes, the pundits do have the ‘intelligence’ to see the writing on the wall; they are fully aware of the inevitability of the capitalist system being replaced by scientific socialism, and it is for this reason that they must divide the people on spurious racial issues and deprive them of opportunities for their intellectual development. But for all the Jensens and Eysencks of this world, nobody will prevent the eventual victory of the common people, with the release of their boundless potential and the development of their intellectual abilities.

The bourgeoisie is desperate, it is in crisis, it is in decay and disarray; it is therefore more dangerous than ever. The theories of scientific racism are a pointer to its direction, and we dare not forget that the ideology of nazism and fascism still today lurks within the womb of imperialism. The ‘bitch is in heat again’, as the baying hounds in the form of Jensen and Eysenck fully show! It is the ‘intelligence’ and the abilities of the ordinary people, white, black, brown, and yellow, who will inevitably and inexorably prove the unscientific absurdity of the views of scientific racism. Jensen and Eysenck are destined to become as much a part of the garbage of history as the system that spawned them. It is painfully obvious to many that they have the ‘intelligence’ to appreciate the fact, as their furious and specious defence of the status quo shows!

14. The Jensenist Heresy

The theories of Jensen on educational attainment and the ‘heredity’ of intelligence have become the subject of a fierce controversy. The current argument was initially provoked by Jensen’s views published in the USA and in Britain. Jensen, professor of educational psychology at the University of California, Berkeley, is of the opinion that ‘individual differences in intelligence, that is IQ, are predominantly attributable to genetic differences, with environmental factors contributing a minor proportion of the variants among individuals.’ (Jensen, 1972). Jensen proceeds to argue that a genetic hypothesis to account for observed differences between whites and blacks in the USA is not unwarranted. He calculates the IQ heritability factor as 0.8, or 80%. His main theme is therefore obvious: it is none other than the discredited view that the variations in IQ between individuals are due predominantly to hereditary factors, and that this applies both within and between groups.

Developing his platform Jensen proceeds to argue that a dysgenic trend exists within the negro population in the USA. In other words we have a rehash of the theories of the ‘fall of civilisation’, or as Jensen sees it, the higher birth rate of low IQ families will lead to a general decline in the intelligence of the negro sub-group. It is also the view of Jensen that different mental abilities are differentially distributed amongst different social classes, as well as racial groups. Two of these abilities mentioned by Jensen are associative learning and abstract reasoning.

Jensen therefore attempts to show that negro performance is not as good as that of whites when it comes to doing IQ tests. The difference he ascribes to the operation of genetic factors. Jensen derives all of his evidence from psychometry, believing that whatever the tests measure must constitute intelligence. By claiming that both racial and class differences are due to heredity he is further intimating that such differences are impervious to the effects of education or socioeconomic policies. As he says ‘these genetic differences are manifested in virtually every anatomical, physiological, and biochemical comparison… one can make between representative samples of identifiable racial groups… there is no reason to suppose that the brain should be exempt from this generalization.’ (Jensen, 1972). Jensen’s argument about IQ tests measuring intelligence is far removed from what we have already established concerning the hereditary potential of mental capacities.

15. Jensen and Education

Jensen’s instigation of his scientific racist platform was prompted by the failure of the compensatory education programmes in the USA, the so-called ‘Headstart’ programmes. Jensen argues that their failure was due to ignorance of the alleged inherited component of intelligence. The aim of these compensatory education programmes was to provide intensive education sessions to boost the IQ, and therefore the educational and occupational opportunities, of poor children, especially black ghetto children.

The four major groups of mankind are represented in the Americas, and these are the African, the Mongolian, the Amerindian, and the Caucasian. The United States is furthermore an area where an extensive process of miscegenation or mixing of genetic factors has taken place on a large scale. In the context of our previous conclusions and the population structure of the USA we cannot attempt to define ‘race’ in biological terms. In view of this, ‘race’ can only have a ‘sociological’ connotation. It is here that Jensen exposes some of his unscientific views. He employs a social concept of race and then proceeds to regard this group as a biological entity, in order to confer upon it the validity of a sub-species.

The limited and experimental ‘Headstart’ programme only ran for periods of months or during vacations. Jensen wrote his first polemic in 1969, at a time when the USA was preoccupied with racial differences, and he based his arguments on the failure of the Headstart project, attempting to lay the blame on the heads of the children themselves. In fact Headstart had hardly started when Jensen began his work, and it later showed results: IQ scores were boosted until the children returned to their un-boosted environments and fell back to their former levels.

For those of the like of Jensen, ‘how comforting to find a “scientific” view which suggested that the racial differences were rooted in genetics and beyond environmental manipulation.’ (Richards, 1973). Further to this, Jensen has proposed that class variations in intelligence are explained by two different genotypic processes. These were termed level 1, for ‘associative ability’, and level 2, for ‘conceptual ability’.

Whites in general were supposed to have performed better than Blacks on general IQ (i.e., conceptual ability) and the poor working class (including Black children) better on associative rather than conceptual tests.

What is the implication here? Certainly it means that poor (black) children, due to their having low IQs, would benefit from rote learning, which in the opinion of Jensen means boosting their opportunities for associative learning. Low IQ white children, however, would benefit only from boosted conceptual learning opportunities.

16. Jensen and his British Critics

Since Jensen’s protestations in the USA we have had a battery of supporting articles, books and ‘researches’. An extreme example has been the view of William Shockley, with his proposal that low IQ negroes should be sterilized and that a programme with such an aim should be inaugurated (Jensen, 1972). William Shockley, a professor of engineering at Stanford University and Nobel Prize winner in 1956 for electronics, has repeatedly tried to persuade the US Academy of Sciences to finance research to reduce what he terms the environment/heredity uncertainty. In this field Shockley ‘is worried about much the same issues as Jensen, but he expresses himself without the smokescreen of verbal qualification that is part of the normal trappings of science and a noticeable item in Jensen’s armoury.’ (Gillie, 1970).

In July 1970 Jensen came to Britain and defended his position in a debate at the Cambridge Union which was sponsored by the Cambridge Society for Social Responsibility in Science. The Society had its own views, which it made quite obvious in an introductory leaflet to which Jensen naturally took exception. The leaflet pointed out the main political theme of what we have come to view as scientific racism when it said that ‘the constant harping on pseudo-scientific “biological differences” between children is only the expression of a political wish to retain the worst social inequalities of the British and American political systems’.

In a similar vein Jensen took his critics to task with his usual excuse that he is objective and everyone else is blind. He objected to an opposition indulging in ‘well meaning wishful thinking… ostrich-like dismissal of the subject and taboos against open discussion.’ (Gillie, 1970). Yet critics of Jensen have correctly pointed out that the IQ test is a political weapon, not a neutral means of measuring ‘intelligence’. Just because the Headstart project proved a near disaster is no reason to think that it is useless to attempt any environmental or educational improvements. The crux of the matter is as stated: the ‘possibility that genetic factors play a part in IQ differences between different racial groups is quite irrelevant to the educational problem of aiding the opportunities of a particular disadvantaged child’ (Morris, 1970).

17. Jensen and his British Support: Eysenck

In Britain support was voiced by Eysenck for the racist ideas of Jensen. The support took the form of a particularly mendacious book which appeared two years after Jensen’s salvo of 1969. Eysenck in 1971 published the first of his assaults on the mental capacities of the ethnic groups and working class of this country. Following in the Galton-Burt-Jensen tradition Eysenck had previously established a reputation as a Black Paper pundit, having written on one occasion that ‘an elite, pre-destined and predisposed to intellectual leadership and to the enjoyment of the fruits of education’ had to be developed on the basis of these characters being genetically determined. Echoes of Gobineau’s ‘born to rule’ minorities (Eysenck, 1969). Associated with Eysenck in the preparation of this particular paper were Kingsley Amis, Robert Conquest, and of course — Sir Cyril Burt. The central theme and policy was opposition to ‘free play’ and discovery methods in primary education; opposition to comprehensive education; opposition to the expansion of higher education; opposition to expenditure of funds on deprived educational areas. In addition this pernicious group supported selection at the age of eleven; IQ tests and the consequent streaming; grammar schools; and the continuation of the traditional system of examination.

As a matter of policy both Burt and Eysenck, in common with Jensen, are devoted and committed exponents of psychometry. All three are avid supporters of the Galton paradigm, with its attendant dedication to the preservation of an elite as a necessary prerequisite for social progress. Whereas we could call Galton the father of ‘scientific method’ as applied to human variation, we owe the elaboration of the first ‘intelligence’ tests to the Frenchman Binet at the turn of the century. Yet, to the credit of Binet, he never intended that they should be used as they were. Indeed, Binet himself protested at the vulgarisation and misuse of his tests.

Eysenck has adopted the position whereby he champions the views of Jensen as a service to humanity, stressing further that the educational implications of his work indicate the necessity for selection and streaming. Eysenck is also firmly of the belief, as is Jensen, that compensatory policies for the educationally under-privileged and deprived will be valueless. Not only this, but he holds that remedial measures can only do harm because, as he says, ‘with limited resources available for all of education, special help to some means less education for others’.

In his polemical work Eysenck demonstrated in a very loose and unscientific manner that in all truth he could offer no tangible proof of his theories. With regard to anthropological studies he could only make pathetic comparisons, as for example the ‘constant discovery of new blood genes has forced experts to increase the number of races recognised’. Intelligence is a multifactorial and continuous genetic trait heavily inter-dependent on the environment and culture. Blood groups are discrete and discontinuous traits — and since when has a blood gene been regarded as a racial indicator anyway? With equal conviction Eysenck further states that ‘North American negros are certainly hybrids’. It can safely be assumed that North American whites are also hybrids.

The researches of Eysenck are completely irrelevant to the needs of education, as well as to the problems that surround it. There can be no value in the work of Eysenck and his fellow racists, only fuel for confusion, and weapons for supremacists, fascists, and those with a vested interest in a class divided social system. Certainly in terms of Eysenck’s ideas it is obvious that he is unacquainted with the recent advances in neurobiology and brain physiology. If he is aware of these advances then we can only conclude that he has deliberately ignored them.

Both Eysenck and Jensen argue that the 80% heredity and 20% environment equation for ‘intelligence’ is correct and unalterable. We have to examine this superficial analysis in terms of modern population genetics theory. To find out what the influence of heredity is, the unit within which the heredity factor is being measured must be a genetic unit, i.e. a common gene pool; and any discussion is of necessity restricted to within a fairly homogeneous group in terms of genetics and cultural experience.

The 80/20 ratio tells us only about variation within a population at a given point in space and time. It can as a consequence tell us nothing about the inherited diversity existing within another group, especially if there is little or no miscegenation or interpenetration of genes between the two separate groups. Without a free and open process of interbreeding, as well as free and equal social and cultural exchange, the 80/20 hypothesis remains untestable and therefore untenable. Any conclusion that differences in IQ test performance between statistically and socially defined groups must be genetic, because 80% of IQ ability is inherited, is equally untenable. It remains not only pseudo-science but pernicious propaganda.
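The logic of this objection can be put in the standard notation of quantitative genetics. The following is a minimal sketch using textbook symbols, not figures taken from Jensen or Eysenck themselves:

$$V_P = V_G + V_E, \qquad h^2 = \frac{V_G}{V_P}$$

Here $V_P$ is the phenotypic variance of test scores within a single breeding population, and $V_G$ and $V_E$ are its genetic and environmental components. A within-group heritability of $h^2 = 0.8$ partitions only that within-group variance; the difference between two group means, $\bar{x}_A - \bar{x}_B$, is not partitioned by this statistic at all and can in principle be wholly environmental. That is precisely why the extrapolation from the 80/20 ratio to between-group differences is untestable in the absence of a common gene pool and a shared range of environments.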

18. Scientific Racism and its Role in Society

We can bring our investigation and critique of the racist views of Eysenck and Jensen to a conclusion by analysing the purpose and role of their theories in modern capitalist society. There is nothing at all ‘scientific’ about psychometry. The batteries of tests represent a pretension that IQ tests are objective, absolute, sophisticated and a true reflection of innate ability. We cannot arrive at an all-round understanding of the issue of ‘intelligence’ unless we consider all the factors involved. A refutation of scientific racism in purely genetic terms is as little use as is a purely sociological refutation. The entire polemic concerning the distribution of ‘intelligence’ and its connection with ‘race’ and social class is permeated with elements of ‘biologism’ and the other ammunition of scientific racism.

The revival of biologism and other obsolete doctrines must not be viewed in isolation from the crisis that imperialism is going through; the decline of capitalism brings forth many outmoded outlooks, as well as reviving old positions and analyses, to hold back the development of the democratic and socialist forces in the imperialist states and the developing countries. In its ever increasing frustration the ruling capitalist bourgeoisie frantically scrapes the barrel for any weapon to use in the class struggle. As such the ideologists and apologists for capitalism have encouraged the dissemination of biologism, which has the intention of describing and interpreting the activity of human beings in terms of biological urges, instincts, innate propensities and animal behaviour. Unwarranted extrapolations from animal to human societies are made in order to justify the status quo, maintain class oppression and give credence to war, persecution and reactionary policies.

We have seen such doctrines popularised throughout the fields of anthropology, biology, and human science. Examples can be seen in the polemical pseudo-science of Desmond Morris and Robert Ardrey, as well as of Jensen and Eysenck. The hereditarian theory of intelligence fits very neatly into the ideology that attempts to explain human development mainly in terms of the biological, in disregard of socioeconomic factors.

References

 Montague, M. F. A.  (1957).   The Direction of Human Development, Watts, London

Shuey, A. M.  (1966).  The Testing of Negro Intelligence.

Jensen, A. R.  (1969).  Environment, Heredity and Intelligence, Harvard Educational Review, vol. 39.

Jensen, A. R.  (1972). Genetics and Education, Methuen

Eysenck, H. J.  (1971).  Race, Intelligence and Education, Temple Smith.

Simon, B.  (1971).  Review of Eysenck’s text, Morning Star, London.

Boyce, A. J.  (1968).  New Scientist (18.10.68)

Bodmer, W. F.  (1972).  Race and IQ: The Genetic Background in Race, Culture and Intelligence, Penguin

Lewis, J.  The Uniqueness of Man,  Lawrence & Wishart, London.

Rose, S.  The Conscious Brain, Weidenfeld & Nicolson.

White, L. A.  (1949).  The Science of Culture.

Dobzhansky, T.  (1970).  Mankind Evolving.  Yale U P.

Darlington, C. D.  (1971). Review of Eysenck’s 1971 text.

Darlington, C. D.  The Evolution of Man and Society. Allen & Unwin.

Ranson, W. & Clark, S. L.  (1959).  The Anatomy of the Nervous System, 10th Ed.

Rose, S. (1972). Environmental Effects on Brain and Behaviour (in Race, Culture and Intelligence, Penguin)

Montague, M. F. A.  (1974).  Man’s Most Dangerous Myth: The Fallacy of Race, Fifth Ed. OUP.

Burt, C.  (ed). (1933). How the Mind Works.

Bono, E. de.  (1971).  Review of Eysenck’s 1971 text.

Ryan,  J. (1972).  IQ — The illusion of Objectivity (in Race, Culture and Intelligence, Penguin 1972)

Cambridge, A. X.  (1972).  Education and the West Indian child: a criticism of the ESN school system. The Black Liberator, 1972; and Bernard Coard, New Beacon, 1972.

Programme for Education, Institute of Christian National Education, South Africa  (1948).

Richards, M.  (1973). Times Higher Education Supplement, 20.7.73

Shockley, W.  Review of Educational Research, vol. 41.

Gillie, O.  (1970).  Science Journal, September.

Morris, R.  (1970).  New Scientist, (23.7.70)

Eysenck, H. J.  (1971).  The Rise of the Meritocracy.

Postscript

‘There are biological reasons why significant racial differences in intelligence, which have not been found, would not be expected. In a polytypic species races adapt to different local conditions but the species as a whole evolves adaptations advantageous to all its races, and spreading among them all under the influence of natural selection and by means of inter-breeding. When human races were evolving it is certain that increase in mental ability was advantageous to all of them. It would then have tended over the generations to have spread among them in approximately equal degrees. For any one race to lag definitely behind another in overall genetic adaptation the two would have to be genetically isolated over a very large number of generations. They would, in fact, have to become distinct species; but human races are all interlocking parts of just one species’ (G G Simpson, Biology and Man. Harcourt, N Y, 1969).

Sir Cyril Burt (who died in 1971) was recently accused by Oliver Gillie (Sunday Times, 24.10.76) of the perpetration of scientific fraud with respect to certain of his researches into IQ. The charge against Burt is that he knowingly published false data and invented crucial facts to lend support to his theory that intelligence is largely inherited. The four major accusations levelled against Burt are: (1) that he assumed parental intelligence during interviews and then proceeded to regard such assumptions as actual facts; (2) that two of his associates (who were credited with joint authorship of some of his papers) may not have been real individuals; (3) that Burt supposedly produced identical answers to an accuracy of three decimal places from separate sets of data (which is in reality a statistical near-impossibility); and (4) that Burt actually tailored data to fit his predictions in order to justify his pet genetic beliefs.
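The third charge rests on elementary sampling arithmetic. As a minimal sketch, using the standard large-sample formula for the error of a correlation coefficient rather than any figure specific to Burt’s own samples: if a correlation $r$ is estimated from $n$ pairs, its Fisher transform $z = \tanh^{-1}(r)$ has standard error

$$\operatorname{SE}(z) = \frac{1}{\sqrt{n-3}}$$

Even for $n = 1{,}000$ pairs this gives $\operatorname{SE}(z) \approx 0.03$, so independently gathered samples would normally be expected to differ in the second decimal place of $r$; for the far smaller kinship samples typical of such studies the fluctuation is larger still. Exact agreement to three decimal places across supposedly separate and enlarged samples is therefore vanishingly improbable.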

A Sunday Times investigation apparently came to the conclusion that certain elements of Burt’s work were therefore a perpetrated fraud and a deliberate distortion. Burt’s data were collated by Dr L. J. Kamin (Princeton University) and serious inconsistencies became apparent. Further to this investigation, a Dr A. Clarke and a Professor A. Clarke (University of Hull) also investigated the consistency of Burt’s data. Not only was there no trace of Burt’s alleged associates, but the Clarkes came to the conclusion that, scientifically, Burt’s results are a fraud.

Within the context of this article not only is the ideological framework of the immutable hereditary determination of intelligence seen to be untenable and unscientific, but it also appears that, in order to justify Burt’s innate theories, data were deliberately falsified and manipulated. Whether or not Burt did falsify his data, however, is not really relevant to the theme of my present critique.


Filed under Volume 2

The Baker Rifle


Baker rifle, in the Pitt Rivers Museum, Oxford. Cat.No. 1884.27.39.

Introduction

The so-called ‘Baker’ rifle is, in fact, the Pattern 1800 Infantry Rifle, referred to since Victorian times as the Baker Rifle. This infantry rifle was used by the British Army throughout the Napoleonic Wars, and it had the distinction of the longest service of any rifle in the British Army. The rifle placed in the army’s hands by its inventor, Ezekiel Baker, ‘…was a superbly designed weapon, both robust and practical.’ (Urban, 2004). The Baker Rifle, a muzzle-loading flintlock, was the first rifle to be issued as a standard weapon to British troops. Issued to the Rifle Brigade in 1800, it remained in use until 1838. There is mention of it being used by troops engaged in the so-called ‘Kaffir’ Wars of 1851, and records of its distribution as late as 1841.

Detail of 1884.27.39

The Baker Rifle in the Pitt Rivers Museum

The displayed rifle in the Gun Case has a label that states the weapon is a Baker Rifle of circa 1800 that was issued to specialist rifle regiments at the beginning of the 19th century. The label further states that, with the technology of the day, it was too costly for general army issue, and that it was the first British military firearm to be rifled. It has an Accession Number of 1884.27.39. The rifle was donated by Augustus Henry Lane Fox in 1884 (and is therefore part of the Founding Collection) but was collected prior to 1874. It was originally displayed in the Bethnal Green and Kensington Museums (V&A).

Stamped on the silver coloured metal lock of the rifle is ‘Tower’ and ‘GR with crown’. Also on the lock is a proof mark of a crown over an arrow or chevron pointing downwards. On the brass butt tang is stamped ’14/9″CRR’. The weapon is noted as being 1165 mm in length. As will be shown later, the rifle on display is, in fact, an 1806 Tower Pattern Infantry Rifle (made after 1806), possibly issued to the Ceylon Rifle Regiment (CRR), which was formed in 1817, dressed in green, and supplied with a rifle that also took a sword bayonet. Regimental marks were often stamped on the butt tangs of rifles.

Stamped on the barrel (see illustration) is a set of proof marks. The crown and GR always appears above the crown and crossed sceptres symbol. The symbols combined on the barrel of this Baker Rifle indicate that these are Georgian Government proofs from 1815 to 1830 (Bailey, 1986). The barrel is government manufactured: if the barrel had been made privately, and only proofed by the Ordnance proof house, the crown and sceptres would be stamped twice, whereas this proof mark sequence always occurs in conjunction on rifles made and proofed by the government ordnance. The proof marks therefore also show that this rifle was made after 1815 and before it was supplied to the Ceylon Rifle Regiment (CRR) circa 1817.

The Origin of the Baker Rifle

The first breech-loading rifle made for army use was the Ferguson rifle, designed in 1774. Rifles had been employed by some units of militia in a number of actions with noted success, and it was this success that came to the notice of the British Board of Ordnance. The Board had bought, in 1796, some rifles from the famous gunmaker Durs Egg; this weapon looked like a musket and had a 39 inch barrel with a 0.704 inch bore. The late 18th century Board of Ordnance was a department separate from the British Army that researched the procurement of the best weapons, and was established in offices at Horse Guards. It had overall responsibility for determining which weapons regiments used, as well as for naval artillery requirements. As such the Board was a scientific and professional organisation. It was its intention to obtain the best rifle to equip an elite and specially trained rifle corps, as well as already existing rifle units such as the 5th Battalion of the 60th Regiment of Foot.

In January of 1800 Colonel Coote Manningham received a letter from the Adjutant General of the Army which informed him that the Duke of York intended to give him command of a Corps of detachments from 14 Regiments of the Line. This was for the express ‘…purpose of its being instructed in the use of the Rifle and in the System of Exercise adopted by soldiers so armed.’ (WO 3/21 cited in Blackmore, 1994). This Corps of Riflemen, at Woolwich, was, as Manningham was informed, not a distinct or permanent unit but a ‘…Corps of Experiment and Instruction.’ (WO 3/32 cited in Blackmore, 1994).

During the first week of February a series of rifle experiments was conducted at Woolwich, near London. Apart from the words of Ezekiel Baker, and the recorded travel expenses of the Master Furbisher, no report of the rifle tests exists. The trials of many submissions resulted in Ezekiel Baker’s barrel being adopted as the first issue British rifle. As Baker himself opined: ‘In the year 1800 the principal gun makers in England were directed by the Honourable Board of Ordnance to procure the best rifle possible, for the use of a rifle corps (the 95th Regiment) raised by the government. Among those who were selected on this occasion, I was desired to attend: and a committee of field officers was appointed for the purpose of examining, and reporting according to their judgement. There were also many rifles from America and various parts of the continent produced at the same time. These were all tried at Woolwich; when my barrel, having only a quarter of a turn in the rifle, was approved by the committee.’ (Baker, 1823). The initial design was not innovative but reflected the better features of continental examples. Baker’s first two submissions were rejected by Manningham because they were of musket size and bore and believed too cumbersome, but the third model was approved and this eventually became the first rifle pattern adopted by the British army. As Baker himself said: ‘When the 95th Regiment was first raised, I made some rifles of equal dimensions of the muskets, in order that they might be supplied with ammunition, if necessarily supplied, from any infantry regiment that might be near them. They were, however, strongly objected to by the Commanding Officer, Colonel Manningham, as well as all the officers of the Regiment, as requiring too much exertion, and harassing the men from their excessive weight. They were consequently immediately relinquished, and twenty to the pound substituted.’ (Baker, 1823).

It seems that Manningham, the father of the thinking rifleman, had a vital role in the decision-making process of the Board. It was Manningham who provided Baker with a German Jaeger rifle with the recommendation that he copy it. The final selection of Baker’s pattern was therefore one with the Jaeger barrel of 30 inches length. The rifle commissioned by the Board also had a ‘carbine bore’ of 0.625 inches with quarter-turn, seven-groove rifling. The rifle did indeed resemble the German Jaeger model, as well as other continental rifles, but the real innovation of the rifle was Baker’s quarter-turn rifling, which was claimed to give greater accuracy. Selection of Ezekiel Baker’s third rifle pattern to be the weapon of choice for the new Rifle Corps was a process lasting two years.

In October 1800 another matter was concluded after much argument. The elite Corps of Riflemen was officially established on 25 August, with its accoutrements and distinctive green uniforms approved and authorised for eight companies, and it was equipped throughout with the Baker Rifle. In March the Board of Ordnance had provided Ezekiel Baker with a request for his pattern barrels and rifles. This first batch was for 800 rifles, specifically for the 95th Regiment of Foot, and was ordered from gunsmiths in London and Birmingham. This Board of Ordnance manufacturing system established a network of contracts for barrels and locks with the gun-makers Egg, Nock, Baker, Pritchett, Brander, Wilkes, Bennett, Harrison and Thompson. The first rifles cost 36 shillings for those with patch boxes in the butt and 32 shillings for those without.

Ezekiel Baker and his Rifle

Ezekiel Baker originally served his apprenticeship with the gun-maker Henry Nock and subsequently worked for this master. However, in 1794, Baker became gun contractor to the British Board of Ordnance. Established in a small workshop in the London Minories he was employed on producing locks and barrels. For a while Baker was in partnership with a lock maker called James Negus. Baker also had government contracts for smooth bore muskets and pistols and supplied the Honourable East India Company.

The specimen rifle made to his specifications and submitted for trial was chosen in 1800 for the then newly raised Rifle Corps. It was afterwards that he wrote and published his ‘Remarks on Rifle Guns’. Indeed, as is known, Baker ‘…demonstrated his invention’s superiority in competitive trials organised by the Board of Ordnance.’ (Urban, 2004). Further to this, for what eventually became seen as the essence of the Baker Rifle, it ‘…was also remarked, that the barrel was less liable to foul from frequent firing, than the whole, the three-quarters, or half-turns in angles of the rifle, which was considered of great advantage to the corps, particularly when engaged, as they would not require so often sponging out as the greater angles would and yet possess every advantage of the other rifle in point of accuracy and strength of shooting at three hundred yards distance. For all these reasons the committee gave mine a preference, and recommended to the Honourable Board of Ordnance to have their rifles made upon a similar construction.’ (Baker, 1823). From this it can be seen that the rifling made only one quarter of a turn in the length of the barrel. Such slow rifling imparted less spin to the round lead ball than the fuller turns of continental rifles, but it fouled less readily and was still claimed to give the accuracy required. The barrel of Baker’s rifle was only 30 inches in length, so the quarter turn equated to a pitch of one turn in 120 inches. As elements of continental rifles had been incorporated into the pattern it was, as Baker himself pointed out, only the rifling system that he claimed as his own innovation. Baker’s main improvements were to reduce barrel length, overall size and weight, and to reduce the rifle bore to 0.625 inches, a standard for the time.
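The pitch figure follows from simple proportion (a worked check, assuming the quarter turn is completed over the full 30 inch barrel):

$$\text{pitch} = \frac{30\ \text{in}}{\tfrac{1}{4}\ \text{turn}} = 120\ \text{inches per turn}$$

By the same arithmetic a full turn in the same barrel would correspond to a pitch of only 30 inches per turn, and a half turn to 60 inches, which is the sense in which Baker’s rifling was the slowest of the twists compared in his trials.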

In 1805 Ezekiel Baker established his own production facilities at 24 Whitechapel Road in London. On one side there was Size Yard, and at the rear a large warehouse which he converted into a factory and his own proof-house. Baker had come to the attention of the Prince of Wales, and this Royal patron, as Colonel of the 10th Dragoons, arranged the adoption of Baker’s cavalry rifle for that Regiment. Soon Baker was appointed court gun maker. Further encouragement by the Prince of Wales led to Baker establishing his own proof house, where he subjected his guns to his ‘Fire, Water and Target’ proof and stamped them with his own proof marks. Ezekiel Baker’s private shop and factory developed into a rival to the other gun-makers’ proof houses.

Ezekiel Baker was responsible for improvements in firearms that included bayonet design and fitting, pistol grips, special locks, and barrel rammers. The Society for the Encouragement of Arts and Manufactures gave him three silver medals for his developments in safety locks and his bullet moulds. Not only had Baker’s rifle shown its improved and reliable accuracy, it had also ‘…managed to overcome the prejudice against such weapons by being robust enough for field service.’ (Urban, 2004).

The Development of the Baker Rifle

As the Baker Rifle was, under the terms of the Government contract, made in many gunsmith shops in London and Birmingham, it is not surprising that there are subtle variations to be seen between individual weapons. In addition the rifle was subject to certain modifications throughout its life as a service rifle.

The progress of the Napoleonic War led to changes in the Baker Rifle. A Second Pattern was fitted with the ‘Newland’ lock, and a Third Pattern appeared in 1806 with a pistol grip trigger guard. In addition it had a four and a half inch butt box (or patch box) with a characteristic rounded plain front. This is the type displayed in the Pitt Rivers Museum gun case. Also notable in the Pattern 3 were the 5 inch long flat lock plate, a raised semi-waterproof pan, a sturdy safety bolt, and a flat ring neck cock. By 1809 riflemen were equipped with the Third Pattern introduced in 1806, which by 1823 had become standard issue. As with the Pitt Rivers example the furniture (e.g., butt tang, escutcheon, side plate, trigger guard) of the rifle was made of brass. A sling was fastened to the rifle and it was sighted for 200 yards.

However, Baker Rifle quality varied. This depended on the type of flintlock fitted and on whether the rifles were made in Birmingham or London, but nonetheless service reliability ensured production until 1838. Most of the rifles made between 1800 and 1815 were produced under the Tower of London System rather than by Ezekiel Baker himself; under the System production was subcontracted out to some 20 or more gunsmiths. For the period 1805-1815 Baker himself made only 712 rifles. Variations included the 1801 Pattern West India Rifle (a simplified version minus a butt box); the 1809 Pattern with its 0.75 inch musket calibre; and the 1800/15 Pattern Rifle, which had been altered to accept a socket bayonet instead of the usual sword-bayonet.

Between 1805 and 1808 the Board of Ordnance took into its stores some 10,078 English made Baker rifles, and this had increased to 14,000 by the end of the Napoleonic War. It was from 1813 that the Baker cavalry carbine was issued to the 10th Light Dragoons, whereas a cavalry carbine made by Ezekiel Baker had been issued to the Life Guards in 1801. An average of 2,000 Baker Rifles of various patterns were produced each year in London and Birmingham gun shops between 1804 and 1815. Of these Birmingham supplied 14,615 complete rifles plus 32,582 barrels and 37,338 rifle locks.

Technical Aspects

The Baker Rifle, in its various patterns, was in service with the British Army between 1801 and 1838. The weapon was a standard rifle with a calibre (ammunition size) of 0.625 inches (15.9 mm), or ‘carbine bore’. It weighed about nine pounds (4.08 kg). Designed between 1798 and 1800, it was 45 and three quarter inches in total length (1162 mm), but the camouflage browned barrel was only some 30 inches (762 mm) long. The Pitt Rivers Baker Rifle measures 1165 mm in total length. Muzzle-loaded, it fired by flintlock ignition a lead ball of 0.615 inches diameter (hence the need for greased linen or leather patches), although later ammunition supplied was ball cartridge. Ignition was provided by a TOWER marked lock (firing mechanism) which was also marked with a crown over GR forward of the cock. A proficient rifleman could achieve a rate of three rounds per minute, and a semi-skilled man could be credited with two rounds per minute. Baker rifles, like Brown Bess muskets, were fully stocked, with the wood extending the length of the barrel.
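As a check on these figures (straightforward unit conversion rather than new measurement):

$$0.625\ \text{in} \times 25.4\ \tfrac{\text{mm}}{\text{in}} \approx 15.9\ \text{mm}, \qquad 45.75\ \text{in} \times 25.4\ \tfrac{\text{mm}}{\text{in}} \approx 1162\ \text{mm}, \qquad 9\ \text{lb} \times 0.454\ \tfrac{\text{kg}}{\text{lb}} \approx 4.08\ \text{kg}$$

The 1162 mm nominal length sits comfortably alongside the 1165 mm measured for the Pitt Rivers example.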

The Baker Rifle stocks were made from English walnut and comprised two types. Earlier versions have a large, two-compartment butt box. The second type of stock is not drilled but slit to accommodate a housing for the rammer, and has a smaller butt box. The Pitt Rivers Museum Baker Rifle is of this second type. The butt box of the second type was covered by a 4 and a half inch brass plate or lid. This covered a single compartment for the tools required for regular and essential maintenance. This feature also suggests that in the later version the butt box was no longer a patch box but could contain the new integral ball cartridge.

Rifle Corps officers permitted their men to load their rifles after their own fashion or preference, on the condition that they could demonstrate accuracy to set standards. Live ammunition was used in practice, and riflemen could achieve ranges of 150 to 200 yards firing twice a minute. This was a previously unknown level of accuracy compared to the standard issue musket’s unreliability beyond 75 yards. Rifle accuracy was required in order to strike an enemy soldier somewhere about his person at a distance greater than the range of the enemy musket, with the intention of rendering him hors de combat, if not dead or mortally wounded. Riflemen, who could accurately shoot birds and rabbits for food at some range, were naturally expected to shoot moving French, or other, troops with a good measure of accuracy and regularity. For this purpose the Baker Rifle had brazed to its barrel two sights, front and rear. The rear sight consisted of a block situated 7 inches forward of the breech and cut with a V notch. The front sight was made from an iron blade on a thin rectangular base, although the front sight of the Pitt Rivers Museum example appears to be made of brass. The barrel shows the camouflage browning that was intended to prevent glare from exposing the positions of sharpshooter riflemen.

Following the German style the Baker Rifle was designed to accept a sword-bayonet of some 24 inches in length. The first bayonet for the Baker Rifle was therefore a single-edged flat sword of 23 inches length. It was brass handled with a knuckle bow and clipped onto a muzzle bar. It weighed 2 pounds and, as later reports confirmed, created difficulties for firing when it was attached to the rifle muzzle. Production of the sword-bayonets was contracted out to the Birmingham sword cutler Henry Osbourne. The sword-bayonet was a feature of the rifle during the Peninsular War but was replaced after 1815 with a lighter socket bayonet. Contemporary diaries and letters of riflemen suggest that they liked their little sword, even though it was rarely used for hand to hand fighting for various reasons. The sword-bayonet was a weapon of last resort; it was too short to be effective, especially as riflemen by definition were sharpshooters. It was, however, very useful for chopping wood, digging holes, cutting and toasting meat, and many other tasks.

The sword-bayonet became an inevitable concomitant of the Baker Rifle’s development. It continued unmodified until 1815, with the length of the sword-bayonet conceived so that rifle and sword paralleled the musket and bayonet concept. The Pitt Rivers Museum sword-bayonet (Accession Number 1884.28.43) is stated to belong to the Baker Rifle displayed (1884.27.39). Although it is not displayed, the weapon is described as a sword-bayonet, straight and flat, single-edged, with a brass handle and cross-guard forming a bow guard, and a plate from the guard with spring and button. The record states it was made in Birmingham in 1801, although the Baker Rifle on display was made after 1806.

The Baker Rifle, the British Army, and Other Units

Skirmishers were a feature of the early battles fought during the French Revolution. Accordingly, the British Army considered expanding those of its units able to fight in dispersed order. It followed that such units would need to be supplied with a rifle.

riflemen

Riflemen of the 60th and 95th.  Source: public domain.

The Baker Rifle was initially issued to Manningham’s Experimental Corps of Riflemen in 1800. The demand for more Baker Rifles soon outgrew the initial order for 800 to equip the single battalion of the 95th Regiment of Foot. An additional two battalions each for both the 60th and the 95th Regiments had Baker Rifles by 1806-1810. The Baker Rifle was supplied officially only to rifle regiments, its use restricted to units considered to be elite. These included the 5th Battalion of the 60th, and rifle companies of the 6th and 7th Battalions of the 60th Regiment of Foot. Rifles were issued to the 3 battalions comprising the 95th Regiment of Foot (which served between 1808 and 1814 in the Peninsular War under Wellington). Baker Rifles were used by the 3rd Battalion of the 95th in the War of 1812 as well as at the Battle of New Orleans, and again by the 95th who stood their ground at the Battle of Waterloo in 1815.

rifleman_s

Rifleman and his equipment during Peninsular War.  Source: public domain.

The Baker Rifle was also distributed to the Light Troops of the King’s German Legion when they formed in 1804. Other German units such as the Brunswick Oels received Baker Rifles, as did the Portuguese Cacadores. Volunteer units also received them, as did the Honourable East India Company, which placed its first order in 1802. Variants of the Baker Rifle (in its carbine pattern) were issued to the 10th Hussars. After the end of the Napoleonic War Baker Rifles were issued to other light regiments of foot. The 21st Royal Scots Fusiliers were using Baker Rifles when stationed in Australia between 1833 and 1840. Indeed, the Baker Rifle was eventually used in many countries during the first half of the 19th century, including by Mexican troops at the Battle of the Alamo.

As far as the rifle regiments were concerned, their recruits were chosen for their qualities. Most riflemen could read and write, and surviving diaries and letters bear testament to this. In addition, each rifleman carried a bag for tools containing a ball puller, worm, tommy bar and turn-screw, as well as spare flints and greased patches if required. It is notable, compared with the structure of other Line Regiments, that rifle officers often dined with their men and thus came to know them well. In the field, skirmishing riflemen using Baker Rifles often faced their opponents in pairs. More experienced riflemen had trained and practised in techniques to enable them to shoot running soldiers. This was aided in the field by the practice they gained shooting and hunting rabbits and birds. Riflemen also used specially made moving targets to increase their proficiency in hitting moving soldiers at range. In the field the Baker Rifle could achieve an average accuracy of 1 in 20 shots hitting the target, compared with 1 in 200 for the musket.

Conclusion

Designed as a soldier-proof military weapon for ease of mass-production, the Baker Rifle proved to be a very successful and long-serving gun. It was eventually issued to units across large geographical distances: the Pitt Rivers Museum Baker Rifle, made some time after 1806, may itself have seen service with the Ceylon Rifle Regiment some time after 1815.

There were basic requirements that needed to be met by this rifle. These were: (1) it accepted an existing and established military calibre ball; (2) its rate of fire was reasonably fast for battlefield conditions; (3) it was generally accurate in battle up to (and frequently beyond) 150 yards; and (4) it was robust enough to withstand the rigours of battle and campaigning military service. The accuracy of the Baker Rifle is attested by the actions of one Rifleman Plunkett of the 1st Battalion of the 95th Regiment. During the retreat to Corunna, Plunkett shot through the head and killed the French General Colbert at an estimated range of 600 yards. To prove it was no lucky shot, he thereupon shot an aide-de-camp going to Colbert’s assistance.

Even though it is thought that the friendship of the Prince of Wales aided Baker’s success with the Infantry Pattern Rifle now named after him, the gun had much to recommend it. The Baker Rifle was a major improvement on the smoothbore musket nicknamed the Brown Bess, which had been standardised as the army’s flintlock firearm for over a century. Compared with the 57-inch-long Brown Bess, the specialist-issue, relatively short Baker Rifle proved to be an innovative and handy weapon.

From the time of its introduction in 1800 the lock of the Baker Rifle underwent several improvements until the end of the Napoleonic War. This was in common with most other arms of the period. The advantages of the Baker Rifle over its rivals were that it was simple to reload and was less likely to foul after about 25 shots. The Baker Rifle was also sighted along its shorter barrel, which ostensibly allowed for greater accuracy over longer ranges.

Recently a series of novels and a television series telling of the exploits of a fictional 95th Regiment officer – one Richard Sharpe – and his riflemen companions during the Peninsular War have popularised the history of the Baker Rifle and the 95th Regiment of Foot under Lord Wellington. The rifle carried by these men in the television series is a replica of the 1806 Third Pattern Baker Rifle. It is identifiable by its later pattern butt box with a rounded brass plate front. As such the replica is almost identical, if not identical, with the Baker Rifle displayed in the gun case of the Pitt Rivers Museum.

This article was originally printed online as part of ‘The Other Within’ project of the Pitt Rivers Museum, Oxford, Spring 2008. I am grateful to the museum for allowing me to reprint my article in its present format.



A Fisherman’s ‘Lucky Stone’ from Newbiggin-by-the-Sea, Northumberland.

h2_54-183

Cullercoats Fish Lass (1883) by Winslow Homer.

1.  Introduction

2.  The donors

3.  The folklore of holed stones

4.  Newbiggin-by-the-Sea

5.  The Newbiggin fishing industry

6.  Northumbrian fishing folk

7.  The William Twizzells

8.  Afterword

References

1.  Introduction.

At the front of the Sympathetic Magic Display, case 61a, in the Pitt Rivers Museum, Oxford, is a perforated black limestone beach pebble with a string attached through a hole. The museum’s accession book states that this is a “Beach pebble of black limestone bored by a pholas, hung behind a door in the cottage of William Twizel, fisherman, as a ‘lucky stone’.” (Humble, 1908). Apparently several of these stones hung by various doors of the cottage. The stone comes from Newbiggin-by-the-Sea, Northumberland, and was donated in 1908 by a Miss Humble, Alexander James Montgomerie Bell, and William Twizel (actually Twizzell). There is no mention of a William Twizel in the 1901 census. However, there were several William Twizzells (various spellings) in Newbiggin, one being born in 1822 who died a retired fisherman in 1913. The most likely donor is a William Twizzell who was born circa 1829 or 1830 and who died a retired fisherman in 1909.

Image (268)

2.  The donors

Miss Humble is described as a field collector but little else is known about her. Accession records in the Pitt Rivers Museum, Oxford, say she was a resident of Newbiggin. However, the name is fairly common in the north-east and it was not possible to identify her in either the 1891 or 1901 censuses (england.prm.ox.ac.uk/collector). Much more is known about Alexander James Bell. Born in Edinburgh in 1846, he was an undergraduate at Balliol and matriculated as an Exhibitioner in 1864, gained his BA in 1869 and took his MA in 1871 (Oxford University Alumni 1500-1886). Bell’s obituary describes him as a career academic, teacher and antiquarian, and amateur archaeologist. He worked sometimes as a tutor and had more formal roles as a schoolmaster (Marlborough, Fettes) and college lecturer and examiner (St John’s and Worcester). Alexander Bell was also known for his work and research on the Wolvercote gravels and deposits near Oxford (Nature, 1920). He died in 1920 aged 74 and his artefact collection was sold to the Pitt Rivers Museum. Alexander Bell lived in 1891 with his wife Anna and children Archibald, Evelyn, Mary, and William at Rawlinson Road in Oxford. At this period he was engaged in private tutoring in classics, geography and geology (RG12. 1166. 87.). The family was still there in 1901 when Alexander held a position as a private tutor at a public school (RG13. 1381. 35.). Indeed, Alexander’s son Archibald Colquhoun Bell (born 1886), who had a long naval career, also became a donor to the Pitt Rivers Museum around 1920 (england.prm.ox.ac.uk/collector).

3.  The folklore of holed stones

The Newbiggin stone “…a pebble of black limestone, bored by a pholas, was hung behind the door of William Twizel’s cottage…” (Ettlinger, 1943). Such holed stones were “…evidently regarded as magical as early as the second millennium B.C., as shown by the excavations at Tell el Ajjul (ancient Gaza)…” (Murray, 1943). As such these stones were deliberately placed, with three in a room and one in a grave.

The hole in the Newbiggin stone was made by a burrowing bivalve mollusc called Pholas dactylus. Also known as the ‘Common Piddock’ or ‘angelwing’, it is similar to a clam and bores into a range of soft rock substrata including chalk, peat, clay, and sandstone (www.marlin.ac.uk/species). This elliptical shaped boring bivalve, which can reach 12 cm in length, is found at several sites along the east coasts of Northumbria and Yorkshire. It stays in its burrow for its entire eight-year lifespan. It is recognised by its typical whitish colouration and is also known for its bio-luminescence (wikipedia.org/wiki/Pholadidae; www.marlin.ac.uk/species).

Image (269)

Pholas dactylus or the Common Piddock.

A naturally holed flint, also in the Pitt Rivers Museum, was “…found attached to a hammered peg, buried beside a brick wall of a workhouse in Thame (Oxon), built in 1836.” (Ettlinger, 1943). Holed stones were therefore built into walls as amulets. A carter called Kimber, employed by General Pitt Rivers at Rushmore, had one nailed to his door. Another example in the museum is from Northern Ireland, where a holed stone was used as a charm to keep pixies from stealing the milk. Holed stones are known variously as hagstones, witch stones, holey stones, snake stones, thunderstones, dobbie stones, and in the north-east of England sometimes as adder-stones, with geodes referred to as eagle-stones (Simpson, 2000; www.northernearth.co.uk, 2008).

As can be seen, the use of naturally holed stones was widespread in farmyards where “…holed stones were fastened to the house or byre door…to keep away witches or pixies, or just for good luck.” (Ettlinger, 1943). Naturally holed stones were used as protection against “…unworldly misfortunes from the evil eye (which they might be thought to resemble)…” (www.northernearth.co.uk, 2008). Belief in the protective powers of holed stones was widespread and they were regarded as magical devices to protect both man and beast. Holed stones were attached to cattle stalls and horse stables (where they are often referred to as witch-riding stones), and in Whitby (1894), for example, such stones were tied to front door keys “…to ensure prosperity to the house and its inmates.” A similar post-medieval use was found for prehistoric stone axes. These were used to protect the inhabitants of households and buildings, whether animal or human, from spells and were called ‘witch hammers’; they are known to have been used for barns in Durham.

A very early allusion to holed stones dates from the 15th century, when they were used as charms against nightmares (Opie, 1989), with stones of this type “…sometimes regarded as preventives of bad dreams.” (Murray, 1943). Holed stones were often hung on bed-posts to deter demons, including the night-hag, the night mare, or a succubus (www.weymouth.gov.uk, 2008). It was believed earlier that a “…stone with a hole in it hung at the bed’s head will prevent the nightmare. It is therefore called a Hag Stone from that disorder which is occasioned by a Hag or Witch sitting on the stomach of the part afflicted. It also prevents witches riding horses, for which purpose it is often tied to a stable key.” (Hazlitt, 1905). Hag stones, also called witch stones, fairy stones, eye stones, nightmare stones, or occasionally Ephialtes Stones, were perforated flints, stones or polished pebbles. In bygone days they were commonly seen hanging above household doors. In a similar vein hag stones were either carried on one’s person or worn around the neck on a string (Rankine, 2008).

As with Newbiggin-by-the-Sea fishermen, boatmen in Weymouth in 1894 fastened holed stones to the bows of their boats as charms to keep their craft safe. It was “…not uncommon for row boats at Weymouth to have ‘holy stones’ tied to nails or staples in the bows…” (Colley-March, 1906). Again, the intention was to keep witches and evil spirits away from the boat, with boat ropes often threaded through beach-holed stones for the same purpose (www.weymouth.gov.uk, 2008). As well as holed stones found on a beach, others, such as hag stones, were also fastened to the bows of boats to protect them at sea (Rankine, 2008). As one local Weymouth fisherman once ventured, these holy stones were “…beach pebbles with a natural hole through them…holy through having a hole through them, or for being sacred, or both, I know not.” (Colley-March, 1906).

4.  Newbiggin-by-the-Sea.

Newbiggin-by-the-Sea has been for centuries a maritime locality of some importance, as a large fishing village, a grain import port, lifeboat station, and eventually seaside resort. Situated on a fine and broad bay it was, in the 18th and 19th centuries, a large fishing village on the Northumberland coast (Tomlinson, 1888).

005394

Newbiggin-by-the-Sea. Launching the ‘Ada Lewis’, circa 1907.

The village was originally called, in 875, South Wallerick, but after the Danish invasion of that year it was renamed Neubegang or Newbegining (eventually Newbiggin). The port has a long and varied history. An early reference is from 1199, when it is recorded as a toft or homestead.  However, by 1240 it was recognised as a fishing port equal in importance to Newcastle and, as early as 1352, large amounts of corn were being shipped to Newbiggin (Tomlinson, 1888; www.black.uk.net, 2008). Its importance as a port for shipping grain at one time made it third in importance only to London and Hull (www.newbigginbythesea.co.uk, 2008). Evidence of shipping activities dates back to the early 14th century, with the port given its toll authority in 1316. Newbiggin-by-the-Sea is on the coast of Northumberland 15 miles north-east of Newcastle. At Newbiggin Point is the church of St Bartholomew

136970_0654691f

St Bartholomew’s Church (1846) at Newbiggin Point.

built in 1846 to replace the original of the 14th century. The site is believed to be that of a small church that existed before 1174.

It was during the 19th century that the Newbiggin fishing industry went from strength to strength (www.black.uk.net, 2008), the expansion being due to the herring boom lasting from the mid-19th century to the First World War, when “…travelling the herring…” became a way of life (Robinson, 1991). In 1885 the population was 717, by 1891 it had risen to 1388, and by 1911 there were 3466 inhabitants. However, the modern size of Newbiggin is a result of the one-time coal mining industry, and what was previously a hive of maritime activity has become a thriving holiday destination (www.newbigginbythesea.co.uk, 2008).

5.  The Newbiggin fishing industry

In 1626 there were only 16 fishermen working 4 cobles out of Newbiggin. Yet, in 1831, there were 27 boats, a number which rose to over 140 in 1969 (www.black.uk.net, 2008).

A coble is a distinctive type of open fishing boat, with a flat bottom and high bow, that was developed on the coast of north-east England (Robinson, 1991). This type of boat responded well to both sail and oars, and its high bow was required for North Sea sailing. The shape of the coble was also favourable for launching from a beach into surf. In addition it was ideal for being hauled onto shallow sandy beaches from that self-same surf. Cobles were usually launched from the beach using an axled wheel support (Robinson, 1991). Most cobles were crewed by three men and a boy, and each boat belonged to a family.

coble_and_castle

Coble and castle from Lindisfarne Harbour. Photo by Stephen Trainor (2007)

The design of a coble, which used a lug sail at sea, contains relics of Norse influence but also evidence of Dutch origin. Herring was fished for by larger cobles, but from 1871 onwards these were replaced by open keeled boats called ‘mules’, which operated from 1875 onwards (Robinson, 1991). One known coble was the ‘Sweet Home’, owned by Thomas, William, and John Taylor. A modern coble has a diesel engine, no lug sail, and is launched by tractor. By 1991 only 9 boats were operating out of Newbiggin and they fished mainly for salmon, white fish and shellfish; line fishing had been abandoned.

Ernest_Dade_-_Crabbing_Coble_off_Filey_Brigg,_North_Yorkshire

Crabbing coble off Filey Brigg, North Yorkshire, by Ernest Dade (prior to 1936).

6.  Northumbrian fishing folk

Fishing families in the north-east of England were very close-knit and often intermarried, and as a result shared a handful of surnames, with families even living in certain areas of Newbiggin (Newcastle Evening Chronicle, 22.1.2005). Indeed, fishing folk in Newbiggin and elsewhere (e.g., Cullercoats) were members of a “…distinctive clan and rarely married outside of it.” (Peacock, 1991). In 1861 there were 21 families called Armstrong within Newbiggin, every one of them fishing folk (communities.northumberland.gov.uk, 2008). In Newbiggin common fishing family names included, as well as Armstrong, Robinson, Dawson, Storey, Dent, Renner, Brown, Taylor, Lisle, and Twizzell (Newcastle Evening Chronicle, 22.1.2005).

The cottages of fishermen, as with those of coal miners, were usually built around squares or in uniform rows. This arrangement of dwellings allowed for publicly carried out domestic routines, including line-baiting by fisherwomen (communities.northumberland.gov.uk, 2008). A fisherman’s cottage typically comprised one large room with an open range, and a loft storing fishing nets and ropes where the children slept in summer. Before fresh water was piped through in 1911, families obtained their supplies from public wells. Water for domestic chores came from a rain barrel.

007759

Fishermen’s cottages in Newbiggin-by-the-Sea

The role of fisherwomen or ‘fishwives’ was a very important and arduous one. They were indeed the working partners of their menfolk. Within the village the fisherwomen collected limpets and baited lines with them, worked in the local smoke-house, and knitted the woollen ‘ganseys’ and thick seaboot stockings for their men. In other words fisherwomen played an essential role in the industry, as well as in the sale of the fish.

800px-Homer,_Winslow_-_'Fisherwomen,_Cullercoats',_1881,_graphite_&_watercolor_on_paper

Fisherwomen, Cullercoats (1881), by Winslow Homer (1836-1910).

Prior to the coble putting out to sea the ‘fishwives’ carried the boxes and baskets to the boat, along with between 22 and 26 ballast sand-bags, and then helped launch the family craft into the sea. When the coble returned from its fishing expedition these fisherwomen – mothers, wives, daughters – assembled on the beach and hauled the boat up onto the sand. They then unloaded the boat, and proceeded to assist in the beach auction of the catch (Robinson, 1991). However, these fisherwomen had another role, and that was the sale of fish within the surrounding district.

Beach-scene-Cullercoats-1881-e1273700994508

On the Beach (1881) by Winslow Homer

The term ‘fishwife’ has, unfortunately, a derogatory meaning attached to it. The archaic meaning is ‘a woman who sells fish’ but another definition (Oxford, 1999) states that a fishwife is “…a coarse-mannered woman who is prone to shouting.” Admittedly some north-east fishwives were “…buxom, ruddy-cheeked ladies who could take their liquor and swear as any first mate in a sailing ship…” (Peacock, 1986), and thus became renowned as “…mistresses of invective…”.  These hard-working fisher lasses were garbed in traditional pleated skirts and apron, as well as an over-cloak and knitted shawl. Their costume, which had a kind of seafaring appearance, “…consisted of a heavy blue serge skirt, blouse and cowl of the same material, and the suggestion of a sailors collar at the back, terminating with thick woollen stockings and ‘stout boots’.” (Peacock, 1986). They carried their fish in creels – woven wicker baskets carried on their backs – throughout the neighbouring district. On occasion they visited all the ale-houses found on their way back and often had to be assisted home to Newbiggin.

PICTURES2%20474

Newbiggin fisherfolk

Over time the role of these fisherwomen changed. The last Newbiggin woman to carry the creel was Mrs Mary Hunter (nee Twizzell), who died in 1979. The last actual fisher lass who hawked fish, using a mobile van, was Mrs Mary Robinson (nee Armstrong), who died in 1987 (Robinson, 1991). One fishwife of note was Annie Twizzell, who may have been related to our William Twizzell, and who married into the Newbiggin Dent family.

An episode in the history of Cullercoats fishing folk was the lifeboat saga during the loss of the coal brig the Lovely Nelly in 1861. The Cullercoats lifeboat ‘The Percy’ was alerted that the brig was being driven to disaster on the coast at Brier Dene on January 1st. The lifeboat was dragged overland, through a blizzard, by six horses and local women and fishermen. After launching, the boat and its crew fought through raging seas and rescued the stricken crew of the vessel. Only the cabin boy could not be saved. The heroic episode was immortalised in the painting of 1910 by Winslow Homer called ‘The Women’.

LOVLEY-NELLY

The Women (1910).  By Winslow Homer (1836-1910).

Much of the life of this fishing community was bounded by tragedy. Newbiggin fishing families regularly lost husbands, fathers and sons to the vagaries and dangers of Rudyard Kipling’s ‘old grey widow-maker’ – the north-east coast and the North Sea. Many ship and boat losses were due to the carnage caused by the local north shore Black Middens and the Tyne estuary Herd Sands (communities.northumberland.gov.uk, 2008). For example, in 1915 several men of the Brown and Taylor families were lost at sea – one of the cobles in which they sailed, and which did not return, was called the Mary Twizzell. Previously, in 1904, six husbands and a son were lost – a John Dent and six of the Armstrong family.

7.  The William Twizzells

A number of William Twizzells (spelt variously Twizel, Twisel, Twissell, or Twizzell) were living contemporaneously in Newbiggin-by-the-Sea between 1822 and 1913. One William was born in 1822 and died a retired fisherman in 1913 (Death Index, 1913). This individual is probably not the William Twizzell whose ‘holed stone’ is in the Pitt Rivers Museum in Oxford; however, a brief survey of censuses sheds light on the fishing community of Newbiggin.

This William Twizzell was born in Newbiggin in 1822 and is first recorded, as a fisherman aged 19, in the census of 1841 (HO107a. 1841). In 1851 (HO107. 1851) he is still a fisherman living in the ‘village’ of Newbiggin with his mother and brother. By 1861 (RG9a. 1861) they are resident in Main Street, sharing a cottage with his sister and the member of the Brown family she had married. Their neighbours at this time were other Twizzells, Browns, Dents and Olivers (a Mary Oliver later married the William Twizzell whose artefact is in the Pitt Rivers Museum). In 1871 William was living in Vernon Place with his brother, sister Isabella Brown, and two nephews (RG10a. 1871). Their neighbours were still other Twizzells and Browns, plus Dents and Olivers. By 1891 William was still at Vernon Place and still working as a fisherman, aged 69, with the Browns (RG12b. 1891). By now the neighbours, apart from the Dents, were other traditional fishing families such as the Armstrongs, Mortons, and Storeys. This brief outline shows the nature of the local fishing community and the closeness of the families associated with it, including a William Twizzell whose wife was originally Ann Taylor.

The most likely owner of the donated ‘lucky stone’ is another William Twizzell whose life fits in with the donation history of the stone. Born in Newbiggin around 1829 or 1830, this William was recorded in 1841 (HO107b. 1841) as a Twizell (note the single z spelling) aged 10 and working as a fisherman (or fisherman’s boy). He was still a fisherman in 1861 (RG9b. 1861), aged 30 and living with his mother and two daughters in Main Street, with the Storeys, Armstrongs and Renners for neighbours. All were fishing families. In 1881 (RG10b. 1881) William Twissell (now with two s’s) was living in Prospect Place with his wife Mary, two sons, two daughters (one called Hannah Brown), and a granddaughter also called Hannah Brown aged 9 days. Neighbours included the Storeys. William Twizzell had married Mary Oliver (born 1834) in 1861 (PRO, 1861) and she died in 1907 aged about 72. In 1901 William and Mary were living by themselves in their Prospect Place cottage, where he was described as a retired fisherman. Times had changed by then. William still had some Armstrongs as neighbours but his other neighbours, without traditional fishing community names, were now coal miners rather than fishermen. Both employments, ironically, were concerned with making a living out of the deeps. William died, aged 79, some two years after Mary, and his death was registered in Morpeth in 1909 (Death Index, 1909).

8.  Afterword

It can be seen that William Twizzell (born in 1830) had his name spelt differently in different censuses, and this explains the disparity of the name Twizel in the Accession Notes of the Pitt Rivers Museum. The deaths of his wife Mary and then of William himself suggest how the ‘holed stone’ in the museum came to be donated in 1908. This sequence of events may explain the journey that William Twizzell’s ‘lucky stone’ took from the door of his cottage in Prospect Place, Newbiggin-by-the-Sea, to Case 61a in the Pitt Rivers Museum, Oxford.

The people of Newbiggin-by-the-Sea were god-fearing and regularly attended services. Many of the fishermen and their families were of deeply religious persuasion and ardent supporters of either chapel or church, and often “…the men were lay preachers…” (Peacock, 1986). It may seem odd that such a devout people would readily keep charms against witchcraft and as a protection against misfortune. Yet it is no wonder that these hardy, short and sturdy people kept lucky charms, such as holed stones, as an additional recourse to a safer life. Newbiggin fishing families, with their “…slatey blue eyes, the colour of the sea in front of their cottages…” (Peacock, 1986) from whence their livelihood as well as their sorrow came, also preserved their ancient Northumbrian dialect.

It is a dialect which owes its origins to a language spoken by Angle mercenaries from southern Denmark in the 5th century AD, and which was the forebear of much of modern English (Arnold, 2008). The dialects of north-eastern England (including ‘Geordie’, Northumbrian, and ‘pitmatic’) still retain features no longer found in modern English, and possess a vocabulary not found elsewhere. The Venerable Bede of Jarrow, circa 672-735 AD, would have understood the meaning of many words still current in Northumbria (including Newbiggin), Newcastle and Tyneside, because today (see: www.northeasternengland.talktalk.net/GeordieOrigins) “…the only part of England where the original Anglo-Saxon has survived is in the north-east.” Bearing this in mind, it is not surprising that ancient superstitions should also linger in communities that preserve so much of the past. In many respects it is the whole of the background to William Twizzell’s ‘lucky stone’ that needs to be considered. True, it does not look like much but, for a long time, such stones meant a lot to a lot of people a lot of the time.

This article is an updated and illustrated version of the original printed online as a contribution to the England: The Other Within Project of the Pitt Rivers Museum, Oxford, in July 2009.

References and sources consulted

Arnold, P. J.  Northumbria was here first.  Newcastle Journal. 4.11.2008.

Colley-March, Dr H.  ‘Witched fishing boats in Dorset.’  In: Somerset and Dorset Notes and Queries. X, 49-50. 1906.

communities.northumberland.gov.uk.  2008.

Death Index. 1909. Public Records Office. PRO, vol 10b, page 286.

Death Index. 1913. Public Records Office. PRO, vol 106, page 479.

england.prm.ox.ac.uk/collector (2008).

Ettlinger, H.  Documents of British Superstition in Oxford.  Folklore, March 1943.

Hazlitt, W. C.  Brand’s Popular Antiquities of Great Britain: Faiths and Folklore. London (1905).

HO107a.  Public Records Office, Piece 836, Book 18 (1841).

HO107b.  Public Records Office, Piece 836, Folio 18 (1841).

HO107.  Public Records Office, Piece 2418, Folio 485, 1851.

Humble, Miss.  Accession Book entry per A. M. Bell esq. February 10th, 1908.

Murray, M. A.  Folklore.  May, 1943.

Nature.  105.  August, 1920.

Newcastle Evening Chronicle.  22nd January, 2005.

Opie, I. & Tatem, M.  A Dictionary of Superstitions.  OUP. 1989.

Peacock, B.  A Newcastle Boyhood 1898-1914.  Newcastle upon Tyne Libraries, 1986.

Oxford Concise Dictionary, 10th edition.  1999.

Oxford University Alumni 1500-1886.  biblio.tu-bs.de/cgibin.

PRO.  1861.  Public Records Office, Jan-Mar qtr, vol 10b, p142 (1861).

Rankine, D.  Crystals: Healing and Folklore.  Capall Bann. Cited in: http://www.ladyofthehearth.com.

RG9a.  3875.68.  Public Records Office. England census, 1861.

RG9b.  3875.32.  Public Records Office. England census, 1861.

RG10a.  5168.68.  Public Records Office. England census, 1871.

RG10b. 5168.88. Public Records Office. England census, 1871.

RG12a.  166.87.  Public Records Office. England census, 1891.

RG12b. 4260. 100. Public Records Office. England census. 1891.

RG13. 1381.35. Public Records Office. England census. 1901.

Robinson, J.  Newbiggin by the Sea: a fishing community.  Northumberland County Library, 1991.

Simpson, J. & Roud, S.  A Dictionary of English Folklore.  OUP, 2000.

Tomlinson, W. W.  Guide to Northumberland. 1888.

www.black.uk.net/places/northumberland/newbiggin.  2008.

www.marlin.ac.uk/species.  2008.

www.newbigginbythesea.co.uk/history.  2008.

www.northeasternengland.talktalk.net/Geordieorigins.  2008.

www.northernearth.co.uk.  2008.

www.weymouth.gov.uk.  2008.


Amulets – the self-management of misfortune and belief

1. The Lore of Charms and Medical Folklore

    1 (a)  Sympathetic magic

    1 (b)  Superstition

    1 (c)  Folk medicine and quackery

2. Amulets and Protective Charms

    2 (a)  Amulets

    2 (b)  Holed stones

    2 (c)  Touch pieces

3. Amulets and Charms in London Museums

    3 (a)  the Wellcome Historical Medical Museum Collection

    3 (b)  the Edward Lovett Collection

4.  Amulets and Charms in Britain

     4 (a) Oxfordshire

     4 (b)  England

     4 (c)  Wales

     4 (d) Scotland and Ireland

5.  Amulets and Charms in the Pitt Rivers Museum, Oxford.

6. Conclusion

References and Sources

1.  The Lore of Charms and Medical Folklore

The wearing of charms to ward off ill-luck or evil spirits may have begun as amuletic adornment millennia ago, at the dawn of human history.  Evidence from Africa some 75,000 years ago shows shells were used for adornment.  In Palaeolithic Germany mammoth tusks were intricately carved or engraved into charms or talismans around 30,000 B.P.  Prehistoric amulets were made from shells, animal bones, fossils, or fashioned from clay.  Charms of later periods were made from wood, stones, rocks, and gems.  In Ancient Egypt amulets were worn as a means of identification (perhaps totemic), as symbols of belief and good luck, and as prophylactics for good health and to fend off illness, see Figure 1 and Figure 2.

 

AMULET 2

Figure 1. Examples of Ancient Egyptian Amulets and Talismans

7827814_f260

Figure 2.  Examples of Egyptian amulets.

During the Roman Empire tiny emblematic fish charms were secreted within clothing by Christians.  In Judaic Law tiny inscribed passages inside amulets were worn near the heart, see Figure 3.

misc53-amulet_150

Figure 3. Jewish amulet for wearing adjacent to the heart.

It becomes obvious that amulets “…appeared throughout history and across many cultures in an infinite variety of forms…” (Powell, 2012).  Charms were worn or used in the belief that these objects would obtain favour for their wearers.  Many amulets were seen as protective against the Evil Eye.  Amulets and charms, often in the form of beads made of gold, silver, bronze, coral, or clam and cowry shells, “…play an important part in the campaign against the Evil Eye” (Seelig, 1905) and its effects, see Figure 4.

AMULET 4

Figure 4. A selection of amulets and charms

Interest in amuletic folklore in the 19th and 20th centuries led to a noticeable increase in scholarly studies (Bratley, 1907; Fernie, 1907; Villiers, 1929; Bridge, 1930).  Folklorists enthusiastically ventured into exploring the topic (Udal, 1922; Harland & Wilkinson, 1861), as did a work of fiction (Nesbit, 1906).  However, charms and amulets are “…not simply specimens of folklore.” (Hill, 2001), because the “…question of belief is not only crucial to an understanding of narratives, but also central to other folklore genres, superstition in particular.” (Roud, 2008).

1 (a)  Sympathetic Magic

Historically, “…magical remedies, rituals and explanations which were passed down by word of mouth from one generation to the next…as a narrative of folk or religious discourse.” (Williams, 1999).  This demonstrates the persistence of the effect of oral transmission of belief even after the original reason for the belief has long ceased to exist.  The theory of sympathetic magic, similarity and contagion originated with the works of Sir James George Frazer (1854-1941), the Scottish anthropologist who influenced the early progress of modern studies in comparative religion and mythology.  Frazer opined that if “…we analyze the principles of thought on which magic is based, they will probably be found to resolve themselves into two: first, that like produces like, or that an effect resembles its cause; and, second, that things which have once been in contact with each other continue to act on each other at a distance after the physical contact has been severed. The former principle may be called the Law of Similarity, the latter the Law of Contact or Contagion. From the first of these principles, namely the Law of Similarity, the magician infers that he can produce any effect he desires merely by imitating it: from the second he infers that whatever he does to a material object will affect equally the person with whom the object was once in contact, whether it formed part of his body or not.” (Frazer, 1933).

Early healing was “…supposed to be attained by the homeopathic principle that like cures like.” (Ettlinger, 1943).  In the view of Frazer sympathetic magic or the Law of Sympathy “…can be subdivided into its two branches. Firstly, Homeopathic Magic or the Law of Similarity, and secondly Contagious Magic or the Law of Contact.” (Frazer, 1933).  It follows that charms based on the Law of Contact or Contagion can be described as Contagious Magic.  Moreover, with regard to the mystique of charms and amulets, they encompass a “…particular kind of mediation, and interplay between authoritative knowledge (science) and enchantment (magic).” (Macdonald, 2005).  Essentially ‘primitive’ magic is based on the idea “…that by creating the illusion that you control reality, you can actually control it.” (Thomson, 1973).  In other words amulets, charms, and talismans represent the appreciation and practice of personal sympathetic or homeopathic magic by the individual.

1 (b)  Superstition

To begin with it can be assumed that “…a superstition is an irrational belief in luck, omens, spells and supernatural powers.” (Roud,  2008).  From time immemorial peoples have had a respect for the numinous which can be regarded as a “…mythic reverence for the so-called unknowable.” (Seelig, 1905).  The study of amulets, talismans and charms implies the investigation of their special connections and involvements with particular cultures and peoples.  An amulet as a charm has a specific purpose in that it “…is worn as a necklace, bracelet or other decoration about the person in order to benefit from its magical properties.” (Pickering,  1999).  In the opinion of Edward Lovett the “…most interesting features in the study of superstition is the remarkable array of objects which are associated with magic by primitive folk all over the world.” (Lovett,  1905).  Amulets and charms are objects adopted by individuals and as such express a narrative that is intensely personal.  As material objects charms and amulets internalise a magic that for the superstitious “…embraces the valuable truth that the external world can in fact be changed by man’s subjective attitude to it.” (Thomson, 1973).

1 (c) Folk Medicine and Quackery

The practice of folk medicine has been defined as the comprehension of “…charms, incantations and traditional habits and customs relative to the preservation of health and the cure of disease.” (Black, 1883).  Concerning amuletic folk remedies it has been said that “…prophylaxis has received much less attention from the folk than curative treatment.” (Seelig, 1905).  Historically, medical folklore has been as extant as the cultures that engendered it, and has “…been present for as long as there have been socialised societies.” (Trimmer, 1965).  Medical folklore flourished alongside, and sometimes overlapped with, medical quackery.  Essentially the derivation of ‘quackery’ is from the farmyard, with its rural connotations, because those who came to be called ‘quacks’ resemble ducks.  The allusion to farmyard ducks arises because medical quacks “…advertise themselves noisily with strident exclamations, those who have come to be called quacks make themselves heard in similar manner.” (Trimmer, 1965).  One needs to understand the folklore of health and disease and the persistence of quackery.  Firstly, quackery flourishes when qualified medical practitioners are not around, or treatment and advice are difficult to obtain.  Secondly, quackery found a ready-made niche when ailments or diseases were too offensive, repugnant, or untreatable by recognised physicians.  Thirdly, in view of the first two, people turn to homeopathic magic and superstition which is “…the assertion of, and belief in doctrines not possessing the necessary and rational basis on which to rest.” (Seelig, 1903).  Turning to magic, even unconsciously, in preference to scientific method and medical opinion, indicates hope in finding a way out of a medical problem.

2.  Amulets and Protective Charms

Amulets are objects similar to talismans, a word which comes from the Arabic tilasim, the intention of which is to bring good luck or bestow protection on the wearer or owner.  The mid-15th century term ‘amulet’ or amalethys is derived from the Latin amuletum, and is perhaps related to amoliri, meaning ‘to avert, carry away, remove’.  The earliest meaning of the word is found in the Natural History (77-79 AD) of Pliny the Elder (23-79 AD).  The word was not recorded in English until around 1600.  In Middle French between 1595 and 1605 it was known as amulette.  The word ‘amulet’ can also be traced to the Arabic hamala meaning ‘to carry’, which is also the name of the cord which suspends the Koran from the neck.  Talisman is derived from the Arabic tilasm, from the Greek telesma (payment) or Greek telein (to complete, perform), which means to ‘initiate into the mysteries’; it denotes an amulet believed to possess supernatural or occult powers and is used as a synonym for amulet.

2 (a)  Amulets

The majority of amulets are mundane objects of common origin that demonstrate considerable variation according to their origin in place and time.  Such objects are used by ordinary people as protective amulets.  In reality an amulet is any object to which is “…assigned a magical function by a single person… or a single object with a meaning that would be recognised by most members of a culture.” (Pitt Rivers Museum, 2010).  It mattered not how rudely made, crude and scanty, poor or lacking in sophistication they were; their essential point was magical power.  The magic of these prophylactic charms was naturally and inherently linked to their materiality, their physical existence.  An amulet therefore is anything “…worn about the person as a charm preventative against evil, much of disease, witchcraft etc.” (Pitt Rivers Museum, 2010).  The intrinsic worldliness of an amulet is its meaning.  Many amulets that appear closely related or superficially similar are often fundamentally different.  Nonetheless, as objects they possess a commonality.  Amulets can confer protection by causing harm, or by conferring upon the possessor the ability or strength to resist magic, disease, death, or misfortune.  Amulets are believed to protect and save people and property against assumed evil by causing injury or harm to opponents, threats, or malign spirits.  In spite of variability in amulet type, the same type of amulet can confer protection against different evils as dictated “…by their owners needs.” (Freire Marreco, 1910).

In essence an amulet confers protection by its presence and retains its potency for as long as its wearer retains trust in it and it is cared for by its owner.  The theme that validates the use of amulets and charms is that the people who made and wear them also “…believe in them…” (Pitt Rivers Museum, 2010), and that these artefacts are “…examples of sympathetic magic which generally means the appearance of an object it resembles, in some way, the cure it is believed to offer.” (Pitt Rivers Museum, 2010).  By their prophylactic role in warding off disease and evil, and bringing about good luck and harmony, amulets are believed to be endowed with magical power.  Ordinary people believed in and used amulets because they were “…important in their lives, shaping their attitudes, spirituality, well being, or even life and death…” (Hill, 2007).  Amulets are assumed to be efficacious when touched, held in the hand, or kept close against the body.  As such ‘objects of solace’ they have been “…invested with the hope or belief that it could somehow mediate on behalf of its owner.” (Powell, 2012).  Amulets have a twofold role in the sense that they are, to their owners, both familiar and peculiar.

The majority of charms have been made in a process often accompanied by incantations and magical rituals that are decisive in the making of the object.  The potency of the charm is assured by ritualistic practices and expectations.  People believe in and wear charms because these magical objects are “…tiny embodiments of the anxieties we feel about our human frailties, their assumed power of drawing on the dark arts of superstition and magic.” (Powell, 2012).  The properties of charms are not necessarily those of amulets, in the sense that charms, unlike amulets, transfer their effects across distances.  Charms that oppose something are intended to be used only for a limited period, and are eventually destroyed once they have served their intended purpose.  Moreover, the same type of charm acts “…only in ways specified by tradition…its effects…are limited and defined.” (Malinowski, 1925).  Whereas a charm is an artefact worn in order to avert misfortune, some objects are “…neither amulets or charms but objects that were used in ritual or instilled with a supernatural power.” (Pitt Rivers Museum, 2010).

2 (b)  Holed Stones

Stones “…with natural holes in them were formerly believed to have magical powers of various kinds.” (Hole, 1980; Edwards, 2008).  These stones are found in many places and appear as standing stones, small perforated pebbles, and even as large holed rocks.  There is “…a widespread belief in the magic properties of naturally holed stones, called hag stones or witches stones, and that prehistoric man attached magic properties to fossils.” (Oakley, 1978).  Small holed stones, when carried in the pocket, were thought to protect against witchcraft, and were variously known as hag-stones, witch-stones, holy stones, holey stones, dobbie stones, adder-stones, and in Scotland as mare-stones, see Figure 5.

AMULET 5

Figure 5.  Examples of holed stones.

Other names are wish-stones, nightmare-stones, witch-riding stones, and Ephialtes-stones, which may refer to ancient Greek stones inscribed with Athenian reforms (Rankine, 2002).  A similar superstition concerning nightmares held that “…a stone with a hole in it at the bed’s head will prevent the nightmare…called a Hag Stone from that disorder which is occasioned by a hag or witch…” (Hazlitt, 1905).  Holed stones were at times in Europe regarded as prophylactics for bad dreams, see Figure 6.

AMULET 6

Figure 6.  A holed ‘Fairy Spying Stone’.

Indeed, on the question of stones Ettlinger “…mentions holed stones. Such stones were evidently regarded as magical as early as the beginning of the second millennium BC…”  (Murray,  1943).  Perforated stone amulets were not only seen as hostile to the multifarious crafts of witches but also “…protective against the much dreaded evil eye.” (Elworthy, 1903).

Upper Palaeolithic groups in southern France came across fossil echinoids as a result of flint working.  Fossil echinoids of regular shape called Cidaris and Diadema “…were apparently regarded as magical objects by the early Celtic peoples thousands of years later after the disappearance of the Palaeolithic hunters.” (Oakley, 1985).  In antiquity Pliny the Elder in his Natural History relates a story of an object called a ‘snake egg’, allegedly prized by the Druids, which is known as an ovum anguinum and was invested with great magical powers.  For one naturalist the fossil sea urchin was valued as an antidote to poison (Boodt, 1609).  A fossil echinoid from Dolni Vestonice was of a form which came to be known as ‘Jew’s Stones’.  Their shape “…suggested utility in the treatment of urethral and bladder troubles, in accordance with the principle of sympathetic magic” (Oakley, 1985).  Another account stated the “…bodies called Tecolithi by Pliny, Lapidus Judaici, and Syriaci…much celebrated by the ancient Physicians for their diuretic properties…” (Woodward, 1728).  For more than three millennia Jew’s-stones have been used as talismans (Oakley, 1985) and the earliest record of usage was in Ancient Egypt during the 24th Dynasty around 650 BC (Fraas, 1878).  Oakley (1985) goes on to surmise “…that their use began in Upper Palaeolithic times, perhaps nearly as 20,000 years ago.”  In Denmark numerous Cretaceous echinites have been found that were used as amuletic pendants during the earliest centuries AD.

The prophylactic use of small holed stones was quite common, with some forms known as holy volints, obviously similar to holed or holy.  The amuletic use of such stones was an “…attempt to apply a mystical remedy in a practical manner.” (Elworthy, 1903).  The belief in the magical nature of a holed stone rested not on its actual substance but on its perforation, its holeliness.  It was the hole that bestowed value on the object.  Of importance was the belief that it was the perforation that gave protection against malignant influences, just as a naturally holed Perugian piece of coral had its virtue in the perforation as much as in the coral itself.  Snake-stones of Oriental origin, whose marbled markings resemble snakes, are called draconitis or draconita lapis (Hole, 1980).  That the “…forms of beads depend upon religious and magical beliefs is a generally accepted opinion.” (Smith, 1925), and this can explain the use of pomegranate seeds as a charm.  Indeed, beads can be seen as a serial assemblage of tiny holed stones.  The seed of the pomegranate was assumed to have magical properties and was popularly considered an aphrodisiacal charm in ancient Mesopotamia.

2 (c) Touch-pieces          

Coins or medallions used as ‘touch-pieces’ have attracted superstitious beliefs.  Such objects are believed to have prophylactic properties and to cure ailments and disease.  They also function to bring luck and influence people.  The commonality of touch-pieces lies in their name, whereby to be effective they have to be touched or kept in close contact.  Only in this manner can the permanent magical power contained in the coin be transferred.  Once this is done the touch-piece effectively becomes an amulet.  Touch-pieces used to cure disease can be those bequeathed at Holy Communion and are used to treat rheumatism by rubbing the coin on the affected part.  Medalets and medallions with representations of defeated Satan were specially minted in Britain – as specifically made objects they are also charms – and were distributed among the poor to reduce the incidence of sickness and disease (Waring, 1987).  Touch-piece traditions can be traced to Ancient Rome, where the Emperor Vespasian (9-79 AD) donated coins to the sick at a special ceremony known as “the touching”.

3.  Amulets and Charms in London Museums

The metropolis of London has, despite the passage of time, retained clear evidence of a folklore and a folk-life.  During the 20th century, especially its initial decades, it was the accepted view that magical and superstitious beliefs were generally associated with rural areas rather than the city environment (Wright & Lovett, 1908).  A meeting of the Royal Society of Arts in 1919 was addressed on the topic of folklore by Arthur Rackham, who asked “…is it not time that we ourselves are making history? And yet London is a very large country with peculiar boundaries, and also a country concerned with folklore. Ideas are constantly coming into London and constantly going out of it.” (Rackham, 1919, cited in Macfarlane, 2011).  As a large metropolis with a long history it is apparent that with its “…size and history of a mixing bowl of peoples and leveller of traditions and customs…this region has a certain unity and superficially, a common culture.” (Celoria, 1965).  It seems therefore that amulets deposited in London’s museum collections form a repository reflecting the concerns and beliefs of the population of the capital.

Edward Lovett, a member of the Council of the Folk-Lore Society, mounted an exhibition of charms at Southwark Central Library in Walworth Road in 1917, and claimed it showed “…how widespread was the belief, especially in East and South London, that the fortunes of individuals can be affected by some inanimate object deemed to be lucky or potent against disease.” (Lovett, 1917).  Today it is obvious that there are, and were, London variants of a “…universal or national lore.” (Celoria, 1965), an example being that folk whooping cough remedies were recorded as much in London as in the Midlands.  Edward Lovett was a cashier in a London bank and an amateur folklore collector who amassed a treasure trove of some 1400 amulets and charms.  He devoted much time and effort to collecting from market vendors, herbalists, and costermongers in working class London from the 1880s onwards.  His mostly forgotten collection is now scattered around London in the Wellcome Collection, the Cuming Museum in Southwark, and the Science Museum, as well as the Pitt Rivers Museum in Oxford, where the items are archived but rarely seen.  Lovett also dealt with the Horniman Museum (Forest Hill, South London), the Imperial War Museum, and the Bethnal Green Museum of Childhood.

3 (a) the Wellcome Historical Medical Museum Collection

The collection of Henry Solomon Wellcome (1853-1936) contains some 4000 amulets, including dead animals and meticulously carved shells.  These amulets and charms thus form a special collection within a collection (Hill, 2007).  These curiosities were sold to Wellcome by the obsessive amateur folklorist Edward Lovett.  Lovett scoured London after dark, seeking and buying from the city’s mudlarks, sailors, and barrow men.  The Wellcome Historical Medical Museum was opened in 1913 at 54a Wigmore Street.  It is now part of the Wellcome Collection in the Science Museum.  Lovett sold to Wellcome a bronchitis necklet from Bermondsey (1914); a pair of dried mole’s feet from King’s Lynn against rheumatism, from between 1881 and 1903, see Figure 7; and a small metal amuletic boot used by a man of the Surrey Regiment (1901-1916).

AMULET 7

Figure 7.  Mole’s feet.

Wellcome himself accepted contemporary evolutionary theory (Skinner,  1986; Symons, 1993; James, 1994), and his purchased amulets were displayed in the Hall of Primitive Medicine after 1914.  The amulets were shown with “…emphasis on exhibiting material culture as objects of knowledge in their own right…” (Hill, 2007), see Stocking (1985) and Shelton (2000).

3 (b) the Edward Lovett Collection 

Some 1400 amuletic artefacts were collected, sold, or donated by Edward Lovett (1852-1933) “…a paradigm of middle-class respectability…” who “…spent his working life in the City of London, where he rose to the rank of Chief Cashier in the Bank of Scotland, see Figure 8.

AMULET 8

Figure 8.  A selection of Lovett’s charms.

His activities as a folklore collector, however, led him to explore a very different side of the capital.” (Macfarlane, 2011).  He was also President of the Croydon Natural History and Scientific Society in the 1880s.  Just like the house of General Pitt Rivers, the Caterham home of Lovett was filled with his large trove of charms and amulets, and his collection included “…numerous examples from the First World War, with British soldiers travelling to the Western front with an array of good luck mascots and totems…” (Macfarlane, 2011).  Pinning their faith on these amulets and charms, British ‘Tommies’ sought protection in military action and “…some sewed old farthings as mascots into braces, offering protection through close proximity to the heart.” (Hill, 2007).  Some soldiers’ talismans were made from used cartridges (Lovett, 1925; Saunders, 2003), with one bullet charm engraved ‘Frank’ (Saunders, 2002), see Figure 9.

AMULET 9

Figure 9.  Example of trench art amuletic bullet.

A large part of Lovett’s collection consisted of medicinal charms, hence the interest of Wellcome, of which Lovett wrote “…these primitive amulets may be referred to as sympathetic magic…” (Lovett, 1925).  However, it is now accepted that his hoard of amulets did “…capture something of the beliefs of everyday Londoners from a century ago.” (Macfarlane, 2011).  It was items from Lovett’s “…curious collection of ‘charms’…” that were “…carried in the pockets of Londoners for luck or protection…” (Powell, 2012).

With regard to bronchitis in West London, the Medical Inspector for the Schools in Acton informed Lovett that children wore necklaces of glass beads to ward off the illness.  These necklets, believed to be charms against bronchitis, were never to be taken off.  The necklaces, usually of 34 beads and usually sky blue, though sometimes yellow, were worn underneath the clothes.  Lovett visited over 60 lower-class shops, every one of which recognised the blue beads as a cure for bronchitis, see Figure 10 and Figure 11.

AMULET 10

Figure 10. Blue beads.

AMULET 11

Figure 11.  Coral beads.

Lovett created a distribution map which demonstrated that magical beliefs and practices were alive and well in London, see Figure 12.  It is worth noting that the blue beads, which were of Austrian origin, were similar to the blue Egyptian Ushabtiu figures.  The belief was not confined to London, as the beads were used from Cardiff to Newcastle upon Tyne and to Ramsgate in the south east.  Again, the connection may be with port towns and cities.

AMULET 12

Figure 12.  Lovett’s London distribution map.

Amuletic cures for rheumatism abounded and are represented in Lovett’s collection.  A potato or a knuckle bone (the astragalus of a sheep) carried in the pocket was seen as efficacious – the idea being that a dead bone would absorb the affliction – see Figure 13, as were “small glass tubes containing mercury, hermetically sealed and covered with soft leather, to be carried in the pocket by those who suffered from that complaint.” (Lovett, 1925).  These phials, which were supplied by the London pharmacists Allen & Hanbury, who were still selling them in 1924, were carried by many people – including ‘City Men’.  Indeed, some refugees from Belgium, around the time of the First World War, wore cat skins to treat rheumatism as well as chest complaints.

AMULET 13

Figure 13.  A bone for rheumatism.

Considering childhood complaints, a general prophylactic was to place a necklace of acorns around the neck of a child.  A necklet of nightshade was regarded as useful in helping an infant cut its teeth.  In Whitechapel the Jewish population employed orris root to relieve sore gums due to teething.  The orris root is Iris florentina, once used in herbal medicine, which was chosen for its resemblance to a human figure – a male ‘he root’ for boys and a female ‘she root’ for girls.  However, in non-Jewish communities no sexual distinction was made.  In South London a number of teeth-cutting charms were available.  A box of calf’s teeth (or sometimes the child’s mother’s teeth) was put in a bag and placed around the baby’s neck, see Figure 14, and was recommended for an infant having difficulty cutting its first teeth, see Figure 15.  Another tooth-cutting soother was the ‘lost tooth’ of a girl, saved until she married and bore children.

AMULET 14

Figure 14.  A flint necklet for teething.

AMULET 15

Figure 15.  Charms for the toothache.

Whooping cough or pertussis was treated with an old-fashioned remedy in Bethnal Green in 1913.  A small scrap of a child’s hair was placed between two slices of buttered bread and, the following day, the front door was opened and the bread given to a dog to eat, after which the door was closed.  Other therapeutic oddities include remedies for cramp.  At Whitstable in Kent fossil sharks’ teeth are found in the London Clay.  Known as cramp-stones, they were carried as charms in the pocket and were locally regarded as very effective.  Some sharks’ teeth were sold in London street markets as a cure for cramp.  A hyoid bone from a sheep’s head was seen as a lucky charm against drowning and would have been popular in a port city such as London.  In the north this amulet was called ‘Thor’s Hammer’.  Another fossil charm, worn by Londoners during the smallpox epidemic of the 1850s, was a brooch made from Madrepora coral, a stony coral from tropical reefs known as the ‘mother of corals’.

4.  Amulets and Charms in Britain

Despite awareness of the existence of healing charms from the Middle Ages onwards, “…modern British academics have largely neglected this aspect of popular magic.” (Davies, 1996).  However, there has been a long-term scholarly interest in Anglo-Saxon and medieval charms and amulets, as well as in the traditions attached to them.  Academic research has centred on (1) their content and application, and (2) the fact that popular folk medicine and healing magic extends back to the Anglo-Saxon period.  Not only were written talismans and spells obtained as charms against fevers, the ague, and toothache, there is also “…quite extensive evidence for the widespread use of these charms, during the eighteenth and nineteenth centuries…” (Davies, 1996).  Nonetheless, regarding the ailments of children, “…much more importance appears to have been given to prevention than to other branches of folk-medicine.” (Rolleston, 1943).

4 (a)  Oxfordshire

There are many examples of amuletic folk medicine in Oxfordshire.  Belemnites were squid-like cephalopods of the Jurassic and Cretaceous periods.  Their remains are found as ‘belemnites’ or ‘bullet stones’, the fossilised ‘guard’ or rostrum of the animal, composed of calcite or aragonite.  These fossils were deemed to be thunderbolts from the heavens, and therefore celestial in origin.  Once regarded as supernatural in origin, “…they were endowed in the popular mind with a medical virtue…” (Balfour, 1939a).  These objects were referred to as ‘thunderbolts’ and in Oxfordshire were used to treat an oral ailment in children, so “…overwhelming was the people’s faith in the ‘thunderbolt’”. (Balfour, 1939a).  Many folk medicine charms are in the sympathetic magic collections of the Pitt Rivers Museum, but their “…efficiency however, must be regarded as based purely on superstitious beliefs.” (Ettlinger, 1943).

In 1899 a piece of belemnite in the possession of a Mrs Yates of Garsington was used as a treatment for children’s ‘white mouth’.  Powder scraped from the fossil was mixed with water and administered.  Frictional keratosis or ‘white mouth’ was an eruptive disease of the lips.  Around 1900 a man from Oxford carried on his person a flint that resembled a leg and swollen foot, in the belief it was a prophylactic against gout.  A woodland remedy (one of a number) in Oxfordshire “…folk-medicine prescribes against croup and whooping-cough: Go alone into the fields and find a branch which has bent to the ground and rooted to form a bow. Take the child 9 mornings and pass it 9 times through the natural arch.” (Skeats, 1912).  Another rural example was a bramble obtained from Horspath Common in 1898, the idea being that disease could be transferred to the soil while the thorns were supposed “…to prevent the disease from following.” (Bonser, 1932).  Again, a pendant consisting of a silver-mounted lodestone, owned between 1813 and 1873 by a Mr Blaydon from Puddington, was worn suspended at the pit of his stomach.  The pendant was carried in order to avert the King’s Evil and to cure fits.  The King’s Evil was ‘scrofula’, or tuberculosis of the neck.

4 (b)  England

Amuletic homeopathic charms are found throughout England.  Holed stones play a role in folk remedies, as in Lincolnshire where ‘nether stones’ or adder-stones were hung round a child’s neck to cure whooping cough, as well as adder bites and the ague (Gutch, 1908).  Moreover, such amulets were also worn as a remedy for pertussis by the offspring of well-educated persons (Black, 1883).  It is worth noting “…the importance of odd numbers in folklore medicine…” (Rolleston, 1943), where to cure whooping cough a string of nine knots was tied round a child’s neck in Lancashire, Leicestershire, and Worcestershire.  In the west of Sussex childhood convulsions were cured by placing an amulet consisting of peony root and mistletoe around the infant’s neck (Black, 1883).

Holed stones used as charms show widespread use throughout England.  Between 1800 and 1850 there was a popular belief on Tyneside that a stone that originated from Ireland “…possessed the virtue of curing cattle that had been bitten by an adder…” (Webb, 1969).  Again, in 1884, another stone that came from Ireland was collected from an old woman who lived near the old abbey of Blanchland, Northumberland (Egglestone, 1889), for whom it was a family heirloom used many times to treat adder bites.  It is worth noting that the “…banks of the River Derwent, a tributary of the Tyne, were said to be infested with adders.” (Egglestone, 1889).  In Yorkshire a child suffering from rickets would be drawn through the aperture of a large holey stone (Wright, 1914).  As has been shown, holed stones had many different names and magical usages, including repelling witchcraft, disease caused by spells, and the influence of the Evil Eye.  In Cambridgeshire it was the custom at times to place a holed stone under the bed to prevent cramp (Porter, 1969).

Ammonites were invested with the supernatural in the belief that they were petrified snakes; the segments of fossil encrinite stems were called St Cuthbert’s Beads, the fossil echini themselves were called ‘shepherds’ crowns’, and the nummulites were referred to as ‘fossil money’ (Elworthy, 1903).  Nummulites are coiled fossils sometimes called ‘little money’.  An example comes from the Whitby snake myth.  The geological formation of the Whitby area of Yorkshire is the Lias, with large numbers of the fossil cephalopods known as ammonites.  The myth contains the old idea that the fossils were coiled snakes that were petrified by Hilda, the patron saint of Whitby.  In Keynsham near Bristol another saintly myth says “…it was believed that the Celtic virgin Saint Keyne had likewise turned the snakes into stone and these were the ammonites or snakestones.” (Hole, 1980).  James Frazer pointed out that the belief in snakestones was confined to the Celtic lands.

On a maritime note, holed stones had superstitious associations with fisher folk, and such “…holy stones, sea-rolled flints with a ‘natural bore’ (used to be), tied as charms inside the bows of Weymouth boats. I have watched a boatman in the act of fastening one to his craft.” (Moule, 1895).  In Madron in Cornwall there is the Crick or Creeping Stone.  If a sufferer from lumbago crawls through its large hole nine times on all fours, ‘widdershins’ or against the sun, they will be cured (Hole, 1980).  Furthermore, in the same parish is the Men-an-Tol, see Figure 16, through which mothers draw their children 9 times against the sun as a cure for rickets (Hunt, 1881).

AMULET 16

Figure 16.  The Men-an-Tol stone.

4 (c)   Wales

In Wales stone charms of great repute are the snake-stones referred to as Maen Magl or Glain Nadredd, which were described by Edward Lhuyd as Cerrg y Drudion or Druid Stones.  Glain y Nadraedd means ‘bead of the adders’, such snake-stones being called ‘adder beads’ in England (Morgan, 1983).  Folklore claims they are derived from snakes, bring good fortune, and are in demand for eye afflictions.  The word maen means stone and magl is an ancient word for eye or stye.  A variation of the Maen Magl is a cure for rabies called the Llaethfaen or hydrophobia stone (Davies, 1911).  The ovum anguinum, named by Pliny, is a fossil found in Wales and called Wyeu’r Mor or ‘sea eggs’, which are in the “…tradition of the beads called milprev (literally a thousand snakes) used as amulets…”, a word which can also be found in Cornish.

4 (d)  Scotland & Ireland

In Scotland there were a number of names for prophylactic amulets with supposed magical properties (Britten, 1881).  These included the snake-button or adder-bead, found from the Highlands down to Wales; the cock-knee-stone or Echinites pileatur minor, a fossil found in flint; the toad-stone, used to prevent a house-fire; the snail-stone, a small blue hollow cylinder of glass made up of four or five amulets and used to cure sore eyes; the mole-stones, blue glass rings with a similar purpose to snail-stones; and shower-stones, which are possibly a variant of meteoritic star-stones and ‘thunder-stones’.  In Scotland sea urchin fossils are sometimes called “…cock knee stones…” (Dalyell, 1835) and are used for magical and medicinal purposes.  In Aberdeenshire, at Fyne, there is the Shagar Stone, through the hole beneath which children are pulled to strengthen them if they are weakly (Hole, 1980); similarly at Coll in the Hebrides consumption sufferers have to crawl through a certain holed stone and then leave an offering.  A ‘celt’ of green quartz mounted in silver was sewn to an officer’s belt as a cure for a kidney complaint.

As elsewhere, beliefs and superstitions to do with magical medicine are to be found in many places and times in Scotland.  In both the Lowlands and the Highlands an extensive variety of amulets and charms is found.  In the west of the country a child is thought most at risk from the effects of the Evil Eye before its baptism (Napier, 1980).  As a protection against the Evil Eye the West of Scotland remedy is to bathe the child immediately after birth in salt water and make it taste the water three times (Napier, 1980).  A prophylactic measure for whooping-cough is to place an anodyne necklace of beads around the child’s neck, whereas for diphtheria it is to place around the neck the recently removed fur of a cat, or to scratch the neck with a mole’s claws (Rolleston, 1939; 1943).  A spider placed in a goose quill, well sealed and put around a child’s neck, will cure the thrush (MacGregor, 1891).  Similarly, a piece of red flannel wrapped around the neck of a child is thought to ward off the disease (Black, 1883), whereas in Morocco whooping-cough is treated with a neck amulet of a camel’s windpipe (Fogg, 1941, cited in Rolleston, 1943).  Prophylactic action to prevent convulsions in childhood includes “…biting off the head of a live mouse and hanging it as an amulet around the child’s neck.” (Rolleston, 1943); similarly in Upper Franconia, in modern northern Bavaria, parents or relatives also bite off the head of a mouse to use as a neck amulet to treat enuresis as well as convulsions.  Again, in Moravia, the treatment of convulsions consists of a neck charm of coins or ‘blood-stones’ and then laying the child in a graveyard.

In Ballymena and Antrim flint arrowheads were boiled in water as a cure for cattle ‘grup’ – the superstition being that the arrowheads, regarded as ‘thunderbolts’, ‘elf-shot’ or ‘elf-darts’, supernaturally made the water palliative (Ettlinger, 1943).  An Irish custom is that a twig of ‘muggwurth’, or wild wormwood, “…which had been singed in St John’s fire…” is carried as protection against the Evil Eye (Ettlinger, 1943).  Seeds of Entada scandens (a tropical forest giant bean) drift on the Gulf Stream from the far west.  Some eventually wash up on the west coast of Ireland, where the locals call them ‘Virgin Mary Beans’ or ‘nicker beans’ and value them as charms to aid in childbirth (Lovett, 1925).  Also known in other places as sea-beans, sword-beans, Mackay beans, and Queensland beans, examples can be seen in the Pitt Rivers Museum in Case 143a, 1926.23.60 (Edwards, 1980), see Figure 17.

AMULET 17

Figure 17.  Sea beans.

5.  Amulets and Charms in the Pitt Rivers Museum

The amulets and charms in the Pitt Rivers Museum are displayed as a demonstration of superstitious customs.  An example is the collection of horse brasses, objects once regarded as amulets.  However, with the passage of time “…the original idea has been forgotten and they have degenerated into mere ornaments.” (Ettlinger, 1943).  The same could be said of the charms and amulets which comprise part of the Sympathetic Magic display, see Figure 18.  An example of a stone implement regarded as a thunderbolt is a cast presented by Dr Marett.  It was found in 1897 at La Maye, Jersey, where it was “…built into a house to prevent lightening.” (Balfour, 1939 b).  Prophylactic stones include: a piece of amber (1911.75.9) carried by a fisherman from the Suffolk coast as a cure for rheumatism; a veined water-worn stone (1911.75.10) from south Devon carried as a cure for toothache; and a stone (1911.75.11) from south Devon for rubbing on warts, an example of the ‘transfer of virtue’ from the patient type of charm.

AMULET 18

Figure 18.  Pitt Rivers Museum tooth charm.

From Suffolk comes a knuckle bone or astragalus from a sheep, carried as a cure for cramp and rheumatism.  A popular charm against rheumatism was to carry a potato in the pocket, but to be curative the potato had to be stolen, see Figure 19.  However, there may be a logical explanation for such an apparently superstitious practice.  The eyes of potatoes are said to contain atropine, which is reputedly a cure for rheumatism and may justify the belief, see also Figure 20.

AMULET 19

Figure 19.  Potato charm for rheumatism.

AMULET 20

Figure 20.  Anti-inflammatory onion charm.

Another charm against cramp is the cramp-nut, the “…woody outgrowths, common on beech or ash tree…” (Ettlinger, 1943), carried in the pocket for effect.  Cramp-bones have to be worn as near the skin as possible and lose their power if they touch the ground (Black, 1883; Elworthy, 1895).  An eel skin from Carlisle (1911.75.13) was prepared and sold as a cure for cramp and rheumatism.  A touch-piece (1909.60.1) is represented by an Elizabethan gold coin which was given as a cure for the King’s Evil; this was seen as a potent object for the transfer of virtue.  Natural vegetable amulets include a large bryony root bought in 1916 by a Headington labourer who, because of its resemblance to the human shape, believed it to be “…a mandrake and have magical potency.” (Ettlinger, 1943).

A child’s caul (1911.75.16), or foetal membrane (amnion) left in situ on the infant’s head, was seen as a good omen and a charm against drowning.  It is from Oxford, having been found in St. Ebbe’s in 1906, and was originally part of the Balfour Collection.  A caul or ‘kell’ has amuletic value for sailors and the term is derived from ‘silly how’ or ‘sely or holy how’.  In France it is called ‘être né coiffé’, and means the person is very lucky.  In Nelson’s time there was a limited sailors’ trade in cauls, based on the belief that a caul was a sure charm against death at sea by drowning (Lovett, 1917).  Children’s cauls belong to the sub-group of amulets that include human body parts or the representation of such parts.  Other such charms include the dried tip of a human tongue collected by E. B. Tylor before 1897 and known “…to have actually been carried for a considerable time before 1897 as an amulet against disease in Tunbridge Wells, Sussex.” (Ettlinger, 1943).  However, it was thought more usual to carry the tips of dried animal tongues to bestow good luck or to prevent the pocket becoming empty (Henderson, 1879).  A very interesting prophylactic amulet is the foot of a mole, originally used to help the first teeth of small children erupt.  Moles’ feet were later used for all toothaches (1911.75.16 from Norfolk), and even to ward off cramp (1911.75.17 from Sussex).  It was the shape of the mole’s feet, or specifically “…the front feet, or digging feet…how strongly they are curved…this permanent curve is regarded by the folk as due to cramp and therefore ‘like cures like’, it must be a cure from cramp if carried in the pocket (or in a bag round the neck).” (Lovett, 1928).  The efficacy of the mole’s foot charm depended on it being cut from the animal while still alive, the mole then being allowed to go free, see Figure 21.

AMULET 21

Figure 21.  Mole’s foot.

The Pitt Rivers Museum specimen was carried in 1902 in the pocket of an old man living in Staffordshire who believed it would permanently free him from toothache.  In Case 30 b there is a collection of silver sirena or mermaids; these are charms especially dedicated to infants and protect against the Evil Eye, see Figure 22.  The superstitious belief in the influence of mal-occhio (the evil eye), in Neapolitan terms, requires amuletic protection against jettatura (jettatori are bringers of ill luck).  The Neapolitan for evil eye is maluocchje.

AMULET 22

Figure 22.  ‘Syrena’ charm against the ‘Evil Eye’.

In the Sympathetic Magic Case 61 b is a stone from Newbiggin (1908.11.1).  It is one of a number that hung around a fisherman’s cottage in Newbiggin, Northumberland, and is a prophylactic charm against ill-luck and witches (Edwards, 2011), see Figure 23.

AMULET 23

Figure 23.  The ‘lucky stone’ from Newbiggin.

In Case 126 a (1884.58.74) is a small blue amuletic figurine of Ptah-Sokaris.  The figure represents the dwarf god Pataikos.  These dwarf amulets of Ptah-Pataikos guarded the living, particularly children, and what appears to be an insignificant glazed turquoise figurine is really a magical amulet, see Figure 24.

AMULET 24

Figure 24.  Ptah amulet.  Source: Public domain.

6.  Conclusion

From the foregoing it is obvious that homeopathy “…is a cardinal principle of magical medicine….” (Halliday, 1924).  The ‘magical’ artefacts, amulets and charms have become a “…silent witness…to countless narratives…” (Powell, 2012).  These objects were, and still are, embedded in their communities, the milieu of their social relations.  Amulets and charms in museums are part of a matrix through time and space which “…highlight the ways in which various objects…act as ‘social glue’, affirming a variety of relationships between people and objects…” (Hill, 2007; Gell, 1998).  Amulets and charms, as material objects, have a history that is layered and forms a sort of palimpsest, especially as their meanings have changed over time.  What was once a vital and magical object is now for many a mere curio.  In essence, once the belief in a charm is lost it loses its power; its meaning, in terms of modernity, has indeed been lost.  In other words, amulets and charms, many of which are inanimate, or are made from dead objects, had life invested in them by generations of people who believed misfortune could be circumvented by using sympathetic and homeopathic magic.  This implies, as far as displays are concerned, that these folklore beliefs spread far beyond the locus of the museum.  In this context it has been stated that these “…objects are enactments of strategies, and actively participate in the making and welding together of social relations.” (Pels, 2002).  Recent debates show a tension between modernity and the display of ‘magic’ artefacts, especially in the context of museum collections (Bouquet, 2005).  The scenario becomes one where museum-displayed amulets and charms, despite modernity, still exert their magic through wonder, curiosity, and continuing belief for some.

References and Sources Used.

Balfour, H.  (1939 a).  Concerning Thunderbolts.  Folklore.  XL (1), March, 1939.

Balfour, H.  (1939 b).  Thunderbolts.  Folklore.  XLIV, p 236, 1939.

Black, W. G.  (1883).  Folk Medicine.  Folk-Lore Society, 1883.

Bonser, W.  (1932).  Survivals of Paganism in Anglo-Saxon England.  Transactions and Proceedings of the Birmingham Archaeological Society.  Vol lvi, 1932.

Boodt, A.  (1609).  Gemmarum et Lapidum historiae.  Hannover, 1609.  Revised in 1636, Leiden.

Bouquet, M. & Porto, N.  (2005).  Science, Magic and Religion: The Ritual Processes of Museum Magic.  Berghahn Books, Oxford, 2005.

Bratley, G.  (1907).  The Power of Gems and Charms.  Gay and Bird, London, 1907.

Britten, J. (1881).   Amulets in Scotland.  Folklore Record.  4, (1881).

Budge, E. A. W.  (1930).  Amulets and Superstitions.  Oxford, 1930.

Celoria, F. (1965).   Preliminary Survey of London Folklore.   J. of the Folklore Inst.  2 (3), Dec, 1965.

Dalyell, J. G.  (1834).  The Darker Superstitions of Scotland.  Curry & Co, Dublin, 1834.

Davies, J. C.  (1911).  Folklore of West and Mid Wales.  Welsh Gazette, 1911.

Davies, O.  (1996).  Healing Charms in Use in England and Wales 1700-1950.  Folklore.  107. 1996.

Edwards, E.  (1988).  Jack’s Magic Beans.  Friends of the PRM Newsletter.  63.  Oct, 2008.

Edwards, E. (2008).  A Fisherman’s ‘Lucky Stone’ from Newbiggin-by-the-Sea, Northumberland.  England: The Other Within.  Pitt Rivers Museum, Oxford.  9.12.2008.

Edwards, E.  (2011). A Lucky Holed Stone’ from Newbiggin-by-the-Sea. Friends of the PRM Newsletter.  72.  Nov, 2011.

Egglestone, W. M.  (1889).   Monthly Chronicle of North Country Lore, March.

Elworthy, F. T.  (1895).  The Evil Eye the Origins and Practices of Superstition.  London, 1895.

Elworthy, F. T.  (1903).  On Perforated Stone Amulets.  Man,  Volume  3, 1903.

Ettlinger, E.  (1943).  Documents of Superstition in Oxford.  Folklore.  LIV (1), March, 1943.

Fernie, W.  (1907).  Precious Stones: For Curative Wear and Other Remedial uses…  J. Wright & Co, Bristol, 1907.

Fraas, O.  (1878).  Geologischen aus dem Libanon.  Jbv. Ver. Vat. Nat. Wurttemburg.  Stuttgart, 1878.

Frazer, J. G.  (1933).  The Golden Bough.  Macmillan & Co, London, 1933.

Freire Marreco, B.  (1910).  Charms and Amulets.  In: Encyclopaedia of Religion and Ethics (ed Hastings).  Vol 111, 396a.

Gell, A.  (1998).  Art and Agency: An Anthropological Theory.  OUP, Oxford, 1998.

Gunther, R. A. (Ed).  (1945).  Early Science in Oxford.  OUP, Oxford, 1945.

Gutch, Mrs & Peacock, M.  (1908).  County Folklore, Lincolnshire.  Vol IV, 1908.

Haddon, A. C.  (1906).  Magic and Fetishism.  Constable & Co, London, 1906.

Halliday, W. R. (1924).   Folklore Studies: Ancient and Modern.  Methuen & Co, London, 1924.

Harland, J. & Wilkinson, T. T.  (1867).  Lancashire Folklore.  London, 1867.

Hazlitt, W. C.  (1905).  Holed Stones.  Brand’s Popular Antiquities of Great Britain: Faiths and Folklore.  London, 1905.

Henderson, W.  (1879).  Folklore of the Northern Counties.  London, 1879.

Hill, J.  (2007).  The Story of the Amulet.  Journal of Material Culture.  12 (1), 2007.

Hunt, R.  (1903). Popular Romances in the West of England (1881).  3rd ed.  Chatto & Windus, London, 1903.

James, R. R.  (1994).  Henry Wellcome.  Hodder and Stoughton, London, 1994.

Lovett, E.  (1902).  The Modern Commercial Aspect of an Ancient Superstition.  Folklore.  13, 340-7, 1902.

Lovett, E.  (1905).  The Whitby Snake – Ammonite Myth.  Folklore.  16 (3), Sept 1905.

Lovett, E.  (1909 a).  Difficulties of a Folklore Collector.  Folklore.  20, 2227-8, 1909.

Lovett, E.  (1909 b).  Amulets from Costers’ Barrows in London, Rome and Naples.  Folklore.  20, 70-1, 1909.

Lovett, E.  (1910).  English Charms, Amulets and Mascots.  Croydon Guardian.  December 17, 1910

Lovett, E.  (1913).  Folk Medicine in London.  Folklore.  24, 120-1, 1913.

Lovett, E.  (1917 a).  The Belief in Charms. An Exhibition in London {Arranged by E. Lovett}.  The Times.  March 15, 1917.

Lovett, E.  (1917 b).  Belief in Charms.  Collecteana.  Folklore.  29 (1), March, 5, 1917.

Lovett, E.  (1922).

Lovett, E.  (1925).  Magic in Modern London.  Croydon Advertiser, 1925.

Lovett, E.  (1926).

Lovett, E.  (1927).

Lovett, E.  (1928).

Macdonald, S.  (2005).  Enchantment and its dilemmas, the museum as a Ritual Site.  In Bouquet, (2005).

MacFarlane, R.  (2011).  London’s Lost Amulets and Forgotten Folklore.  Daily Telegraph, 28.10.2011.

MacGregor, A.  (1891).  Highland Superstitions.  Eneas Mackay, Stirling, 1891.

Malinowski, B.  (1925).  Magic, Science and Religion.  In Needham, J. (Ed),  Science, Religion and Reality.  London, 1925, p71.

Morgan, P.  (1983).  A Welsh Snakestone, its Tradition and Folklore.  Folklore.  94 (ii), 1983.

Moule, H. J.  (1895).  On Holy Stones: Notes and Queries.  Folklore.  July, 1895.

Murray, M. A.  (1943).  Correspondence: On British Superstition.  Folklore.  LIII, May, 1943.

Napier, J.  (1879).  Folk-Lore or Superstitious Beliefs in the West of Scotland within this Century.  Arden Books, 1980.

Oakley, K. P. (1978).   Animal Fossils and Charms.  In: Porter, J. R. & Russell, W. M. S. (Eds) Animals in Folklore.  Ipswich, 1978.

Oakley, K. P.  (1985).  Decorative and Symbolic Uses of Fossils.  Occasional Papers on Technology (Inskeep, R. R. Ed.), Pitt Rivers Museum, Oxford, 1985.

Pels, D.  et al.  (2002).  The Status of the Object.  Theory, Culture and Society.  19 (5-6), 2002.

Pickering, D.  (1999).  The Cassell Dictionary of Folklore.  Cassell,  London, 1999.

Pitt Rivers Museum.  (2010).  Discover Amulets and Charms.  PRM Introductory Guide, 2011.

Porter, E.  (1969).  Cambridgeshire Customs and Folklore.  Routledge & Kegan Paul, London, 1969.

Powell, F.  (2011).  Charmed Life: The Solace of Objects.  Wellcome Collection Exhibition, London, 6.10.2011 – 26.2.2012.

Rackham, A.  (1919).  Cited in MacFarlane, R. (2011).

Radford, E. & M.  (1980).  Holed Stones.  In: Hole, C (Ed).  The Encyclopaedia of Superstitions, London, 1980.

Rankine, D.  (2002).  Crystals: Healing & Folklore.  Capell Bann Publishing, 2002.

Rolleston, J. D.  (1939).  West London Medical Journal.  Vol XLIV, 1939.

Rolleston, J. D.  (1943).  Folklore of Children’s Diseases.  Folklore.  LIV (2), June 1943.

Roud, S.  (2008).  London Lore.  Random House, London, 2008.

Saunders, N. J.  (2002).  Trench Art.  Shire, Princes Risborough, 2002.

Saunders, N.  (2003).  Trench Art, Materialities and Memories of War.  Oxford, 2003.

Seelig, M. G.  (1903).  Superstition in Medicine.  Medical Library and Historical Journal.  3 (3), 1905.

Shelton, A.  (2000).  Museum Ethnography: An Imperial Science.  In Hallam, E. & Street, B. (eds).  Cultural Encounters: Representing ‘Otherness’.  Routledge, London, 2000.

Skeats, W. W.  (1912).  Snakestones and Stone Thunderbolts as Subjects for Systematic Investigation.      Folklore.  XXIII, March, 1912.

Skinner, G.  (1986).  Sir Henry Wellcome’s Museum for the Science of History.  Medical History, 30 (4), 1986.

Smith, S.  (1925).  The Pomegranate as a Charm.  Man.  Vol 25, September, 1925.

Stocking, G.  (1985).  ‘Essays on Museums and Material Culture’, Stocking (ed), in Objects and Others.  Wisconsin.

Symons, J.  (1993).  The Wellcome Institute for the History of Medicine: A Short History.  Wellcome Trust, London, 1993.

Thomson, G.  (1973).  Aeschylus and Athens.  Lawrence and Wishart, London, 1973.

Trimmer, E. J.  (1965).  Medical Folklore and Quackery.  Folklore.  76 (3), Autumn, 1965.

Udal, J.  (1922).  Dorsetshire Folklore.  S. Austin & Sons, Hertford, 1922.

Villiers, E.  (1929).  The Mascot Book.  T. Werner-Laurie, London, 1929.

Waring, P.  (1987).  The Dictionary of Omens and Superstitions.  Treasure Press, 1987.

Webb, D.  (1969).  Irish Charms in Northern England.  Folklore,  80 (4), Winter, 1969.

Williams, S.  (1999).  Religious Belief and Popular Culture in Southwark c.1880-1939.  Oxford, 1999.

Woodward, J.  (1728).  Fossils of all Kinds.  London, 1728.

Wright, A. & Lovett, E.  (1908).  Specimens of Modern Mascots and Ancient Amulets of the British Isles.  Folklore.  19, 288-303, 1908.

Wright, E. M.  (1914).  Rustic Speech and Folk-lore.  Milford, H.  London, 1914.

All illustrations are from the public domain, the Pitt Rivers Museum, or otherwise credited.

Written text of a lecture entitled “The Self-management of Misfortune by the Use of Amulets and Charms”. Given at the Institute of Social and Cultural Anthropology, University of Oxford in February 2012. Part of the “Small Blessings” Project.  Eric W. Edwards, BA Hons (Oxf), MA (Oxf), MPhil.  February 2012.


Shedding Light in Dark Places: the story of the miner’s lamp

george-stephenson

George Stephenson (1781-1848).

Born Wylam, Northumberland.

1.  Introduction

2.  Mine explosions due to fire-damp

3.  Historical background

4.  George Stephenson’s ‘Geordie’ lamp

5.  Sir Humphry Davy’s lamp

6.  The controversy over priority

7.  Pit disasters

8.  Pit disease

9.  Women in mines

10. Children in mines

11. Anthropological perspective on mining

12. Mining communities

13. Conclusion

Appendix: Chronology of the Stephenson and Davy lamps.

References and sources

1.  Introduction

“There is blood on coal!”

Coal mining, as well as metal mining, was “…carried out for centuries in this country before any practicable form of safety lamp was produced in 1815.” (Wedgewood, L. A. 1946).

In the Pitt Rivers Museum, Oxford, in Case 141.A in the Court, are displayed three examples of miners’ safety lamps.  One lamp (1932.88.1152) was collected by Henry Balfour and donated by him in 1932.  This lamp is of the type invented by Sir Humphry Davy in 1816 and is an example employing wire gauze to make a naked flame safe in a gaseous atmosphere.  Another lamp is made of brass and has a glass safety surround with a metal gauze tube above it.  This is a later safety lamp (post 1839) with a linear wick, possibly burning naphtha (lighter fuel); its gauze does not go all the way to the top but ends in a gauze cap, and the lamp is topped by a brass arch and hook for suspension.  Situated in between these two is a later model (1930.22.2) that was once owned by Alfred Walter Francis Fuller and donated in 1930, and is of the French Marsaut type made after 1882.  The lower part has a glass surround with an upper gauze chimney completely enclosed in a metal bonnet.  Most miners’ safety lamps made after 1882 had gauzes protected by such bonnets.  The miners’ safety lamp was first and foremost a methane detector.  Moreover, “…you can still buy one, because even today every pit deputy must carry one, despite the universal use of electricity for lighting collieries.” (Adams, 2005).  Prior to the invention of the miners’ safety lamp it was, in “…mining districts near the sea common for miners to work in dangerous places by the phosphorescent gleam of dried and usually putrid fish.” (N.E.I.M.M.E. 8.12.2010).  In some circumstances a water-soaked ‘fireman’, described as a ‘penitent’, went into a roadway with a candle on a long pole to ignite any gas accumulation.  Skilled miners also ‘tied the candle’ in order to determine the amount of gas present.  See Figure 1 for examples of miners’ safety lamps.

Image (18)

Figure 1.  Examples of Miners’ Lamps: (1) Davy Lamp; (2) Clanny Lamp; (3) Mueseler Lamp; (4) Marsaut Lamp; (5) Marsaut Lamp II.

2.  Mine explosions due to fire-damp

Towards the end of the 18th century explosions with increasing numbers of fatalities occurred in coal mines because seams were being dug at deeper levels.  The use of steam engines for hoisting and water pumping enabled colliery deepening in England, and at deeper levels fire-damp (methane) was more prevalent.  At this time all explosions were attributed to fire-damp because the explosive nature of coal dust clouds was not recognised.  Most explosions occurred at the point of a tallow candle flame.  Developing ventilation technology, which meant the presence of large pumps and winding gear both below and above ground, pushed the danger of fire-damp explosion into the background.  Consequently, in the early 1800s many pitmen died in northern England in large colliery explosions.  Indeed, “…major incidents alone accounted for 558 deaths in Northumberland and Durham between 1786 and 1815…” (Adams, 2005), and from 1807 to 1812 estimates of casualties give 300 miners killed.  See Figure 2.

Image (16)

Figure 2.  Aftermath of the Easington Colliery Mine Explosion.

Fire-damp or methane (CH4) is carburetted hydrogen.  The gas is lighter than air and usually colourless and odourless.  Fire-damp derives from the bacteriological decay of the vegetable matter cellulose.  Fire-damp in mines is really trapped marsh gas produced by chemical processes completed many millions of years previously.  Fire-damp combines with twice its volume of oxygen and after explosion leaves one volume of carbon dioxide (CO2) and two volumes of water vapour.  In order to become explosive fire-damp has to achieve critical mixtures.  A mixture of 90.5% air and 9.5% fire-damp can cause a devastating explosion, but a mixture of about 7 or 8% of fire-damp is easier to ignite.  The range of explosive capability is approximately mixtures of 5 to 15% fire-damp in air.
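
As a rough check on those proportions (an editorial sketch using standard combustion stoichiometry, not a figure taken from the sources cited here), the reaction can be written:

CH4 + 2O2 → CO2 + 2H2O

One volume of methane thus needs two volumes of oxygen and, since air is only about 21% oxygen, roughly 9.5 volumes of air – a mixture of about one part fire-damp to 9.5 parts air, which is the 9.5% fire-damp to 90.5% air proportion quoted above.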

A devastating mine explosion will create havoc amongst the equipment situated below.  Not only will the violence kill by blast and fire, it will also wreck brattices (shaft partitions) and destroy accumulated corves (baskets), tubs, rolleys (vehicles), ponies and horses.  Moreover, the destruction of ventilation systems will lead to the asphyxiation of colliers by the lethal after-damp resulting from combustion.  This after-damp is a toxic gas mixture consisting of nitrogen, carbon monoxide, and carbon dioxide.  Another lethal gas, black-damp or choke-damp (also known as stythe), is formed in mines when oxygen is removed from an enclosed atmosphere.  This asphyxiant consists of argon, water vapour, nitrogen and carbon dioxide.  The term damp is believed to derive from the German Dampf, or vapour; similar mining terms are white-damp (carbon monoxide) and stink-damp (hydrogen sulphide).  See Figure 3.

Image (4)

Figure 3.  Glossary of Coal Mine Gases.

Initially an explosion is a violent out-rush of gas from the ignition source, but an inevitable following in-rush (termed an after-blast by miners) fills the vacuum left by the cooling gases and condensing steam.  There are many causes of ignition of fire-damp in mine explosions.  In the early days explosions resulted mainly from naked-flame lamps and the accumulations of gas called blowers.  Other causes included the use of the early flint and steel mill, defective safety lamps, flame from shot-firing tunnel explosives, and sparks from faulty machinery, metal implements, and electrical equipment.

3. Historical background

The Felling mine explosion, on the 25th of May 1812, was one of the first major pit disasters in England, and claimed 92 lives out of 128 working in the pit, see Figure 2. This was the first great explosion that provided reasonably accurate records. Felling colliery, situated between Gateshead and Jarrow in County Durham (now South Tyneside), was extended in 1810 with a new deeper seam – Low Main. The pit had two shafts in use – William Pit and John Pit. The colliery was owned by John and William Brandling and their partners Grace and Henderson. See Figure 4.

felling

Figure 4.  Memorial Plaque for those Lost at Felling, 1812.

It was in the new seam that the engulfing explosion took place. An ignition of fire-damp triggered a coal dust explosion with devastating effect. The blast was heard up to 4 miles away and around the pit small coal, timber and wrecked corves (wagons or large baskets) rained down. Both headgears of the shafts were destroyed and a huge blanket of coal dust caused a dusk-like twilight in neighbouring Heworth where it descended like black snow. Tragically the blast produced a ghastly sight of miners “…some mutilated, some scorched like mummies’, and some blown headless out of the mineshaft like bird shot.” (Holmes, R. 2008). The resulting fire raged for 5 days. It took nearly seven weeks to remove the dead after putting out fires and waiting for the after-damp to disperse. Ninety-two men and boys (more than 20 were 14 or younger) lost their lives and the eventual funeral procession comprised ninety coffins when it finally reached the church. Of the boys three were brothers – one of 15, one of 13 and one younger. The names of those lost are collected under the heading “In Memoriam” in the archives of the Durham Mining Museum. In addition their “…places of burial are also given where known: a tribute to the lasting loyalties and strength of feeling among the mining communities to this day.” (Holmes, R. 2008). A further explosion at Felling (which is just one and a half miles ESE of Newcastle) on Christmas Eve 1813 killed 12 men and 9 boys aged 8 to 15 years.

The aftermath of the tragedy saw the first effort to establish a properly co-ordinated movement of public opinion in favour of mine safety.  This movement not only aroused scientific interest and endeavour in the cause of accident prevention, it also drew attention to the need for a flame lamp that would not ignite fire-damp, and to the need to devise a means of lighting safe in a gaseous atmosphere.  See Figure 5 for an engraving of the aftermath of a pit explosion.

Image (25)

Figure 5.  Victorian engraving depicting the rush to the pit after an explosion.

A major protagonist in the campaign was the Reverend John Hodgeson (1779-1845), who ministered to the bereaved and buried their dead as incumbent of the parish of Jarrow and Heworth.  Hodgeson was instrumental in establishing the accident prevention society, which came to fruition in Sunderland on 1.10.1813.  A Safety Committee under the auspices of the Duke of Northumberland and the Bishop of Durham was thus established, but it dithered until the second explosion at Felling in 1813 spurred it to action.

Sir Humphry Davy (who was on the continent with his wife at the time) was enlisted by the Society in Sunderland to investigate the phenomenon of fire-damp (Davy, J. in Davy, H. 1839).  In July 1815 Davy was on holiday in the Highlands.  It was correspondence between Hodgeson and Dr Robert Gray of the Coal Mines Safety Committee that requested assistance from Davy.  They earnestly stressed that the “…situation in the mines was becoming critical (another fifty-seven men had died at Success Colliery, Newcastle, in June).” (Holmes, R. 2008).  Davy replied on August 18th and proposed to visit Walls End Colliery outside Newcastle to observe fire-damp.  Thus, travelling “…as a bachelor, he rode down to Walls End (from the Yarrow Valley) and on the 24th of August had a long discussion with John Buddle, the Chief Mining Engineer.” (Paris, J. A. 1831).  After visiting some mines in County Durham he returned to London, where he took over the Royal Institution laboratory on the 9th of October, 1815.  In August of that year he had examined some fire-damp in wine bottles despatched from Hebburn Colliery.  Davy thereupon recruited the Institution’s instrument-maker (John Newman) and summoned Michael Faraday to assist him.

Meanwhile, inspired by the Felling disaster, “…an almost untutored genius at Killingworth Colliery, on the north bank of the Tyne, was trying independently to discover the means to produce a reliable lamp.” (Duckham, 1973), see Figure 6.  This was George Stephenson, a then unknown engineer, who was backed by Nicholas Wood, Richard Lambert, and the Brandlings as owners of Felling Colliery.

Killingworth High Pit

Figure 6.  An Early Victorian Engraving of Killingworth Pit.

Spedding devised the flint and steel mill in 1740 as the first serious attempt to provide pit lighting, but it proved to be of dubious safety as well as cumbersome and clumsy, requiring constant working by a boy.  A famous medical member of the Society was Dr William Reid Clanny (1776-1850), who had himself been attempting to devise a safety lamp since late 1811.  His efforts eventually had him awarded gold and silver medals by the Society of Arts.  William Martin (1772-1851) also invented a safety lamp, accepted by pitmen but not by the mine-owners, and it was suppressed.  Martin, who lectured on Davy’s “murder” lamp, tested his lamp at Willington Colliery, near Walls End, in 1818 (Adams, 2005).

William Reid Clanny was an Irish inventor born in Bangor, County Down, in 1776, who died in Sunderland (after practising there as a physician for 45 years) in 1850.  Clanny invented the Clanny safety lamp in 1813 and published his observations in 1816.  This lamp was first used at Herrington Mill pit, where Clanny had experimented in person.  Northern coal owners and other contemporaries noted the value of his lamp, which was emphasised in his obituary in the Sunderland Herald.  After his first “blast lamp” of 1813 he maintained his interest in lighting in gaseous environments and created six other lamps; the last two, of between 1839 and 1842, are regarded as true Clanny lamps.  See Figure 7.

Image (33)

Figure 7.  William Reid Clanny’s publication of his work on miners’ safety lamps.

The 1813 lamp, which was an oil lamp, was operated by a bellows with the flame isolated behind glass by water reservoirs. It was seen as clumsy and, as it went out in the presence of gas, it had little practicality in a coal mine. On Clanny’s lamp George Stephenson considered “…it as constructed upon a principle entirely different from mine, that of separating the external and internal hydrogen by means of water.” (Stephenson,  1817 a). See Figure 8.

Image (17)

Figure 8. Two examples of Clanny’s lamps made in Newcastle in 1870 and 1880.

4.  George Stephenson’s ‘Geordie’ lamp

George Stephenson was born in Wylam (as was William Hedley, the inventor of the locomotive “Puffing Billy”), nine miles west of Newcastle, on 9.6.1781, and died on 12.8.1848.  He was the second son of Robert Stephenson, foreman of the Wylam Colliery pumping engine.  Aged 14 he was an assistant fireman to his father at Dewley Colliery, then at Duke’s Winning Pit at Newburn.  Aged 17 he was engineman at Water Row Pit west of Newburn, and in 1801 he began working at the Dolly pit at Black Callerton Colliery as a ‘brakeman’ (controlling the pit winding gear).  Married in 1802, he moved to Willington Quay east of Newcastle, working as a brakeman.  He moved again, as a brakeman, in 1804 to West Moor, working at Killingworth Pit and the adjacent Mid Hill Winning Pit.  In 1811 he repaired the pumping engine at High Pit, Killingworth, and as a result was elevated to engine-wright for the surrounding collieries of Killingworth.  Yet it was not until 1799 that he had begun, in his spare time, to learn to read and write.

After the Felling disaster Stephenson began, in 1813, experimenting with a safety lamp that could employ a naked flame without igniting an explosion.  It was his conclusion that “…if a lamp could be made to contain the burnt air above the flame, and permit the firedamp to come in below in small quantity to be burnt as it came in, the burnt air would prevent the passing of the explosion upwards and the velocity of the current from below would also prevent its passing downwards.” (Encyclopaedia Britannica, 1962).  It was after 1811, to Stephenson’s credit, that he started to apply his inventive capacities to the design of a miners’ safety lamp.  His design was one which used small tubes to allow the entry of air to support combustion and the passage of gases.  This lamp design was arrived at by trial and error, and the prototype was tested at Killingworth on 21.10.1815.  An improved version was tested again on 4.11.1815 and 30.11.1815, and shown to R. W. Brandling and a Mr Murray on the 24th of November, when Stephenson “…had just built his first locomotive at Killingworth Colliery.” (Adams, 2005).  The test was made at a fissure issuing fire-damp underground in Killingworth pit, a month before Sir Humphry Davy presented his design to the Royal Society in London.  Stephenson showed his successful safety lamp design to the Newcastle Literary and Philosophical Society on 5.12.1815.  See Figure 9.

Image (11)

Figure 9.  Stephenson’s Lamp

Stephenson’s lamp became known as the ‘Geordie Lamp’. Unlike the Davy lamp it had no gauze but glass around the flame, gave a brighter light and was popular with miners. Glass breakage was a problem with the Geordie lamp but, with the invention of safety glass, this was later resolved. The Geordie lamp, unlike the Davy lamp, was employed exclusively in the north east pits. Stephenson was unaware that Sir Humphry Davy was working on the same problem. Sir Humphry applied scientific methods and analysis whereas Stephenson relied on practical empiricism and, lacking Davy’s laboratory facilities, worked in his own home and was obviously “…blessed with a fertile mind and considerable mechanical ingenuity.” (Barnard, 1936).

5.  Sir Humphry Davy’s lamp

The Davy lamp of 1815 contained a candle, even though Davy is recognised as the inventor of the safer oil-burning lamp, and incorporated some of the ideas of Clanny and Stephenson.  The Sunderland Society for the Prevention of Accidents in Mines charged Sir Humphry Davy with the investigation of the problem of mine explosions.  It was at the end of October, 1815, that Davy had three prototypes of his “Safe Lantern”, which were sealed lamps using metal tubes or “fire sieves” as air inlets.  He read a paper about these tube lamps to the Royal Society on 9.11.1815.

It was Davy who surmised that a flame cannot ignite fire-damp or mine-damp if contained within a wire mesh.  He showed this using a metal gauze of 28 openings to the inch.  This mesh screen, using two concentric mesh tubes to increase safety, cooled the combustion products so that the flame heat was too low to ignite the gases outside the gauze.  The gauze contraption therefore functioned as a flame arrestor: the fine mesh permitted methane to pass through but stopped the passage of the flame itself.  The first trial was carried out at Hebburn Colliery on 9.1.1816.  As we can see, Davy’s lamp was developed towards the end of 1815 but not tested until January 1816 in the collieries at Wallsend and Hebburn.  Davy spent two hours down G-pit with the result that “…the state of the flame indicated the presence and even the strength, of the fire-damp in a shaft! His lamp not only caged the flame, it transformed it into a canary.” (Holmes, R. 2008; also Davy, J. Vol 6, 116-117).  It became obvious that tubular lamps were only relatively safe, but Davy in “…late December or early January…made a further technical breakthrough…” (Holmes, R. 2008).  He discovered he could replace the need for an airtight glass lamp chimney with a “…fine-gauge mesh (that) would work even better than thin metal tubes in preventing an explosion.” (Holmes, R. 2008).  It was this gauze-enclosed prototype lamp, which became known as ‘the Davy’, that was presented to the Royal Society on the 25th of January, 1816, and successfully tested at Hebburn and Walls End pits later that month (James, F. 2005).  See Figure 10.

Image (10)
Figure 10.  Stephenson’s lamp compared to that of Davy.
 

Stephenson’s lamp “…represents the lamp at present in Killingworth Colliery. One…was in the hands of the manufacturer at the time I exhibited my former one to Mr R. W. Brandling and Mr Murray…on the 24th of November, and was tried in the same mine on the 30th, and on the 5th December was exhibited before the Literary and Philosophical Society of Newcastle.”  The Davy lamp represents a wire gauze safe-lamp constructed according to the specifications of Sir Humphry Davy.  It shows the wire gauze cylinder, which should have not less than 625 apertures to the square inch.
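
A short arithmetical note may help here (an editorial observation rather than a point made in the sources just cited): a gauze woven with n wires to the inch presents roughly n × n apertures to the square inch, so the stipulated minimum of 625 apertures corresponds to about 25 wires to the inch, while the 28-to-the-inch mesh mentioned earlier gives 28 × 28 = 784 apertures to the square inch, comfortably above that minimum.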

Flammable gases were noted to burn with a blue-tinged flame, and when the lamp was placed on the ground the flame went out due to accumulations of the asphyxiant gas (CO2) known as black-damp or choke-damp.  Davy was performing experiments with fire-damp at the same time as others.  In 1815 he realised that the holes of fine metal gauze acted in the same way as narrow tubes (viz. Stephenson’s lamp); thus mine air passing through small orifices fed a flame that would not ignite the outside gas.  Davy’s original experiments with fire-damp “…discovered its ‘lag’ on ignition.” (Barnard, 1936).  Davy’s lamp [see 1932.88.1152] was eventually surrounded by metal mesh and thus differed from Stephenson’s lamp with its glass surround.  Thus Davy wrote, in a communication of 1816, that his “…invention consists in covering or surrounding the flame of a lamp or candle by a wire sieve…”, and further that his object “…at present is only to point out their application to the use of the collier.” (Davy, 1816 b).

6. The controversy over priority

Davy was in France and Italy from 1813 to 1815, but on his return started experiments with lamps for colliery use.  W. R. Clanny and the then unknown George Stephenson had already demonstrated the idea of a safety lamp.

In 1813 the Society for Preventing Accidents in Coal Mines was formed in Sunderland (TWAS 1589, cited in Smith, J. 2001), directed by the Reverend John Hodgeson, who invited Davy in 1815 to research fire-damp (Northumberland Record Office, cited in Smith, 2001).  George Stephenson was directly involved as a mining engineer and was already experimenting with fire-damp and a safety lamp (Stephenson, 1817 a).  In his own words Stephenson’s research led to “…the consequent formation of a Safety Lamp, which has been, and is still, used in that concern…”, which his friends considered “…as precisely the same in principle with that subsequently presented to their notice by Sir Humphry Davy.” (Stephenson, 1817 b).

It was to Stephenson that we were “…indebted for the discovery of the Principle of Safety…”, that hydrogen will not explode down narrow tubes, and “…will hereafter recognise as the Stephenson Principle.” (Charnley, 1817).  The Principle was pointed out to several persons long before Davy came into the County, and Stephenson’s lamp was in the hands of the manufacturer during Davy’s visit (Stephenson, 1817 b).  Stephenson made “…three lamps, all perfectly safe:  and by following precisely the same steps, Sir Humphry Davy was enabled subsequently to construct one…” (Charnley, 1817).  The Northumberland Record Office possesses 37 unpublished letters signed by Davy, dated September 1815 to March 10th, 1818, and known as the Hodgeson Bequest.  Within this context Davy made “…complete acknowledgement of the priority of Mr Stephenson’s claims”, and moreover “…acknowledges the same principle of safety which Mr Stephenson had previously established and proceeded with his experiments in the same way.” (Charnley, 1817).  Admitting that “…my habits, as a practical mechanic, make me afraid of publishing theories…”, Stephenson avowed that the principle “…has been successfully applied in the construction of a lamp that may be carried with perfect safety into the most explosive atmosphere” (Stephenson, 1817 a).  Davy’s response described the dispute as an “…indirect attack on my scientific fame, my honour, and veracity.” (Davy, cited in Smith, J. 2001).  It seemed to many that “…the invention of a miners’ lamp, similar in design to Davy’s, with a measure of evidence to suggest priority, by a largely uneducated colliery engineer, stuck in Davy’s craw.” (Smith, J. 2001), especially as Stephenson had previously announced the principles of his lamp to many associates and begun its manufacture (Newcastle Chronicle, 1815, November 2nd).  Davy only announced the results of his fire-damp experiments on 19th October.

In 1816 Davy was awarded £2000 as a public testimonial for his lamp, whereas Stephenson received a miserly 100 guineas.  The furore that followed such a snub resulted in a local subscription that raised £1000 from local dignitaries, colliery owners and managers.  A Resolution of the Coal Trade of August 31st, 1816, considered the award to Davy for his safety lamp, but an adjourned coal owners’ meeting on 11.10.1816 credited Davy with inventing the safety lamp.  At this point Stephenson joined the fray with letters, with supporting correspondents, in the Newcastle Chronicle.  A supporter opined “Mr Geo Stephenson, of Killingworth Colliery, was the person who first discovered and applied the principle upon which lamps may be constructed.” (Brandling, 1816, Newcastle Chronicle, August 29th).

Davy, among many others, derided Stephenson and poured scorn on his invention, and the priority dispute became “…characterised by local patriotism on the one hand and academic sneers on the other…” (Duckham, 1973).  No attempt was made by Davy to contact Stephenson.  The experience with Davy made Stephenson distrust theoretical and scientific experts based in London for the remainder of his life.  Davy has been described as “…less than fair to the man who was to father Britain’s railways” (Duckham, 1973), especially as for others the evidence awards conclusively “…the priority to Stephenson in the invention of the miners lamp.” (Smith, 2001).  In token of gratitude Davy was awarded £2000 at the same time as Stephenson was accused of stealing Davy’s idea, and it is regrettable that “…Davy regarded Stephenson as no more than a pirate…” (Knight, 1996).  It is noteworthy that Davy received his award “…at a banquet presided over by his old friend John Lambton, afterwards Earl of Durham, who had been with him at Bristol under the care of Dr Beddoes.” (Hartley, 1971).  It was the high-minded attitude of Davy over precedence that initiated the bitter dispute.  In the spring of 1816 the normally publicly reticent Stephenson challenged Davy over the issue of priority and accused Davy of plagiarism in respect of his own ‘Geordie Lamp’, the model which employed solid glass and metal using tubes and perforations.  This was the lamp that had its final working version tested in Killingworth pit on 21.10.1815.  The Stephenson and Davy lamps did look similar, but at this juncture Davy’s “…gauze lamp had not yet been published – or indeed invented.” (Holmes, R. 2008).  It is worth noting that Davy left no original laboratory notes concerning his work on the lamps.  Davy responded by complaining about Stephenson’s alleged pilfering and miserable lying, and in this way Davy “…showed no professional generosity towards Stephenson.” (Holmes, R. 2008).

In 1817 George Stephenson published two pamphlets pointing out that his lamp was the result of ‘mechanical principles’ whereas the lamp of Davy was one based on ‘chemical principles’. In these pamphlets Stephenson signed himself “The Inventor of the Capillary Tube Lamp”. Davy’s announcement of his prototype lamp in November 1815 was somewhat premature because Stephenson’s lamps “…had been introduced before Davy’s, worked safely, were cheap and robust…loyally adopted by many Newcastle miners who fondly referred to them as home-grown ‘Geordies’.” (Holmes, R. 2008). Prior to this the Newcastle Literary and Philosophical Society had adopted an objective position and showed examples of both Clanny’s bellows lamp and Stephenson’s conical lamp on the 5th of December, 1815. When the Society compared “…examples of the true gauze lamp, as used by Buddle at Walls End, at its meeting of 6 February, 1816.” (Holmes, R. 2008), it became obvious that the two lamps were different instruments. See Figure 11.

Image (34)

Figure 11. Publication describing the safety lamp invented by George Stephenson  (1817).

Considering the derisory comments from Davy and his supporters it is worth considering Stephenson’s own words in his defence as recorded in 1817 (Stephenson, G. 1817 a). Stephenson pointed out that his lamp was “…the same in principle with that subsequently presented to their notice by Sir Humphry Davy.” Furthermore the gauze of Davy’s lamp was “…a variation in construction.” Stephenson goes on to say that “…it might be considered a want of candour were I not to take notice of the lamp constructed by Dr Clanny, but my reason for not inserting it is, that I considered it as constructed upon a principle entirely different from mine, that of separating the external and the internal hydrogen by means of water.” Stephenson then proceeds to vindicate himself chronologically by saying “…the following dates I have extracted from Mr Hodgeson’s letter, and the Newcastle Chronicle.” Therefore: on the 15.10.1815 Sir Humphry Davy receives fire-damp; on the 19.10.1815 Davy informs Hodgeson he has discovered that explosion will not pass through small tubes; on the 25.10.1815 Davy announces his discovery to the Chemical Society of London; and on the 30.10.1815 Davy describes a lamp on the principle of tubes above and below. Following this Davy announces his Tube Lamp to the Royal Society on the 9.11.1815, which was duly reported in the Newcastle Chronicle on the 23.12.1815. The Morning Chronicle announces Davy’s application of wire gauze, which is also reported in the Newcastle Chronicle on 23.12.1815. Stephenson points out that Davy writes in Newcastle on the 9th of November, 1816 that “…whenever workmen etc are exposed to such highly explosive mixtures, double gauze lamps should be used, or a lamp in which the circulation of air is diminished, by a tin plate reflector placed in the inside, or a cylinder of glass reaching as high as the double wire, with an aperture in the inside. Such lamps, likewise, may be more easily cleaned than the simple wire gauze lamp, for the smoke may be wiped off in an instant from the tin plate or glass.”  Stephenson stresses that “…he first embraced the idea, the principle upon which the Tube Lamp is constructed was published, and a plan of it shown in early September…” and furthermore “…that it was actually burning in the mine on the 21st of October.” Then he goes on to affirm that Sir Humphry Davy “…does not announce his discovery of the fact that explosion will not pass down tubes, till the 19th of October.” Rising to his sense of just cause Stephenson continues by saying “…my double perforated plate lamp was certainly ordered some time before the 24th of November, and tried in the mine on the 30th of the same month…” He then points out that “…the earliest notice I had of Sir H. Davy having applied wire gauze for the same purpose was, from the Newcastle Chronicle of the 23rd of December.” Stephenson then goes on to say that, in refutation of the criticisms directed against him, “…I have been actuated solely by a justifiable attention to my own reputation, and a sincere desire to have the truth investigated, and not by any disgraceful feeling of envy at the rewards and honours which have been bestowed upon a gentleman who has directed his talents to the same object…”. After such magnanimity Stephenson then writes of the “…refusal of two subsequent meetings summoned for the purpose of bestowing some mark of approbation of Sir H Davy, to enter upon an investigation of dates and facts, was justified by many gentlemen…”.
He refers pointedly to the claim of Davy’s supporters thus “…when at the second meeting, the expression of ‘the invention of his Safety Lamp’ was altered to ‘his invention of the Safety Lamp’, I felt myself called upon to assert my claims.” George Stephenson thus vindicated himself and demonstrated that he was right to complain about the unfair way he had been treated.
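
For readers who prefer to see the rival claims laid out in order, the short Python sketch below simply interleaves the 1815 dates quoted in Stephenson’s own account above. It is an illustrative aid only, drawn from the text (and summarised again in the Appendix), not from any independent chronology.

# A minimal sketch interleaving the dates Stephenson cites above (all 1815),
# to make the chronological argument easier to follow. The dates and event
# descriptions are taken from the account quoted in the text.

from datetime import date

events = [
    (date(1815, 10, 15), "Davy receives samples of fire-damp"),
    (date(1815, 10, 19), "Davy tells Hodgeson that explosion will not pass through small tubes"),
    (date(1815, 10, 21), "Stephenson's tube lamp burning in Killingworth mine"),
    (date(1815, 10, 25), "Davy announces his discovery to the Chemical Society of London"),
    (date(1815, 10, 30), "Davy describes a lamp on the tube principle"),
    (date(1815, 11, 9),  "Davy presents his Tube Lamp to the Royal Society"),
    (date(1815, 11, 30), "Stephenson's double perforated plate lamp tried in the mine"),
    (date(1815, 12, 23), "Newcastle Chronicle reports Davy's application of wire gauze"),
]

# Print the two men's milestones in a single date-ordered timeline.
for when, what in sorted(events):
    print(f"{when:%d %b %Y}: {what}")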

Stephenson was eventually exonerated by a local enquiry committee, its members termed Stephensonians, who awarded him £1000, but this proved unacceptable to Davy’s supporters. They refused to recognise how an uneducated man had arrived at the solution he had. It was only in 1833 that Stephenson was given equal claim to priority by a House of Commons Committee. In the meantime Davy had been awarded the Rumford Medal for his efforts by The Royal Society in 1817.

7.  Pit disasters

The earliest reference to gas explosions in mines dates from 1621 and was blamed on ‘Auld Nick’, otherwise known as the devil. Across Durham and Northumberland between 1800 and 1899 there were around 300 major colliery disasters that claimed the lives of more than 1500 men and boys. The major cause was fire-damp explosions, along with some mine collapses. See Figure 12 for a Victorian engraving of a pit disaster funeral.

Image (14)

Figure 12. Funeral procession following the New Hartley Pit disaster.

On 16.1.1862 in Northumberland 199 miners died.

Killingworth Pit (where George Stephenson was an engineer), about 5 miles from Newcastle, had 10 killed on March 28th, 1806, with a further 12 lost on September 14th, 1809. At Haswell Pit near Sunderland in October 1844, 95 were killed by a gas explosion, and New Hartley in Northumberland had no survivors from 204 men and boys. This disaster at Hartley in 1862 left 199 men entombed in a one-shaft pit and led to the end of the practice of one-shaft workings. See Figure 13.

Image (31)

Figure 13.  A group of children orphaned by a pit disaster in England.

Between 1708 and 1951 there were 2106 colliery fatalities of men and boys in the north-east. These losses due to explosion include 52 at Wallsend in 1821; 102 at Wallsend in 1835; 164 at Seaham in 1880; 168 at West Stanley in 1909; and 81 at Easington (County Durham) in 1951. See Figure 14.

Image (7)

Figure 14.  Record of major mining disasters in the north east of England.

8.  Pit diseases

Coal mining operations can have a negative impact on the public health of mining communities. A study in West Virginia, USA (Hendryck, M. 2006) pointed out that residents “…of coal mining communities have long complained of impaired health.” and that these “…residents are at an increased risk of developing chronic heart, lung and kidney diseases.” Coal production therefore has a bearing on the incidence of cardiovascular, lung and kidney disease in mining communities.

Mines, especially coal mines, are the “…most difficult lighting environment in the world.” (I.E.S.N.A. 1993). For mining engineers the coal face “…is one of the most difficult environments to illuminate, due to the low reflectivity of the coal roof, walls and floor…and the need for flameproof equipment.” (Pardoe, D. R. G. 1994). The most used and probably most important of all human senses is that of sight. At the coal face adequate illumination is therefore paramount for reasons of health, safety, and productivity. A common disorder of miners’ eyesight is ‘miner’s nystagmus’, in which the eye is unable to maintain visual fixation in certain conditions or circumstances. This form of nystagmus can result from long periods of constrained viewing in poorly illuminated working areas, and from the effort made to see small objects. It is found to be more common in miners who have worked below ground for more than 20 years. Attributed to such poorly illuminated spaces, this eye disorder presents with impaired dark vision, dizziness, headaches, eye pain, lachrymation, excessive sensitivity to glare, and peri-corneal congestion. Dim light, or the partial gloom or darkness of a coal mine, can have acute and chronic effects on health. Such high visual demands on coal miners can cause eye strain and fatigue, especially over a period of 8 hours.

Occupational respiratory disease in mining presents as lung disease commonly in the form of CWP (or coal workers pneumoconiosis), asbestos related diseases, lung cancer and other lung conditions. Many of these respiratory diseases in the mining industry have a long latency and may only become apparent after some time. Such latency therefore in both individual and community terms “…remains of considerable importance after mining operations cease.” (Ross, M. H. 2004). Historically, since the 1500’s, a relationship between lung disease and mining has been recognised and documented.

Coal workers pneumoconiosis (CWP) is also called ‘black lung disease’ and is caused by long term inhalation of coal dust. An initial and milder form is known as ‘anthracosis’. Inhaled coal dust builds up progressively in the lungs, from where the body cannot remove it. Coal miners exposed to the dust develop industrial bronchitis. This eventually leads to inflammation, pulmonary fibrosis, and in worst case scenarios to actual lung necrosis. Black lung arises in a specific set of conditions, with an occurrence among coal miners of 16 to 17%. In the late 1990’s some 10,000 coal miners in America died of CWP. Another occupational respiratory disease associated with mining is ‘silicosis’ – a major disease with world-wide distribution affecting other occupations as well as mining, and which has long been recognised as having a connection with tuberculosis. The condition presents as chronic obstructive airways disease (emphysema) with chronic bronchitis – both of which are “…common manifestations of long term occupational exposure to silica dust…” (Ross, M. H. 2004). See Figure 15.

Image (5)

Figure 15.  Glossary of mine diseases.

A connection between metalliferous mining and lung disease has long been known, with a particular association between the lung condition known as ‘phthisis’ and mining for copper, tin, gold, and mica. Stone cutters, millers and miners were particularly liable to tuberculosis in metal mining (Beddoes, T. 1799). Again, it was surmised early on that pulmonary diseases in miners were possibly connected with certain types of dust produced in mines and pits (Thackrah, C. 1832).

An example of metal mining and associated disease can be found in the Cornish mining industry. Unlike coal mines, Cornish mines had no artificial ventilation systems, so temperatures often rose in excess of 100 degrees Fahrenheit. The report of the Royal Commission of 1842 “…showed for the first time the abundance of lung disease amongst Cornish miners…the reason for the excessive mortality…” was miner’s phthisis (Proctor, E. S. 1999). In 1904 a Royal Commission into the Health of Cornish Miners despatched Dr J. S. Haldane to look into several instances of illness characterised by rashes and anaemia. Haldane “…discovered that most of the current disease was due to an infection by the ankylostoma hookworm that had been brought to the mines by men who had worked in the tropics.” (Proctor, E. S. 1999). Some 90% of all foreign white miners working in the gold fields of the Witwatersrand in the Transvaal between 1902 and 1903 were from Cornwall. The worm could only survive in hot conditions such as those provided by the excessive temperatures of Cornish tin mines. Migratory Cornish miners were particularly associated with mining hard rock formations. Miner’s worm was a parasitic nematode commonly called a ‘hookworm’ due to its predilection for attaching itself to the lining of the small intestine of its host. Its name is Ankylostoma duodenale and it causes the condition termed ankylostomiasis. It is now known that symptoms attributable to ankylostoma were recorded in Egyptian papyri circa 1500 BCE. In the 11th century the Persian physician Avicenna found the worm in a number of his patients and associated it with their condition. Much later the parasitic disease was found in a number of mining communities in England, Germany, Belgium, France, northern Queensland and elsewhere. In 1877, during the construction of the Gotthard Rail Tunnel, it was found that Italian tunnellers suffered from anaemia and diarrhoea. In 1880 hookworm transmission was attributed to the fact that the workers defecated in the tunnel’s 15 km workings. It was not until 1897 that it was deduced that the infection route was through the skin. The disease, known as miner’s worm, tunnel disease, or cachexia of miners, by 1906 “…is definitely known to be caused by the nematode worm Ankylostoma duodenale.” (Hickson, S. J. 1906). In Europe there was a serious spread of the parasite through the mines of France, Germany, Belgium and some mines in England. Eradication of the hookworm still left excess mortality due to dust inhalation and secondary tuberculous infection. Cornish miners resorted to their own folk remedies in time of affliction – blaming piskies, pelloes, spriggans, and white witches, as well as turning to a repertoire of local and traditional herbal remedies.

9.  Women in mines

Earliest records from the 17th and 18th centuries report that women were used to dress and wash down ores in the lead mining areas of the Yorkshire Dales, County Durham, and the Peak District. Copper mines in Anglesey and Staffordshire also employed women, as did the iron mines of Blaenavon and Shropshire. Women and girls worked in the mining industry across Britain. The Cornish metal mines used many women and girls, but most were employed in the coal mining industry. In the coal industry of Cumbria, Scotland, Shropshire, Yorkshire, Lancashire, and Northumberland, women were certainly employed underground. Referring to women miners it was said that “…indeed, the mother and her daughters – they work among men rough as Hottentots, and almost, sometimes quite as naked.” (Eddy, T. M. 1854). See Figure 16.

Image (36)

Figure 16.  Engraving of Victorian women miners.

Parliamentary Papers from 1842 (volumes XVI, p.24-196, and Volumes XV and XVII) pointed out that in England, but exclusive of Wales, only in some parts of Lancashire and Yorkshire were young children regularly allowed to go down and work in coal mines. In the West Riding men worked naked in great numbers being assisted by females aged 6 to 21 years who laboured stripped to the waist. In 1841 some 2350 women were working in mines in the United Kingdom – about a third of these in Lancashire.

The Coal Mines Regulation Act of 1842 made it illegal for women and children in mining communities to work underground. For the first time women in Scotland were excluded on the basis of gender from following an occupation. This resulted in pit ponies replacing women as underground bearers. However, dressed in the garb of male miners, some women continued working below illegally. Gradually women moved into over-ground jobs at the pits. Of the 2400 women originally estimated to work in the mines only 200 had found employment by 1845. As a source of lower paid labour the pit-owners preferred women to work at the ‘picking tables’ as men replaced women underground. By the early 1900’s some 90% of women employed at coal mines were ‘screening’, or sorting, coal. Screening was the process whereby women, known as screen lassies, and children separated different sizes of coal on a conveyor belt using large sieves called ‘riddles’. Basket women hooked on the tubs and were usually chosen from the widows of colliers or of men who had met with mine accidents. The “coal bearers” were women or children who were used to carry the coal on their backs, in loads weighing between 0.75 and 3 cwt, down the steep braes and up the non-railed roadways. See Figure 17.

Image (35)

Figure 17.  Women pit head sorters at a Victorian South Wales Colliery.

In 1887 another Coal Mines Regulation Act raised the minimum working age to 12 years. The women called ‘pit brow lassies’, some 99% of whom married miners, were young and usually in their 20’s. A family affair, these women worked alongside their fathers, mothers, husbands and siblings at or down the mine. They worked a 6 day week with shifts of 12 to 18 hours. As such these ‘pit brow lassies’ formed a very tight-knit group. The report of the Royal Commission of 1842 pointed out that miners were perceived as wild, hard drinking, uneducated and immoral men with a godless outlook. Their women and children also did hard and back-breaking jobs down the pit for long hours in dangerous and cramped situations. The report stirred the conscience of the general public and inspired Victorian philanthropists to pressurise Parliament for reform.

Pressure was exerted by the miners’ trade union throughout the 19th century to have women cease employment in the mines. Parliament witnessed concerted efforts to have this put into effect in 1887 and again in 1911. This was met by organised female protests in 1887, with the result that the Mines and Collieries Acts only banned women from pushing heavy wagons. Over time the employment of women at coal mines was reduced from between 5000 and 6000 in 1841 to less than 1000 in the 1950’s. In 1972 the last two women employed in the British coal industry retired. It has to be recognised that these women mine workers, who struggled and worked together in common, left behind them a trail of tears and sweat. Like the men they also suffered the losses of frequent mining disasters. For example, in the Silkstone Disaster (1805) near Barnsley many women and girls died in an explosion. Again, in 1838 a serious flooding of the Moorside Pit drowned 7 girls aged 9 to 17 years. See Figure 18.

Image (24)

Figure 18.  Pit brow lassies in England.

In Cornwall and Devon women and girls have most likely been working at or in tin mines since antiquity, with medieval written records the earliest evidence of this work. From around 1770 to 1860 large numbers of women were working in the industry, with the last laid off during the 1920’s. In the West Country these women mining workers were called ‘Bal Maidens’ and worked with copper, tin, zinc, lead, manganese and antimony throughout the industry. The term ‘bal’ is old Cornish and means ‘mining place’. By 1800 the female mining contingent had increased to 2000, and to 6000 by 1851, which means that in the century between 1770 and 1870 more than 80,000 women and girls laboured in the Cornish and Devon mining industries (balmaiden.co.uk).

Women have worked in mines worldwide since antiquity. In the 2nd century BC women worked in Egyptian gold mines, having arrived as slaves and captives, and laboured above and below ground. Women and children across the world were routinely recruited to mining industries from the early 18th century. Women are listed for pay in the Mexican and Peruvian silver mines, in Swedish iron mines, as well as in the Indian diamond fields. For example, women mined for diamonds in the Hyderabad mines that employed nearly 60,000 people. The tin mines of Bolivia in 1884 employed women alongside their husbands, and by 1933 women were employed in the deeper pits. In 1935 some 7,800 women were employed sluicing in the Malaysian tin mines. Even in the early 20th century large numbers of women and older girls laboured in coal mines in China, Belgium, France, Malaysia, the Ukraine, and in the mica mines of India. See Figure 19.

Image (29)

Figure 19.  Photograph of modern women miners working in a Japanese colliery, stripped to the waist as were the Victorian women miners in England.

10. Children in mines

Prior to industrialisation the children of working class and poor families worked for centuries as child labour. Child labour is defined as the sustained and regular employment of children. In the main it implies children being used to make a commodity or service saleable in the market place at a profit, regardless of whether or not their labour is remunerated. As anybody familiar with Blake’s ‘dark Satanic mills’, the writings of Charles Dickens, and Charles Kingsley’s ‘The Water Babies’ is aware, the Victorians were notorious for employing young children. Children laboured in factories, coal and other mines, as well as in quarries and as chimney sweeps. See Figure 20.

Image (12)

Figure 20.  A Victorian child chimney sweep.

Such places of exploitation, including mines, have been referred to as “…places of sexual licence, foul language, cruelty, violent accidents, and alien manners.” (Thompson, E. P. 1966). The practice of putting children to work, as ancillaries to the work done by their parents, was first documented during the medieval period. Estimates of the workforce in metal and coal mining show that children comprised a large proportion of the Victorian mining industry. In the Swedish iron mines girls worked underground in the 1830’s. In the 1840’s boys and girls worked for a while in the copper mines of Glen Osmond, Australia. See Figure 21.

child miners

Figure 21.  A Victorian ‘thruster’ pushing a coal tub, and a trapper opening a ventilation door.

From: Report of the 1842 Royal Commission into Children’s Employment (Mines).

Before 1842 it was commonplace for entire families to be employed together, working underground, in order to earn enough money to survive. Most children who worked in collieries started at age 8, some as early as 5 years, and were often dead by 25 years of age. Many were carried to the mine still half asleep in the arms of their parents. They laboured long hours, from 4am until 5pm, and had to crawl through narrow tunnels too small for adults. These children were directed to transport coal, or ore, along to the horse-path or the main pit shaft. The conditions they worked in were dangerous, with many killed by explosions, others in pit collapses, some in flooding, and some who fell asleep and were crushed by oncoming carts. In addition many children, because of their young developmental age, contracted lung cancer and other respiratory afflictions. An example of the ever present danger was the Huskar Pit Disaster in Yorkshire in July 1838, in which a total of 26 children aged 7 to 17 years were drowned. The youngest was James Burkinshaw, aged 7, who died along with his 10 year old brother George. The loss comprised 11 girls aged 8 to 17, three of them aged just 8, and 15 boys aged 7 to 17 years.

The youngest child of a family employed underground usually had the simple job of a “trapper”. Usually sitting alone in total darkness for 12 hours, they had to watch, open and close the wooden doors known as “traps” that permitted fresh air to circulate through the mine. Trappers were the smallest children in the pit and thus regulated mine ventilation by means of the division doors.

Older children, half-grown girls and women were employed as “hurriers” and “thrusters” (putters) who worked in conditions where they had to crawl on their hands and knees. The large coal tubs were pushed by thrusters and pulled by hurriers along the roadways from the coal face to the pit bottom. A “putter” therefore was a boy, youth or woman who pushed or dragged the coal along the tunnels from the workings to passages where ponies could be used. See Figure 22.

Image (22)

Figure 22.  A Victorian ‘hurrier’ pulling a tub of coal.

Report of the 1842 Royal Commission into Children’s Employment (Mines).

These workers were classified into “trams”, “headsmen”, “foals” and “half-marrows”. In these occupations younger children worked in pairs whereas older ones, as did women, worked alone. A “foal” was a boy not yet strong enough to “put”, or push from behind, on his own but able to do so with the help of another boy. A young “putter” or foal was also known as a “half-marrow”. The coal tubs weighed as much as or more than 600 kg (11.8 cwt) and the roadways were often only between 60 and 120 cm high (24 to 48 inches). The oldest and strongest miners were the grown men and strong youths who worked at the coal face as “hewers” or “getters” and were the only ones who worked continually with a lamp or candle.
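
As an aside on the figures just quoted, the conversions between the metric and imperial units can be checked with a few lines of Python. This is a minimal illustrative sketch, assuming the imperial (long) hundredweight of 112 lb and the 2.54 cm inch; it is not drawn from the 1842 Commission material itself.

# A minimal sketch checking the unit conversions quoted above.
# Assumes the imperial (long) hundredweight of 112 lb and the standard 2.54 cm inch.

KG_PER_LB = 0.45359237
KG_PER_CWT = 112 * KG_PER_LB      # one long hundredweight, roughly 50.80 kg
CM_PER_INCH = 2.54

tub_weight_kg = 600
print(f"Coal tub: {tub_weight_kg / KG_PER_CWT:.1f} cwt")           # about 11.8 cwt

for height_cm in (60, 120):
    inches = height_cm / CM_PER_INCH
    print(f"Roadway {height_cm} cm is about {inches:.0f} inches")   # roughly 24 and 47 inches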

A “driver” was a lad, aged 14 to 15 years old, used for driving the ponies on the main underground roadway. A “gin-driver” drove the horses in the engine or “gin” that hoisted the coal from moderately deep pits. Youths aged 16 to 18 were often used as “flat-lads” or crane operators who hoisted the corves of coal – a corf was a wicker basket for pulling the coal and contained some 4 to 7 cwt. The “greasers” were boys who greased the axles of the coal tubs. Girls or boys called “pumpers” were made to descend to the deepest part of the pit to pump rising water to the pump-engine, to maintain dry work spaces for the coal face hewers. The first job for a boy underground was often that of a “wood boy” or “supply boy”, carrying materials to various parts of the mine using ropes or a mono-rail. See Figure 23.

Image (23)

Figure 23.  Children who worked in the mines were subjected to appalling conditions. Many were dead before they reached 25.

Other occupations of children in the coal mining industry included “wailers” who picked out slate, pyrites and other admixtures from the coal. A “water leader” took away water from the horse-ways and helped the deputies with their duties. Finally a “way cleaner” was a lad aged 11 to 15 who cleaned the pit rails using hay or rope. They also removed coal dust accumulations. See Figure 24.

miners-1911

Figure 24.  Real life Oliver Twists: child miners were often beaten, abused, hungry and tired. Their childhood was often over before it began.

Estimates of child labour in Britain’s coal and metal mines are revealing. In 1842 estimates of the proportion of children employed in coal and metal mines ranged from 19 to 42%, and by 1851 children totalled 30% of the coal mining population. Similarly in 1838 an estimated 5000 children were employed in the Cornish metal mines which, according to an 1842 report, had risen to 5,378. Also in 1838, 85% of the 124 tin and copper mines in Cornwall employed children. Of the 105 mines surveyed for the 1838 report children comprised between 2 and 50% of the mining populations, with an average of 20% per pit. After the 1870’s the use of children in Cornish mines began to decline.

Child labour in the mining industries is still common in the modern world. Tens of thousands of children are used above and below ground in small-scale gold mining operations in Africa, Asia, and South America. Like their historical Victorian counterparts they risk death and maiming from explosions, rock falls, and tunnel collapses, as well as inhaling air contaminated with noxious fumes and dust. Children, just like adults, can suffer the deleterious effects of vibration, noise, poor lighting and bad air, as well as over-exertion and exhaustion. Moreover, serious conditions affecting the respiratory system include silicosis and pneumoconiosis. Other ailments include hearing and sight problems, joint disorders and other orthopaedic conditions, constant headaches, deafness, dermatological problems, fractures and wounds.

In Africa, in the Sahel region of Burkina Faso and Niger, children are working in the gold mines – an occupation called “orpaillage”. Some 30 to 50% of these orpaillage workers are under 18 years of age, a total of 200,000 to 500,000 across both countries, with approximately 70% of them aged less than 15 years. Small-scale mining in Ghana is called “galamsey” (to gather and sell), with an estimated 10,000 children working mainly in gold extraction. In the Cote d’Ivoire children are trafficked in from Mali, Guinea, and Burkina Faso and made to work in mining in slavery-like conditions. See Figure 25.

Image (13)

Figure 25.  A girl working in a modern African mine.

There are also rich gold deposits in Mongolia, where the average age of a child miner is only 14 and where, below the age of 13, more girls are employed at the mines than boys. In the Philippines the gold deposits are worked by child miners usually between 15 and 17 years old. They have the particularly risky task of diving into muddy wells, some 2 metres wide and 7 metres deep, to retrieve the gold bearing soil. In the Andes region of Bolivia, Peru and Ecuador gold mining employs as many as 65,000 children. See Figure 26.

Image (27)

Figure 26.  A boy working underground in a modern African mine.

11. An anthropological perspective on mining

During the last 6000 years mining has employed millions of people throughout the Old and New Worlds. During that time mining has transformed vast regions of the surface of the earth. Mining is an important feature of production and social reproduction and thus it is important to become aware “…of the social, spatial and ideological dimensions of technology and of past or present industrial communities…” (Knapp, A. B. 1998), as well as the environmental impact (including pollution) that resulted from mining. In this context archaeology, anthropology, and ethnographic studies have a contribution to make to the study of mining communities.

From antiquity to medieval times most mining was conducted on an individual basis, the work being carried out by people living in agriculturally based communities. An anthropology and archaeology of mining can be seen as a study of social life taking place in a set of material conditions. This can be described as a social archaeology that necessitates investigating a number of factors – technical, physical, social and cultural (see: Wylie, A. 1993). The study of historical mining communities demands the recognition of a number of demographic factors that encompass ethnicity, technology, social class, and ecology. Evidence may be found in the material record or surviving material culture of the mining community concerned (see: Raber, P. 1987). In addition mining can be seen as a form of social organisation that “…is partially conditioned by the physical and or socio-cultural isolation of mining communities, and partially by the harsh working conditions and labour requirements of the extractive and productive phases in mining.” (Roberts, 1996). All too often in the history of mining and mining cultures “…social historians, archaeologists and archaeometallurgists tend to focus on the history and technology of mining.” (Knapp, A. B. 1998). However, mining in modern times is very often labour intensive and therefore requires an inexhaustible and dependable supply of workers.

12. Mining communities

The harsh working conditions of mining, combined with the physical isolation and social organisation of mining communities, inspire the view that miners are “…best known for their accidents and strikes, that are the inherent part of their daily life…” (Matosevic, A. 2008). It follows that, from an anthropological and ethnographical point of view, mining cultures “…give rise to recurrent patterns of population dynamics, labour recruitment practices, and political organisation.” (Godoy, R. 1985). The combination of teamwork and team spirit, so essential to safety at collieries, with the traditions and pressures associated with underground working created the coal culture of mining communities. In addition, even though mining communities are “…often socially and spatially remote, they are linked into broader social, communications, transport and economic networks.” (Knapp, A. B. 1998). During the miners’ strike of 1984-1985 the pit of Maerdy in South Wales, see Figure 27, was adopted for support by Oxford. During the dispute the miners were supplied from Oxford with donations of cash, food parcels and Christmas presents for miners’ children.

Image (15)

Figure 27. Assembly of striking miners from Maerdy Colliery, South Wales.

In Victorian times coal miners, together with their wives and children, were “…subjected to measures of social ostracism, partly on account of the spirit of the times – which in a much greater degree than now regarded all labour as material…” (Barrowman, J. 1897). In other words miners and their dependents were perceived as lacking humanity, as some form of troglodytes. For example, in mid-19th century Northumberland miners lived in long rows (“raas”) of single storey cottages which had neither toilets, mains water supply nor lighting. In such circumstances the colliery, where everybody had to work, dominated the lives of the miners and their families. In the 1920’s there developed the concept that mining communities should be recognised as richly deserving, and a coal levy singled out pit villages for the provision of welfare. This levy provided much needed baths, sports facilities, libraries, welfare halls, and community facilities. Nonetheless, due to the dangerous and solitary nature of their work miners forged a sense of separateness: “…dirty and unattractive work, in darkness and alone, and dissociated from the activities of the outer world, the collier settled into that condition of separateness which is characteristic of the class to the present day.” (Barrowman, J. 1897). An analysis of gender in consideration of the androcentric myths about mining shows that “…both women and men were fully integrated into the socio-cultural mainstream of the mining community.” (Knapp, A. B. 1998). It was the role of women that determined the structural dynamics of mining communities – something which made itself apparent during the miners’ strike of 1984 to 1985. See Figure 28.

Image (32)

Figure 28.  Community support and solidarity during the aftermath of the Easington Colliery explosion. Waiting at the pit head for news.

Around 700 miners were working in coal mines in the north-east of England in 1787, rising to about 1,000 in 1810. By 1919 the region had 223,000 coal miners employed, of whom 154,000 were in County Durham; in County Durham this number had increased to 170,000 by 1923. Many of these colliery workers had migrated into the region from Wales, Scotland, Ireland and other parts of England, although the majority were local.

The large-scale exploitation of coal in the Forest of Dean began in the early 1800’s, with large collieries developing after 1830. The mining community of the Forest of Dean had unique mining traditions going back to the 1700’s, with rights awarded to “Free Miners” (Pope, I. 2006). The origin of the free-miners dates back to the middle ages. For example, miners “…who were born in the Hundred of St Briavels, and worked for a year and a day in a Forest mine could apply for the right to work a ‘gale’ of coal or iron. A gale being an area of a coal seam or iron ore vein.” (Beard, R. 2011). As with other pits the Forest of Dean coal mines suffered tragedies and disasters, with 600 recorded fatalities between 1797 and the present day.

The tin and copper mines of early 19th century Cornwall supported an established mining community; indeed, the community surrounding the metal mines of Cornwall in the 19th century was the longest established metal mining community in the world. Working conditions in Cornish tin mines were dangerous, hard and dirty. It was an industry that employed women and boys (who started at age 12) on a large scale. Unlike in coal mines, women did not work underground but processed the ore at the surface. The work levels were reached by ladders because mechanical lifts were not installed. Men and boys laboured in total darkness, their only illumination that of candles often attached to their hats, candles which had to be purchased by the miners themselves. Conditions were eventually improved by the introduction of safety lamps.

Unlike coal mines elsewhere, disasters in Cornish mines were not large scale, but nonetheless “…the toll of men, both young men through accident and older men through lung disease, was extreme.” (Rule, J. 1998). Moreover, unlike coal mines, Cornish mines did not suffer explosions due to fire-damp. Even so, the annual average death rate per 1000 miners in Cornish mines between 1849 and 1853 was greater than the losses of northern coal miners in each age cohort. The presence of women in Cornish mining has to be stressed, and tragically “…the proportion of widows to total female population in Cornwall in 1851 was higher than in any other country.” (Rule, 1998). See Figure 29.

Image (28)

Figure 29.  Cornish tin miners in 1886.

One example of how a terrible disaster can come to an entire mining community was the waste tip slide at Aberfan, near Merthyr Tydfil, on Friday the 21st of October, 1966. At 9.15 am a colliery waste tip slid down a mountainside and engulfed Pantglas Junior School and 20 houses in the pit village. The slip destroyed a farm cottage on the way down, killing all inside. A total of 144 people lost their lives, of whom 116 were school children. See Figure 30.

aberfan_2_lg

Figure 30.  Miners and rescuers attempting to save trapped children and teachers at Aberfan.

Five teachers and half of the pupils of Pantglas Junior were killed. Trained mine rescue teams arrived but no survivors were found after 11 am and it was nearly a week before all the bodies were recovered (McLean, I. 1997). See Figure 31.

11_10-Aberfan-memorial

Figure 31.  Graves and memorial for those who died in the Aberfan Disaster.

Most collieries in Britain are now gone, but the former mining areas still possess the individuality and embedded community spirit that has long been a feature of mining communities. Pit villages and mining towns have their own hard earned identity that is often reflected in their names – such as Stony Heap, Deaf Hill, Quaking Houses, Pity Me, and No Place. See Figure 32.

Image (19)

Figure 32.  Iconic media image of the conflicts between miners and police during the strike of 1984-85.

13. Conclusion

The period 1880 to 1890 proved to be the most important in the development of miners’ safety lamps. That history shows on examination “…that imperfections and prejudice influenced the popularity of lamps.” (Wedgewood, L. A. 1946). The miners’ safety lamp was an “…icon of the industrial revolution every bit as powerful as Stephenson’s ‘Rocket’ or the Iron Bridge at Coalbrookdale.” (Adams, 2005). The miners’ lamp, to whomever its invention may be credited, “…should be regarded as a landmark in the history of civilisation.” (Barnard, 1936). With regard to his lamp Stephenson said it “…might be considered a want of candour were I not to take notice of the lamp constructed by Dr Clanny…” (Stephenson, 1817 b). Whereas it seems “…less than justice to Stephenson, that history seems to accept Davy’s right to priority, when the evidence suggests otherwise.” (Smith, 2001).

After the introduction of the Davy lamp there was an increase in mine explosions, for a number of reasons. According to the North of England Institute of Mining and Mechanical Engineers, the Davy lamp and the lack of instruction “…on its limitations did not lead to an immediate reduction in the number of explosions.” (2011). Firstly, mine-owners delayed in installing gas extractors; secondly, the lamp encouraged the re-opening of dangerous pits; and working in methane rich seams was not curtailed. Also, lamps, like the expensive candles, were purchased by the miners from the company store and not provided by the owners. Stephenson’s lamp became popular in the north east coalfields but Davy’s lamp was introduced elsewhere. In August 1816 144 of Davy’s lamps were in use every day at Walls End Colliery (Paris, J. A. 1831). It is fair to say that there was “…no doubting the advantages of Davy’s gauze over Stephenson’s perforated plate, and the substitution of gauze for the perforated plate led to what we know as Stephenson’s lamp.” (N.E.I.M.M.E. 2010).

The priority controversy continues to reverberate to the present day, as it has come to be recognised that “…Davy was not the inventor of the safety lamp…” and that “…his lamp was not really safe.” (Adams, 2005). In the 1830’s the issue grumbled on, with a Parliamentary Select Committee on Mining Accidents of 1835 opining “The principle of its construction appears to have been practically known to the witnesses, Clanny and Stephenson [sic], previously to the period when Davy brought his powerful mind to bear upon the subject, and produced an instrument which will hand down his name to the latest ages.” (Papers. 1835). It was this Committee that led eventually to the Victorian movement that banned child labour in the mines. Davy’s lamp was cheaper and thus preferred by the mine-owners. This attitude may mean the “…liberty of laissez-faire might imply the coal-owner was master in his own house; for the collier it merely secured his freedom to die violently by earth, fire or water.” (Duckham, 1973). Also, Davy’s lamp deteriorated rapidly in wet conditions, and its rusting metal gauze made it even more unsafe. Both the Davy lamp and Stephenson’s lamp became “…unsafe in rapidly moving air-currents.” (Barnard, 1936). In effect, fire-damp explosions increased. Nonetheless the wire gauze of Davy’s lamp was eventually used, with modifications, in every subsequent safety lamp for nearly 200 years. It is noteworthy that Stephenson later adopted the principle of Davy’s gauze instead of tubes – it is this revised design that became known in the 19th century as the “Geordie Lamp”.

Regardless of who invented the ‘first’ safety flame lamp for mines there is an important point to note. Its success was the culmination of principles discovered by three men – William R. Clanny, George Stephenson, and Sir Humphry Davy. Neither Davy nor Stephenson patented their lamp designs. All three inventors worked independently, all around the same time, and each had at least some knowledge of the others’ work. It was Clanny who separated the flame from the firedamp atmosphere of the mine. It was Davy who first enclosed the flame in wire gauze. It was Stephenson who first left a space above the flame for burnt air. And indeed the lamps of the three were all eventually fitted with wire gauze. The lamps were thus the fruits of work representing an “…untypical conjuncture of requirements of growing industrialism and the resources of scientific enquiry.” (Duckham, 1973). The modified lamps have remained an integral part of the mining industry up to and beyond the demise of most of the coal industry after the colliery closures following the miners’ strike of 1984. Davy’s safety lamp was his finest public achievement and his creation was soon in use in Britain and Europe (Holmes, R. 2008). In tribute see Figure 33.

Image (6)

Figure 33.  Poem by a north-east miner with the name of Winstanley, recalling the coal dust scarring of a collier’s hands.

Appendix: Chronology of the Stephenson and Davy Lamps.

Image (37)

Postscript

The 14th of October 2013 was the 100th anniversary of the worst ever coal mining disaster in Britain, at the Universal Colliery at Senghenydd in the Aber Valley, south Wales. The cause of the explosion was never technically established. The explosion killed 439 men and boys, eight of whom were just 14 years old, and the disaster simultaneously created 205 widows and 542 orphans. As a result of the findings “…the company and its local manager were charged with 17 breaches of the 1911 Act.” Further, the “…magistrates dismissed all charges against the former, fining the manager a total of £24 for five offences. On appeal, fines plus costs were extended to the company and increased to a total of £39 and five shillings. Today’s equivalent would be around £3,300, or £7.50 per life.” [Information from Robert Griffiths, Remember the miners, their families, the lesson. The Morning Star, 14.10.2013].
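
The per-life figure quoted by Griffiths follows from simple arithmetic, as the minimal Python sketch below shows; the £3,300 present-day equivalent is taken directly from the cited article rather than derived here.

# A minimal sketch reproducing the quoted per-life figure for the Senghenydd fines.
# The £3,300 modern-value equivalent is taken from the cited article, not derived here.

SHILLINGS_PER_POUND = 20                      # pre-decimal currency: 20 shillings to the pound

fine_1913 = 39 + 5 / SHILLINGS_PER_POUND      # £39 and five shillings = £39.25
modern_equivalent = 3300                      # quoted present-day equivalent (source figure)
deaths = 439

print(f"Total fines in 1913 money: £{fine_1913:.2f}")
print(f"Per life lost, in today's money: £{modern_equivalent / deaths:.2f}")   # about £7.52, i.e. roughly £7.50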

_61195945_sengh2

References and sources consulted

Adams, Max.  Humphry Davy and the Murder Lamp: Max Adams Investigates the truth behind the Introduction of a Key Invention of the Early Industrial Revolution. History Today, Vol 55, August 2005.  Bod Camera, S.Hist. Per 12.

Barnard, T. R.  Miners’ Safety Lamps.  Sir Isaac Pitman & Sons Ltd, London, 1936.  RSL: 186415.e.34

Beddoes, T.  Essay on the Causes, Early signs and Prevention of Pulmonary Consumption for the use of Parents and Presceptors. Bristol, 1799.

Brandling,   1816.  Newcastle Chronicle, August 29th.

Charnley, E.  A Collection of all the Letters which have appeared in the Newcastle Papers, with other documents, relating to the Safety lamps.  By S. Hodgeson, Newcastle, 1817.  Bod 247828.e.4.

Clanny, William Reid.   Practical observations on safety lamps for coal mines.  Garbutt, G. Sunderland, 1816.

Davies, H.  George Stephenson.  Weidenfeld & Nicolson,  London, 1975.

Davy, Humphry. (a)  On the Fire-Damp of Coal Mines and on Methods of Lighting the Mines So as to Prevent Its Explosion.  Philosophical Transactions of the Royal Society of London.   Vol 106, 1816, 1-22

Davy, Humphry.  (b)  An Account of an Invention for Giving light in Explosive Mixtures of Fire-Damp in Coal Mines, by Consuming the Fire-Damp. Philosophical Transactions of the Royal Society of London.  Vol 106, 1816, 23-24.

Davy, Humphry.   (c)  Philosophical Magazine.  47 (212).  1816

Davy, Humphry. On the safety lamp for preventing explosions in mines… Hunter, R. London, 1825.

Davy, J.  Memoirs of Sir Humphry Davy.  In: H. Davy, Collected Works, Vol 1, 1839.

Dictionary of National Biography.  http://www.oxforddnb.com/articles

Duckham, H. & B.  Great Pit Disasters: Great Britain 1700 to the Present day.  David & Charles, 1973.  Bod Stack 1795.e.569.

Eddy, Rev. T. M.  Women in British Mines.  The Ladies’ Repository, Vol 1 (4), Issue 7.  Cincinnati, 1854.

Encyclopaedia Britannica, London, 1962.  Vol 19 (809d).

Godoy, R.  Mining: Anthropological Perspectives.  Ann. Rev. Anthrop. 14, 199-217, 1985.

Hartley, Sir H.  Humphry Davy.  S.R. Publishers Ltd, 1971.

Hendrick, D.J. & Sizer, K. E.  “Breathing” coal mines and surface asphyxiation from stythe (blackdamp).  BMJ. 305, August 29, 1992.

Hickson, S. J.  Miner’s Worm.  Nature. No 1893. Vol 73. Feb 8, 1906.

Holmes, R.  The Age of Wonder.  Harper Press, London, 2008.

I.E.S.N.A.  Illuminating Engineering Society of North America.  Lighting Handbook, 1993.

James, F.  How Big is a Hole?  The Problems of the Practical Application of Science in the Invention of the Miner’s Safety Lamp by Humphry Davy and George Stephenson in Late Regency England.  In: Trans. Newcomen Society. 75. 175-227. 2005.

Knight, D.  Humphry Davy.  Cambridge U P, 1996.

Knapp, A. B.  Social approaches to the archaeology and anthropology of mining. In Social Approaches to an Industrial Past.  Routledge, London, 1998.

Lawrence, C. The Power and the Glory: Humphry Davy and Romanticism. In Cunningham, A & Jardine, N.  Romanticism and the Sciences. CUP, 1990.

McLean, I. The Unpolitics of Aberfan.  Twentieth Century British History.  Vol 8, December, 1997.

Matosevic, A.  Underground community: anthropology of mining and the underground culture in Rasa and its surroundings.  Etnoloska tribina.  37 (30), 2008.

Newcastle.  Report upon the claims of Mr. George Stephenson, relative to the invention of his safety lamp.  Constable and Co. Edinburgh, 1817.

Newcastle Chronicle, 2.11.1815.

Newcastle Courant, 26.10.1815

N.E.I.M.M.E.  North of England Institute of Mining and Mechanical Engineers.  See: http://www.mininginstitute.org.uk.

North of England Institute of Mining and Mechanical Engineers. See: www.mininginstitute.org.uk/lamps/Clanny.

Northumberland Record Office.  ZAN/M.14/A.1.

Oxford Dictionary of National Biography.  Oxford University Press, Oxford, 2004.

Papers.  Parliamentary Papers. Vol 5. September 1835.

Paris, J. A.  The Life of Sir Humphry Davy.  Vol 2.  1831.

Pope, I.  Forest of Dean Coal: Mining.  Lightmoor.co.uk.  2006.

Proctor, E. S.  The health of the Cornish tin miner, 1840-1914.  J. Roy. Soc. Med. 92. 596-599. November, 1999.

Raber, P.  Early Copper Production in the Polis Region, Western Cyprus.  J. of Field Archaeology.  14 (1987). 297-312.

Roberts, B. K.  Landscapes of Settlement.  1996.

Ross, M. H. & Murray, J. Occupational respiratory Disease in Mining.  Occupational Medicine.  54, 304-10, 2004.

Rule, J.  A risky business: death, injury and religion in Cornish mining, c. 1780-1870.  In: Knapp, A. B. 1998.

Smith, Alan.  Newcomen Bulletin.  September 1998  (cited in Smith, J. 2001).

Smith, Jeffrey.  George Stephenson and the Miner’s Lamp Controversy.  North East History, 34, 2001.   Bod Stack  P.F.04009

Stephenson, G. (a) A Description of the Safety Lamp, invented by George Stephenson, and now in use in Killingworth Colliery.  2nd Edition.  Constable and Co, Edinburgh, 1817.  Bod 247828.e.4

Stephenson, G.  (b)  Philosophical Magazine.  March, 1817.

Thackrah, C.  The Effects of Arts, Trades and Professions, and of Civic States and Habits of Living, on Health and Longevity.  London, 1832.

Thompson, E. P.  The making of the English Working Class.  Vintage books, 1966.

Tyne and Wear Archives, TWAS 1589.

Wedgewood, Lt Col L. A.  Catalogue of the Collection of Miners’ Safety Lamps at the North of England Institute of Mining and Mechanical Engineers, Neville Hall, Newcastle upon Tyne. 1946.

www.guardian.co.uk/notesandqueries

www.minerslamps.net/homepage/safetylampshistory

Wylie, A.  Invented lands….  Historical Archaeology.  27 (4). 1-19.

All illustrations used were found in the public domain.

Researched and written by Eric W. Edwards (b. 1944), BA Hons (Oxon), MA (Oxf), MPhil.

Given as a lecture at the Ethnicity and Identity Seminar of Friday, March 11th, 2011. Held within the auspices of the Institute of Social and Cultural Anthropology, University of Oxford. Part of the series entitled Earthworkers: Living and Working on and Under the Ground.

This article was originally printed on line as a contribution to the England: The Other Within Project, Pitt-Rivers Museum, Oxford, in 2010.

Dedicated to my great grand uncle, and mining engineer, Frederic Henry Edwards (1852-1919), of Forth House, Newcastle upon Tyne and Bath Street, Newcastle. Member of the Institute of Mining Engineers (from 1867), and local explosives agent for Alfred Nobel.

 
