English: Right and Wrong
Ever since English became established as a national language, there have been arguments over every aspect of it, from vocabulary to spelling, from punctuation to pronunciation. The debate is as lively today as it was in the age of Dr Johnson (1709–84).
A Facebook site numbering members in the hundreds of thousands is entitled ‘I Judge You When You Use Poor Grammar’ and encourages its members to ‘seek out the infidels (grammar offenders) and . . . document their acts of terror’. Other people are indifferent or even hostile to notions of correct English. The majority may just be confused. The battle can be described as one between the prescriptivists, who believe in rules, and the descriptivists, who believe language goes its own way, regardless of attempts to regulate it. What exactly are the areas of disagreement between these two sides?
The Prescriptivists
Jeff Deck and Benjamin Herson are two young Americans who committed themselves to an unusual mission. Having established the Typo Eradication Advancement League (TEAL), these two graduates of Dartmouth College toured the United States at the beginning of 2008, equipped with permanent markers and ink-erasers and a determination to do their bit for the nation’s errant grammar. If they saw a misspelled shop sign or a public notice with a misplaced or an absent apostrophe (for example, a warning that any vehicle parked without authorization would be removed ‘at owners expense’), Deck and Herson set about correcting it.
Usually the grammarians obtained approval from the shopkeeper or other appropriate authority but sometimes they acted guerrilla-style and without permission. They went an apostrophe too far when they decided to amend a handwritten sign at a historic watchtower in the Grand Canyon. After shifting an apostrophe and adding a comma, they were found guilty of vandalizing government property, fined and put on probation for a year. Unfortunately for Deck and Herson, the sign had been penned by the architect of the watchtower and so, regardless of any grammatical flaws, was itself a historic artefact.
Those who are not bothered or simply don’t know about such things – and the not-bothered together with the don’t-knows make up the great majority of any English-speaking population – might be surprised at the intensity and passion of the would-be regulators of the language. Such people are sometimes called ‘prescriptive’ because they think that rules for the right use of language can not only be laid down or prescribed but also enforced (or encouraged) through education and example. The term ‘prescriptive’ describes not only those who pounce on mistakes in the newspapers or on radio and TV, and write, phone or e-mail to complain, but also those who feel anxiety about ‘falling standards’.
The Descriptivists
The prescriptive label does not apply much to the professionals who compile dictionaries and usage guides, and who tend these days not so much towards prescription as description. These descriptivists, when professionally involved with language as lexicographers or commentators, see it as their job to observe and describe changes in language rather than to pass judgement on them. The descriptive group shades into the laissez-faire group, which believes, in effect, that any attempt to enforce linguistic regulation is self-defeating or absurd. There is animosity on both sides, with the descriptive or laissez-faire group referring to those trying to impose rules as ‘language police’, and the prescriptive group putting all the blame on lax education or the mass media or a general dumbing-down.
But this is a somewhat lop-sided battle, since the descriptive school, believing that language development is an almost impersonal process, rarely trouble to fight back other than by pointing out that attempts to regulate language are doomed. One might imagine them as two distinct camps on opposing sides of the river of English. On the ‘purist’ bank are those who want to protect the language and put down irregular behaviour (absent or misused apostrophes, for example). On the opposite bank is the ‘anything goes’ battalion, whose argument can be summarized as follows: language is a living organism, shaped daily and almost unawares by users who pay little attention to the minutiae of what they say and, in many cases, what they write. General usage is, by definition, the right usage. Go with the flow, they say, pointing to the river of language as it runs on, carving out its own course, steady and unperturbed.
Who’s Right?
But the analogy that compares English or any language to an unstoppable river doesn’t quite fit the bill. Whatever the scientific and philosophical discussions about the sources of language, it is not a natural feature of the landscape like a river or a mountain range shaped by the elements. Although a propensity for language seems to be an in-built feature of the human brain, and to that extent something that could be described as natural, any spoken language is an artificial construct, unique to mankind.
The power of speech is acquired soon after birth by processes that are still not fully understood, and then shaped by years of learning, whether at a subconscious level from parents and others or more deliberately through education. Nor does the process stop with the end of any formal learning. We go through life experimenting with language. We modify our vocabularies, picking up new words and discarding others. We say and write things differently according to the varying company we keep. We make choices.
So, although the arguments of the laissez-faire, anything-goes group have a certain appeal, they also have limitations. Anything doesn’t go. If it did, if we felt free to use any words we wanted – even invented ones – in any order we wanted, it’s an open question as to who would be more quickly exhausted, we or our listeners. It’s tough talking gibberish. In reality, the choice of words and the word-arrangement (or syntax) of even the most ‘uneducated’ speaker conform to certain tacit, cultural rules about linguistic selection and construction.
This can easily be seen through a couple of examples. In standard English, and in contrast to several other languages such as French, the adjective is placed before the noun that it qualifies. A more general in-built rule of language is that the subject of a sentence goes before the verb, the object coming afterwards. The man bit the dog means something quite different from The dog bit the man, but the two sentences employ exactly the same five words. It is only their order that specifies meaning. These rules are familiar at an instinctive level to even the most uneducated speaker. This knowledge also comes at an early stage. Quite a young child, looking at a photo of itself, knows the pronoun difference between saying ‘That’s me!’ and, pointing to a toy in the same picture, ‘That’s mine.’
So rules exist. On the other hand, those who want to uphold very rigorous standards of English and protest against minor infringements are bound to be frustrated and disappointed. Indeed, one can’t help feeling sometimes that the frustration and disappointment are themselves part of the pleasure of the protest. Those two young Americans who went hunting for typos and correcting signs with their markers might have been acting out of a high-minded determination to improve literacy standards in the US but it’s a fair bet that they were also having their own kind of fun.
The great bulk of rules controlling the spoken language are, like an iceberg, hidden from sight. They are also acquired before we begin to be formally taught and they are largely unchanging. The visible bits, which (some) people get so exercised about, can change – not so much as a result of individual action as through a collective shift at a moment impossible to pinpoint. When did ‘disinterested’ stop meaning ‘impartial’ – its original sense – and switch its meaning to ‘bored by’? Come to that, when and why was that little phrase ‘bored by’ replaced by ‘bored of’?
For more than five centuries attempts have been made to regularize the language, both spoken and written. Spelling, grammar, punctuation, choice of words, pronunciation, all have been the subjects of innumerable dictionaries, guides and primers. Yet the continuing tension between the different camps, the purists and the anything-goers, is actually very healthy. It is one of the signs of a living and developing language. A brief outline of the various battlegrounds is given below.
New Words
The emergence of new words into English or, more usually, the adaptation of old ones for new uses is one of the minor battlegrounds between the prescriptive and descriptive groups. Objections tend to centre on the transformation of nouns into verbs or vice versa, on the ugliness of new formations, or the belief that the new words aren’t necessary because there are perfectly good ones already in place.
Nouns such as progress, transition, impact are frequently used as verbs (‘How are we going to progress this?’), especially in a business context. Other verbs such as commentate and enthuse have been created via the process called back-formation from pre-existing nouns (commentator, enthusiasm). Language purists argue that it is better to keep the word in its ‘proper’ part of speech with expressions such as ‘make progress’, ‘have an impact on’. Related objections are to ‘ugly’ verbal creations like prioritize or incentivize. From the other direction, there is a tendency to turn verbs into nouns, as with spend or ask (‘It’s a big ask’) where more traditional English would use expenditure or demand.
Like all linguistic debates, this is an old one. And it is more a matter of preference than correctness. Back in the 1940s, the distinguished language expert Eric Partridge made a mild objection to the verb-use of contact, calling it an ‘American synonym for “to establish contact with”’. In his famous guide Usage and Abusage, which was being issued with revisions until the 1970s, Partridge also took exception to the use of productivity (‘a horrible word; use output’). Elsewhere he commented that educated speakers would regard a phrase like ‘It’s us’ as ‘vulgar or dialectal’ although he accepted it was ‘justifiable when its use is exclamatory’. Highlighting these objections is not to run Partridge down. Rather, it is to show how fashions in language change. Who thinks twice about saying to contact now, or pauses to wonder whether ‘It’s us’ is truly exclamatory?
Regardless of the objections to new words, whether on the grounds of redundancy or ugliness, history will sort out which are entitled to survive and which are not.
Spelling
Of all the areas of dispute over correct English, spelling is the most contentious although not the most sensitive (that prize goes to the question of correct pronunciation). The spelling issue is a high-profile one for straightforward reasons. Mistakes tend to stand out since, almost invariably, only one way of spelling a word is regarded as correct. There have been different approaches to the teaching of spelling over the years but underlying any approach is the belief that it can (and should) be taught in schools. Another, and not so minor, point is that spelling is a favourite media topic, particularly in newspapers, whenever the question of the ‘decline of standards’ emerges.
English is notorious for being a language that employs illogical, even perverse, forms of spelling. Those arguing for simplification point to inconsistencies. For example, why should the British English spelling of the noun humour (humor, in the US) become humorous as an adjective? There is not even consistency in the inconsistency: colour has an identical suffix and sound to humour yet its adjectival form is colourful in British English.
A more powerful weapon in the simplifiers’ armoury is the gap between the way a word looks and the way it is sounded. The long ‘ee’ sound can be represented not only by the obvious doubled ‘e’, as in seem or teem, but by combinations that contain quite different vowels, as in quay, ski, debris, people. A phonetic system, one which spells according to sound, would adjust the last four to ‘kee’, ‘skee’, ‘debree’ and ‘peepul’. This would be more rational, so the argument goes, and it would also make it easier for those mastering the language not just as students but also as native speakers. English-speaking adults come near the bottom of the table in international studies of literacy and this is often ascribed to the vagaries of the way in which words are spelled (or spelt).
This is not a new campaign. Agitation to straighten out spelling goes back many decades (see George Bernard Shaw and the New Alphabet, page 243). What gives it current force is the support of some academics and experts who, either out of despair at the scripts they have to read or impatience with the illogicalities of English, believe we should waste less time on teaching spelling. The rise of texting, with abbreviations like TLK2UL8R, and the increasing influence of Americanized spellings (program, thru, center) have also had a minor effect.
But the campaign will not succeed. A few small changes may occur but there can be no root-and-branch revolution in English spelling. There are several highly practical reasons for this. People who already know how to spell the words they use every day (give or take the occasional error) are not going to sit down and learn how to spell their language all over again. This would be the case even if spelling could be organized along simpler lines as the result of some diktat from an imagined Ministry of English.
The most substantial flaw in the phonetic argument is that, although spelling can be regularized, pronunciation cannot be. To take a couple of examples: think is spelled in only one way but it is pronounced as ‘fink’ by plenty of English speakers and as ‘tink’ by Irish ones. The same th-f-t variation applies to three. In these cases, instead of one uniform spelling, we would end up with three regional ones. The point was made forcefully by Jonathan Swift 300 years ago:
Another Cause [ . . . ] which hath contributed not a little to the maiming of our Language is a foolish Opinion, advanced of late Years, that we ought to spell exactly as we speak; which beside the obvious Inconvenience of utterly destroying our Etymology, would be a thing we should never see an End of. Not only the several Towns and Countries of England, have a different way of Pronouncing, but even here in London, they clip their Words after one Manner about the Court, another in the City, and a third in the Suburbs; and in a few Years, it is probable, will all differ from themselves, as Fancy or Fashion shall direct: All which reduced to Writing would entirely confound Orthography.
Finally, the attempt to simplify spelling would actually make things much more complicated. Any half-way experienced user of English recognizes the difference between quay and key when he or she sees it on the page; similarly with bow/bough, caught/court, towed/toad/toed, and countless other groups of homophones. Adopting a simplified single form would make the language much the poorer as well as causing confusion. We recognize words not syllable by syllable but as a ‘whole’. Under a simplified system we might have difficulty distinguishing at a glance between whole and hole. The latter is the more logical spelling for both concepts but it does not convey the idea of ‘wholeness’ straightaway.
There is also a plausible cultural argument against phonetic simplification. The often odd forms of English spelling are a testament to the historical sources of the language. The ‘b’ in doubt is not sounded but it harks back to the word’s Latin origins (dubitare). The silent ‘k’ in knowledge refers to the roots of the word in Old English and Norse. Many words are thus little parcels of history, their origins of interest mostly to specialists perhaps, but arguably worth preserving in the form in which we have them. One could also claim that the erratic, peculiar byways of English spelling are a fitting reflection of a language which, whatever else it may be, is not homogeneous or orderly, and is never going to be either of those things. Yes, spelling changes over time but the changes come from the bottom up and they are piecemeal and small-scale.
George Bernard Shaw and the New Alphabet
One man who took a somewhat idiosyncratic approach to the regulation of English was the writer and dramatist George Bernard Shaw (1856–1950), who produced more than 50 plays as well as volumes of music criticism and varieties of political writing. He enjoyed setting the cat among the pigeons, being controversial and sometimes playfully perverse in his views. But when he launched the attempt to create a new alphabet, he was all seriousness.
Shaw was impatient with the inconsistencies and absurdities of English spelling. He is supposed to have come up with a well-known example to prove his point by claiming that the invented word ghoti should be pronounced as ‘fish’, using the following parallels: ‘gh-’ often has an ‘f’ sound as in ‘rough’; ‘-o-’ can be pronounced like an ‘-i-’ as in ‘women’; while ‘-ti’ is frequently given a ‘sh’ sound (‘nation’). Therefore ghoti = ‘fish’. There’s some doubt about whether ghoti was actually Shaw’s idea, but it fits with his comments about language generally and the need for simplicity and harmonization. He made two key proposals: to discard useless grammar and to spell phonetically. According to his prescription, one could say ‘I thinked’ instead of ‘I thought’, or spell tough and cough as tuf and cof.
Shaw suggested a new alphabet and a new orthography (way of spelling). The purpose was to save time and trouble, although he admitted that initially it would be costly and unpopular – something of an understatement. However, the benefits were clear: ‘With a new English alphabet replacing the old Semitic one with its added Latin vowels I should be able to spell t-h-o-u-g-h with two letters, s-h-o-u-l-d with three, and e-n-o-u-g-h with four: nine letters instead of eighteen: a saving of a hundred per cent of my time and my typist’s time and the printer’s time, to say nothing of the saving in paper and wear and tear of machinery.’
Shaw left money in his will for the advance of spelling reform and during the 1950s Kingsley Read, a typographer, was the winner of a competition to come up with a new orthography. The result has the appearance of a cross between shorthand and runic symbols. Like the attempts to create new languages, it never caught on. In fact, the only book ever to be produced using the Shavian alphabet was Shaw’s own play, Androcles and the Lion (1912) with the standard script and the new one on facing pages.
Grammar and other matters
Several newspapers employ a readers’ editor, someone whose job it is to respond to complaints about the content of the paper. Many complaints address such topics as biased coverage, intrusive interviews, sensational photographs and so on, but a substantial minority are to do with the use of language and with what are perceived as common, and therefore inexcusable, linguistic slips. Typical comments will query the habit of beginning sentences with a conjunction (usually And or But), turning nouns into verbs (to impact), the misuse of words (crescendo to mean a ‘climax’ rather than, correctly, a ‘steady rise’, reign instead of rein), and so on. Some newspapers actually make a feature of such queries and complaints, showing that they are responsive and open-minded – and also, no doubt, as a means of filling space at no great cost.
Complaints are less frequent in the broadcast media, given the more ephemeral nature of radio and TV, but a glance at online message-boards shows that, here again, plenty of listeners and viewers are ready to air their opinions on what constitutes correct usage.
Almost invariably, the protestors are on the ‘conservative’ side. They fear that language is changing, which it is, and object to the details of that process. The impression may be given that everything is going to the dogs, that a misplaced apostrophe is a herald of the end of civilization, and that nobody apart from them cares about it. This is to overlook the attention given by the British and American mainstream media, especially newspapers, to getting things right.
The reader may not like a particular usage but the odds are that various alternatives will have been considered and rejected, especially where any usage is likely to cause protest. Even in an age of faster-paced news, the writing of journalists is checked and sub-edited. Newspapers, like publishers, have in-house style guides and preferred ways of expressing things. The Guardian newspaper style guide, for example, tells us that the approved way of spelling the name of the queen of the rebellious Iceni in Roman Britain is Boudicca, not Boadicea, or that the writer should ‘use Ms for women on second mention unless they have expressed a preference for Miss or Mrs’, or that the word ‘terrorist’ needs to be used with care.
History is not on the side of the protestors. Sometimes the complaints will be justified. It is incorrect to write of a person being given free reign rather than free rein. But, more often, the complaints are wrong-headed. There is no reason not to begin a sentence by putting And or But, just as there is no reason not to split an infinitive or to go to great lengths to avoid putting a preposition at the end of a sentence. No reason as long as the meaning is clear and the sentence reads well. And even when the complaints are somewhere between right and wrong, as in the misuse of crescendo or putting less instead of fewer, there is little chance of reversing majority usage although it is technically wrong. No chance at all, in fact. But there is something valiant – and valuable – about the protest because it keeps the subject alive.
Punctuation
There was a time when punctuation might literally be a matter of life and death. According to a Times report of 1837, two law professors from Paris University fought a duel with swords in a dispute over the point-virgule, or semicolon. The paper claimed: ‘The one who contended that the passage in question ought to be concluded by a semicolon was wounded in the arm. His adversary maintained that it should be a colon.’
Punctuation, whether by its mere presence, its absence or its misuse, still raises strong feelings in a large minority. For all that, it is an area of written English in which there has been a definite trend towards simplification over the last century or so. The result is that the majority of people use only the basic bits of punctuation: the comma and the full stop, the question and exclamation marks, and perhaps the dash. The more rarefied punctuation marks are notable either for their timid application or for their complete absence.
The traditional use of brackets (to indicate a considered, parenthetical remark) is less common than what might be called their rhetorical use to show surprise (!) or to suggest confusion or doubt (??). There is widespread confusion over the apostrophe, with a 2008 survey showing almost half the UK population unable to use it correctly, a figure that may strike some as being on the modest side. Few individuals, apart from professional writers and journalists, have any need to use speech/quotation marks other than for ironic effect, as in He asked me for a ‘loan’. Colons and semicolons have always excited anxiety and sometimes irritation. Professors may no longer fight duels over them but others find them fiddly and pretentious. The US writer Kurt Vonnegut once told a university audience that ‘All [semicolons] do is show that you’ve been to college’.
When it comes to what one could call mainstream punctuation, there is widespread uncertainty over where to insert commas or what constitutes a sentence, and hence when to put a full stop. Language purists would and do claim that the exclamation mark is overused, particularly in the era of texting and e-mailing, while the question mark takes on a random life of its own.
It could be argued that punctuation is a more significant area of English than other aspects of correct usage. Misspelling doesn’t usually lead to misunderstanding, only annoyance among those who are likely to get annoyed. Formulations that are perceived as grammatical errors, like using who for whom or putting a preposition at the end of a sentence, are either not errors at all or have little or no effect on the overall clarity of a sentence. Pronunciation is quite a complex issue, involving class, education, aspiration and fashion, and not easily reducible to a right or wrong way of doing it.
By contrast, poor or casual punctuation can cause problems. At best, it may mean that the reader has to go back and work out what the writer is trying to say. In the worst cases, it may subvert meaning altogether. In 1991 an American court case for defamation in a magazine article turned on the extent to which speech marks indicated that the words inside them were actually said or whether they were an acceptable approximation (by the journalist) of what was said. In 2007 the US Court of Appeals for the District of Columbia Circuit ruled that residents could keep guns ready to use in their homes. The judges were interpreting the Second Amendment to the US Constitution, which reads: ‘A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed’. By a majority of two to one, they decided that the meaning hinged on the second comma, which ‘divides the Amendment into two clauses; the first is prefatory, and the second operative’.
Leaving aside legal questions arising from the commas in the US Constitution, there is a more humble but genuine ambiguity in a written phrase such as the wearers clothes. Here the absence of the apostrophe leaves it unclear whether the writer is referring to a single owner (wearer’s) or to more than one (wearers’). The context will probably make it obvious but, even if it does, using the correct punctuation is a quicker and neater way of accomplishing the task.
Many teachers of English would claim that punctuation is the hardest aspect of formal English to teach, yet it is arguably the most important. Anyone doubting this should try to make sense of a completely unpunctuated piece of prose.
Pronunciation
Pronunciation is another area in which there has been a shift towards diversity and a greater tolerance of variation. Anybody watching a British film or television programme dating back several decades is likely to be struck by the clear diction and cut-glass pronunciation of the actors. A clip from a film like Brief Encounter (1945) or an excerpt from an old BBC radio broadcast shows the participants speaking in tones that now seem almost comically refined and plummy. There was even an official term for this style of speech, particularly as it was employed by the BBC: Received Pronunciation or RP. This was the style of speaking and pronunciation used by educated people in the south of England, the most affluent part of the country and the home of most of its powerful institutions, including the BBC. From the Corporation’s foundation in 1922 and for many years afterwards, only newscasters who spoke RP were given jobs.
As significant as the idea that there was a gold standard of speech was the corollary that any other way of speaking was not just a deviation from Received Pronunciation but probably inferior to it. To an extent, exceptions were made for educated speakers who had Welsh, Scots or Irish accents but there is no doubt that RP was seen as the king of all other ways of speaking English – or the Queen’s English as it became after the accession of Elizabeth II.
Regional accents could be heard on the stage, screen or radio, but they tended to be confined to comic or minor characters. Received Pronunciation, if you didn’t already use it and wanted to get on in some public forum, was generally seen as a desirable aspiration. It took the arrival of ‘kitchen-sink’ drama and the neo-realist films of the late 1950s and 1960s, like Room at the Top (1958) and Saturday Night and Sunday Morning (1960), to make regional accents widely acceptable. And it took a lot longer for the major UK broadcasters, the BBC and ITV, to employ newscasters and reporters with markedly regional accents on the national networks. A claim in 2008 by a senior BBC figure that he wanted ‘an increase in the range of regional accents on BBC shows as part of a drive to end the domination of the Standard English accent’ indicated a continued bias in broadcasting in favour of RP. Even so, there is some justification in the claims made for RP that it is clear and easily understood.
In Britain in the early 1980s a kind of reverse process occurred whereby those people, particularly the young, who would once have naturally employed RP began to use less ‘refined’ pronunciation. This is easily heard by listening to the difference between Prince Charles (born 1948) and his two sons, William (born 1982) and Harry (1984). Although all three are distinctly upper-class in their speech, the voices of William and Harry are closer to their contemporaries both in pronunciation and diction. A similar process has happened with some British politicians, who are afraid of being seen as ‘posh’ – a likely vote-loser – and so drop ‘T’s at the end of words or insert a glottal stop (e.g., ‘le’ah’ for ‘letter’).
When people deliberately adopt the accent and vocabulary of a different (lower) class than their own they are trying to gain what sociolinguists call ‘covert prestige’. Because working-class or ethnic minority speech may be seen as more authentic or ‘tougher’, the speaker hopes that those qualities may be perceived in him or her too. Whether in the USA, Britain or elsewhere, the adoption by white speakers of black terms and speech patterns is a clear sign of the attractions of a style of English that was once the property of ‘outsiders’.
Pronunciation is perhaps the most sensitive area of English since it involves speaking and listening, while errors of spelling and grammar are mostly confined to the page. There are few adults who would ‘correct’ another’s pronunciation face-to-face, even though the way people say things is unquestionably a large factor in how they are regarded. Surveys show that those who speak with certain varieties of northern accent or Scottish lilt are perceived as being warm or trustworthy. This may make them more employable than others (Birmingham or Glaswegian accents, however, are seen as particularly disadvantageous).
A not so minor generational distinction in UK pronunciation is over the sound of the letter ‘h’ when pronounced by itself, whether as part of an abbreviation or spelled out as part of a name. In Received Pronunciation the letter is ‘aitch’, as in the NHS (enn-aitch-ess), but this is being pushed aside by the preference for ‘haitch’ (as in ‘Haitch-Pee Sauce’), especially among those under 40. The ‘haitch’ version is usual in Ireland and parts of the north but has traditionally been seen as ‘uneducated’ in the south of England. This might seem insignificant – although not in Northern Ireland where saying ‘aitch’ marks you as a Protestant and saying ‘haitch’ as a Catholic – but it generates real irritation in newspaper columns and among callers to radio programmes dealing with language.
Yet a wider look at the ‘H’ question suggests that pronunciation is a minefield of class and prejudice. Whatever the current status of ‘aitch’ versus ‘haitch’, sounding the ‘h’ at the beginning of a word has long been taken as an indicator of education or, England being England, an indicator of class and status. The educated say Hackney and Henry, not ’ackney and ’enry. But to sound the ‘h’ at the start of a small minority of words would, confusingly, be regarded as ignorant: so in Received Pronunciation honour has to lose its first letter (onour) as does hour (our). Still heard occasionally are pronunciations of hotel or historian also without their opening h’s (‘an otel’, ‘an istorian’) although these forms now sound slightly affected. In standard versions of the Bible the use of ‘an’ rather than ‘a’ in front of a spectrum of ‘h’-words from habitation to hypocrite indicates that they too were once sounded without their initial letter (‘an abitation’, ‘an ypocrite’).
The notion of a Received Pronunciation has never taken hold in America. Whereas in Britain, accent and pronunciation can still affect a person’s chances in a career, let alone the way he or she is perceived by others, the US has shown a more egalitarian attitude. That’s not to say that there is no discrimination, but that it is more likely to be made on the basis of well-used and correct forms of English rather than the accent in which they are uttered. In other words, it is a distinction of education rather than background.