American English – The language of Shakespeare?

Hello my dear Anglophones!

I’m going to create some generic internet banter for you:

Person 1
– Look here at the differences between American English and British English, crazy stuff! (with the addition of some image or list)


Person 2
– *Something along the lines of*:

Person 3
– *Something along the lines of*:

Person 4, referring to the ‘u’-spellings in British English (colour, favour, etc.):

Then, usually, person 5 comes along with something like:

Person 5, let’s call them Taylor, has read somewhere that the American English accent shares more features with English as it was spoken in the 17th century, when America was settled by the British, and therefore argues that American English is more purely English than British English is. Taylor’s British friend, Leslie, may also join the conversation with something like “America retained the language we gave them, and we changed ours.”1

In this post, I will try to unpack this argument:
Is American English really a preserved Early Modern English accent?2

Firstly, however, I want to stress one big flaw in this argument: American English being more similar to an older version of English doesn’t mean it’s any better or purer than another English variety – languages change and evolve organically and inevitably. (We have written several posts on the subject of prescriptivism, resistance to language change, and the idea that some varieties are better than others, for example here, here and here.)

Now that we’ve got that out of the way, let’s get to the matter at hand. The main argument for why American English would be more like an early form of English is that it is modelled on the language of the first English-speaking settlers, which in the 17th century would be Early Modern English (EModE, i.e., the language of Shakespeare). In fact, there is some truth to this: some features of EModE are preserved in American English while they have changed in (Southern) British English, such as:

  • Pronouncing /r/ in coda position, i.e. in words like farm and bar.
    This feature is called rhoticity; an accent which pronounces these /r/s is called a rhotic accent.
  • Pronouncing the /a/ in bath the same as the /a/ in trap, rather than pronouncing it like the /a/ in father which is what we usually associate with British English.
  • Using gotten as a past participle, as in “Leslie has gotten carried away with their argumentation”.
  • Some vocabulary, such as fall (meaning autumn), or mad (meaning angry).
  • The <u>-less spelling of color-like words.

So far Taylor does seem to have a strong case, but, of course, things are never this simple. Famously, immigration to America did not stop after the 17th century (shocker, I know), and as British English continued to evolve, newer versions of that language will have reached the shores of America as spoken by hundreds of thousands of British settlers. Furthermore, great numbers of English-speaking migrants were from Ireland, Scotland, and other parts of the British Isles where people did not speak the version of British English we associate with the Queen and the BBC (we call this accent RP, for Received Pronunciation). Even though the RP accent remained prestigious for some time in America, waves of speakers of other English varieties would soon have outnumbered the few who still aimed to retain this way of speaking. Finally, of course, Taylor not only (seemingly) assumes here that British English is one uniform variety, but also that American English would have no variation – a crucial flaw, especially when we talk about phonetics and phonology.

If we look at rhoticity, for example, English accents from Ireland, Scotland and the South-West of England are traditionally rhotic. Some of these accents also traditionally pronounce the /a/ in bath and trap the same. Where settlers from these regions arrived in great numbers, the speech in those regions would have naturally shifted towards the accents of the majority of speakers. Furthermore, there are accents of American English that are not traditionally rhotic, like the New England accent, and various other accents across the East and South-East, such as in New York, Virginia and Georgia. This is to do with which accents were spoken by the larger numbers of settlers there; e.g., large numbers of settlers from the South-East of England, where the accents are non-rhotic, would have impacted the speech of these regions.

Finally, while the /a/ in bath and trap is pronounced the same in American English, it is not the same vowel as is used for these words in, for example, Northern British English. You see, American English went through its very own sound changes; one of these is the Northern Cities Vowel Shift, which affected such vowels as the aforementioned /a/ so that it came to be pronounced more like ‘ey-a’ in words such as man, bath, have, and so on. Also, let’s not forget that American English carries influences from all the other languages that have played a part, to a greater or lesser extent, in the settlement of the North American continent from Early Modern times until today, including but not limited to: French, Italian, Spanish, German, Slavic languages, Chinese, Yiddish, Arabic, Scandinavian languages, and Native American languages.

In sum, while American English has some retention of features from EModE which have changed in British English, the flaws of Taylor’s, and Leslie’s, argument are many:

  • Older isn’t necessarily better
  • Large numbers of English speakers of various dialects migrated to America during the centuries after the original settlers arrived, their speech making up the beautiful blend we find in today’s American English accents.
  • British English was not the only language involved in the making of American English!
  • British English is varied; some accents still retain the features which are said to be evidence of American English being more “original”, such as rhoticity and pronouncing the /a/ in ‘trap’ and ‘bath’ the same. American English is also varied, and the most dominant input variety in different regions can still be heard in regional American accents, such as the lack of rhoticity in some Eastern and Southern dialects.
    In short: let’s not assume that a language is uniform.
  • American English underwent its very own changes, which makes it just as innovative as British English.
  • No living language is static, Leslie, so your argument that American English never changed is severely flawed.

So the next time you encounter some Taylors or Leslies online, you’ll know what to say! And, of course, let’s not forget what the speakers of both British and American English have in common in these discussions – for example, forgetting that these are not the only types of English in the world.


More on this in a future blog post!

Footnotes
1This is actually a direct quote from this forum thread, read at your own risk: https://forums.digitalspy.com/discussion/1818966/is-american-english-in-fact-closer-to-true-english-than-british-english

2A lot of the material used for this post is based on Dr. Claire Cowie’s material for the course LEL2C: English in Time and Space at the University of Edinburgh.


Who told the first lie?

Hello there, faithful followers!

As you may have noticed, we have recently been running a bit of a series, called ‘Lies your English teacher told you’. Our ‘lies’ have included prescriptive ideas such as (1) you should never split an infinitive; (2) you shouldn’t end a sentence with a preposition; and (3) two negatives make a positive (in English). We’ve also taken a look at the ‘lies’ told to those taught English as a second, or foreign, language.

Now, dear friends, we have reached the conclusion of this little series and we will end it with a bang! It’s time, or rather overdue, that the truth behind these little stories be revealed… Today, we will therefore unveil the original ‘villain’, if you will (though, of course, none of them were really villainous, just very determined) and tell you the truth of who told the very first lie.

Starting off, let’s say a few words about a man who is often recognised as the first source of (most of) the grammar lies told by your English teachers: Robert Lowth, a bishop of the Church of England and an Oxford Professor of Poetry.


Bishop Robert Lowth

Lowth is more commonly known as the illustrious author of the extremely influential A Short Introduction to English Grammar, published in 1762. The traditional story goes that Lowth, prompted by the absence of a simple grammar textbook of the English language, set out to remedy the situation by creating a grammar handbook which “established him as the first of a long line of usage commentators who judge the English language in addition to describing it”, according to Wikipedia. As a result, Lowth became the virtual poster-boy (poster-man?) for the rise of prescriptivism, and a fascinating number of prescriptivist ‘rules’ are attributed to Lowth’s pen – including the ‘lies’ mentioned in today’s post. The image of Lowth as a stern bishop with strict ideas about the use of the English language and its grammar may, however, not be well-deserved. So let’s take a look at three ‘rules’ and see who told the first lie.

Let’s start with: you should never split an infinitive. While often attributed to Lowth, this particular ‘rule’ doesn’t gain prominence until some 41 years later, in 1803, when John Comly, in his English Grammar Made Easy to the Teacher and Pupil, notes:

“An adverb should not be placed between a verb of the infinitive mood and the preposition to which governs it; as Patiently to wait — not To patiently wait.”1

A large number of authorities agreed with Comly and, in 1864, Henry Alford popularized the ‘rule’ (although Alford never stated it as such). Though a good number of other authorities, among them Goold Brown, Otto Jespersen, and H. W. Fowler and F. G. Fowler, disagreed with the rule, it was commonplace by 1907, when the Fowler brothers noted:

“The ‘split’ infinitive has taken such hold upon the consciences of journalists that, instead of warning the novice against splitting his infinitives, we must warn him against the curious superstition that the splitting or not splitting makes the difference between a good and a bad writer.” 2

Of course, to split an infinitive is quite common in English today – most famously in Star Trek – and we doubt that most English-speakers would hesitate to boldly go against this 19th-century prescriptivist rule.

Now, let’s deal with our second ‘rule’: don’t end a sentence with a preposition. This neat little idea comes from a rather fanatic conviction that English syntax (sentence structure) should conform to Latin syntax, where the ‘problem’ of ending a sentence with a preposition is a lot less likely to arise due to the morphological complexity of the Latin language. But, of course, English is not Latin.

Still, in 1672, dramatist John Dryden decided to criticize Ben Jonson for placing a preposition at the end of a sentence rather than before the noun/pronoun to which it belonged (see what we did there? We could have said: … the noun/pronoun which it belonged to, but the rule is way too ingrained and we automatically changed it to a style that cannot be deemed anything but overly formal for a blog). Anyway.

The idea stuck, and Lowth’s grammar enforced it. Despite his added note that this fanaticism about Latin was a problem for English, the rule hung around, and the ‘lie’, while certainly not as strictly enforced as it used to be, is still alive and well (but not(!) possible to attribute to Lowth).

Last: two negatives make a positive (in English). First: no, they don’t. Or at least not necessarily. In the history of English, multiple negators in one sentence or clause were common and, no, they do not indicate a positive. Instead, they often emphasize the negative, an effect commonly called emphatic negation or negative concord, and the idea that multiple negators did anything but form emphatic negation didn’t show up until 1762. Recognise the year? Yes, indeed, this particular rule was first observed by Robert Lowth in his grammar book, in which it is stated (as noted in the Oxford Dictionaries Blog):

“Two Negatives in English destroy one another, or are equivalent to an Affirmative.”

So, indeed, this one rule out of three could be attributed to Lowth. However, it is worth noting that Lowth’s original intention with his handbook was not to prescribe rules to the English language: it was to provide his son, who was about to start school, with an easy, accessible aid to his study.

So, why have we been going on and on about Lowth in this post? Well, first, because we feel it is rather unfair to judge Lowth as the poster-boy for prescriptivism when his intentions were nowhere close to regulating the English language, but, more importantly, to tell you, our faithful readers, that history has a tendency to be rewritten over time. Someone whose intentions were something completely different can, 250 years later, become a ‘villain’; a ‘rule’ that is firmly in place today may not have been there 50 years ago (and yes, indeed, sometimes language does change that fast); and, last, any study of historical matter, be it within history, archaeology, anthropology or historical linguistics, must take this into account. We must be aware of this, and apply that awareness to all our studies, readings and conclusions, because a lie told by those who we reckon should know the truth might be well-meaning but, in the end, it is still a lie.

 

Sources and references

Credits to Wikipedia for the picture of Lowth; find it right here

1 This quote is actually taken from Comly’s 1811 book A New Spelling Book, page 192, which you can find here. When it comes to the 1803 edition, we have trusted Merriam-Webster’s usage notes, which you can find here.

2 We’ve used The King’s English, second edition, to confirm this quote, which also occurs on Wikipedia. The book was published in 1908 and this particular quote is found on page 319, or right here.

In regards to ending a sentence with a preposition, our source is the Oxford Dictionaries Blog on the topic, found here.

Regarding the double negative becoming positive, our source remains the Oxford Dictionaries Blog on that particular topic, found here.

Lies your English teacher told you – Second language edition

Hi there! Remember how we go on and on about prescriptivism, and how these weird language norms are stressed in classrooms despite having no basis in how we actually speak?
Well, language attitudes and norms do not only affect native English speakers; they also interfere with the way English is taught as a second language.

If you’ve read my posts about Standardisation and Bad English, you will be familiar with the idea that some varieties of English are perceived to be better than others – standard British English is usually considered particularly desirable. When I started learning English, 15-20 years ago (gulp!), it was still the norm in Swedish schools to teach this variety. This led to some interesting prescriptive teaching: being brought up in Sweden, where foreign-language TV and films are subtitled rather than dubbed, we primary-schoolers were already quite proficient in American English lexicon and expressions. However, we were taught that some of the things we had learned were not correct, for example that we should say flat instead of apartment or trousers instead of pants (although we did not know yet that the latter meant underwear in British English). We were given these British words not to use as an alternative, but to use instead of the American words we already had a comfortable grasp of. This even stretched to pronunciations; instead of pronouncing the weekdays in the, for us, intuitive way, ending with a diphthong, as in Monday (/ˈmʌndeɪ/), we were told to use the now quite archaic RP pronunciation Mondi’ (/ˈmʌndi/).


Some other things taught could be plainly wrong. A friend from Germany was told to not use constructions like “I’ll give you the book” but always use the construction with a preposition “I’ll give the book to you”. This is, of course, bonkers: the first construction is a double object construction, perfectly grammatical and frequently used in English! In fact, double object constructions have been a feature of English going back to the time when nouns still had cases and could go just about anywhere in the sentence.

Another friend from Hong Kong (where English is actually an official language and many are bilingual), recalls being told in English class that you must not use the expression ‘long time no see’ as it is “Chinglish” and therefore not proper. Of course this expression is well established in English, even if its origin is likely to be a mapping of English words onto some Chinese variety1:

好久 = long time
不 = no
见 = see2

This example shows some of the problematic attitudes towards post-colonial English varieties, and how these attitudes can even be internalised by the speakers themselves; the fact that this expression has its origins in Chinese overshadows how fixed the expression is in standard English, so much so that this English teacher wanted their students to distance themselves from it. In general, post-colonial English varieties such as Chinese English or Indian English do not have the same status as, for example, British or Australian English, and this is often due to mere ignorance: linguistic innovations in such varieties are often seen as imperfections, features of foreign accents, because many do not understand that these varieties are spoken as a first language.


Even if American English is much more accepted in Swedish schools today, the idea that one form of English is more appropriate to be taught still remains. Sure, there is a point in teaching one style of English when it comes to formal writing, but that belongs to a much later stage in most people’s English education. Teaching English-learning children that certain forms of English are wrong, even though they’ve heard them used and have already acquired them, might affect their confidence in speaking English – and may have more severe effects on the confidence of those who speak a post-colonial English variety as a first language. As always, prescriptivism disallows variation, and thus makes languages way more boring.

Footnotes

1The expression first appears in American English.

2Thanks Riccardo for providing the Mandarin translation! The mapping works on Cantonese as well, and it is unclear which language is the origin.

Don’t never use no double negatives

Multiple negation? I ain’t never heard nothing about that!

“Two negatives make a positive,” your friend may primly reply to such a statement. Even if you’re not exactly fond of math, you surely remember enough to acknowledge the wisdom and veracity of such sound logic.

But the funny thing about languages? They have a logic all their own, and it doesn’t always play by the same rules as our conscious minds.

Take, for example, this phenomenon of the double negative. Like the other formal, prescriptive rules we’ve been exploring with this series, the distaste for double negatives is relatively new to English.

Back in Old and Middle English (roughly AD 1000-1450), English wasn’t particularly fussed about multiple elements of negation in a sentence. If anything, they were used for emphasis, to drive home the negation. This trick of negatives supporting each other (rather than canceling each other out) is called negative concord. Far from being frowned upon, some languages crave it. Spanish, for example, regularly crams several negation words into a single sentence without a second thought:

¡No toques nada!
‘Don’t touch anything!’

This isn’t merely the preferred method of negation. In languages like Spanish and French, negative concord isn’t for emphasis; it’s mandatory. That’s just how they express negation.

The idea that two negatives grammatically make a positive in English was first recorded in the 1700s along with most of the other prescriptive rules. Unlike the other rules, there is some evidence to suggest that negative concord was naturally beginning to disappear in mainstream varieties of English even before the early grammarians codified the rule. This really isn’t too surprising. Languages like to change, and among the other moving parts they scramble around, they commonly go through phases of double negation (we linguists know this as Jespersen’s Cycle).

Math has naught to do with language, but it’s certainly true that in our Modern English, double negatives have the potential to leave a lot of ambiguity. Do they cancel? Do they intensify each other? It’s all about that context. This is one rule that might be here to stay1 (at least in formal English).

Notes
1 At least for now!

A preposition is not a good word to end a sentence with

Lies your English teacher told you: You can’t end a sentence with a preposition

Hello and welcome to the third episode in our ongoing series on stuff about the English language people in positions of authority misled you into thinking was true! Last time, Lisa showed us why it is perfectly fine (and in some cases, even preferable!) to split an infinitive.

Today, I will tackle a “rule” that’s every bit as well-known as it is routinely disregarded: “you can’t end a sentence with a preposition”.

This rule is interesting, as far as prescriptive rules go, in that it is hardly ever observed in practice. We all end sentences with prepositions, and it’s no use denying it. But don’t worry: the grammar police will not come busting down your door just yet. The reason we do it is that it’s perfectly natural in English, and in many cases even unavoidable!

The process of ending sentences with prepositions is technically known as preposition stranding, or P-stranding, and it is fairly common amongst Germanic languages.

This phenomenon is due to something we in the biz call wh- movement. Let me explain quickly what it is.

When you turn a statement into a question, you unconsciously perform a series of operations that transform that statement. In the case of wh- questions (what?, who?, when? etc.), the steps you follow are these:

  1. Take the statement.
    The boy ate the apple.
  2. Turn the part you want to question into a wh- word.
    The boy ate what?
  3. Move the wh- word to the beginning of the sentence.
    What the boy ate?
  4. For a series of hellishly complicated reasons I won’t go into here, transform the verb into its do-supported form (i.e. with “do”).
    What the boy did eat?
  5. Invert the subject and the verb.
    What did the boy eat?

And Bob’s your uncle! Pretty insane that you do this all the time and don’t even realise it, huh?

The process is basically the same for relative clauses (i.e. “The apple (which) the boy ate”), except without steps 4 and 5 (because it’s not a question), and with an extra step where you copy the “questioned” part to the start of the sentence before turning it into the wh- word. So:

  1. The boy ate the apple.
  2. The apple the boy ate the apple.
  3. The apple the boy ate which.
  4. The apple which the boy ate.

What interests us is what happens when this process takes place in a sentence where the moved object (or constituent, to use the proper lingo) is preceded by a preposition.

  1. The boy went to the cinema with the girl.
  2. The girl the boy went to the cinema with the girl.
  3. The girl the boy went to the cinema with who(m).

And here we hit the point of contention. What should be done on step 4? Until the 18th century, the answer was easy: the most natural option was to move the wh- word and leave the preposition where it is. Stranded, if you like.

  1. The girl who(m) the boy went to the cinema with.

The same applied to questions (“Who(m) did the boy go to the cinema with?”). However, there was a second option, in which the wh- word dragged the preposition along with itself to the start of the sentence or clause, so that step 4 would look like

  1. The girl with who(m) the boy went to the cinema.

This particular construction is technically known as pied-piping, from the German fairy tale “The Pied Piper of Hamelin”, in which a magic piper rid the town of troublesome rats by playing his flute and mesmerising them into following him out. He applied the same procedure later to kidnap all the town’s children to punish the inhabitants for their ingratitude. Talk about overreacting.

This option, while always possible, was seen as rather cumbersome, and therefore dispreferred. Until the 18th century, when a sustained campaign by a number of intellectuals flipped the status of the two constructions in the public consciousness. What happened?

Well, as you might remember from many of our posts about the history of prescriptivism, people in the 18th and 19th centuries displayed an unhealthy obsession with Latin. Since Latin was The Perfect Language™, each and every aspect of the English language that didn’t look like Latin was, of course, wrong and barbaric, and had to be eliminated. I’ll give you one guess as to what Latin didn’t do with its prepositions during wh- movement.

If you guessed “stranding them”, then congratulations! You guessed right.

In Latin (and all the languages which descend from it), only pied-piping is acceptable when applying wh- movement to a sentence with a preposition. Our example sentence in Latin would go like this (cum = with, quā = who(m)):

  1. Puer ad cinematographeum cum puellā īvit.
  2. Puella puer ad cinematographeum cum puellā īvit.
  3. Puella puer ad cinematographeum cum quā īvit.
  4. Puella cum quā puer ad cinematographeum īvit.

Needless to say, the prescriptivist scholars twisted themselves into logic pretzels to justify why this should be true of English as well. Some just openly admitted that it was because English should be similar to Latin; others tried to be clever and argued that a “preposition” is called that because it goes before a word (pre- = before + position), and must have thought themselves exceedingly smart – notwithstanding the fact that the word “preposition” comes from Latin, where P-stranding is impossible, so of course they would call it that.

Some got caught in their own circular reasoning and inevitably found sentences in which preposition stranding is obligatory, giving rise to comically frustrated rants like the following, courtesy of one Philip Withers, from 1789:

“It may be said, it is absolutely unavoidable on particular occasions. v.g. The Stock was disposed OF BY private contract. But an elegant writer would rather vary the phrase, or exchange the verb than admit so awkward a concurrence of prepositions.”

A little tip, kids: if someone tells you they would rather avoid or ignore pieces of data they dislike, or actively tells you to do so, they’re not a scientist. In the case of linguistics, you’ve spotted a prescriptivist! Mark it in your prescriptivist-spotting book and move on.

What of the writers that came before them and regularly stranded prepositions? Robert Lowth (a name you’ll become wearily familiar with by the end of this series) commented that they too were somehow universally speaking bad English, and a guy named John Dryden even went so far as to rewrite some of Shakespeare’s plays to remove some of the unsightly and atrocious “errors” he found in them, preposition stranding included.

Such are the lengths fanaticism goes to.

Stay tuned for next time, when Rebekah will explain to you why a negative plus a negative doesn’t necessarily imply a positive.

 

To boldly split what no one should split: The infinitive.

Lies your English teacher told you: “Never split an infinitive!”

To start off this series of lies in the English classroom, Rebekah told us last week about a common misconception regarding vowel length. With this week’s post, I want to show you that similar misconceptions also apply to the level of something as fundamental as word order.

The title paraphrases what is probably one of the most recognisable examples of prescriptive ungrammaticality – taken from the title sequence of the original Star Trek series, the original sentence is: To boldly go where no man has gone before. In this sentence, to is the infinitive marker which “belongs to” the verb go. But lo! Alas! The intimacy of the infinitive marker and verb is boldly hindered by an intervening adverb: boldly! This, dear readers, is thus a clear example of a split infinitive.

Or rather, “To go boldly”1

Usually an infinitive is split with an adverb, as in to boldly go. This is one of the more recognisable prescriptive rules we learn in the classroom, but the fact is that in natural speech, and in writing, we split our infinitives all the time! There are even chapters in syntax textbooks dedicated to explaining how this works in English (it’s not straightforward though, so we’ll stay away from it for now).

In fact, sometimes not splitting the infinitive leads to serious changes in meaning. Consider the examples below, where the infinitive marker is underlined, the verb it belongs to is in bold and the adverb is in italics:

(a) Mary told John calmly to leave the room

(b) Mary told John to leave the room(,) calmly

(c) Mary told John to calmly leave the room

Say I want to construct a sentence in which Mary, in any manner, calm or aggressive, tells John to leave the room, but to do so in a calm manner. My two options to do this without splitting the infinitive are (a) and (b). However, (a) expresses more strongly that Mary was doing the telling in a calm way. (b) is ambiguous in writing, even if we add a comma (although a little less ambiguous without the comma, or what do you think?). The only example which completely unambiguously gives us the meaning of Mary asking John to do the leaving in a calm manner is (c), i.e. the example with the split infinitive.

This confusion in meaning, caused by not splitting infinitives, becomes even more apparent depending on what adverbs we use; negation is notorious for altering meaning depending on where we place it. Consider this article title: How not to raise a rapist2. Does the article describe bad methods in raising rapists? If we split the infinitive we get How to not raise a rapist and the meaning is much clearer – we do not want to raise rapists at all, not even using good rapist-raising methods. Based on the contents of the article, I think a split infinitive in the title would have been more appropriate.

So you see, splitting the infinitive is not only commonly done in the English language, but also sometimes actually necessary to truly get our meaning across. Although, even when it’s not necessary for the meaning, as in to boldly go, we do it anyway. Thus, the persistence of anti-infinitive-splitting smells like prescriptivism to me. In fact, this particular classroom lie seems like it’s being slowly accepted for what it is (a lie), and current English language grammars don’t generally object to it. The biggest problem today seems to be that some people feel very strongly about it. The Economist’s style guide phrases the problem eloquently3:

“Happy the man who has never been told that it is wrong to split an infinitive: the ban is pointless. Unfortunately, to see it broken is so annoying to so many people that you should observe it.”

We will continue this little series of classroom lies in two weeks. Until then, start to slowly notice split infinitives around you until you start to actually go mad.

Footnotes

1 I’ve desperately searched the internet for an original source for this comic but, unfortunately, I was unsuccessful. If anyone knows it, do let me know and I will reference appropriately.

2 This very appropriate example came to my attention through the lecture slides presented by Prof. Nik Gisborne for the course LEL1A at the University of Edinburgh.

3 This quote is frequently cited in relation to the split infinitive; you can read more about their stance on the matter in this amusing post: https://www.economist.com/johnson/2012/03/30/gotta-split

Standardisation of languages – life or death?

Hello and happy summer! (And happy winter to those of you in the Southern Hemisphere!)

In previous posts we’ve thrown around the term ‘standard’, as in Standard English, but we haven’t really gone into what that means. It may seem intuitive to some, but this is actually quite a technical term that is earned through a lengthy process and, as is often the case, it is not awarded easily or to just any variety of a language. Today, I will briefly describe the process of standardising a variety and give you a few thoughts for discussion1. I want to stress that though we will discuss the question, I don’t necessarily think we need to find an answer to whether standardisation is “good” or “bad” – I don’t think either conclusion would be very productive. Still, it’s always good to tug a little bit at the tight boundaries we often put around the thought space reserved for linguistic concepts.

The language bohemian, at it again.

There are four processes usually involved in the standardisation of a language: selection, elaboration, codification, and acceptance.

Selection

It sure doesn’t start easy. Selection is arguably the most controversial of the processes, as this is the step that involves choosing which varieties and forms the standard will be based on. Often in history we find a standard being selected from a prestigious variety, such as the one spoken by the nobility. In modern times this is less comme il faut, as the nobility no longer have a monopoly on literacy and wider communication (thankfully). This can make selection even trickier, though: as the choice of a standard variety becomes more open, there is a higher need for sensitivity regarding who is represented by that standard and who isn’t. Selection may still favour an elite group of speakers, even if they may no longer be as clear-cut as a noble class. For example, a standard is often based on the variety spoken in the capital, or the cultural centre, of a nation. The selection of standard forms entails non-selection of others, and these forms are then easily perceived as worse, which affects the speakers of these non-standard forms negatively – this particularly becomes an issue when the standard is selected from a prestigious variety.

In my post about Scots, I briefly mentioned the problem of selection we would face in a standardisation of Scots as a variety which has great variation both within individual speakers and among different speakers (e.g. in terms of lects). Battling this same tricky problem, Standard Basque was mostly constructed from three Basque varieties, mixed with features of others. This standard was initially used mainly by the media and in formal writing, with no “real” speakers. However, as more and more previously non-Basque-speaking people in the Basque country started to learn the language, they acquired the standard variety, with the result that this group and their children now speak a variety of Basque which is very similar to the standard.

Elaboration

Standardisation isn’t all a minefield of prestige, though. A quite fun and creative process of standardisation is elaboration, which involves expanding the language to be appropriate for use in all necessary contexts. This can be done by either adapting or adopting words from other varieties (i.e. other languages or nonstandard lects), by constructing new words using tools (like morphology) from within the variety that’s becoming a standard, or by looking into archaic words from the history of the variety and putting them back into use.

When French was losing its prestige in medieval England, influenced no doubt by the Hundred Years’ War, an effort was initiated to elaborate English. During the Norman Conquest, French had become the language used for formal purposes in England, while English survived as spoken by the common people. This elaboration a few hundred years later involved heavy borrowing of words from French (e.g. ‘government’ and ‘royal’) for use in legal, political, and royal contexts (and from Latin, mainly in medical contexts) – the result was that English could now be used in those situations it previously didn’t have appropriate words for (or where such words had not been in use for centuries)2.


Codification

Once selection and elaboration have (mostly) taken place, the process of codification cements the selected standard forms, through, for example, the compilation of dictionaries and grammars. This does not always involve pronunciation, although it can, as it famously does in the British Received Pronunciation (usually just called RP), a modern form of which is still encouraged for use by teachers and members of other public professions. Codification is the process that ultimately establishes what is correct and what isn’t within the standard – this makes codification the sword of the prescriptivist, meaning that codification is used to argue what the right way to use the language is (y’all know by now what the HLC thinks of prescriptivism).

When forms are codified they are not easily changed, which is why we still see some bizarre spellings in English today. But codification is not only limiting (as the spelling example might suggest) – there is an obvious benefit for communication if we all spell certain things the same way, or don’t vary our word choices too much for the same thing or concept. Another benefit, and a big one at that, is that codified varieties are perceived as more real, and this is very important for speakers’ sense of value and identity.


Codification does not a standard make – most of you will know that many varieties have dictionaries without having a standard, Scots being one example. Urban Dictionary is another very good example of codification of non-standard forms.

Acceptance

The final process is surely the lengthiest and perhaps the most difficult to achieve: acceptance. It is crucial that a standard variety receives recognition as such, more especially by officials or other influential speakers but also by the general public. Speakers need to see that there is a use for the standard and that there is a benefit to using it (such as benefiting in social standing or in a career). Generally though, people don’t respond very well to being prescribed language norms, which we have discussed previously, so when standard forms have been selected and codified it does not necessarily lead to people using these forms in their speech (as was initially the case with Standard Basque). Further, if the selection process is done without sensitivity, some groups may feel they have no connection to the standard, sometimes for social or political reasons, and may actively choose to not use it. Again, we find that a sense of identity is significant to us when it comes to language; it is important for us to feel represented by our standard variety.

What’s the use?

Ideally, a standard language could be seen as a way to promote communication within a nation or across several nations. Despite the different varieties of Arabic, for example, Arabic speakers are able to switch to a standard when communicating with each other, even if they are from different countries far apart. Likewise, a Scottish person can use Standard English when talking to someone from Australia, while if the same speakers switched back to their local English (or Scots) varieties, they wouldn’t necessarily understand each other. Standardisation certainly also eases communication within a country, and a shared standard variety can provide a sense of shared nationality and culture. There is definitely a point in having a written standard, used for our laws, education, politics, and other official purposes, which is accessible to everyone. On the other side of this, however, we find a counterforce: speaker communities who want to preserve their lects and actively oppose using a standard they can’t identify with.

So, a thought for discussion I want to leave with you today: Do you think the process of standardisation essentially kills language, or does it keep it alive? An argument for the first point is that standardisation limits variation3 – this means that when a standard has been established and accepted, the varieties of that standard will naturally start pulling towards the standard as its prestige and use increase. However, standardising is also a way to officially recognise minority varieties, which gives speakers an incentive to keep their language alive. It is also a way to ease understanding between speakers (as explained earlier), and in some cases (like Basque), standardisation gives birth to a new variety acquired as a first language. As I said from the start, maybe we won’t find an answer to this, and maybe we shouldn’t, but it’s worth thinking about these matters in a more critical way.

Footnotes

1 I’ve used the contents of several courses, lectures, and readings as sources for this post. The four processes of standardisation are credited to Haugen (1966): ‘Dialect, language, nation’.

2 In fact, a large bulk of French borrowings into English comes from this elaboration, rather than from language contact during the Norman Conquest.

3 On a very HLC note, historical standardisation makes research into dialectal variation and language change quite difficult. The standard written form of Old English is based on the West Saxon variety, and there are far fewer documents to be found written in Northumbrian, which was a quite different variety and has played a huge part in the development of the English we know today.

 

Phonaesthetics, or “The Phrenology of Language”

 

Stop me if you’ve heard this before: French is a beautiful, romantic language; Italian sounds like music; Spanish is passionate and primal; Japanese is aggressive; Polish is melancholic; and German is a guttural, ugly, unpronounceable mess (Ha! Tricked you! You couldn’t stop me because I’ve written all of this down way before now. Your cries and frantic gesticulations were for naught.)

We’ve all heard these judgements (and many others) repeated multiple times over the course of our lives; not only in idle conversation, but also in movies, books, comics, and other popular media. There’s even a series of memes dedicated to mocking how German sounds in relation to other European languages:

“Ich liebe dich” is a perfectly fine and non-threatening way of expressing affection towards another human being

What you might not know is that this phenomenon has a technical name in linguistics: phonaesthetics.[1]

Phonaesthetics, in short, is the hypothesis that languages are objectively more or less beautiful or pleasant depending on various parameters, such as vowel to consonant ratio, presence or absence of certain sounds etc., and, not to put too fine a point on it, it’s a gigantic mountain of male bovine excrement.

Pictured: phonaesthetics

Let me explain why:

A bit of history

Like so many other terrible ideas, phonaesthetics goes way back in human history. In fact, it may have been with us since the very beginning.

The ancient Greeks, for example, deemed their language the most perfect and beautiful and thought all other languages ugly and ungainly. To them, these foreign languages all sounded like strings of unpleasant sounds: a mocking imitation of how they sounded to the Greeks, “barbarbarbar”, is where we got our word “barbarian” from.

In the raging (…ly racist) 19th century, phonaesthetics took off as a way to justify the rampant prejudice white Europeans had against all ethnicities different from their own.

The European elite of the time arbitrarily decided that Latin was the most beautiful language that ever existed, and that the aesthetics of all languages would be measured against it. That’s why Romance languages such as Italian or French, which descended from Latin[2], are still considered particularly beautiful.

Thanks to this convenient measuring stick, European languages were painted as euphonious (‘pleasant-sounding’), splendid monuments of linguistic accomplishment, while extra-European languages were invariably described as cacophonous (‘unpleasant-sounding’), barely understandable masses of noise. This period is when the common prejudice that Arabic is a harsh and unpleasant language arose, a prejudice that is easily dispelled once you hear a mu’adhin chant passages from the Qur’an from the top of a minaret.

Another tool in the racist’s toolbox, very similar to phonaesthetics and invented right around the turn of the 19th century, was phrenology, or racial biology, the pseudoscience which claimed to be able to discern a person’s intelligence and personality from the shape of their head. To the surprise of no one, intelligence, grace and other positive characteristics were all associated with the typical form of a European white male skull, while all other shapes indicated shortcomings in various neurological functions. What a pleasant surprise that must have been for the European white male inventors of this technique![3] Phrenology was eventually abandoned and widely condemned, but phonaesthetics, unfortunately, wasn’t, and it’s amazingly prevalent even today.

To see how prevalent this century-old model of linguistic beauty is in popular culture, a very good example are Tolkien’s invented languages. For all their amazing virtues, Tolkien’s novels are not exactly known for featuring particularly nuanced moral actors: the good guys might have some (usually redeemable) flaws, but the bad guys are just bad, period.

Here’s a brief passage in Quenya, the noblest of all Elven languages:

Ai! Laurië lantar lassi súrinen,

Yéni únótimë ve rámar aldaron!

Yéni ve lintë yuldar avánier

Mi oromardi lissë-miruvóreva

[…]

Notice the high vowel-to-consonant ratio, the prevalence of liquid (“l”, “r”), fricative (“s”, “v”) and nasal (“n”, “m”) sounds, all characteristic of Latinate languages.

Now, here’s a passage in the language of the Orcs:

Ash nazg durbatulûk, ash nazg gimbatul

Ash nazg thrakatulûk, agh burzum-ishi krimpatul

See any differences? The vowel-to-consonant ratio is almost reversed, and most syllables end with a consonant. Also, notice the rather un-Latinate consonant combinations (“zg”, “thr”), and the predominance of stops (“d”, “g”, “b”, “k”). It is likely that you never thought about what makes Elvish so “beautiful” and “melodious”, and Orcish (or Klingon, for that matter), so harsh and unpleasant: these prejudices are so deeply ingrained that we don’t even notice they’re present.
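
If you feel like checking the numbers rather than taking my word for it, here is a minimal sketch (mine, not Tolkien’s, and not part of the original argument) that counts vowel and consonant letters in the two passages quoted above. The vowel inventory is an assumption – it treats ‘y’ and the accented letters as vowels – so take the output as a rough illustration rather than a real phonological analysis.

  # Rough vowel-to-consonant ratio check for the Quenya and Orcish passages.
  # The vowel set is a simplifying assumption, not a claim about either language.
  VOWELS = set("aeiouyáéíóúâêîôûëäïöü")

  def vowel_consonant_ratio(text: str) -> float:
      """Return the ratio of vowel letters to consonant letters in `text`."""
      letters = [ch for ch in text.lower() if ch.isalpha()]
      vowels = sum(1 for ch in letters if ch in VOWELS)
      consonants = len(letters) - vowels
      return vowels / consonants if consonants else float("inf")

  quenya = "Ai! Laurië lantar lassi súrinen, yéni únótimë ve rámar aldaron!"
  orcish = ("Ash nazg durbatulûk, ash nazg gimbatul, "
            "ash nazg thrakatulûk, agh burzum-ishi krimpatul")

  print(f"Quenya: {vowel_consonant_ratio(quenya):.2f}")  # roughly one vowel per consonant
  print(f"Orcish: {vowel_consonant_ratio(orcish):.2f}")  # roughly one vowel per two consonants

Whatever the exact figures, nothing about them makes one string of sounds objectively prettier than the other – which is rather the point of the next section.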

So why is phonaesthetics “wrong”?

Well, the reason is actually very simple: beauty is subjective and cannot be scientifically defined. As they say, beauty is in the eye of the beholder.

Not this beholder.
Image copyright: Wizards of the Coast

What one finds “beautiful” is subject to change both in space and in time. If you think German’s relatively low vowel-to-consonant ratio is “harsh”, then you have yet to meet Nuxálk.

Welcome to phonaesthetic hell.

Speaking of German, it is actually a very good example of how these supposedly “objective” and “common sense” criteria of phonetic beauty can change with time, sometimes even abruptly. You see, in the 19th century, German was considered a very beautiful language, on par with Italian or French. A wealth of amazing prose and poetry was written in it: it was probably the main language of Romantic literature. It was also the second language of opera, after Italian, and was routinely described as melodious, elegant and logical.

Then the Nazis came.

Nazis: always ruining everything.

Suddenly, Germans were the bad guys. Their language, no longer seen as a pillar of European intellectual culture, was painted as harsh, aggressive, unfriendly and cold, and every Hollywood villain and mad scientist acquired a German accent.

So, what’s the takeaway from this long and rambling rant?

No language is more, or less, beautiful than any other language. All languages have literature, poetry, song and various other ways to beautifully use their sounds for artistic purposes, and the idea that some are better at this than others is a relic from a prejudiced era better left behind. So next time you feel tempted to mock German for how harsh and unpleasant it sounds, stop and consider that this may not actually be your own judgement, and that you’ve been programmed by a century of social prejudice into thinking so.

And read some Goethe, you’ll like it.

Stay tuned for next week, when the amazing Rebekah will take you on the third leg of our lightning trip through Phonphon!

  1. Phonaesthetics also has a different meaning, which is the study of how certain combinations of sounds evoke specific meanings in a given language. Although this form of phonaesthetics has its problems, too, it is not what I’m talking about in this post, so keep that in mind as we go forward.
  2. See our post on language families here.
  3. First the men assumed that the female skull was smaller than the male, and this was obviously a sign of their inferior intelligence. Later, however, they found that the female skull was larger, so they came up with the idea that this meant females were closer to children, and thus the male was still more intelligent! – Lisa

The myth of language decay: Do youths really not know how to speak?

Hi everyone!

My name is Sabina, I’m 28 years old, from rainy Gothenburg, Sweden (unlike Riccardo from sunny Bologna). Why am I here? Well, to talk about linguistics, obviously! Specifically, I’ll be talking about a persistent and prevalent language myth: the myth of language decay.

This is the idea that modern forms of language are somehow steadily getting “worse” in comparison to previous stages of the language. The thought that there was, somewhere, somehow, a “golden age” of the language, after which it became unstructured, uninformative or just plain “bad”. This idea is a form of prescriptivism, as described by Riccardo in last week’s post, and perhaps the most widespread one at that.

You might think that this is not as common a myth as I say, but consider: have you ever heard someone claim that “young people” don’t know how to write? How to talk “properly”? Maybe even how to read? These are, indeed, examples of this myth.

However, is it true? Do young people really not know how to write/speak/read their native tongue? Of course not, they just do it in a different way.

The myth of language decay is intimately connected to the phenomenon known as language change. Now, language change is often described by linguists as a necessary, vital and continuous part of the language’s development and survival. Just imagine if we spoke English the same way as in the Middle Ages, or even as in Shakespeare’s time! English today is certainly different from back then, but it is in no way worse. Think about it, would you really want everyone to speak like Shakespeare did? Or Chaucer? Or perhaps as in Beowulf?

It is interesting to note, however, that the idea of language decay rarely touches the history of the language. Chaucer and Shakespeare lived approximately 200 years apart, yet no one is really claiming that Chaucer’s English was “bad” in comparison to Shakespeare’s, are they? (As a matter of fact, Chaucer has earned himself the nickname “Father of English literature”, so it really can’t be, can it?)

Let’s take a more recent example: Charles Dickens (1812-1870) to J.R.R. Tolkien (1892-1973) to George R.R. Martin (1948-). Now, if you sit down and read through the works of these three authors, all of whom have been hailed for their writing skills, you will probably notice a rather distinct difference in not only style, but perhaps also in lexicon and grammar. Yet no one is arguing that Dickens and Tolkien didn’t know how to write, do they?

But guess what? Someone probably did when Tolkien started writing! Someone probably did when Martin started out. Someone probably even said it about Dickens, Austen, Woolf, Brontë, Shakespeare, Chaucer, etc, etc.

In fact, people have been complaining about language “decay” for a long, long time, specifically since the time of Sumerian, a language spoken in the region of Sumer in ancient Mesopotamia. Now, you might be thinking: “Sabina, surely you’re exaggerating things just a bit?”.

I am not.

Sumerian is the first language from which there is surviving written material1 and in 1976, a researcher named Lloyd-Jones2 published a piece of work detailing inscriptions made on clay tablets. Among other things, these contained an agonized complaint made by a senior scribe regarding the junior scribes’ sudden drop in writing ability.

Basically: “Young people can’t write properly!”.

Consider that for a second. People have been complaining about supposed language decay for, literally, as long as we have evidence of written language.

Given this, you can imagine that people tend to have a strong reaction to language “decay”. Consider the case of Jean Aitchison, an Emeritus Professor of language and communication at the University of Oxford. In 1996, Professor Aitchison participated in the BBC Reith Lectures, a series of annual radio lectures given by leading figures of a particular field. Professor Aitchison lectured on the naturalness of language change, stating that there was nothing to worry about.

The result of this? Professor Aitchison received hostile letters to her home. Consider that for just a second: people took the trouble of sitting down, writing a threat, posting it, and waiting for the post to reach her, just to get their sense of accomplishment.3 That’s a pretty good indication of how strongly some people feel about this.

So, why are we reacting that way?

Well, we spend year upon year, in school, in newspapers, even in social media (with its “grammar Nazi” phenomenon), teaching people that there is a “correct” way of using language. We work hard to achieve this standard. Think of it as learning how to ride a bike. All your life, you’ve been told that you should sit on the bike in a certain way. It’s very uncomfortable, but you work and work and work to apply the right technique. When you’ve finally mastered the skill (and are feeling quite proud of yourself), someone comes along and tells you that you can sit on the bike any way you want. Risk of you lashing out? Probably at least somewhat high.

But see, the thing is that, when it comes to language, there really is no “correct way”. Take the word “irregardless” for example. Many immediately get this kind of stone-faced expression and thunderously proclaim that there is no such word. But actually, there is. It’s a non-standard dialectal variant, used with a specific meaning and in specific contexts (in this particular case, irregardless is a way to shut a conversation down after already having said “regardless” in those varieties4, isn’t that interesting?).

But people think that there is somehow something “wrong” with this word, and those who use it (or other non-standard forms) will often be judged as speaking “bad English”, throwing more fuel on the fire for the myth of language decay. Especially since the older generations, for example, may retain their ideas about what is “correct” usage, while younger generations may have a different idea about what is “correct” and use the language in a different way.

So, what’s my point with all this? Well, my point is that the moment that a word from a non-standard dialect makes its way into the standard language, it’s going to raise some discussion about the “decay” of the language. This is really particularly true of the younger generations today who actually introduced a whole new form of language into their standard vocabulary: internet and/or texting slang!

This is fascinating! We’re introducing a new form of language! But… When young people start using, I don’t know, “brb”, “afk”, “lol”, etc. in their everyday speech, other people may condemn this as “lazy, uneducated, wrong”, etc., etc. and the myth of language decay rejuvenates.

But the thing is that languages change to match the times in which they exist. It may change due to political readjustments that have occurred or to reflect the different attitudes of the people. And sometimes, we can’t point to anything that made the language change – it simply did. Regardless, the language reflects its time, not a glorified past. And that is a good thing.

Unless, of course, you would perhaps prefer to remove most -ed past tense endings, especially on originally strong verbs, and go back to the good old days of ablaut (that is, vowel gradation carrying grammatical information, e.g. sing, sang, sung)? Or perhaps lower all your vowels again and skip the diphthongs? Or perhaps… yeah, you see where I’m going with this.

No? Didn’t think so. In that case, let’s celebrate the changes, both historical and current, without accusing them of somehow making the language worse.

Because, truly, the only difference between the changes that made the language into the “glorious standard” of yesteryear and the changes that are happening now is time.

Tune in to Rebekah’s post next week where she will explain the different periods of English and make it clear why Shakespeare did not write in Old English!

Bibliography

1 Check out the 5 oldest written languages recorded here.

2 Lloyd-Jones, Richard. 1976. “Is writing worse nowadays?”. University of Iowa Spectator, April 1976. Quoted in Daniels, Harvey. 1983. Famous last words: The American language crisis revisited. Carbondale, IL: Southern Illinois University Press. p. 33.

3 Aitchison, Jean. 1997. The Language Web. Cambridge: Cambridge University Press.

4 Check out Kory Stamper, a lexicographer for Merriam-Webster, explaining “irregardless” here.

Introduction to the blog and some words on Descriptivism

Hello everyone! Welcome to our shiny new blog! My name is Riccardo, I’m 25 years old, from Bologna, Italy (homeland of good food and jumping moustached plumbers) and I’m here to talk about linguistics. Well, we all are, really. That’s why we’re the Historical Linguist Channel™!

So, “what is a linguist?” I hear you ask through my finely-honed sense for lingering doubts. Well, a linguist is someone who studies language, duh. What’s that? You want more detail? I can understand that. After all, few academic fields are as misunderstood by the general public as the field of linguistics. People might think that the Earth is flat, or that aspirin turns frogs into handsome, muscular princes (or was it kisses?), but at least they know what an astronomer or a doctor is and what they do. No such luck for linguists, I’m afraid. Misconceptions about what we do and absurdly wrong notions about what we study are rife even within the academic community itself. We’re here to dispel those misconceptions.

In the series of articles that follows, each of us will debunk one myth or misconception which he or she (mostly she) finds particularly pernicious and wants out of the way immediately before we even start regularly updating the blog’s content. In this introductory article, I will explain the most fundamental source of myths and misconceptions about linguistics there is: the difference between descriptive and prescriptive linguistics.

But first, let me begin with an unfortunately not-so-exaggerated portrayal of the popular perception of linguists: the Movie Linguist.

Scene: an unexplored Mayan ruin, deep in the jungles of Central America. Three explorers cautiously walk in a dark hallway, torches blazing over their heads. Philip, the dashing young adventurer, leads forward, cutting the vines that grow in the ancient corridors with his machete. He is followed by Beatrice, a beautiful young woman he naturally will end up kissing towards the end of the movie. Trailing behind them is a bespectacled, nervous man, awkwardly trying to hold onto a ream of papers and charts. He is Nigel, the linguist. Suddenly, they break into an enormous room. The group leader raises his torch with a sweeping motion. The music swells: the walls of the chamber are covered with inscriptions.

Philip: My God… look at this.

Beatrice: What is it?

Philip: Look at the inscriptions on the walls.

Beatrice: [gasps] Could it really be…?

Philip: Egyptian hieroglyphs… in a Mayan pyramid!!

Beatrice: But it’s impossible! How could they have arrived here?

Philip: I don’t know. Nigel! You’ve got to see this.

Nigel enters the chamber, and immediately drops his papers in astonishment.

Nigel: I- it’s incredible! The theories of Professor McSweeney on cultural cross-pollination were true!

Beatrice: Can you read it?

Nigel: Well, given the nature of the expedition, I was presumably hired for my expertise in Meso-American languages. Fortunately, I am a Linguist™, and that means I can read every language ever spoken by every human being that ever lived.

Nigel kneels next to the closest inscription. He thoughtfully adjusts his glasses.

Nigel: Hmmm… I recognise this. It’s an obscure dialect of Middle Egyptian spoken in a village exactly 7.6 km due east of Thebes in the year 1575 BC. I can tell just by superficially looking at it.

Philip: What does it say?

Nigel: Unfortunately, this dialect is so obscure that it wasn’t covered in the 72 years of back-breaking grad school every linguist must undergo to learn every language ever spoken. I will need time to decipher it.

Beatrice: How much time? This place gives me the creeps.

Nigel: Just a few hours, and I will do it with no help from any dictionary, reference grammar or corpus of similar dialects to which I could compare it. After I decipher it, I will, of course, be able to read, write, and speak it natively with no doubt or hesitation whatsoever.

A skittering sound echoes in one of the hallways.

Philip: Be quick about it. I have a feeling we’re not alone…

In the end, it turns out the inscriptions on the wall warn intruders that an ancient Egyptian god slumbers in the tomb and that he will not be appeased by anything except fat-free, low-calorie double bacon cheeseburgers which taste as delicious as their horribly unhealthy counterparts, which is, of course, a dream far beyond the reach of our puny human science. A thrilling battle with the minions of this god ensues, until the explorers come face-to-face with the burger-hungry divinity himself. They manage to escape his clutches thanks to Nigel, who now speaks the Middle Egyptian dialect so well that he manages to embarrass the god by pointing out that he ended a sentence with a preposition.

Somewhere along the way, Philip and Beatrice kiss.

Our objective here at the Historical Linguist Channel is to bring your image of linguists and linguistics as far as possible from the one I just painted above. Said image is unfortunately very prevalent in the public’s consciousness, a state of affairs which makes linguistics possibly one of the most misunderstood academic disciplines out there.

So, without further ado, I will get into the meat of my own post: the distinction between descriptive and prescriptive linguistics.

What is descriptivism?

Most people know at least some basic notions about many sciences: most of us know that matter in the universe is made of atoms, that atoms bond together to form molecules, and so on. Most people know about gravity, planets and stars.

Yet, remarkably few people, even amongst so-called “language enthusiasts”, know the most basic fact about linguistics: that it is a descriptive, and not a prescriptive, discipline.

What does it mean to be a descriptive discipline? As the name suggests, a descriptive discipline concerns itself with observing and describing a phenomenon, making no judgements about it. For a descriptive science, there are no superior or inferior facts. Facts are just facts. A planet that goes around its star once every 365 days is not any better or worse than one which takes, say, 220. As an academic science, linguistics merely concerns itself with studying language in all its forms and variety, without ascribing more correctness or value to some forms than to others. To a linguist, “I ain’t done nuffin’ copper!” is as good an English sentence as “The crime of which you regretfully accuse me has not taken place by my hand, and I resent the implication, good sir!”

Now, you might be thinking: Riccardo, doesn’t every scientific discipline work that way? To which I answer: yes, yes they do. Linguistics, however, is slightly different from pretty much all other scientific disciplines (with the possible exception of sociology and perhaps a few others) in that, for most of its early history, it was a prescriptive discipline.

A prescriptive discipline is basically just the opposite of what I just described. Prescriptive disciplines judge some forms of what they study to be better or “correct”, and others to be “wrong” or inferior. Sound familiar? That’s probably because it’s how most people approach the study of language. Since the dawn of civilisation, language has been seen as something to be tightly controlled, of which one and only one form was the “right” and “correct” one, all others being corruptions that needed to be stamped out. Another very prevalent prescriptive idea is that language is decaying, that young people are befouling the language of their parents, transforming it into a lazy mockery of its former glory, but that’s a story for another post.

Prescriptive linguistics is concerned with formulating and imposing a series of rules that determine which form of a language is correct and which forms are not (in Humean terms, descriptivism is concerned with “is”, prescriptivism is concerned with “ought”. And you thought this wasn’t going to be an exquisitely intellectual blog).

In general, if you ask most people on the street to cite a “rule of grammar” to you, they will come up with a prescriptive rule. We’ve all heard many: “don’t end a sentence with a preposition”, “it’s you and I, not you and me”, “a double negative makes a positive”, the list goes on.

If you ask a linguist, on the other hand, you’ll get descriptive rules, such as “English generally places its modifiers before the head of the phrase” or “English inflects its verbs for both tense and aspect”.

A very useful way to think about the difference between a descriptive and a prescriptive rule is to compare it to the difference between physical laws and traffic laws. A physical law is a fact. It can’t be broken: it simply is. I can no more contravene the law of gravity than I can purposefully will my own heart to beat in rhythm to Beethoven. But I can contravene traffic laws: I am absolutely physically capable of driving against the flow of traffic, of running a red light, or of not switching on my headlights during poor visibility conditions.

In general, if a rule says that I shouldn’t do something, that means that I am capable of doing it. Even more damningly, if someone felt the need to specify that something should not be done, it means that someone has been doing it. So, completing the analogy, the paradoxical reason you hear your teacher say that you can’t end a sentence with a preposition in English is that you CAN end a sentence with a preposition in English. In fact, it is far more common than the so-called “correct” way.

What you will never hear is an English teacher specifically instructing you not to decline an English noun in the locative case. Why? Because English has no locative case. It lost it in its rebellious youth, when it went by the name of Proto-Germanic and it had just split from Indo-European because that’s what all the cool kids were doing. Finnish, which is not an Indo-European language, is a proper hoarder: it has no less than six locative cases.

Academic linguistics is exclusively concerned with the “physical laws” of language, the fundamental rules that determine how each language differs from all others. It takes no interest in offering value-judgements. Which is why a linguist is the last person you should ask about whether something you said is “good grammar” or not, incidentally.

So, are descriptivism and prescriptivism radically and fundamentally opposed?

Well, yes and no.

A limited form of prescriptivism has its uses: since languages are not uniform and vary wildly even over relatively short geographical distances, it is very important for a country to have a standardised form of the language, taught in school and regulated so that it doesn’t veer too much in any particular direction. This makes communication easy between inhabitants of the country, and allows bureaucratic, governmental and scientific communication to happen with the greatest amount of efficiency.

The problem with prescriptivism is that it is very easily misused. It is a frighteningly short step from establishing a standard form of a language to ease communication between people in the same nation, to defining all varieties of the language which do not correspond to this standard as debased trash worthy only of stamping out, and any speakers of those varieties as uneducated churls or, worse, traitors and villains. For centuries, some languages (such as Latin) have been touted as “logical”, “superior”, the pinnacle of human thought, while other languages (mainly the languages of indigenous peoples in places conquered by Western colonialists, surprise surprise) have been reviled as “primitive”, incapable of complex expression on the level of European languages.

Linguistic discrimination is a woefully widespread and tragically underreported phenomenon which is rife even in otherwise socially progressive countries. In my native Italy, more than 20 local languages are spoken across the country, some as different from Italian as French is. Yet, if you ask most people, even cultured ones, the only language spoken in Italy is Italian (the standardised form based on the language of Florence). All the other local languages are reduced to the status of “dialects”, often reviled as markers of a lack of education or of provinciality, and described as less “rich” than Italian, or even as ugly and vulgar. The Italian state doesn’t even recognise them as separate languages.

Even comparatively minor variation is a target for surprisingly virulent hate: one need only think of the droves of people foaming at the mouth at the mere thought of English spoken with the intonation pattern known as “uptalk”, characteristic of some urban areas in the USA and Australia.

Be descriptive!

So, what’s the takeaway from this disjointed ramble of mine?

Simple: linguistics is the scientific study of language, and sees all forms of language as equally fascinating and worthy of study and preservation.

In our posts and our podcasts you will never hear us ranting about “bad grammar”, or describing certain languages as superior or inferior to others. Our mission is to share with you the wonder and joy of the immense variety inherent in human language.

Along the way, you’ll discover languages in which double negatives are not only accepted, but encouraged; languages in which sentences MUST end with a preposition when the need arises; languages with a baffling number of cases, baroque verb systems, and grammatical categories you haven’t even heard of.

We hope you’ll enjoy it as much as we do.

Tune in next Thursday for the next introductory post on the thorny question of language evolution, where Sabina will set the record straight: are youths these days ruining language?

Bibliography

Most introductory linguistics textbooks begin with a section on descriptivism, but if you want something free and online, the introductory section for The Syntax of Natural Language by Beatrice Santorini and Anthony Kroch is thorough and full of examples. You can find it here: http://www.ling.upenn.edu/~beatrice/syntax-textbook/