Who told the first lie?

Hello there, faithful followers!

As you may have noticed, we have recently been running a bit of a series called ‘Lies your English teacher told you’. Our ‘lies’ have included prescriptive ideas such as: (1) you should never split an infinitive; (2) you shouldn’t end a sentence with a preposition; and (3) two negatives make a positive (in English). We’ve also taken a look at the ‘lies’ told to those taught English as a second, or foreign, language.

Now, dear friends, we have reached the conclusion of this little series and we will end it with a bang! It’s time, or rather overdue, that the truth behind these little stories be unveiled… Today, we will therefore unveil the original ‘villain’, if you will (though, of course, none of them were really villainous, just very determined) and tell you the truth of who told the very first lie.

Starting off, let’s say a few words about a man who is often identified as the original source of (most of) the grammar lies told by your English teachers: Robert Lowth, a bishop of the Church of England and Oxford Professor of Poetry.


Bishop Robert Lowth

Lowth is more commonly known as the illustrious author of the extremely influential A Short Introduction to English Grammar, published in 1762. The traditional story goes that Lowth, prompted by the absence of a simple grammar textbook of the English language, set out to remedy the situation by creating a grammar handbook which “established him as the first of a long line of usage commentators who judge the English language in addition to describing it”, according to Wikipedia. As a result, Lowth became the virtual poster-boy (poster-man?) for the rise of prescriptivism, and a fascinating number of prescriptivist ‘rules’ have been attributed to Lowth’s pen – including the ‘lies’ mentioned in today’s post. The image of Lowth as a stern bishop with strict ideas about the use of the English language and its grammar may, however, not be deserved. So let’s take a look at three ‘rules’ and see who told the first lie.

Let’s start with: you should never split an infinitive. While often attributed to Lowth, this particular ‘rule’ doesn’t gain prominence until 41 years after Lowth’s grammar, in 1803, when John Comly, in his English Grammar Made Easy to the Teacher and Pupil, notes:

“An adverb should not be placed between a verb of the infinitive mood and the preposition to which governs it; as Patiently to wait — not To patiently wait.”1

A large number of authorities agreed with Comly and, in 1864, Henry Alford popularized the ‘rule’ (although Alford never stated it as such). Though a good number of other authorities, among them Goold Brown, Otto Jespersen, and H.W. Fowler and F. G. Fowler, disagreed with the rule, it was commonplace by 1907, when the Fowler brothers note:

“The ‘split’ infinitive has taken such hold upon the consciences of journalists that, instead of warning the novice against splitting his infinitives, we must warn him against the curious superstition that the splitting or not splitting makes the difference between a good and a bad writer.” 2

Of course, splitting an infinitive is quite common in English today – most famously in Star Trek – and we doubt that most English speakers would hesitate to boldly go against this 19th-century prescriptivist rule.

Now, let’s deal with our second ‘rule’: don’t end a sentence with a preposition. This neat little idea comes from a rather fanatical conviction that English syntax (sentence structure) should conform to Latin syntax, where the ‘problem’ of ending a sentence with a preposition is a lot less likely to arise thanks to the morphological complexity of Latin. But, of course, English is not Latin.

Still, in 1672, dramatist John Dryden decided to criticize Ben Jonson for placing a preposition at the end of a sentence rather than before the noun/pronoun to which it belonged (see what we did there? We could have said: … the noun/pronoun which it belonged to, but  the rule is way too ingrained and we automatically changed it to a style that cannot be deemed anything but overly formal for a blog). Anyway.

The idea stuck, and Lowth’s grammar reinforced it. Despite his added note that this fanaticism about Latin was a problem in English, the rule hung around, and the ‘lie’, while certainly not as strictly enforced as it used to be, is still alive and well (but not(!) attributable to Lowth).

Last: two negatives make a positive (in English). First: no, they don’t. Or at least not necessarily. In the history of English, multiple negators in one sentence or clause were common and, no, they did not indicate a positive. Instead, they often emphasised the negative, an effect commonly called emphatic negation or negative concord, and the idea that multiple negators did anything but form emphatic negation didn’t show up until 1762. Recognise the year? Yes, indeed, this particular rule was first recorded by Robert Lowth in his grammar book, in which it is stated (as noted in the Oxford Dictionaries Blog):

“Two Negatives in English destroy one another, or are equivalent to an Affirmative.”

So, indeed, this one rule out of three could be attributed to Lowth. However, it is worth noting that Lowth’s original intention with his handbook was not to prescribe rules for the English language: it was to provide his son, who was about to start school, with an easy, accessible aid to his studies.

So, why have we been going on and on about Lowth in this post? Well, first, because we feel it is rather unfair to judge Lowth as the poster-boy for prescriptivism when his intentions were nowhere close to regulating the English language, but, more importantly, to tell you, our faithful readers, that the story told by history has a tendency to change over time. Someone whose intentions were completely different can, 250 years later, become a ‘villain’; a ‘rule’ that is firmly in place today may not have been there 50 years ago (and yes, indeed, sometimes language does change that fast); and, last, any study of historical matter, be it within history, archaeology, anthropology or historical linguistics, must take this into account. We must be aware of this, and apply that awareness to all our studies, readings and conclusions, because a lie told by those whom we reckon should know the truth might be well-meaning but, in the end, it is still a lie.

 

Sources and references

Credits to Wikipedia for the picture of Lowth; find it right here

1 This quote is actually taken from Comly’s 1811 book A New Spelling Book, page 192, which you can find here. When it comes to the 1803 edition, we have trusted Merriam-Webster’s usage notes, which you can find here.

2 We’ve used The King’s English, second edition, to confirm this quote, which also occurs on Wikipedia. The book was published in 1908 and this particular quote is found on page 319, or, right here.

In regards to ending a sentence with a preposition, our source is the Oxford Dictionaries Blog on the topic, found here.

Regarding the double negative becoming positive, our source remains the Oxford Dictionaries Blog on that particular topic, found here.

Don’t never use no double negatives

Multiple negation? I ain’t never heard nothing about that!

“Two negatives make a positive,” your friend may primly reply to such a statement. Even if you’re not exactly fond of math, you surely remember enough to acknowledge the wisdom and veracity of such sound logic.

But the funny thing about languages? They have a logic all their own, and it doesn’t always play by the same rules as our conscious minds.

Take, for example, this phenomenon of the double negative. Like the other formal, prescriptive rules we’ve been exploring with this series, the distaste for double negatives is relatively new to English.

Back in Old and Middle English (roughly AD 1000-1450), English wasn’t particularly fussed about multiple elements of negation in a sentence. If anything, they were used for emphasis, to drive home the negation. This trick of negatives supporting each other (rather than canceling each other out) is called negative concord. Far from being frowned upon, some languages crave it. Spanish, for example, regularly crams several negation words into a single sentence without a second thought:

¡No toques nada!
‘Don’t touch anything!’

This isn’t merely the preferred method of negation. In languages like Spanish and French, negative concord isn’t for emphasis; it’s mandatory. That’s just how they express negation.

The idea that two negatives grammatically make a positive in English was first recorded in the 1700s along with most of the other prescriptive rules. Unlike the other rules, there is some evidence to suggest that negative concord was naturally beginning to disappear in mainstream varieties of English even before the early grammarians codified the rule. This really isn’t too surprising. Languages like to change, and among the other moving parts they scramble around, they commonly go through phases of double negation (we linguists know this as Jespersen’s Cycle).

Math has naught to do with language, but it’s certainly true that in our Modern English, double negatives have the potential to leave a lot of ambiguity. Do they cancel? Do they intensify each other? It’s all about that context. This is one rule that might be here to stay1 (at least in formal English).

Notes
1 At least for now!

A preposition is not a good word to end a sentence with

Lies your English teacher told you: You can’t end a sentence with a preposition

Hello and welcome to the third episode in our ongoing series on stuff about the English language people in positions of authority misled you into thinking was true! Last time, Lisa showed us why it is perfectly fine (and in some cases, even preferable!) to split an infinitive.

Today, I will tackle a “rule” that’s every bit as well-known as it is routinely disregarded: “you can’t end a sentence with a preposition”.

This rule is interesting, as far as prescriptive rules go, in that it is hardly ever observed in practice. We all end sentences with prepositions, and it’s no use denying it. But don’t worry: the grammar police will not come busting down your door just yet. We do it because it’s perfectly natural in English, and in many cases even unavoidable!

The process of ending sentences with prepositions is technically known as preposition stranding, or P-stranding, and it is fairly common amongst Germanic languages.

This phenomenon is due to something we in the biz call wh- movement. Let me explain quickly what it is.

When you turn a statement into a question, you unconsciously perform a series of operations that transform that statement. In the case of wh- questions (what?, who?, when? etc.), the steps you follow are these:

  1. Take the statement.
    The boy ate the apple.
  2. Turn the part you want to question into a wh- word.
    The boy ate what?
  3. Move the wh- word to the beginning of the sentence.
    What the boy ate?
  4. For a series of hellishly complicated reasons I won’t go into here, transform the verb into its do-supported form (i.e. with “do”).
    What the boy did eat?
  5. Invert the subject and the verb.
    What did the boy eat?

And Bob’s your uncle! Pretty insane that you do this all the time and don’t even realise it, huh?
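For the programmatically inclined, here is a minimal sketch of those five steps as toy word-list shuffling in Python. Everything in it (the flat word-list representation, the find_sublist helper, the tiny hard-coded verb table) is invented purely for illustration; real wh- movement operates on syntactic trees, not flat strings.

```python
# A toy sketch of the five question-forming steps described above, treating the
# sentence as a flat list of words. This is an illustration only, not a model
# of how syntacticians actually represent wh- movement.

def find_sublist(words, target):
    """Return the index at which the word sequence `target` starts in `words`."""
    for i in range(len(words) - len(target) + 1):
        if words[i:i + len(target)] == target:
            return i
    raise ValueError("target not found in sentence")

def make_wh_question(words, target):
    base_forms = {"ate": "eat"}  # hard-coded for this one example

    # Step 1: take the statement (already given as a list of words).
    words = list(words)

    # Step 2: turn the part you want to question into a wh- word.
    start = find_sublist(words, target)
    words[start:start + len(target)] = ["what"]      # the boy ate what

    # Step 3: move the wh- word to the beginning of the sentence.
    words.remove("what")
    words = ["what"] + words                          # what the boy ate

    # Step 4: do-support – swap the verb for "did" plus its base form.
    verb_pos = next(i for i, w in enumerate(words) if w in base_forms)
    words[verb_pos:verb_pos + 1] = ["did", base_forms[words[verb_pos]]]
    #                                                   what the boy did eat

    # Step 5: invert the subject and the auxiliary.
    words.remove("did")
    words = words[:1] + ["did"] + words[1:]           # what did the boy eat

    return " ".join(words).capitalize() + "?"

print(make_wh_question("the boy ate the apple".split(), "the apple".split()))
# -> What did the boy eat?
```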

The process is basically the same for relative clauses (i.e. “The apple (which) the boy ate”), except without steps 4 and 5 (because it’s not a question), and with an extra step where you copy the “questioned” part to the start of the sentence before turning it into the wh- word. So:

  1. The boy ate the apple.
  2. The apple the boy ate the apple.
  3. The apple the boy ate which.
  4. The apple which the boy ate.

What interests us is what happens when this process takes place in a sentence where the moved object (or constituent, to use the proper lingo) is preceded by a preposition.

  1. The boy went to the cinema with the girl.
  2. The girl the boy went to the cinema with the girl.
  3. The girl the boy went to the cinema with who(m).

And here we hit the point of contention. What should be done on step 4? Until the 18th century, the answer was easy: the most natural option was to move the wh- word and leave the preposition where it is. Stranded, if you like.

  1. The girl who(m) the boy went to the cinema with.

The same applied to questions (“Who(m) did the boy go to the cinema with?”). However, there was a second option, in which the wh- word dragged the preposition along with itself to the start of the sentence or clause, so that step 4 would look like

  1. The girl with who(m) the boy went to the cinema.

This particular construction is technically known as pied-piping, from the German fairy tale “The Pied Piper of Hamelin”, where a magic piper freed the city of its troublesome rats by playing his flute and mesmerising them into following him out. He later applied the same procedure to kidnap all the city’s children, to punish the inhabitants for their ingratitude. Talk about overreacting.

This option, while always possible, was seen as rather cumbersome, and therefore dispreferred. Until the 18th century, when a sustained campaign by a number of intellectuals flipped the status of the two constructions in the public consciousness. What happened?

Well, as you might remember from many of our posts about the history of prescriptivism, people in the 18th and 19th centuries displayed an unhealthy obsession with Latin. Since Latin was The Perfect Language™, each and every aspect of the English language that didn’t look like Latin was, of course, wrong and barbaric, and had to be eliminated. I’ll give you one guess as to what Latin didn’t do with its prepositions during wh- movement.

If you guessed “stranding them”, then congratulations! You guessed right.

In Latin (and all the languages which descend from it), only pied-piping is acceptable when applying wh- movement to a sentence with a preposition. Our example sentence in Latin would go like this (cum = with, quā = who(m)):

  1. Puer ad cinematographeum cum puellā īvit.
  2. Puella puer ad cinematographeum cum puellā īvit.
  3. Puella puer ad cinematographeum cum quā īvit.
  4. Puella cum quā puer ad cinematographeum īvit.

Needless to say, the prescriptivist scholars twisted themselves into logic pretzels to justify why this should be true of English as well. Some just openly admitted that it was because English should be similar to Latin; others tried to be clever and argued that a “preposition” is called that because it goes before a word (pre- = before + position), and must have thought themselves exceedingly smart, notwithstanding the fact that the word “preposition” comes from Latin, where P-stranding is impossible, so of course they would call it that.

Some got caught in their own circular reasoning and inevitably found sentences in which preposition stranding is obligatory, giving rise to comically frustrated rants like the following, courtesy of one Philip Withers, from 1789:

“It may be said, it is absolutely unavoidable on particular occasions. v.g. The Stock was disposed OF BY private contract. But an elegant writer would rather vary the phrase, or exchange the verb than admit so awkward a concurrence of prepositions.”

A little tip, kids: if someone tells you they would rather avoid or ignore pieces of data they dislike, or actively tells you to do so, they’re not a scientist. In the case of linguistics, you’ve spotted a prescriptivist! Mark it in your prescriptivist-spotting book and move on.

What of the writers that came before them and regularly stranded prepositions? Robert Lowth (a name you’ll become wearily familiar with by the end of this series) commented that they too were somehow universally speaking bad English, and a guy named John Dryden even went so far as to rewrite some of Shakespeare’s plays to remove some of the unsightly and atrocious “errors” he found in them, preposition stranding included.

Such are the lengths fanaticism goes to.

Stay tuned for next time, when Rebekah will explain to you why a negative plus a negative doesn’t necessarily imply a positive.

 

To boldly split what no one should split: The infinitive.

Lies your English teacher told you: “Never split an infinitive!”

To start off this series of lies in the English classroom, Rebekah told us last week about a common misconception regarding vowel length. With this week’s post, I want to show you that similar misconceptions also apply to the level of something as fundamental as word order.

The title paraphrases what is probably one of the most recognisable examples of prescriptive ungrammaticality – taken from the title sequence of the original Star Trek series, the original sentence is: To boldly go where no man has gone before. In this sentence, to is the infinitive marker which “belongs to” the verb go. But lo! Alas! The intimacy of the infinitive marker and verb is boldly hindered by an intervening adverb: boldly! This, dear readers, is thus a clear example of a split infinitive.

Or rather, “To go boldly”1

Usually an infinitive is split with an adverb, as in to boldly go. This is one of the more recognisable prescriptive rules we learn in the classroom, but the fact is that in natural speech, and in writing, we split our infinitives all the time! There are even chapters in syntax textbooks dedicated to explaining how this works in English (it’s not straightforward though, so we’ll stay away from it for now).

In fact, sometimes not splitting the infinitive leads to serious changes in meaning. Consider the examples below, where the infinitive marker is underlined, the verb it belongs to is in bold and the adverb is in italics:

(a) Mary told John calmly to leave the room

(b) Mary told John to leave the room(,) calmly

(c) Mary told John to calmly leave the room

Say I want to construct a sentence which expresses a meaning where Mary, in any manner, calm or aggressive, tells John to leave the room, but to do so in a calm manner. My two options to do this without splitting the infinitive are (a) and (b). However, (a) more strongly expresses that Mary was doing the telling in a calm way. (b) is ambiguous in writing, even if we add a comma (although a little less ambiguous without the comma, or what do you think?). The only example which completely unambiguously gives us the meaning of Mary asking John to do the leaving in a calm manner is (c), i.e. the example with the split infinitive.

This confusion in meaning, caused by not splitting infinitives, becomes even more apparent depending on what adverbs we use; negation is notorious for altering meaning depending on where we place it. Consider this article title: How not to raise a rapist2. Does the article describe bad methods in raising rapists? If we split the infinitive we get How to not raise a rapist and the meaning is much clearer – we do not want to raise rapists at all, not even using good rapist-raising methods. Based on the contents of the article, I think a split infinitive in the title would have been more appropriate.

So you see, splitting the infinitive is not only commonly done in the English language, but also sometimes actually necessary to truly get our meaning across. Although, even when it’s not necessary for the meaning, as in to boldly go, we do it anyway. Thus, the persistence of anti-infinitive-splitting smells like prescriptivism to me. In fact, this particular classroom lie seems like it’s being slowly accepted for what it is (a lie), and current English language grammars don’t generally object to it. The biggest problem today seems to be that some people feel very strongly about it. The Economist’s style guide phrases the problem eloquently3:

“Happy the man who has never been told that it is wrong to split an infinitive: the ban is pointless. Unfortunately, to see it broken is so annoying to so many people that you should observe it.”

We will continue this little series of classroom lies in two weeks. Until then, start to slowly notice split infinitives around you until you start to actually go mad.

Footnotes

1 I’ve desperately searched the internet for an original source for this comic but, unfortunately, I was unsuccessful. If anyone knows it, do let me know and I will reference appropriately.

2 This very appropriate example came to my attention through the lecture slides presented by Prof. Nik Gisborne for the course LEL1A at the University of Edinburgh.

3 This quote is frequently cited in relation to the split infinitive; you can read more about The Economist’s stance on the matter in this amusing post: https://www.economist.com/johnson/2012/03/30/gotta-split

So you’re a linguist…

“…how many languages do you speak?”

Every linguist on the planet knows and dreads this question, known simply as The Question™. The fact that it’s the first question most people ask when hearing of a linguist’s occupation certainly doesn’t help.

Right now you’re probably thinking “Give me a break, Riccardo. It’s quite a natural question to ask when you learn someone works with languages, isn’t it?”

Well, yes. Yes it is a very natural question. The problem is that it springs from a very common misunderstanding of a linguist’s job, and, to make things worse, it’s one of the most difficult questions to answer for a linguist.

Let me explain in a bit more detail what I mean.

Dammit Jim, I’m a linguist, not a linguist!

One of the reasons The Question™ is so popular amongst laypeople is semantic ambiguity. To our eternal annoyance as academic linguists, the word “linguist” has two different meanings in the English language. The meaning we use on this blog, and the one most people who call themselves “linguists” intend, is “a person engaged in the academic study of human language”. As you’ve probably gathered if you read our blog, this doesn’t necessarily involve the study of any particular language: while there are many linguists who specialise in one language only, many (perhaps even most) specialise in linguistic branches or whole families, and some specialise in particular fields of linguistics, like phonetics or semantics, and work with multiple completely unrelated languages.

Crucially, the job of an academic linguist doesn’t involve learning any of the languages we study, a point which I’ll talk about in more detail in the next section.

Unfortunately, this first meaning of the word “linguist” is not the one the public knows best. Not by a long shot.

The second meaning of “linguist” comes from military jargon, and it’s the one most familiar to laypeople due to its being spread far and wide by films, TV series, books and other popular entertainment media. In the military, a “linguist” is the person tasked with learning the language of the locals during a foreign campaign, with the goal of helping his fellow soldiers interact with them. In short, they’re what in any other field would be called an interpreter. Why the military had to go and rain on our lovely linguistic parade by stealing our name instead of using the proper name for what they do is a mystery, but they’re probably snickering about it as we speak. Regrettably, due to the greater popularity of films and stories set in a military/combative milieu, as opposed to the far superior and more engaging world of academics, with its nail-biting, edge-of-your-seat deadlines and paper-writing all-nighters, the second meaning of the word “linguist” has been cemented in the popular imagination as the primary one, and the rest is history.

It certainly doesn’t help that Hollywood likes to portray their “linguists” as knowing every single language they come into contact with, which has gone a long way towards making The Question™ as popular as it is.

Knowledge is relative, and numbers even more so

If our problem with The Question™ were only a matter of misunderstanding of our job description, it would be no big deal. We’d just list out all the languages we speak and then explain what a linguist actually is to whoever is asking. Problem is, while for a wuggle (non-linguist) listing the languages they know is an easy task, for a linguist it’s absurdly difficult. If you’ve ever exposed a linguist to The Question™, you’ve probably already seen the symptoms of ALLA (Acute Language Listing Anxiety): panicking, profuse sweating, stammering, making of excuses, epistemological asides (“Well, it depends on what you mean by know…”), and existential dread about the possibility of The Followup™ (“So you speak X? Say something in X!”).

What is the reason for this affliction? Well, it all comes down to what I said in the previous section: a linguist might very well study a language, but they are by no means expected to speak it. This gives rise to the apparent paradox of a linguist knowing the grammar of some language extremely well, while not being able to have anything more than the most basic of conversations in it, if even that. Some linguists manage to muscle through the pragmatics of The Question™ and only list the languages they speak fluently (which is what most people are asking, really), but many get stumped by it, because what a linguist means by “knowing a language” is very different from what a wuggle intends.

For example, by a linguist’s conception of “knowing”, I could be said to “know” a couple dozen languages. But before you go all wide-eyed with awe at my intellectual might, know that of those couple dozen I can be said to really speak only five or six. And of those five or six, I’m only really fluent in two, with a decent degree of fluency in a third. To make matters even worse, even the meaning of speaking is vague for a linguist: does “speaking” a language mean I can hold my own in basic conversation, or does it mean I can read a newspaper? Or a novel? Or a treatise on quantum physics?

You see, from a linguistic point of view, “speaking” a language isn’t a binary question: fluency is a spectrum. I can order stuff in a restaurant in German and read some basic texts, but I would never be able to read a novel in it. Do I speak German? I’ve translated an entire comic from Finnish to English for fun with the help of a dictionary, but I wouldn’t be able to talk to a Finnish person in Finnish to save my life. Do I speak Finnish? As you can see, it’s extremely difficult for a linguist to accurately gauge what “speaking” or “knowing” a language actually entails, which is why it takes them an impressively long time to come up with a list, to the puzzlement of wuggles who could list the languages they speak in a heartbeat.

Conversation tactics for wuggles

So, what should you ask a linguist upon meeting them? Well, the safest question is probably a simple “what do you do?”

We linguists, like most academics, love explaining our jobs, and we’d be very happy to have the opportunity to geek out about what we study with an interested person.

Be sure to know when to stop us, though, unless you want to be regaled with a half-hour lecture on the pragmatics of Mixtecan questions.

You’ve been warned.

Phonaesthetics, or “The Phrenology of Language”

 

Stop me if you’ve heard this before: French is a beautiful, romantic language; Italian sounds like music; Spanish is passionate and primal; Japanese is aggressive; Polish is melancholic; and German is a guttural, ugly, unpronounceable mess (Ha! Tricked you! You couldn’t stop me because I’ve written all of this down way before now. Your cries and frantic gesticulations were for naught.)

We’ve all heard these judgements (and many others) repeated multiple times over the course of our lives; not only in idle conversation, but also in movies, books, comics, and other popular media. There’s even a series of memes dedicated to mocking how German sounds in relation to other European languages:

“Ich liebe dich” is a perfectly fine and non-threatening way of expressing affection towards another human being

What you might not know is that this phenomenon has a technical name in linguistics: phonaesthetics.[1]

Phonaesthetics, in short, is the hypothesis that languages are objectively more or less beautiful or pleasant depending on various parameters, such as vowel to consonant ratio, presence or absence of certain sounds etc., and, not to put too fine a point on it, it’s a gigantic mountain of male bovine excrement.

Pictured: phonaesthetics

Let me explain why:

A bit of history

Like so many other terrible ideas, phonaesthetics goes way back in human history. In fact, it may have been with us since the very beginning.

The ancient Greeks, for example, deemed their language the most perfect and beautiful and thought all other languages ugly and ungainly. To them, these foreign languages all sounded like strings of unpleasant sounds: a mocking imitation of how they sounded to the Greeks, “barbarbarbar”, is where we got our word “barbarian” from.

In the raging (…ly racist) 19th century, phonaesthetics took off as a way to justify the rampant prejudice white Europeans had against all ethnicities different from their own.

The European elite of the time arbitrarily decided that Latin was the most beautiful language that ever existed, and that the aesthetics of all languages would be measured against it. That’s why Romance languages such as Italian or French, which descended from Latin[2], are still considered particularly beautiful.

Thanks to this convenient measuring stick, European languages were painted as euphonious ( ‘pleasant sounding’), splendid monuments of linguistic accomplishment, while extra-European languages were invariably described as cacophonous (‘unpleasant sounding’), barely understandable masses of noise. This period is when the common prejudice that Arabic is a harsh and unpleasant language arose, a prejudice that is easily dispelled once you hear a mu’adhin chant passages from the Qur’an from the top of a minaret.

Another tool in the racist’s toolbox, very similar to phonaesthetics, and invented right around the turn of the 19th century, was phrenology, or racial biology, the pseudoscience which claimed to be able to discern a person’s intelligence and personality from the shape of their head. To the surprise of no one, intelligence, grace and other positive characteristics were all associated with the typical form of a European white male skull, while all other shapes indicated shortcomings in various neurological functions. What a pleasant surprise that must have been for the European white male inventors of this technique![3] Phrenology was eventually abandoned and widely condemned, but phonaesthetics, unfortunately, wasn’t, and it’s amazingly prevalent even today.

To see how prevalent this century-old model of linguistic beauty is in popular culture, we need look no further than Tolkien’s invented languages. For all their amazing virtues, Tolkien’s novels are not exactly known for featuring particularly nuanced moral actors: the good guys might have some (usually redeemable) flaws, but the bad guys are just bad, period.

Here’s a brief passage in Quenya, the noblest of all Elven languages:

Ai! Laurië lantar lassi súrinen,

Yéni únótimë ve rámar aldaron!

Yéni ve lintë yuldar avánier

Mi oromardi lissë-miruvóreva

[…]

Notice the high vowel-to-consonant ratio, the prevalence of liquid (“l”, “r”), fricative (“s”, “v”) and nasal (“n”, “m”) sounds, all characteristic of Latinate languages.

Now, here’s a passage in the language of the Orcs:

Ash nazg durbatulûk, ash nazg gimbatul

Ash nazg thrakatulûk, agh burzum-ishi krimpatul

See any differences? The vowel-to-consonant ratio is almost reversed, and most syllables end with a consonant. Also, notice the rather un-Latinate consonant combinations (“zg”, “thr”), and the predominance of stops (“d”, “g”, “b”, “k”). It is likely that you never thought about what makes Elvish so “beautiful” and “melodious”, and Orcish (or Klingon, for that matter), so harsh and unpleasant: these prejudices are so deeply ingrained that we don’t even notice they’re present.
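If you want to check the vowel-to-consonant claim yourself, here is a small Python sketch that counts vowel and consonant letters in the two passages quoted above. The vowel set is an assumption based on the romanised spelling (accented vowel letters counted as vowels, “y” counted as a consonant), so treat the output as ballpark figures rather than phonological truth.

```python
# Rough vowel-to-consonant letter counts for the two passages quoted above.
# The vowel set is an assumption about the romanised spelling, not a claim
# about the actual phonology of Tolkien's invented languages.

VOWELS = set("aeiouáéíóúâêîôûäëïöü")

def vowel_consonant_ratio(text):
    letters = [ch for ch in text.lower() if ch.isalpha()]
    vowels = sum(1 for ch in letters if ch in VOWELS)
    consonants = len(letters) - vowels
    return vowels, consonants, vowels / consonants

quenya = ("Ai! Laurië lantar lassi súrinen, "
          "yéni únótimë ve rámar aldaron! "
          "Yéni ve lintë yuldar avánier "
          "mi oromardi lissë-miruvóreva")

orcish = ("Ash nazg durbatulûk, ash nazg gimbatul, "
          "ash nazg thrakatulûk, agh burzum-ishi krimpatul")

for name, text in [("Quenya", quenya), ("Orcish", orcish)]:
    v, c, ratio = vowel_consonant_ratio(text)
    print(f"{name}: {v} vowels, {c} consonants, ratio {ratio:.2f}")
```

Run it and you should find a noticeably higher ratio for the Quenya passage than for the Orcish one, which is exactly the contrast described above.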

So why is phonaesthetics “wrong”?

Well, the reason is actually very simple: beauty is subjective and cannot be scientifically defined. As they say, beauty is in the eye of the beholder.

Not this beholder.
Image copyright: Wizards of the Coast

What one finds “beautiful” is subject to change both in space and in time. If you think German’s relatively low vowel-to-consonant ratio is “harsh”, then you have yet to meet Nuxálk.

Welcome to phonaesthetic hell.

Speaking of German, it is actually a very good example of how these supposedly “objective” and “common sense” criteria of phonetic beauty can change with time, sometimes even abruptly. You see, in the 19th century, German was considered a very beautiful language, on par with Italian or French. A wealth of amazing prose and poetry was written in it: it was probably the main language of Romantic literature. It was also the second language of opera, after Italian, and was routinely described as melodious, elegant and logical.

Then the Nazis came.

Nazis: always ruining everything.

Suddenly, Germans were the bad guys. No longer the pillars of European intellectual culture, their language became painted as harsh, aggressive, unfriendly and cold, and suddenly every Hollywood villain and mad scientist acquired a German accent.

So, what’s the takeaway from this long and rambling rant?

No language is more, or less, beautiful than any other language. All languages have literature, poetry, song and various other ways to beautifully use their sounds for artistic purposes, and the idea that some are better at this than others is a relic from a prejudiced era better left behind. So next time you feel tempted to mock German for how harsh and unpleasant it sounds, stop and consider that this may not really be your own judgement at all, but something a century of social prejudice has programmed you into thinking.

And read some Goethe, you’ll like it.

Stay tuned for next week, when the amazing Rebekah will bring you on the third leg of our lightning trip through Phonphon!

  1. Phonaesthetics also has a different meaning, which is the study of how certain combinations of sounds evoke specific meanings in a given language. Although this form of phonaesthetics has its problems, too, it is not what I’m talking about in this post, so keep that in mind as we go forward.
  2. See our post on language families here.
  3. First the men assumed that the female skull was smaller than the male, and this was obviously a sign of their inferior intelligence. Later, however, they found that the female skull was larger, so they came up with the idea that this meant females were closer to children, and thus the male was still more intelligent! – Lisa

The Sapir-Whorf Hypothesis

 

“the Sapir-Whorf hypothesis is the theory that the language you speak determines how you think”

 

So says the fictional linguist Louise Banks (ably played by Amy Adams) in the sci-fi flick ‘Arrival’ (2016). The movie’s plot relies rather heavily on the Sapir-Whorf hypothesis, also known as the principle of linguistic relativity – so heavily, in fact, that the entire plot would be undone without it.

But what is the Sapir-Whorf hypothesis, really? Before digging into why ‘Arrival’ may have gotten it a bit… well, off, a word of caution: If you haven’t seen the movie (and intend to do so), go ahead and do that before reading the rest of this post because there will be SPOILERS!!!


Now that you have been duly warned, let’s get going.

The Sapir-Whorf hypothesis is, in a way, what Louise Banks describes: it is in part a hypothesis claiming that language determines the way you think. This idea is called linguistic determinism and is actually only one half of the Sapir-Whorf hypothesis.

Commonly known as the “strong” version of Sapir-Whorf, linguistic determinism holds that language limits and determines cognitive categories, thereby limiting our worldview to that which can be described in the words of whatever language we speak. Our worldview, and our way of thinking, is thus determined by our language.

That sounds pretty technical, so let’s use the example provided by ‘Arrival’:

The movie’s plot revolves around aliens coming to earth, speaking a language that is completely unknown to mankind. To try to figure out what they want, the movie linguist is called in. She manages to figure out their language pretty quickly (of course), realising that they think of time in a non-linear way.

This is quite a concept for a human to grasp since our idea of time is very linear. In western societies, we commonly think of time as a timeline going from left to right, as below.


 

Let’s say that we are currently at point C of our timeline. We can probably all agree that, as humans, we cannot go back in time to point A, right? However, in ‘Arrival’, we are given the impression that the reason we can’t do that is that our language doesn’t let us think about time in a non-linear way. That is, because our language doesn’t allow us to, we can’t go back in time. Sounds a bit wonky, doesn’t it?

Well, you might be somewhat unsurprised to hear that this “strong” version has been discredited in linguistics for quite some time now and, for most modern-day linguists, it is a bit silly. Yet, we can’t claim that language doesn’t influence our way of thinking, can we?

Consider the many bi/multilinguals who have stated that they feel kinda like a different person when speaking their second language. If you’ve never met one, we bilinguals at the HLC can vouch for that fact.

Why would they feel that way, if language doesn’t affect our way of thinking? Well, of course, language does affect our way of thinking, it just doesn’t determine it. This is the ‘weak’ version of Sapir-Whorf, also known as linguistic relativism.

The weak version may be somewhat more palatable to you (and us): it holds that language influences our way of thinking but does not determine it. Think about it: if someone were to point out a rainbow to you and you had no word for the color red, you would still be able to perceive that that color was different from the others.

If someone were to discover a brand-new color (somewhat mind-boggling, I know, but just consider that), you would be able to explain that this is a color for which you have no word but you would still be able to see it just fine.

That might be the clearest distinction between linguistic determinism and linguistic relativism: the former would claim that you wouldn’t be able to perceive the color while the latter would say that you’ll see it just fine, you just don’t have a word for it.

So, while ‘Arrival’ was (at least in my opinion) a pleasant waste of time, when it comes to the linguistics of it, I’d just like to say:


(Oh, and on a side note, the name of the hypothesis (i.e. Sapir-Whorf) is actually quite misleading, since Sapir and Whorf never collaborated to formalise the hypothesis.)

Tune in for more linguistic stuff next week when the marvellous Rebekah will dive into the phonology of consonants (trust me, you have a treat coming)!

 

“A language is a dialect with an army and a navy”

Hello HLC readers! I’m Lisa, I’m a Swede (this kind, not this kind, and hopefully never this kind) but I live in Scotland, and I’m here to talk to you about the differences between languages and dialects. Now, the title of this post, “A language is a dialect with an army and navy”, should have made everything clear, so that will be my contribution for today.

Joking!

I’m so not done. The title quote was made popular by the sociolinguist, and Yiddish scholar, Max Weinreich (in Yiddish, with Roman letters: a shprakh iz a dialekt mit an armey un flot)1. This particular quote has been passed down to me on average once per course I’ve taken in my four years of studying linguistics, which either tells you 1. Linguists are in serious need of new content, or 2. This is probably important for budding linguists to discuss. Both might be true in some cases, but most of the time 2 is the correct answer. We will need to tread carefully, and I don’t intend to make any political statements, but simply to shine some light on the complexity of the matter which, in fact, is often highly political. One final disclaimer: This is a really difficult topic to summarise. Bear with me.

For some of you reading, the question of what is and isn’t a language is probably something you haven’t thought about a lot. Some of you may think that the distinction is clear-cut; a language is distinct, it’s not similar to or dependent on anything else, and a dialect isn’t. You may even say that dialects are clearly sub-languages, because of the very way we phrase “dialects of a language” to imply that dialects belong to a language and not vice versa. Further, dialects are mutually intelligible (i.e. speakers of different dialects of one language can understand each other), which is not the case with languages. This is not exactly wrong, it’s just overly simplified.

First of all, if mutual intelligibility is a dialect criterion then my native Swedish could arguably be a Scandinavian dialect rather than a proper language – I, like most Swedes, understand Norwegian very well, and to some extent Danish, if spoken slowly (I’m currently working on my spoken Danish comprehension by watching both the Bridge and the Killing… My crime vocabulary is looking pretty solid by now). However, a lot of Swedes would not be thrilled to be told that their language is a dialect, and it does feel counter-intuitive to call it one.

On the other hand, there are agreed-upon dialects that are not mutually intelligible. Why are the dialects of, for example, Italian still called dialects, despite speakers of, for example, Emilian and Sicilian not being able to understand each other2 , while Norwegian and Swedish are officially agreed upon to be different languages? Also, what makes people call Catalan a dialect of Spanish (Don’t shoot the messenger!), or Cantonese a dialect of Chinese? Can you see a pattern forming? I’ll spell it out: The term language is most often, but not always, awarded to those “dialects” that have, or have had, official language status in a country, i.e. the dialect of those in power. The term dialect, or lect, is sometimes used neutrally in linguistics to cover both official languages and dialects, but there is  another term which is also used that I like more: variety. Variety is less socio-politically charged, and I use it all the time to avoid having to make a language/dialect distinction when I talk about linguistics.

There are, however, exceptions to the ‘official language’ criterion. If we go back to Spain, for example, no one would argue that Basque is a dialect of Spanish because Basque looks and sounds nothing like Spanish at all (or maybe some would argue this, but could we all agree that this is an unusual opinion?). So, there must be an element of likeness, or similarity, involved. Preferably the variety in question would be a part of the same language family3 – this could be why no one argues the language status of indigenous varieties, like Sami varieties in northern Scandinavia or the various Native American varieties like Navajo and Cree.

My take on the issue is this: What people choose to call a language is largely based on four criteria:

  1. Is this variety an official language of a country?
  2. Is the variety distinct in terms of likeness to the official language of that region? Recall what was said above about indigenous languages.
  3. Is this variety considered an example of how that variety should be spoken, i.e. a standard variety, that also has sub-varieties (dialects) that diverge from that standard? An example: British English has a standard, sometimes called BBC English, or RP, but also a plethora of quirky dialects like Geordie, Scouse, Scottish English, Brummie, etc., all still considered to be English.
  4. Does it have an army and a navy?
  5. I jest.
  6. The real number 4: Is the variety standardised? Can we study it with the help of grammars and lexicons? Is it taught in schools? (Language standardisation is a whole topic of its own, which we will come back to in a later post.)

We can see that the term language is strongly connected to the status a variety has in a nation, it is a term that is awarded or given. When we attempt linguistic distinctions between languages and dialects, things get confusing really quickly. Is differing syntax, for example word order differences, more distinguishing than differing vocabulary? Norwegian and Danish have largely similar vocabularies, but very distinct pronunciations, so how does that factor in when we determine whether they are distinct languages or dialects of one variety? How much is the mutual intelligibility due to close contact, rather than actual similarities4 – do I understand Norwegian well because I grew up a couple of hours from the border to Norway, or because Norwegian and Swedish are so similar?

It is also relevant to talk about the historical perspective (after all, this is the Historical Linguist Channel). To throw back to Rebekah’s post last week, we know that English has changed a lot since Anglo-Saxon times. We all tend to agree that Latin is one language distinct from Spanish, French, Italian, Portuguese and Romanian, but we also know that these languages all originate from Latin. What about English then? Old English and Present Day English look different enough that we could happily call them distinct languages, but what about Early Modern English? When do we say a variety has diverged enough from its parent language to be considered a language in its own right? Is my grandmother’s sister, my great-aunt, a part of my immediate or extended family? Well, that often depends on my relationship to my great-aunt, which brings us back to the subjectivity of the question.

The point I’m trying to make with these confused ramblings is that the term language cannot be defined linguistically, but is a wholly social and political term. The people of Montenegro generally refuse to recognise their variety’s similarity to Serbian, despite the varieties being largely indistinguishable – they speak Montenegrin. Knowing the history of the region though, we might be able to see where the Montenegrins are coming from, why it feels important for them to distinguish themselves as a people through their language5. When we discuss what a language is, it’s important to keep in mind what the term means for the people who use it. Our language is tightly connected to our sense of identity; this is one reason why we’re so reluctant to see it changing or being used in a way we perceive as wrong (throwback to Sabina’s and Riccardo’s posts). The term dialect is somehow seen as inferior to language, and thus the terminology becomes a much larger issue than any linguistic definitions we can make.

Related to this issue are topics like standardisation (mentioned above), minority languages, and the idea of debased English. The latter two are also upcoming topics. In future posts, I will be addressing a variety that is my special interest, Scots6, which is particularly affected by the issues discussed here. Scots is a Germanic variety spoken in Scotland, which is closely related to English but still distinct from it (much like Swedish and Norwegian). First, however, I will be back next week to outline the main disciplines that fall under the umbrella of linguistics.

Footnotes

1He didn’t utter the quote first, though: an auditor in one of his lectures said it to him. I recommend reading about the situation on Wikipedia.

2Ask Riccardo about this issue and your evening entertainment is sorted.

3“Language family” is the name given to a group of languages which share an ancestor. We will dedicate more time to this topic at a later point. Meanwhile, you may admire this beautiful Indo-European and Uralic family tree.

4These and other questions are addressed by linguistic typologists, who try to map the languages of the world, categorise them and determine their relatedness.

5This fact was brought to my attention by a student from Montenegro during the course Scots and Scottish English, taught by Dr. Warren Maguire at the University of Edinburgh. A lot of the discussions we had in that course have provided background for the arguments and questions presented here.

6The Angus Macintosh Centre for Historical Linguistics have made brilliant videos explaining the history of Scots, in both Scots and English. I strongly recommend watching these!

The myth of language decay: Do youths really not know how to speak?

Hi everyone!

My name is Sabina, I’m 28 years old, from rainy Gothenburg, Sweden (unlike Riccardo from sunny Bologna). Why am I here? Well, to talk about linguistics, obviously! Specifically, I’ll be talking about a persistent and prevalent language myth: the myth of language decay.

This is the idea that modern forms of language are somehow steadily getting “worse” in comparison to previous stages of the language. The thought that there was, somewhere, somehow, a “golden age” of the language, after which it became unstructured, uninformative or just plain “bad”. This idea is a form of prescriptivism, as described by Riccardo in last week’s post, and perhaps the most widespread one at that.

You might think that this is not as common a myth as I say, but consider: have you ever heard someone claim that “young people” don’t know how to write? How to talk “properly”? Maybe even how to read? These are, indeed, examples of this myth.

However, is it true? Do young people really not know how to write/speak/read their native tongue? Of course not, they just do it in a different way.

The myth of language decay is intimately connected to the phenomenon known as language change. Now, language change is often described by linguists as a necessary, vital and continuous part of the language’s development and survival. Just imagine if we spoke English the same way as in the Middle Ages, or even as in Shakespeare’s time! English today is certainly different from back then, but it is in no way worse. Think about it, would you really want everyone to speak like Shakespeare did? Or Chaucer? Or perhaps as in Beowulf?

It is interesting to note, however, that the idea of language decay rarely touches the history of the language. Chaucer and Shakespeare lived approximately 200 years apart, yet no one is really claiming that Chaucer’s English was “bad” in comparison to Shakespeare’s, are they? (As a matter of fact, Chaucer has earned himself the nickname “Father of English literature”, so it really can’t be, can it?).

Let’s take a more recent example: Charles Dickens (1812-1870) to J.R.R. Tolkien (1892-1973) to George R.R. Martin (1948-). Now, if you sit down and read through the works of these three authors, all of whom have been hailed for their writing skills, you will probably notice a rather distinct difference in not only style, but perhaps also in lexicon and grammar. Yet no one is arguing that Dickens and Tolkien didn’t know how to write, are they?

But guess what? Someone probably did when Tolkien started writing! Someone probably did when Martin started out. Someone probably even said it about Dickens, Austen, Woolf, Brontë, Shakespeare, Chaucer, etc, etc.

In fact, people have been complaining about language “decay” for a long, long time, specifically since the time of Sumerian, a language spoken in the region of Sumer in ancient Mesopotamia. Now, you might be thinking: “Sabina, surely you’re exaggerating things just a bit?”.

I am not.

Sumerian is the first language from which there is surviving written material1 and in 1976, a researcher named Lloyd-Jones2 published a piece of work detailing inscriptions made on clay tablets. Among other things, these contained an agonized complaint made by a senior scribe regarding the junior scribes’ sudden drop in writing ability.

Basically: “Young people can’t write properly!”.

Consider that for a second. People have been complaining about supposed language decay for, literally, as long as we have evidence of written language.

Given this, you can imagine that people tend to have a strong reaction to language “decay”. Consider the case of Jean Aitchison, an Emeritus Professor of language and communication at the University of Oxford. In 1996, Professor Aitchison participated in the BBC Reith Lectures, a series of annual radio lectures given by leading figures of a particular field. Professor Aitchison lectured on the naturalness of language change, stating that there was nothing to worry about.

The result of this? Professor Aitchison received hostile letters to her home. Consider that for just a second: people took the trouble of sitting down, writing a threat, posting it, and waiting for the post to reach her, just to get that sense of accomplishment.3 That’s a pretty good indication of how strongly some people feel about this.

So, why are we reacting that way?

Well, we spend year upon year, in school, in newspapers, even in social media (with its “grammar Nazi” phenomenon), teaching people that there is a “correct” way of using language. We work hard to achieve this standard. Think of it as learning how to ride a bike. All your life, you’ve been told that you should sit on the bike in a certain way. It’s very uncomfortable, but you work and work and work to apply the right technique. When you’ve finally mastered the skill (and are feeling quite proud of yourself), someone comes along and tells you that you can sit on the bike any way you want. Risk of you lashing out? Probably at least somewhat high.

But see, the thing is that, when it comes to language, there really is no “correct way”. Take the word “irregardless” for example. Many immediately get this kind of stone-faced expression and thunderously proclaim that there is no such word. But actually, there is. It’s a non-standard dialectal variant, used with a specific meaning and in specific contexts (in this particular case, irregardless is a way to shut a conversation down after already having said “regardless” in those varieties4, isn’t that interesting?).

But people think that there is somehow something “wrong” with this word, and those who use it (or other non-standard forms) will often be judged as speaking “bad English”, throwing more fuel on the fire for the myth of language decay. Especially since the older generations, for example, may retain their ideas about what is “correct” usage, while younger generations may have a different idea about what is “correct” and use the language in a different way.

So, what’s my point with all this? Well, my point is that the moment a word from a non-standard dialect makes its way into the standard language, it’s going to raise some discussion about the “decay” of the language. This is particularly true of the younger generations today, who have actually introduced a whole new form of language into their standard vocabulary: internet and/or texting slang!

This is fascinating! We’re introducing a new form of language! But… When young people start using, I don’t know, “brb”, “afk”, “lol”, etc. in their everyday speech, other people may condemn this as “lazy, uneducated, wrong”, etc., etc., and the myth of language decay is rejuvenated.

But the thing is that languages change to match the times in which they exist. A language may change due to political readjustments that have occurred, or to reflect the different attitudes of the people. And sometimes, we can’t point to anything that made the language change – it simply did. Regardless, the language reflects its time, not a glorified past. And that is a good thing.

Unless, of course, you would perhaps prefer to remove most -ed past tense endings, especially on historically strong verbs, and go back to the good old days of ablaut (that is, vowel gradation carrying grammatical information, e.g. sing, sang, sung)? Or perhaps lower all your vowels again and skip the diphthongs? Or perhaps… yeah, you see where I’m going with this.

No? Didn’t think so. In that case, let’s celebrate the changes, both historical and current, without accusing them of somehow making the language worse.

Because, truly, the only difference between changes that made the language into the “glorious standard” of yesteryear and the changes that are happening now, is time.

Tune in to Rebekah’s post next week where she will explain the different periods of English and make it clear why Shakespeare did not write in Old English!

Bibliography

1 Check out the 5 oldest written languages recorded here.

2 Lloyd-Jones, Richard. 1976. “Is writing worse nowadays?”. University of Iowa Spectator. April 1976. Quoted by Daniels, Harvey. 1983. Famous last words: The American language crisis revisited. Carbondale, IL: Southern Illinois University Press. p. 33.

3 Aitchison, Jean. 1997. The Language Web. Cambridge: The Press Syndicate of the University of Cambridge.

4 Check out Kory Stamper, a lexicographer for Merriam-Webster, explaining “irregardless” here.

Introduction to the blog and some words on Descriptivism

Hello everyone! Welcome to our shiny new blog! My name is Riccardo, I’m 25 years old, from Bologna, Italy (homeland of good food and jumping moustached plumbers) and I’m here to talk about linguistics. Well, we all are, really. That’s why we’re the Historical Linguist Channel™!

So, “what is a linguist?” I hear you ask through my finely-honed sense for lingering doubts. Well, a linguist is someone who studies language, duh. What’s that? You want more detail? I can understand that. After all, few academic fields are as misunderstood by the general public as the field of linguistics. People might think that the Earth is flat, or that aspirin turns frogs into handsome, muscular princes (or was it kisses?), but at least they know what an astronomer or a doctor is and what they do. No such luck for linguists, I’m afraid. Misconceptions about what we do and absurdly wrong notions about what we study are rife even within the academic community itself. We’re here to dispel those misconceptions.

In the series of articles that follows, each of us will debunk one myth or misconception which he or she (mostly she) finds particularly pernicious and wants out of the way immediately before we even start regularly updating the blog’s content. In this introductory article, I will explain the most fundamental source of myths and misconceptions about linguistics there is: the difference between descriptive and prescriptive linguistics.

But first, let me begin with an unfortunately not-so-exaggerated portrayal of the popular perception of linguists: the Movie Linguist.

Scene: an unexplored Mayan ruin, deep in the jungles of Central America. Three explorers cautiously walk in a dark hallway, torches blazing over their heads. Philip, the dashing young adventurer, leads forward, cutting the vines that grow in the ancient corridors with his machete. He is followed by Beatrice, a beautiful young woman he naturally will end up kissing towards the end of the movie. Trailing behind them is a bespectacled, nervous man, awkwardly trying to hold onto a ream of papers and charts. He is Nigel, the linguist. Suddenly, they break into an enormous room. The group leader raises his torch with a sweeping motion. The music swells: the walls of the chamber are covered with inscriptions.

Philip: My God… look at this.

Beatrice: What is it?

Philip: Look at the inscriptions on the walls.

Beatrice: [gasps] Could it really be…?

Philip: Egyptian hieroglyphs… in a Mayan pyramid!!

Beatrice: But it’s impossible! How could they have arrived here?

Philip: I don’t know. Nigel! You’ve got to see this.

Nigel enters the chamber, and immediately drops his papers in astonishment.

Nigel: I- it’s incredible! Professor McSweeney’s theories on cultural cross-pollination were true!

Beatrice: Can you read it?

Nigel: Well, given the nature of the expedition, I was presumably hired for my expertise in Meso-American languages. Fortunately, I am a Linguist™, and that means I can read every language ever spoken by every human being that ever lived.

Nigel kneels next to the closest inscription. He thoughtfully adjusts his glasses.

Nigel: Hmmm… I recognise this. It’s an obscure dialect of Middle Egyptian spoken in a village exactly 7.6 km due East of Thebes in the year 1575 BC. I can tell just by superficially looking at it.

Philip: What does it say?

Nigel: Unfortunately, this dialect is so obscure that it wasn’t covered in the 72 years of back-breaking grad school every linguist must undergo to learn every language ever spoken. I will need time to decipher it.

Beatrice: How much time? This place gives me the creeps.

Nigel: Just a few hours, and I will do it with no help from any dictionary, reference grammar or corpus of similar dialects to which I could compare it. After I decipher it, I will, of course, be able to read, write, and speak it natively with no doubt or hesitation whatsoever.

A skittering sound echoes in one of the hallways.

Philip: Be quick about it. I have a feeling we’re not alone…

In the end, it turns out the inscriptions on the wall warn intruders that an ancient Egyptian god slumbers in the tomb and that he will not be appeased by anything except fat-free, low-calorie double bacon cheeseburgers which taste as delicious as their horribly unhealthy counterparts, which is, of course, a dream far beyond the reach of our puny human science. A thrilling battle with the minions of this god ensues, until the explorers come face-to-face with the burger-hungry divinity himself. They manage to escape his clutches thanks to Nigel, who now speaks the Middle Egyptian dialect so well that he manages to embarrass the god by pointing out that he ended a sentence with a preposition.

Somewhere along the way, Philip and Beatrice kiss.

Our objective here at the Historical Linguist Channel is to bring your image of linguists and linguistics as far as possible from the one I just painted above. Said image is unfortunately very prevalent in the public’s consciousness, a state of affairs which makes linguistics possibly one of the most misunderstood academic disciplines out there.

So, without further ado, I will get into the meat of my own post: the distinction between descriptive and prescriptive linguistics.

What is descriptivism?

Most people know at least some basic notions about many sciences: most of us know that matter in the universe is made of atoms, that atoms bond together to form molecules, and so on. Most people know about gravity, planets and stars.

Yet, remarkably few people, even amongst so-called “language enthusiasts”, know the most basic fact about linguistics: that it is a descriptive, and not a prescriptive, discipline.

What does it mean to be a descriptive discipline? As the name suggests, a descriptive discipline concerns itself with observing and describing a phenomenon, making no judgements about it. For a descriptive science, there are no superior or inferior facts. Facts are just facts. A planet that goes around its star once every 365 days is not any better or worse than one which takes, say, 220. As an academic science, linguistics merely concerns itself with studying language in all its forms and variety, without ascribing correctness or value to some forms over others. To a linguist, “I ain’t done nuffin’ copper!” is as good an English sentence as “The crime of which you regretfully accuse me has not taken place by my hand, and I resent the implication, good sir!”

Now, you might be thinking: Riccardo, doesn’t every scientific discipline work that way? To which I answer: yes, yes they do. Linguistics, however, is slightly different from pretty much all other scientific disciplines (with the possible exception of sociology and perhaps a few others) in that, for most of its early history, it was a prescriptive discipline.

A prescriptive discipline is basically the opposite of what I just described. Prescriptive disciplines judge some forms of what they study to be better or “correct”, and others to be “wrong” or inferior. Sound familiar? That’s probably because it’s how most people approach the study of language. Since the dawn of civilisation, language has been seen as something to be tightly controlled, of which one and only one form was the “right” and “correct” one, all others being corruptions that needed to be stamped out. Another very prevalent prescriptive idea is that language is decaying, that young people are befouling the language of their parents, transforming it into a lazy mockery of its former glory, but that’s a story for another post.

Prescriptive linguistics is concerned with formulating and imposing a series of rules that determine which form of a language is correct and which forms are not (in Humean terms, descriptivism is concerned with “is”, prescriptivism is concerned with “ought”. And you thought this wasn’t going to be an exquisitely intellectual blog).

In general, if you ask most people on the street to cite a “rule of grammar” to you, they will come up with a prescriptive rule. We’ve all heard many: “don’t end a sentence with a preposition”, “it’s you and I, not you and me”, “a double negative makes a positive”, the list goes on.

If you ask a linguist, on the other hand, you’ll get descriptive rules, such as “English generally places its modifiers before the head of the phrase” or “English inflects its verbs for both tense and aspect”.

A very useful way to think about the difference between a descriptive and a prescriptive rule is to compare it to the difference between physical laws and traffic laws. A physical law is a fact. It can’t be broken: it simply is. I can no more contravene the law of gravity than I can purposefully will my own heart to beat in rhythm to Beethoven. But I can contravene traffic laws: I am absolutely physically capable of driving against the flow of traffic, of running a red light, or of not switching on my headlights in poor visibility conditions.

In general, if a rule says that I shouldn’t do something, that means that I am capable of doing it. Even more damningly, if someone felt the need to specify that something should not be done, it means that someone has been doing it. So, completing the analogy, the paradoxical reason you hear your teacher say that you can’t end a sentence with a preposition in English is that you CAN end a sentence with a preposition in English. In fact, it is far more common than the so-called “correct” way.

What you will never hear is an English teacher specifically instructing you not to decline an English noun in the locative case. Why? Because English has no locative case. It lost it in its rebellious youth, when it went by the name of Proto-Germanic and it had just split from Indo-European because that’s what all the cool kids were doing. Finnish, which is not an Indo-European language, is a proper hoarder: it has no less than six locative cases.

Academic linguistics is exclusively concerned with the “physical laws” of language, the fundamental rules that determine how each language differs from all others. It takes no interest in offering value-judgements. Which is why a linguist is the last person you should ask about whether something you said is “good grammar” or not, incidentally.

So, are descriptivism and prescriptivism radically and fundamentally opposed?

Well, yes and no.

A limited form of prescriptivism has its uses: since languages are not uniform and vary wildly even over relatively short geographical distances, it is very important for a country to have a standardised form of language taught in school, with regulated forms so that it doesn’t veer too much in any particular direction. This makes communication easy between inhabitants of the country, and allows bureaucratic, governmental and scientific communication to happen with the greatest amount of efficiency.

The problem with prescriptivism is that it is very easily misused. It is a frighteningly short step from establishing a standard form of language to ease communication within a nation, to dismissing every variety that does not correspond to that standard as debased trash worthy only of being stamped out, and its speakers as uneducated churls or, worse, traitors and villains. For centuries, some languages (such as Latin) have been touted as “logical”, “superior”, the pinnacle of human thought, while other languages (mainly the languages of indigenous peoples in places conquered by Western colonialists, surprise surprise) were reviled as “primitive” and incapable of complex expression on the level of European languages.

Linguistic discrimination is a woefully widespread and tragically unreported phenomenon which is rife even in what would otherwise be socially progressive countries. In my native Italy, more than 20 local languages are spoken over the whole territory, some as different from Italian as French is. Yet, if you ask most people, even cultured ones, the only language spoken in Italy is Italian (the standardised form based on the language of Florence). All the other local languages are reduced to the status of “dialects”, and often reviled as markers of lack of education or provinciality, and described as less “rich” than Italian, or even as ugly and vulgar. The Italian state doesn’t even recognise them as separate languages.

Even comparatively minor variation is a target for surprisingly virulent hate: one need only think of the droves of people foaming at the mouth at the mere sound of English spoken with the intonation pattern known as “uptalk”, characteristic of some urban areas in the USA and Australia.

Be descriptive!

So, what’s the takeaway from this disjointed ramble of mine?

Simple: linguistics is the scientific study of language, and sees all forms of language as equally fascinating and worthy of study and preservation.

In our posts and our podcasts you will never hear us ranting about “bad grammar”, or describing certain languages as superior or inferior to others. Our mission is to share with you the wonder and joy that is the immense variety inherent in human language.

Along the trip, you’ll discover languages in which double negatives are not only accepted, but encouraged; in which sentences MUST end with a preposition, when the need arises; languages with a baffling number of cases, baroque verb systems, and grammatical categories you haven’t even heard of.

We hope you’ll enjoy it as much as we do.

Tune in next Thursday for the next introductory post on the thorny question of language evolution, where Sabina will set the record straight: are youths these days ruining language?

Bibliography

Most introductory linguistics textbooks begin with a section on descriptivism, but if you want something free and online, the introductory section for The Syntax of Natural Language by Beatrice Santorini and Anthony Kroch is thorough and full of examples. You can find it here: http://www.ling.upenn.edu/~beatrice/syntax-textbook/