We’re not so different, you know?

Insights from the ISLE Summer School, 24-28 June 2019

This week, I had the pleasure of attending the International Society for the Linguistics of English (ISLE) Summer School. The summer school is biennial, and explores a different theme each time – this year, the theme was using the past to explain the present, with the description: “A special focus will be on evidence for past states of English and Scots, with reference to the functioning of writing systems in manuscript and printed contexts.”

With a theme like that, it’s no wonder that this summer school caught the interest of two HLC:ers: Sabina and myself (Lisa)!

The summer school was organised at the University of Glasgow by the ISLE president, Professor Jeremy Smith. On the first day, he held a workshop which led us to think more about how the past can help us explain the present, and he emphasised the importance of remembering that the old languages and writing systems we study were produced by people who were as much conditioned by social factors as we are today. In fact, the name of this year’s theme is a scrambled version of the title of a pioneering publication by the famous sociolinguist William Labov, On the use of the present to explain the past, which explored the idea that humans in the past were not so different from humans today, and that we can therefore use our knowledge of today’s languages, and of the people who speak them, to make inferences about history. Likewise, by looking at material culture (for example scribal practices, and the look and material of manuscripts), and by exploring the social context in which they operated, we can learn more about what drives language change.

The exploration of manuscripts continued into the workshops in the morning of the second day. Professor Wendy Scase from the University of Birmingham held a workshop about writing systems, and made us aware of the social factors which may condition how we write. The traditional view of spelling is that it follows pronunciation, but it’s not usually that straightforward, and there are often social cues in what spelling systems we adhere to. 

One simple example is, of course, the differences between British and American English; the use of colour or color says nothing about pronunciation, but reading one or the other immediately tells you something about the writer. Consider also things like the “heavy metal umlaut”, as found in the band names Mötley Crüe and Motörhead; these umlauted letters are pronounced a certain way in the languages that use them in their writing systems, such as Swedish and German, but these bands use them as a form of identity marker. If these social identity markers are used in the present day, we should be aware that this may also have been the case in the past. As an example of this, a mediaeval writer may have chosen to use the italic script to advertise to the reader that they were a humanist.

Italic script. The image is taken from this article, where you can also read more about the history of Italic script.

As the second day progressed, we received introductions on how to use historical corpora by Dr Joanna Kopaczyk (University of Glasgow) and Dr Kristin Bech (University of Oslo). While these workshops were more focused on presenting resources for doing research in historical linguistics, the theme of the week still ran like a red thread through them: for example, we were reminded that when looking at historical written texts, scribal practice should not be taken as purely dialectal; it can also be socially conditioned.

On the third day of the summer school, we went on a field trip to the Ruthwell Cross, in Dumfriesshire. The runes inscribed on the cross constitute the earliest evidence we have of Anglo-Saxon in Britain, and it was interesting to learn about some of the unique features of the runic system which are only found on this monument, which again led us to think about the purpose behind using these particular symbols.

The Ruthwell Cross, photo by Lisa Gotthard

In the final two days of the summer school, all participants presented their PhD research, and we reflected on the mechanisms behind language change in a discussion led by Jeremy Smith. In this discussion, we looked at different examples of words or expressions whose use and meaning had changed in the history of English, and asked whether social factors may have driven these changes. In the HLC’s weekly etymologies on Facebook, we have sometimes demonstrated how social associations may cause the meaning of a word to become more negative or positive – an example being the word ‘villain’, now a pejorative term, which developed from simply referring to someone living on a farm. This is only one type of language change that can be socially conditioned, and this week we’ve come to learn even more about how identity markers and other socially conditioned factors play a role in how we express ourselves, both in writing and in speaking. This is why it’s so important for historical linguists to approach our textual sources with the same sociolinguistic awareness with which we would approach today’s spoken data.

Personally, I found this week to be incredibly inspiring, and in our final discussions you could tell that we had all received plenty of input and inspiration for continuing our research with some more attention to material culture and social practice. 

Eh: What’s the Big Deal, eh?

You may have heard the word eh being used before. Often, it’s found at the end of sentences; for example, you might hear someone say ‘nice day, eh?’. Usually, eh serves to mark a question or initiate some kind of response from the listener, though it can also be used to signal agreement or inclusiveness. We call these kinds of words ‘tag particles’ – they have no set meaning on their own but are often used for a particular communicative function.

The tag particle eh has a long history, dating back in literature to the 1600s. It has been noted across a wide range of dialects and varieties, including Scottish English, Canadian English, Guernsey English and New Zealand English, suggesting a common British origin. In each variety it shares several semantic and social functions, and it is frequently associated with national identity and vernacular use. However, over time these different varieties have also developed dialect-specific uses of eh. Today we’re going to focus particularly on the use of eh in New Zealand English, where its history is the shortest but nonetheless very interesting. But first, we cannot talk about eh without briefly mentioning its prominent role in Canadian English.

Canadian English

Eh has long been recognised as a typical feature of Canadian English, and it is so prevalent and so well-known that it is often the subject of jokes or caricatures of the Canadian accent. Already in the 1970s and 80s it was being used in advertisements, indicating that this particle was becoming widespread and nationally recognised.

Canadian eh has with time become associated with national identity, and this has endowed it with the status of a purely Canadian feature, or ‘Canadianism’, despite the fact that eh also plays this role in a number of other accents. The Canadian variant is typically pronounced as the short, front, mid-high vowel [e], and has a rising intonation. The main function of eh is to mark informality and inclusiveness, as well as seek agreement from the listener. Eh has been found to be widespread across Canada geographically and socially, although it is more frequently used by the lower classes, who tend to make more use of addressee-oriented devices in general. Though it has several functions, Canadian eh is most commonly found in:

Opinions: ‘nice day, eh?’
Statements of fact: ‘it goes over there, eh’
Exclamations: ‘what a game, eh?’
and fixed expressions, such as: ‘I know, eh’ and ‘thanks, eh’.

It is also found in questions, requests for repetition, insults, accusations, and narrative functions, although the questioning and narrative function of eh is often seen by speakers as uneducated, lower class, and rural.

New Zealand English

To jump forward a few centuries to a more recently developed English accent, eh is commonly found in New Zealand English as well. New Zealand English (NZE) speakers tend to prefer eh to other possible tags, which makes it highly salient. As in Canadian English, eh is a well-recognised feature, and is also showing signs of growing national awareness, exemplified by its use in a nationwide advert promoting New Zealand’s national soft drink, L&P. This soft drink is a New Zealand icon, originating and still produced there, and it is partly named after the small town where it was created.

Notice that the spelling here is aye rather than eh. This is most likely because in NZE eh is realised as the diphthong [æe], as in ‘face’, with a slight palatal approximant gesture (meaning that the vowel is followed by a slight ‘y’ sound), unlike Canadian eh, which is realised as [e]. New Zealand speakers generally pronounce eh with a falling intonation, which distinguishes it from most other varieties of English, which typically have a rising intonation, Canadian English included. Eh most commonly occurs at the end of sentences, but is also likely to occur mid-utterance, unlike in most other varieties. For example:

‘the phone will be non-stop eh with all the girls ringing him up and stuff’

Eh performs a number of functions in New Zealand English and tends to be used to a greater extent by working-class speakers and in informal contexts, which overlaps with the patterning we find for Canadian English. The semantic roles eh has acquired are partly New Zealand-specific and partly overlap significantly with the Canadian variant. In New Zealand English its most common purpose is to signal, recheck or establish common ground with the interlocutor, but eh can also be used to check the comprehension of information, confirm shared background knowledge or seek reassurance of the listener’s continued attention. However, question and answer sentences discourage eh, quite unlike the Canadian variant. This wide range of usage may be partially due to the historical developments it has undergone since it arrived on New Zealand’s shores.

But where did this eh in New Zealand English come from exactly?

Whilst we cannot know for sure with the information we currently have, it seems very likely that eh came from Scots, where it is still found today. Previously, the general assumption was that New Zealand English derived mostly from the English of South East England, but we now know that a surprising number of words came from the north of Britain, particularly from Scots. The use of Scottish eh, or rather e (as it is commonly transcribed), is prevalent in some Scots varieties, such as Hawick Scots, and also in Edinburgh. Just like New Zealand eh, it has a falling intonation, although it is pronounced [e] rather than [æe]. E typically occurs with be and have, for example:

‘he had a stroke, e?’

There are a number of significant overlaps between use of eh in NZE and use of e in Scots. E can be used to confirm shared background knowledge, which matches its usage in NZE, where eh acknowledges the shared understanding between speakers. For example:

‘we know him quite well by now, e?’

Furthermore, both eh and e can also be used as a positive politeness feature to make a statement, opinion, or request less sharp and more polite. For example:

 ‘Put it down there, e’
 ‘I like Sambuca, e’

However, Scots e also appears readily in question and answer sentences, unlike NZE eh. For example:

‘he’s coming, e?’  
‘he isnae coming, e?’

We can see here that Scots e performs a number of functions, some of which have significant similarities with eh in NZE, and some which differ. So, if NZE eh possibly comes from e, how did it get into the accent?

Scottish e contributed to the rise of eh in New Zealand English through a process of new dialect formation. Historical dialect formation is (often) the result of a number of different dialects being brought into close proximity with one another in unique, isolated circumstances. Through various processes these form a new dialect. These processes have been categorised into five distinct periods by Peter Trudgill. Initially there is reduction and accommodation between the different dialects; the most dialectal features are discarded and ‘half-way’ features are frequently chosen. The next two steps involve further levelling (removing the strongest dialectal features) and modification through speaker convergence (speakers adapting their speech to make themselves more comprehensible). During this process one feature is chosen and becomes standardised; in this case it was eh rather than other tags that was chosen as the agreement marker. The final components of dialect formation are focussing and adoption by the wider community. These last steps are still ongoing today; use of eh is led by the youth in the NZE community.

One of the great things about the New Zealand dialect is that we actually have recordings spanning from the very first British settlers setting foot on New Zealand soil right up to present-day NZE. These recordings, stored in what is known as the ONZE (Origins of New Zealand English) corpus (https://www.canterbury.ac.nz/nzilbb/research/onze/), have allowed researchers to see (or rather hear) these processes of dialect formation in action. In the corpus, we found that use of eh was significantly higher in the region of Otago, which historically saw a high concentration of Scottish settlers. Unlike the rest of New Zealand, the dialect of this area has a number of Scottish-inspired features, including Scots vocabulary items and rhoticity. Furthermore, speakers with Scottish parents showed greater usage of eh, regardless of where they had settled in New Zealand. Small numbers of e were in fact present in the first wave of recordings (1860-1900), but these were gradually replaced by eh after 1900. So here we can see the stages of dialect formation taking off: initially e is present in the dialect, but through reduction, accommodation, and levelling, eh was chosen and has become widely adopted into everyday NZE over the last fifty years. However, this might not be the whole story.

Whilst it seems likely that eh came into NZE from Scots and pre-colonial varieties of English, the difference in pronunciation between the two is more difficult to account for. However, there is some precedent for minority language influence on New Zealand eh; various studies have found that Maori speakers, particularly males, were the most frequent users of eh. The particle eh is very similar both in pronunciation and function to the Maori tag particle nē (pronounced [næe]). It is possible that once eh was adopted by Maori speakers it would have been influenced by nē to produce a form similar in phonetic quality. The functions of eh also appear to have expanded, again through influence from nē.

This change in turn possibly influenced young Pakeha (non-Maori) speakers, who have shown increasing use of eh from around 1940 onwards. This gives us the particular ‘ay-ye’ pronunciation that is now in wide circulation, as well as the new meanings associated with eh. We can see this change happening shortly after increasing numbers of Maori were migrating to the cities in search of work, bringing them into greater contact with Pakeha speakers. The New Zealand Government also practised a policy of ‘pepper potting’ – the scattering of individual Maori families among Pakeha neighbours, in an effort to prevent the Maori community from clustering together in the cities. This naturally brought the two speaker groups into closer contact with one another, allowing for cross-dialectal influence.

So it appears that eh came initially from Scots and influenced the New Zealand English dialect. It was chosen as the invariant tag of choice, and was in use within the post-colonial population in New Zealand. This tag was then adopted by Maori speakers acquiring English and was influenced by their own tag particle, nē. The pronunciation changed, as did the particular uses of eh. This new form of the variant was then adopted by younger Pakeha speakers, and is now spreading through society, led by the youth.

But what about Canadian eh?

Again, there are similar possible links between the Scots e and Canadian eh. In 1851-61 there were several waves of British settlers to Canada, especially Scots and Irish immigrants, as part of a concerted effort by the British government to populate Canada. In 1901-11 another wave of British migrants, particularly Scots, settled in Canada. In the unsettled areas of the Ottawa Valley, the colonial lineage of Scottish and Irish accents remains to this day and can still be heard in the speech of some local speakers in the Ottawa basin.

So, it seems that eh could have spread via Scottish immigration during the colonial period. It concurrently underwent linguistic changes through new dialect formation to produce the forms that have surfaced in several colonial countries over time. Both the New Zealand and Canadian dialects have developed their own version of eh, but it seems that the roots of this particle in both dialects stem from the same source: Scots. Pretty cool, eh?

One Nation, Many Languages

Lies your geography teacher told you

We all know that each country has one and only one language, right?

In China they speak Chinese, in England they speak English, in Iran they speak Farsi, and each language is neatly contained within the borders of its respective state, immediately switching to another language as soon as these are crossed.

Well, if you’ve been reading our blog, you have probably become rather sceptical of categorical statements like this, and for good reason: it turns out, in fact, that a situation like the one described above is pretty much unheard of. Languages spread across borders, sometimes far into a neighbouring country, and even within the borders of a relatively small state it’s not uncommon to have four or five languages spoken, sometimes even more, and large countries can have hundreds or more.

Then there’s the island of New Guinea, which fits 1,000 languages (more than some continents) in an area slightly bigger than France.

And yet, this transparent lie is what we are all taught in school. Why? Well, you can thank those dastardly Victorians again.

Before the rise of nationalism in the late 18th century, it was common knowledge that languages varied across very short distances, and being multilingual was the rule, not the exception, for most people. Even as a peasant, you spoke the language of your own state and one or two languages from neighbouring countries (which at the time were probably a few miles away, at most). Sure, most larger political entities had lingua francas, such as Latin or a prestige language selected amongst the varieties spoken within the borders (usually the language of the capital), but this was never seen as anything more than a way to facilitate communication.

It was the Victorian obsession with national unity and conformity which slowly transformed all languages different from the arbitrarily chosen “national language” into marks of ignorance, provincialism, and, during the fever pitch reached in the 1930s, even treason; this led to policies of brutal language suppression, which resulted in the near-extinction of many of the native languages of Europe.

Why then is this kind of thing still taught in schools? Because, sad to say, things have only become slightly better since those dark times. Most modern countries still accept the “One Nation, One Language” doctrine as a fact of life without giving it a second thought. Some countries still proudly and openly enact policies of language suppression aimed at eliminating any language different from the national standard (je parle à toi, ma belle France…).

Which brings me to our case study: my own Italy.

La bella Italia

Given my tirade above, it should not come as a surprise to you now when I tell you that Italian is not the only language spoken in Italy. Not by a long shot. In fact, by some counts, there are as many as 35! The map below shows their distribution.

What is today known as Standard Italian (or simply Italian) is a rather polished version of the Tuscan language (shown as TO on the map). Why not Central Italian, the language of Rome? For rather complex reasons which have to do with the Renaissance, and which we won’t delve into here, lest this post become a hundred pages long.

Even though Italy stopped enforcing its language suppression policies after WWII, it is a sad fact that even the healthiest of Italian languages are today classified as “vulnerable” by UNESCO in its Atlas of the World’s Languages in Danger, with most of them in the “definitely endangered” category.

The Italian government only recognises a handful of these as separate languages, either because they’re so different it would be ludicrous to claim they’re varieties of Italian (such as Greek, Albanian and various Slavic and Germanic languages spoken in the North), or because of political considerations due to particularly strong separatist tendencies (such as Sardinian or Friulan, spoken in the Sardinia and Friuli-Venezia Giulia regions, respectively). All other languages have no official status, and are generally referred to as “dialects” of Italian, even though some are as different from Italian as French is![1]

Stereotypically, speaking one of these languages is a sign of poor education, sometimes even boorishness: in the popular eye, you’re not speaking a different language, you’re simply speaking Italian wrong.[2]

To see how deep the brainwashing goes, suffice it to say that it’s not uncommon, when travelling to areas where these languages are still commonly spoken, to address a local in Italian and receive an answer in the local language. When it becomes clear to them that you don’t understand a word of what they’re saying, the locals are often puzzled and surprised, because they’re sincerely convinced they’re speaking Italian!

To better highlight the differences between Italian and these languages, here’s the same short passage in Italian and in my own regional language, Emilian (Bologna dialect):

Italian

Si bisticciavano un giorno il Vento di Tramontana e il Sole, l’uno pretendendo d’esser più forte dell’altro, quando videro un viaggiatore, che veniva innanzi avvolto nel mantello. I due litiganti convennero allora che si sarebbe ritenuto più forte chi fosse riuscito a far sì che il viaggiatore si togliesse il mantello di dosso.

Emilian

Un dé al Vänt ed såtta e al Såul i tacagnèven, parché ognón l avêva la pretaiśa d èser pió fôrt che cl èter. A un zêrt pónt i vdénn un òmen ch’al vgnêva inànz arvujè int una caparèla. Alåura, pr arsôlver la lît, i cunvgnénn ch’al srêv stè cunsidrè pió fôrt quall ed låur ch’al fóss arivè d åura ed fèr in môd che cl òmen al s cavéss la caparèla d’indòs.

Pretty different, aren’t they?

You can hear the Italian version read aloud here, and here is the Emilian version[3].

Here’s the English version of the same passage for reference:

The North Wind and the Sun were disputing which was the stronger, when a traveller came along wrapped in a warm cloak. They agreed that the one who first succeeded in making the traveller take his cloak off should be considered stronger than the other.

It is pretty hard to argue that these two are the same language, and yet this is what most people in Italy believe, thinking of Emilian as a distorted or corrupted form of Italian.

Compare this to the situation during the Renaissance, when Emilian was actually a very prestigious language, to the point that Dante himself once wrote an essay defending it from those who would claim the superiority of Latin, calling it the most elegant of the languages of Italy.

Conclusion

Italy is by no means an isolated example, as I’ve already made clear in the first section of this post: wherever you go in the world, you’ll find dozens of languages being suppressed and driven to extinction due to myopic language policies left over from an era of nationalism and intolerance.

The good news is that the situation is improving: in Italy, regional languages are not stigmatised as they once were. In fact, many people take pride in speaking their local language, and steps are being taken to teach these languages to the youngest generations and to preserve them through literature and modern media. However, the damage done in the past is enormous, and it will take an equally enormous effort to restore these languages to the level of health they enjoyed a hundred years ago. For some of them it might very well be too late.

So if you speak a minority language, or know someone who does, take pride in it. Teach it to your children. They’re not “useless”, they’re not marks of poor education, they are languages, as dignified and deep as any national language.

And don’t mind the naysayers: whenever someone tells me Emilian is a language for farmers, incapable of the breadth of expression displayed by Italian, I remind them that when Mozart studied music in Bologna, he spoke Emilian, not Italian; and that when the oldest university in the western world opened its doors in 1088, and for 700 years after that, it was Emilian, not Italian, that was spoken in its halls.

  1. Lisa discussed the tricky question of what’s a language and what’s a dialect here
  2. The same thing that happens to Scots or AAVE. See here
  3. The passages are taken from a short story used to compare different Italian regional languages. All currently recorded versions can be found here.

 

Standardisation of languages – life or death?

Hello and happy summer! (And happy winter to those of you in the Southern Hemisphere!)

In previous posts we’ve thrown around the term ‘standard’, as in Standard English, but we haven’t really gone into what that means. It may seem intuitive to some, but this is actually quite a technical term that is earned through a lengthy process and, as is often the case, it is not awarded easily or to just any variety of a language. Today, I will briefly describe the process of standardising a variety and give you a few thoughts for discussion1. I want to stress that though we will discuss the question, I don’t necessarily think we need to find an answer to whether standardisation is “good” or “bad” – I don’t think either conclusion would be very productive. Still, it’s always good to tug a little bit at the tight boundaries we often put around the thought space reserved for linguistic concepts.

The language bohemian, at it again.

There are four processes usually involved in the standardisation of a language: selection, elaboration, codification, and acceptance.

Selection

It sure doesn’t start easy. Selection is arguably the most controversial of the processes, as this is the step that involves choosing which varieties and forms the standard will be based on. Often in history we find a standard being selected from a prestigious variety, such as the one spoken by the nobility. In modern times this is less comme il faut, as the nobility no longer have a monopoly on literacy and wider communication (thankfully). This can make selection even trickier, though: as the choice of a standard variety becomes more open, there is a greater need for sensitivity regarding who is represented by that standard and who isn’t. Selection may still favour an elite group of speakers, even if that group is no longer as clear-cut as a noble class. For example, a standard is often based on the variety spoken in the capital, or the cultural centre, of a nation. The selection of standard forms entails the non-selection of others, and the non-selected forms are then easily perceived as worse, which affects their speakers negatively – this becomes a particular issue when the standard is selected from a prestigious variety.

In my post about Scots, I briefly mentioned the problem of selection we would face in a standardisation of Scots, a variety which has great variation both within individual speakers and among different speakers (e.g. in terms of lects). Battling this same tricky problem, Standard Basque was mostly constructed from three Basque varieties, mixed with features of others. This standard was initially used mainly by the media and in formal writing, with no “real” speakers. However, as more and more previously non-Basque-speaking people in the Basque country started to learn the language, they acquired the standard variety, with the result that this group and their children now speak a variety of Basque which is very similar to the standard.

Elaboration

Standardisation isn’t all a minefield of prestige, though. A quite fun and creative process of standardisation is elaboration, which involves expanding the language to make it appropriate for use in all necessary contexts. This can be done by adapting or adopting words from other varieties (i.e. other languages or non-standard lects), by constructing new words using tools (like morphology) from within the variety that is becoming a standard, or by looking into archaic words from the history of the variety and putting them back into use.

When French was losing its prestige in medieval England, influenced no doubt by the Hundred Years’ War, an effort was initiated to elaborate English. Following the Norman Conquest, French had become the language used for formal purposes in England, while English survived as the language spoken by the common people. The elaboration a few hundred years later involved heavy borrowing of words from French (e.g. ‘government’ and ‘royal’) for use in legal, political, and royal contexts (and from Latin, mainly in medical contexts) – the result was that English could now be used in those situations it previously didn’t have appropriate words for (or where such words had not been in use for centuries)2.


Codification

Once selection and elaboration have (mostly) taken place, the process of codification cements the selected standard forms through, for example, the compilation of dictionaries and grammars. This does not always involve pronunciation, although it can, as it famously does with British Received Pronunciation (usually just called RP), a modern form of which is still encouraged for use by teachers and other public professionals. Codification is the process that ultimately establishes what is correct and what isn’t within the standard – this makes codification the sword of the prescriptivist, meaning that codification is used to argue what the right way to use the language is (y’all know by now what the HLC thinks of prescriptivism).

When forms are codified they are not easily changed, which is why we still see some bizarre spellings in English today. Codification is of course not only limiting (as with the spelling example) – there is an obvious benefit for communication if we all spell certain things the same way and don’t vary our word choices too much for the same thing or concept. Another benefit, and a big one at that, is that codified varieties are perceived as more real, and this is very important for speakers’ sense of value and identity.


Codification does not a standard make – most of you will know that many varieties have dictionaries without having a standard, Scots being one example. Urban Dictionary is another very good example of codification of non-standard forms.

Acceptance

The final process is surely the lengthiest and perhaps the most difficult to achieve: acceptance. It is crucial that a standard variety receives recognition as such, especially by officials and other influential speakers, but also by the general public. Speakers need to see that there is a use for the standard and that there is a benefit to using it (such as a gain in social standing or in a career). Generally, though, people don’t respond very well to being prescribed language norms, which we have discussed previously, so when standard forms have been selected and codified, this does not necessarily lead to people using these forms in their speech (as was initially the case with Standard Basque). Further, if the selection process is done without sensitivity, some groups may feel they have no connection to the standard, sometimes for social or political reasons, and may actively choose not to use it. Again, we find that a sense of identity is significant to us when it comes to language; it is important for us to feel represented by our standard variety.

What’s the use?

Ideally, a standard language can be seen as a way to promote communication within a nation or across several nations. Despite the different varieties of Arabic, for example, Arabic speakers are able to switch to a standard when communicating with each other, even if they are from different countries far apart. Likewise, a Scottish person can use Standard English when talking to someone from Australia, whereas if the same speakers switched back to their local English (or Scots) varieties, they wouldn’t necessarily understand each other. Standardisation certainly eases communication within a country too, and a shared standard variety can provide a sense of shared nationality and culture. There is definitely a point in having a written standard, used for our laws, education, politics, and other official purposes, which is accessible to everyone. On the other side of this, however, we find a counterforce: speaker communities wanting to preserve their lects and actively opposing a standard they can’t identify with.

So, a thought for discussion I want to leave with you today: do you think the process of standardisation essentially kills a language, or does it keep it alive? An argument for the first point is that standardisation limits variation3 – when a standard has been established and accepted, the varieties under that standard will naturally start pulling towards it as its prestige and use increase. However, standardising is also a way to officially recognise minority varieties, which gives speakers an incentive to keep their language alive. It is also a way to ease understanding between speakers (as explained earlier), and in some cases (like Basque), standardisation gives birth to a new variety acquired as a first language. As I said from the start, maybe we won’t find an answer to this, and maybe we shouldn’t, but it’s worth thinking about these matters in a more critical way.

Footnotes

1 I’ve used the contents of several courses, lectures, and readings as sources for this post. The four processes of standardisation are credited to Haugen (1966): ‘Dialect, language, nation’.

2 In fact, a large bulk of French borrowings into English comes from this elaboration, rather than from language contact during the Norman Conquest.

3 On a very HLC note, historical standardisation makes research into dialectal variation and language change quite difficult. The standard written form of Old English is based on the West Saxon variety, and there are far fewer documents to be found written in Northumbrian, which was a quite different variety and has played a huge part in the development of the English we know today.

 

That’s just bad English!

Hi there!

If you’ve read my mini-series about Scots (here are parts 1 and 2), you are probably more aware of this particular language, its history, and its complicated present-day status than before. With these facts in mind, wouldn’t you find it counterintuitive to think of Scots as “Bad English”? In this post, I want to, in a rather bohemian way, explore the problematic idea of Bad English. That is, I want to challenge the often constraining idea of what is correct and what deviates; once again, we will see that this has very much to do with politics and power1.

We have seen that Scots clearly has a distinct history and development, and that it once was a fully-functioning language used for all purposes – it was, arguably, an autonomous variety. However, during the anglicisation of Scots (read more about it here) English became a prestigious variety associated with power and status, and thus became the target towards which many speakers adapted their Scots. This led to a shift in the general perception of Scots’ autonomy, and today many are more likely to perceive Scots as a dialect of English – that is, perceive Scots as heteronomous to English. This means that instead of viewing Scots features, such as the ones presented in my last post, as proper language features, many would see them as (at best) quirky features or (at worst) bastardisations of English2.

As an example of how shifting heteronomy can be, back in the days when the south of (present-day) Sweden belonged to Denmark, the Scanian dialect was considered a dialect of Danish. When Scania (Skåne) became part of Sweden, it took less than 100 years for this dialect to become referred to as a dialect of Swedish in documents from the time. It’s quite unlikely that Scanian changed much in itself during that time. Rather, what had changed was which language had power over it. That is, which language it was perceived as targeting.

When we really get into it, determining what is Bad English gets more and more blurry, just like the distinction between language and dialect I discussed way back. There are several dialectal features which are technically “ungrammatical” but used so categorically in some dialects that calling them Bad English just doesn’t sit right. One such example is the use of was instead of were in, for example, Yorkshire: “You was there when it happened”. What we can establish is that Bad English is usually whatever diverges from (the current version of) Standard English, and this brings us to how such a standard is defined – more on this in a future post.

Scots is, unsurprisingly, not the only variety affected by the idea of Bad English. As Sabina recently taught us, a creole is the result of a pidgin (i.e. a mix of two or more languages to ease communication between speakers) gaining native speakers3. This means that a child can be born with a creole as their first language. Further to this, creoles, just like older languages, tend to have distinct grammatical rules and vocabularies. Despite this, many will describe for example Jamaican Creole as “broken English” – I’m sure this is not unfamiliar to anyone reading. This can again be explained by power and prestige: English, being the language of colonisers, was the prestigious target, just like it became for Scots during the anglicisation, and so these creoles have a hard time losing the image of being heteronomous to English even long after the nations where they are spoken have gained independence.

In the United States, there is a lect which linguists call African-American Vernacular English (AAVE), sometimes called Ebonics. As the name suggests, it is mainly spoken by African-Americans, and most of us would be able to recognise it from various American media. This variety is another which is often misunderstood as Bad English, when in fact it carries many similarities to a creole: during the slave trade era, many of the slaves arriving in America would have had different first languages, and likely developed a pidgin to communicate both amongst themselves and with their masters. From there, we can assume that an early version of AAVE would have developed as a creole which is largely based on English vocabulary. In fact, AAVE shares grammatical features with other English-based creoles, such as using be instead of are (as in “these bitches be crazy”, to use an offensively stereotypical expression). If AAVE speakers were not living in an English-speaking nation, maybe their variety would have continued to develop as an independent creole like those in, for example, the Caribbean nations?

Besides, what is considered standard in a language often changes over time. A feature which is often used to represent “dumb” speech is double negation: “I didn’t do nothing!”. The prescriptivist smartass would smirk at such expressions and say that two negations cancel each other out, and using double negation is widely considered Bad English4. However, did you know that double negation was for a long time the standard way of expressing negation in English? It was actually used by the upper classes until it reached commoner speech, and thus became less prestigious5. This is another example of how language change also affects our perception of what is right and proper – and as Sabina showed us a while ago, language changes will often be met with scepticism and prescriptivist backlash.

What the examples I’ve presented show us is that less prestigious varieties are not necessarily in the wrong just because they deviate from a standard that they don’t necessarily “belong to” anyway. It can also be argued that, in many cases, classing a variety as a “bad” version of the language in power is just another way of maintaining superiority over the people who speak that variety. The perception of heteronomy can be a hindrance even for linguists when studying particular varieties; this may be one reason why Scots grammar is still relatively under-researched. When we shake off these very deep-rooted ideas, we may find interesting patterns and developments in varieties which can tell us even more about our history, and about language development at large. Hopefully, this post will have created some more language bohemians out there, and more tolerance for Bad English.

Footnotes

1While this post focuses on English, this can be applied to many prestigious languages and in particular those involved in colonisation or invasions (e.g. French, Dutch, Spanish, Arabic, etc.)

2Within Scots itself there are also ideas of what is “good” and what is “bad”: Urban Glaswegian speech is an example of what some would call ‘bad Scots’. Prestige is a factor here too – it is not surprising that it’s the speech of the lower classes that receives the “bad” stamp.

3 Not all creoles are English-based, of course. Here is a list of some of the more known creoles and where they derive from.

4There are other languages which do fine with double negation as their standard, without causing any meaning issues – most of you may be familiar with French ne…pas.

5Credit goes to Sabina for providing this example!

It’s all Greek to me!

 

Or, How No Language is Any More (or Less) Difficult than Any Other

Lessons I learned from Latin

How did Latin speakers remember which case a word goes in, and its form, as they spoke? We probably all wondered about this question at some time or another. I remember studying Latin in middle school (it’s mandatory in Italy) and being absolutely baffled at the thought that such a byzantine language could have been spoken fluently at some time in the past as I struggled to learn by heart dozens of declension tables as well as lists of environments which required the presence of some case or another (and even longer lists of exceptions to those lists!). The Romans must have been geniuses with prodigious memories who would probably find Italian a ridiculously simple and unsophisticated language to learn.

Then one day, in high school, I stumbled upon a textbook which used a different method to teach Latin from the one I was used to: it taught it as a living language. No more declension tables, no more long lists of baroque rules, no more grand examples of complicated rhetorical stylings; instead, it had everyday dialogues, going from simpler to more complex, and bite-sized grammar sections. Suddenly, Latin became easy: with the help of a dictionary, I could read and write in it with a reasonable degree of proficiency (which, alas, I’ve largely lost).

Had I become a genius? Did I start seeing my native Italian as a boorish, simplified version of the language of Rome? Absolutely not. All that changed was the way the language had been taught to me. That was the day I learned that no language is any more difficult than any other. Also, everything’s easier when you learn it as a baby, and the Romans had spoken Latin since birth, no declension tables necessary.

Latin is by no means the only language to be considered particularly difficult: we’ve all heard how difficult it is to learn Chinese, with all those ideographs[1] to learn, and with words being so ambiguous and whatnot; or Finnish, which has 15 cases and innumerable verbal inflections. Also, it’s a national pastime for everyone[2] to regard their language as the most complex to learn for foreigners, because that makes you feel oh-so-intelligent.

The idea that some languages are inherently more complex than others is, unsurprisingly, another legacy of the dastardly Victorians and their colonialist obsession with ethnocentric nationalism.

It was, of course, in the interest of Eurocentric racists to paint foreign languages as being either primitively simple and unsophisticated, or bizarrely and unnecessarily complicated (damned if you do, damned if you don’t). If this sounds familiar, it’s probably because you’ve read our post on phonaesthetics a few weeks ago, where we found out that the same reasoning was applied to how a language sounds.

Those Victorians… never happy until they’ve enslaved, massacred or culturally neutered someone different from them. Bless their little hearts.

Scientists estimate that a greater-than-average amount of moustache-twirling went into the making of this linguistic prejudice

My task today is to show you how this is not really true at all, and how your failure to realise your dream of learning Ahkwesásne Mohawk is due more to a lack of proper learning materials than to any difficulty inherent in the language itself.

It all depends on your point of view

So, am I saying that all languages are equally simple in all their aspects? Well, no. While all languages are more or less equally complex, how that complexity is distributed changes from language to language. For example, while it is undeniably true that Finnish is far more morphologically complex than English, phonologically speaking English makes it look like toddler babbling.

Amazingly, although complexity might be distributed differently from language to language, overall the different parts balance out to make languages more or less as complex as each other. We don’t really know how this happens: various mechanisms have been proposed, but they all have fatal flaws. It is one of the great mysteries of linguistics.[3]

“But why do I find French so difficult, Riccardo?” you scream through a haze of tears as you once again fail to understand how the past subjunctive is of any use in any language ever. Well, the answer is that how difficult a language is to learn for you depends on your first language. Specifically, the more similar two languages are in their distribution of complexity, the easier it is for speakers of each to learn the other. If the languages are related, then it becomes even easier.[4] So, Mandarin Chinese might well be very difficult to learn for an English speaker, due to its very simple morphology, rigid syntactic structure and tonal phonology; but, say, a Tibetan speaker would find it much easier to learn than English, because the two languages are distantly related, and therefore have similar structure.

The moral of the story

And so, once again, we come to the end of a post having dispelled another widespread linguistic misconception.

Even though these myths might seem rather innocuous, they have real and sometimes very serious consequences. The idea that some languages are more or less complex or difficult to learn than others has, over the centuries, been used to justify nationalist, racist, and xenophobic sentiments which have ultimately resulted in suffering and sometimes even genocide.

What we need to do with languages is learn them, share them, preserve them, and speak them, not pit them against each other in a competition over which is the best, most “logical”, most difficult or best-sounding one.

So enjoy the amazing diversity of human languages, people!

Stay tuned for next week, when Sabina will answer the old question: is English really three languages stacked upon each other wearing a trenchcoat?

  1. They’re not actually ideographs, they’re logographs, but that’s a topic for another post.
  2. Except for English speakers, who, for various reasons, have convinced themselves that their language is stupid, unsophisticated, illogical and boring. More on this in a future post.
  3. It is important to note that this rule does not apply to pidgins and (young) creoles, due to the way they were formed, as pointed out by John McWhorter (2011). These languages truly are simpler than all others. This, however, does NOT make them any more “primitive” or “less expressive”.
  4. Paradoxically, if two languages are TOO closely related, it becomes slightly more difficult for their speakers to learn the other, because they tend to over-rely on the similarities and end up tripping up on the differences.

Phonaesthetics, or “The Phrenology of Language”

 

Stop me if you’ve heard this before: French is a beautiful, romantic language; Italian sounds like music; Spanish is passionate and primal; Japanese is aggressive; Polish is melancholic; and German is a guttural, ugly, unpronounceable mess (Ha! Tricked you! You couldn’t stop me because I’ve written all of this down way before now. Your cries and frantic gesticulations were for naught.)

We’ve all heard these judgements (and many others) repeated multiple times over the course of our lives; not only in idle conversation, but also in movies, books, comics, and other popular media. There’s even a series of memes dedicated to mocking how German sounds in relation to other European languages:

“Ich liebe dich” is a perfectly fine and non-threatening way of expressing affection towards another human being

What you might not know is that this phenomenon has a technical name in linguistics: phonaesthetics.[1]

Phonaesthetics, in short, is the hypothesis that languages are objectively more or less beautiful or pleasant depending on various parameters, such as vowel to consonant ratio, presence or absence of certain sounds etc., and, not to put too fine a point on it, it’s a gigantic mountain of male bovine excrement.

Pictured: phonaesthetics

Let me explain why:

A bit of history

Like so many other terrible ideas, phonaesthetics goes way back in human history. In fact, it may have been with us since the very beginning.

The ancient Greeks, for example, deemed their language the most perfect and beautiful and thought all other languages ugly and ungainly. To them, these foreign languages all sounded like strings of unpleasant sounds: a mocking imitation of how they sounded to the Greeks, “barbarbarbar”, is where we got our word “barbarian” from.

In the raging (…ly racist) 19th century, phonaesthetics took off as a way to justify the rampant prejudice white Europeans had against all ethnicities different from their own.

The European elite of the time arbitrarily decided that Latin was the most beautiful language that ever existed, and that the aesthetics of all languages would be measured against it. That’s why Romance languages such as Italian or French, which descended from Latin[2], are still considered particularly beautiful.

Thanks to this convenient measuring stick, European languages were painted as euphonious (‘pleasant sounding’), splendid monuments of linguistic accomplishment, while extra-European languages were invariably described as cacophonous (‘unpleasant sounding’), barely understandable masses of noise. This period is when the common prejudice that Arabic is a harsh and unpleasant language arose, a prejudice that is easily dispelled once you hear a mu’adhin chant passages from the Qur’an from the top of a minaret.

Another tool in the racist’s toolbox, very similar to phonaesthetics, and invented right around the turn of the 19th century, was phrenology, or racial-biology, the pseudoscience which alleged to be able to discern a person’s intelligence and personality from the shape of their head. To the surprise of no one, intelligence, grace and other positive characteristics were all associated with the typical form of a European white male skull, while all other shapes indicated shortcomings in various neurological functions. What a pleasant surprise that must have been for the European white male inventors of this technique![3] Phrenology was eventually abandoned and widely condemned, but phonaesthetics, unfortunately, wasn’t, and it’s amazingly prevalent even today.

To see how prevalent this century-old model of linguistic beauty is in popular culture, a very good example are Tolkien’s invented languages. For all their amazing virtues, Tolkien’s novels are not exactly known for featuring particularly nuanced moral actors: the good guys might have some (usually redeemable) flaws, but the bad guys are just bad, period.

Here’s a brief passage in Quenya, the noblest of all Elven languages:

Ai! Laurië lantar lassi súrinen,

Yéni únótimë ve rámar aldaron!

Yéni ve lintë yuldar avánier

Mi oromardi lissë-miruvóreva

[…]

Notice the high vowel-to-consonant ratio, the prevalence of liquid (“l”, “r”), fricative (“s”, “v”) and nasal (“n”, “m”) sounds, all characteristic of Latinate languages.

Now, here’s a passage in the language of the Orcs:

Ash nazg durbatulûk, ash nazg gimbatul

Ash nazg thrakatulûk, agh burzum-ishi krimpatul

See any differences? The vowel-to-consonant ratio is almost reversed, and most syllables end with a consonant. Also, notice the rather un-Latinate consonant combinations (“zg”, “thr”), and the predominance of stops (“d”, “g”, “b”, “k”). It is likely that you never thought about what makes Elvish so “beautiful” and “melodious”, and Orcish (or Klingon, for that matter), so harsh and unpleasant: these prejudices are so deeply ingrained that we don’t even notice they’re present.
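If you’re curious, you can even put a rough number on this difference with a few lines of Python. The sketch below is my own illustration, not part of the original comparison: it assumes a plain five-letter vowel set (so y counts as a consonant), strips accents so that ë, ú and û are counted as their base vowels, and ignores spaces and punctuation. It then prints the vowel-to-consonant ratio of the two passages quoted above.

import unicodedata

VOWELS = set("aeiou")

def vowel_consonant_ratio(text: str) -> float:
    """Return (number of vowel letters) / (number of consonant letters)."""
    # Decompose accented characters so that combining marks can be skipped.
    decomposed = unicodedata.normalize("NFD", text.lower())
    letters = [ch for ch in decomposed if ch.isalpha()]
    vowels = sum(1 for ch in letters if ch in VOWELS)
    return vowels / (len(letters) - vowels)

quenya = ("Ai! Laurië lantar lassi súrinen, "
          "yéni únótimë ve rámar aldaron! "
          "Yéni ve lintë yuldar avánier "
          "mi oromardi lissë-miruvóreva")

orcish = ("Ash nazg durbatulûk, ash nazg gimbatul, "
          "ash nazg thrakatulûk, agh burzum-ishi krimpatul")

print(f"Quenya vowel/consonant ratio: {vowel_consonant_ratio(quenya):.2f}")
print(f"Orcish vowel/consonant ratio: {vowel_consonant_ratio(orcish):.2f}")

Whatever exact numbers you get under these assumptions, the Quenya ratio comes out clearly higher than the Orcish one, which is the point: the “beauty” of Elvish is built out of the same ingredients as the Latinate languages the Victorians idolised.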

So why is phonaesthetics “wrong”?

Well, the reason is actually very simple: beauty is subjective and cannot be scientifically defined. As they say, beauty is in the eye of the beholder.

Not this beholder.
Image copyright: Wizards of the Coast

What one finds “beautiful” is subject to change both in space and in time. If you think German’s relatively low vowel-to-consonant ratio is “harsh”, then you have yet to meet Nuxálk.

Welcome to phonaesthetic hell.

Speaking of German, it is actually a very good example of how these supposedly “objective” and “common sense” criteria of phonetic beauty can change with time, sometimes even abruptly. You see, in the 19th century, German was considered a very beautiful language, on par with Italian or French. A wealth of amazing prose and poetry was written in it: it was probably the main language of Romantic literature. It was also the second language of opera, after Italian, and was routinely described as melodious, elegant and logical.

Then the Nazis came.

Nazis: always ruining everything.

Suddenly, Germans were the bad guys. No longer the pillars of European intellectual culture, their language became painted as harsh, aggressive, unfriendly and cold, and suddenly every Hollywood villain and mad scientist acquired a German accent.

So, what’s the takeaway from this long and rambling rant?

No language is more, or less, beautiful than any other language. All languages have literature, poetry, song and various other ways to beautifully use their sounds for artistic purposes, and the idea that some are better at this than others is a relic from a prejudiced era better left behind. So next time you feel tempted to mock German for how harsh and unpleasant it sounds, stop and consider that this may not actually be your own judgement, but something a century of social prejudice has programmed you into thinking.

And read some Goethe, you’ll like it.

Stay tuned for next week, when the amazing Rebekah will bring you on the third leg of our lightning trip through Phonphon!

  1. Phonaesthetics also has a different meaning, which is the study of how certain combinations of sounds evoke specific meanings in a given language. Although this form of phonaesthetics has its problems, too, it is not what I’m talking about in this post, so keep that in mind as we go forward.
  2. See our post on language families here.
  3. First the men assumed that the female skull was smaller than the male, and this was obviously a sign of their inferior intelligence. Later, however, they found that the female skull was larger, so they came up with the idea that this meant females were closer to children, and thus the male was still more intelligent! – Lisa

The Sapir-Whorf Hypothesis

 

“the Sapir-Whorf hypothesis is the theory that the language you speak determines how you think”

 

So says the fictional linguist Louise Banks (ably played by Amy Adams) in the sci-fi flick ‘Arrival’ (2016). The movie’s plot relies rather heavily on the Sapir-Whorf hypothesis, also known as the principle of linguistic relativity – so heavily, in fact, that the entire plot would be undone without it.

But what is the Sapir-Whorf hypothesis, really? Before digging into why ‘Arrival’ may have gotten it a bit… well, off, a word of caution: If you haven’t seen the movie (and intend to do so), go ahead and do that before reading the rest of this post because there will be SPOILERS!!!


Now that you have been duly warned, let’s get going.

The Sapir-Whorf hypothesis is, in a way, what Louise Banks describes: it is in part a hypothesis claiming that language determines the way you think. This idea is called linguistic determinism and is actually only one half of the Sapir-Whorf hypothesis.

Commonly known as the “strong” version of Sapir-Whorf, linguistic determinism holds that language limits and determines cognitive categories, thereby limiting our worldview to that which can be described in the words of whatever language we speak. Our worldview, and our way of thinking, is thus determined by our language.

That sounds pretty technical, so let’s use the example provided by ‘Arrival’:

The movie’s plot revolves around aliens coming to earth, speaking a language that is completely unknown to mankind. To try to figure out what they want, the movie linguist is called in. She manages to figure out their language pretty quickly (of course), realising that they think of time in a non-linear way.

This is quite a concept for a human to grasp since our idea of time is very linear. In western societies, we commonly think of time as a timeline going from left to right, as below.

A --------- B --------- C

Let’s say that we are currently at point C of our timeline. We can probably all agree that, as humans, we cannot go back in time to point A, right? However, in ‘Arrival’, we are given the impression that the reason we can’t do that is that our language doesn’t let us think about time in a non-linear way. That is, because our language doesn’t allow us to conceive of non-linear time, we can’t go back in time. Sounds a bit wonky, doesn’t it?

Well, you might be somewhat unsurprised to hear that this “strong” version has been discredited in linguistics for quite some time now and, for most modern-day linguists, it is a bit silly. Yet, we can’t claim that language doesn’t influence our way of thinking, can we?

Consider the many bi-/multilinguals who have stated that they feel kinda like a different person when speaking their second language. If you’ve never met one, we bilinguals at the HLC can vouch for that feeling.

Why would they feel that way, if language doesn’t affect our way of thinking? Well, of course, language does affect our way of thinking, it just doesn’t determine it. This is the ‘weak’ version of Sapir-Whorf, also known as linguistic relativism.

The weak version may be somewhat more palatable to you (and us): it holds that language influences our way of thinking but does not determine it. Think about it: if someone were to point out a rainbow to you and you had no word for the color red, you would still be able to perceive that that color was different from the others.

If someone were to discover a brand-new color (somewhat mind-boggling, I know, but just consider that), you would be able to explain that this is a color for which you have no word but you would still be able to see it just fine.

That might be the clearest distinction between linguistic determinism and linguistic relativism: the former would claim that you wouldn’t be able to perceive the color at all, while the latter would say that you’ll see it just fine – you just don’t have a word for it.

So, while ‘Arrival’ was (at least in my opinion) a pleasant waste of time, when it comes to the linguistics of it, I’d just like to say:


(Oh, and on a side note, the name of the hypothesis (i.e. Sapir-Whorf) is actually quite misleading, since Sapir and Whorf never made a joint effort to formalise the hypothesis.)

Tune in for more linguistic stuff next week when the marvellous Rebekah will dive into the phonology of consonants (trust me, you have a treat coming)!

 

Too much linguistics, too little time

Hello, it’s me, Lisa, again. I just couldn’t stay away! This week, I have been given the challenging task of outlining the subfields of linguistics1. The most common responses I get when I tell people I study linguistics are variations of “What is that?” and “What can you do with that?”. This leads me to explain extremely broadly what linguistics is (eh, er, uhm, the science of languages? Like, how they work and where they come from… But I don’t actually learn a language! I just study them. One language or lots of them. Sort of.), and then I describe the various professions you can go into after studying linguistics. What all of those professions have in common is that I can do none of them, since they are related to subfields of linguistics that I haven’t specialised in (looking at you, forensic and applied linguistics). My own specialties, historical linguistics and syntax, lead to nothing but long days in the library and crippling student debt, but let’s not dwell on that.

Linguistics is a minefield of subdisciplines. To set the scene, look at this very confusing mind-map I made:

Now ignore that mind-map, because it does you no good. It’s highly subjective and inconclusive. However, it does demonstrate that, although these subfields are distinct, they end up intersecting quite a lot. At some point in their career, linguists need to use knowledge from several areas, no matter what their specialty. So as not to wear you out completely, I’m focusing here on the core areas of linguistics: phonetics and phonology (PhonPhon for short2), syntax, morphology, and semantics. I will also briefly talk about sociolinguistics and pragmatics3.

Right, let’s do this.

Phonetics and Phonology

Let’s start with the most recognisable and fundamental component of spoken language: sounds!

The phonetics part of phonetics and phonology is kind of the natural science of linguistics – its physics and biology. In phonetics, we describe speech production by analysing sound waves, vocal fold vibrations and the position of the anatomical elements of the mouth and throat. We use cool Latinate terms, like alveolar and labiodental, to formally describe sounds, as in voiced alveolar fricative (= the sound /z/ in zoo). The known possible sounds speakers can produce in the languages of the world are described by the International Phonetic Alphabet (IPA), which Rebekah will tell you all about next week4.

The phonology part of phonetics and phonology concerns itself with how these phonetic sounds organise into systems and how they’re used in languages. In a way, phonetics gives the material for phonology to build a language’s sound rule system. Phonology figures out, for example, what sounds can go together and what syllables are possible. All humans with a well-functioning vocal apparatus are able to produce the same sounds, yet different languages have different sound inventories; for example, English has a sound /θ/, the sound spelled <th> as in thing, while Swedish does not. Phonology maps these inventories and explains the rules and mechanisms behind them, looking both within one language and comparatively between languages.

Speaking of Rebekah, she summarised the difference between Phonetics and Phonology far more eloquently than I could so I’ll quote her: “Phonetics is the concrete, physical manifestation of speech sounds, and phonology is kind of the abstract side of it, how we conceptualize and store those sounds in our mind.”

Syntax (and morphology, you can come too)

Begin where I are doing to syntax explained?

Why this madness?! you may exclaim upon reading the above sentence. That, friends, is what it looks like to break the rules of syntax; the sentence above has a weird word order and the wrong inflections on the verbs. The same sentence obeying the rules would be: Where do I begin to explain syntax?

Syntax is one of my favourite things in the world, up there with cats and OLW Cheez Doodles. The syntax of a language is the rule system which organises word-like elements into clause structures based on the grammatical information that comes with each element. In plain English: Syntax creates sentences that look and sound right to us. This doesn’t only affect word order, but also agreement patterns (syntax rules make sure we say I sing, she sings and not I sings, she sing), and how we express semantic roles5. Syntax is kind of like the maths of linguistics; it involves a lot of problem solving and neat solutions with the aim of being as universal and objective as possible. The rules of syntax are not sensitive to prescriptive norms – the syntax of a language is a product of the language people actually produce and not what they should produce.

Morphology is, roughly, the study of word-formation. Morphology takes the smallest units of meaningful information (morphemes), puts them together if necessary, and gives them to syntax so that syntax can do its thing (much like how phonetics provides material for phonology, morphology provides material for syntax). A morpheme can be an independent word, like the preposition in, but it can also be the -ed at the end of waited, telling us that the event happened in the past. This contrasts with phonology, which deals with units that are not necessarily informative; the ‘ed’ in Edinburgh is a phonological unit, a syllable, but it gives us no grammatical information and is therefore not a morpheme. Languages can have very different types of morphological systems. English tends to separate informative units into multiple words, whereas languages like Swahili can express whole sentences in one word. Riccardo will discuss this in more detail in a few weeks.

Semantics (with a pinch of pragmatics)

Semantics is the study of meaning (she said, vaguely). When phonetics and phonology have taken care of the sounds, and morphology and syntax have created phrases and sentences from those sounds, semantics takes over to make sense of it all – what does a word mean, what does a sentence mean, and how does that interact with and/or influence the way we think? Let’s attempt an elevator pitch for semantics: semantics discusses the relationship between words, phrases and sentences and the meanings they denote; it concerns itself with the relationship between linguistic elements and the world in which they exist. (Have you got a headache yet?).

If phonetics is the physics/biology of linguistics and syntax is the maths, semantics is the philosophy of linguistics, both theoretical and formal. In my three years of studying semantics, we went from discussing whether a sentence like The King of France is bald is true or false (considering there is no king of France in the real world), to translating phrases and words into logical denotations (for example, VP-coordinating and: ⟦and⟧ = λP[λQ[λx[P(x) ∧ Q(x)]]]), to discussing universal patterns in linguistics where semantics and syntax meet and the different strategies languages use to adhere to these patterns, for example how Mandarin counts “uncountable” nouns.
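If that lambda soup makes your eyes glaze over, here’s a tiny toy sketch of the same idea in Python (the predicates sings and dances, and the little model behind them, are made up purely for illustration):

```python
# Toy model: one-place predicates are functions from individuals to True/False.
sings = lambda x: x in {"Mary", "Sue"}    # sings holds of Mary and Sue
dances = lambda x: x in {"Sue", "Bill"}   # dances holds of Sue and Bill

# The denotation of "and": λP[λQ[λx[P(x) ∧ Q(x)]]] – it takes two predicates
# and returns a new predicate that is true of x only if both originals are.
AND = lambda P: lambda Q: lambda x: P(x) and Q(x)

# "sings and dances" then denotes λx[sings(x) ∧ dances(x)]
sings_and_dances = AND(sings)(dances)

print(sings_and_dances("Sue"))   # True: Sue both sings and dances
print(sings_and_dances("Mary"))  # False: Mary sings but doesn't dance
```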

Pragmatics follows semantics in that it is also a study of meaning, but pragmatics concerns the way we interpret utterances. It is much more concerned with discourse, language in actual use and language subtexts. For example, pragmatics can describe the mechanisms involved when we interpret the sentence ‘it’s cold in here’ to mean ‘can you close the window?’.

Sociolinguistics and historical linguistics

Sociolinguistics has given me about 80% of my worthy dinner table conversations about linguistics. It is the study of the way language interacts with society, identity, communities and other social aspects of our world, and it also includes the study of geographical dialects (dialectology). Sociolinguistics is essentially the study of language variation and change within the above areas, both at a specific point in time (synchronically) and across a period of time (diachronically); my post last week, as well as Riccardo’s and Sabina’s posts in the weeks before, dealt with issues relevant for sociolinguistics.

When studying the HLC’s speciality, historical linguistics, which deals with the historical variation and change of language(s), we often need to consider sociolinguistics as a factor in why a certain historical language change has taken place, or why we see variation in the linguistic phenomenon we’re investigating. We also often need to draw on several other fields of linguistics in order to understand a phenomenon, which can play out something like this:

  • Is this strange spelling variation found in this 16th century letter because it was pronounced differently (phonetics, phonology), and if so, was it because of a dialectal difference (sociolinguistics)? Or, does this spelling actually indicate a different function of the word (morphology, semantics)?
  • What caused this strange word order change starting in the 14th century? Did it start within the syntax itself, triggered by a different, earlier change, or did it arise as a way of focusing the reader’s attention on something specific in the clause (information structure, pragmatics)? Or did that word order arise because this language was in contact with speakers of another language which had that word order (sociolinguistics, typology)?

To summarise: phonetics and phonology give us sounds and organise them. The sounds become morphemes, which are put into the syntax. The syntactic output is then interpreted through semantics and pragmatics. Finally, the external context in which this all takes place and is interpreted is dealt with by sociolinguistics. Make sense?

There is so much more to say about each of these subfields; it’s hard to do any of them justice in such a brief format! However, the point of this post was to give you a foundation to stand on when we go into these topics more in-depth in the future. If you have any questions or anything you’d like to know more about, you can always comment or email, or have a look at some of the literature I mention in the footnotes. Next week, Rebekah will give us some background on the IPA – one of the most important tools for any linguist. Thanks for reading!

Footnotes

 

1I had to bring out the whole arsenal of introductory textbooks to use as inspiration for this post. Titles include, but are not limited to: Beginning Linguistics by Laurie Bauer; A Practical Introduction to Phonetics by J.C. Catford; A Historical Syntax of English by Bettelou Los; What is Morphology? by Mark Aronoff and Kirsten Fudeman; Meaning: A Slim Guide to Semantics by Paul Elbourne; Pragmatics by Yan Huang; and Introducing Sociolinguistics by Miriam Meyerhoff. I also consulted old lecture notes from my undergraduate studies at the University of York.

2This is of course not an official term, just a nickname used by students.

3We’ll hopefully get back to some of the others another time. For now, if you are interested, a description of most of the subfields is available from a quick google search of each of the names you find in the mind map.

4If you want a sneak peek, you can play around with this interactive IPA chart where clicking a sound on the chart will give you its pronunciation.

5This is more visible in languages that have an active case system. English has lost case marking on nouns, but we can still see the remains of the English case system on the pronouns (he – him – his).