Stressing syllables in a word or sentence (or in music) conveys emotion or other paralinguistic information. The same holds for the melodic structure of an utterance. Speech can be monotonic or very lively, and this melodic pattern communicates something that goes beyond the ‘basic meaning’ alone. But how much can a different intonation, or a melodic speech pattern alone, change the meaning of a word?
Well, that depends on the language you speak. For speakers of English, Dutch or German, pronouncing a word in a different way does not change the meaning of the word. But not so in Chinese, Yoruba or Bantu languages, among others. In these languages one word can be pronounced in several ways, and each pronunciation denotes a different meaning. In some tonal languages, grammatical information is even distinguished solely by tone. So what differentiates these so-called tonal languages from non-tonal languages, such as English, is that they disambiguate word meanings based on speech melody.
I was thinking about this topic while trying to get a better understanding of how much of our speech can be related to musical intervals. Initially I thought that I would be able to find some consistent interval/melodic patterns for specific speech utterances. But this did not really seem to be the case. Rather, the basic melody of specific words spoken by one person could change from one situation to the next. It would actually never really be the same. This indeed supports the hypothesis that tonality does not play a very important role in constituting meaning in a language such as English. I would, on the other hand, expect to find more consistent melodic patterns in a tonal language such as Chinese.
However, I did find some interesting patterns with respect to the interval distances between the first and second formants. The formants (overtones) in speech seem to be related to the natural harmonic series. The intervals between the F1 and F2 formants, for instance, show a preference for intervals that we also find in the harmonic series, especially the unison, perfect 5th and major 3rd. This seems to show that music and speech are indeed deeply related! But, obviously, it still leaves open the question: what came first, speech or music?
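The arithmetic behind this comparison can be sketched in a few lines: the F2/F1 ratio is folded into a single octave and matched against simple just-intonation ratios from the harmonic series. This is only an illustrative sketch, assuming hypothetical formant values and a small hand-picked interval table; it is not the actual analysis method used here.

```python
from math import log2

# Illustrative table of simple just-intonation intervals (assumed, not
# taken from any measurement): name -> frequency ratio.
JUST_INTERVALS = {
    "unison": 1 / 1,
    "major 3rd": 5 / 4,
    "perfect 4th": 4 / 3,
    "perfect 5th": 3 / 2,
    "octave": 2 / 1,
}

def nearest_interval(f1_hz: float, f2_hz: float) -> tuple[str, float]:
    """Return the just interval closest (in cents) to the F2/F1 ratio,
    after folding the ratio into a single octave, plus the deviation."""
    ratio = f2_hz / f1_hz
    while ratio >= 2.0:          # fold into one octave
        ratio /= 2.0
    cents = 1200 * log2(ratio)   # 100 cents = one equal-tempered semitone
    name, ref = min(
        JUST_INTERVALS.items(),
        key=lambda kv: abs(cents - 1200 * log2(kv[1])),
    )
    deviation = cents - 1200 * log2(ref)
    return name, round(deviation, 1)

# Hypothetical vowel formants, roughly in the range of an /a/ vowel:
print(nearest_interval(700, 1100))
```

With these example values the F2/F1 ratio of about 1.57 lands closest to the perfect 5th (3/2), which is the kind of comparison the pattern above is based on.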
Moreover, the rhythmic structure does seem to be more consistent among speakers, which also makes sense. Most speakers find it strange when somebody uses an unusual intonation or rhythmic structure (though that does not mean we cannot understand this person!), which can easily happen when speaking a foreign language. But I still need to investigate this question a little further.
Anyway, this information raises some interesting issues to think about next:
- How much and what type of meaning does a melodic pattern of a non-tonal language carry?
- How much of this information can actually be translated to music?
- What type of information do the rhythmic structure and intonation of words and syllables in a sentence carry?
- How much of this information can be translated to music?
- Would a speech to music translation for a tonal language be able to translate word meaning into music?
Tidbits of Inspiration:

- The Language of the Prairie Dogs (undiscoveredauthor.wordpress.com)
- Perfect Pitch (psychologytoday.com)
- Chinese Tones (nosparoles.wordpress.com)