The Evolution of Emotional Communication: From Sounds in Nonhuman Mammals to Speech and Music in Man

Based on these results, we had to correct our assumptions of familiarity. Thus, we classified agonistic and affiliative human infant and dog voices, as well as agonistic chimpanzee voices, as familiar, and affiliative chimpanzee voices as well as agonistic and affiliative tree shrew voices as unfamiliar. Comparing the results for the objective and assumed familiarity ratings, we found slight discrepancies. For example, although neither tree shrew voice type could be correctly labelled, participants rated agonistic tree shrew voices as more familiar than affiliative tree shrew voices.

Agonistic tree shrew voices received a middle-sized assumed familiarity score of 2. This indicates that context had different effects on the VI self depending on the species. As a break-down analysis, we used one-sample t-tests to analyze whether the VI self was significantly different from zero, indicating an induced positive emotional response (positive VI self) or negative emotional response (negative VI self) for each playback category (Figure 3a).
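The break-down analysis can be sketched as follows. The ratings and the scale below are hypothetical assumptions (the article's exact valence-index computation is not reproduced here); only the test logic, a one-sample t-test against zero, follows the text above.

```python
import math

def one_sample_t(values, mu=0.0):
    """Return (t, df) for a one-sample t-test of `values` against mean `mu`."""
    n = len(values)
    mean = sum(values) / n
    # Sample variance with Bessel's correction
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    se = math.sqrt(var / n)
    return (mean - mu) / se, n - 1

# Hypothetical per-participant valence-index ratings for one playback
# category (positive values = induced positive emotional response).
vi_self = [0.4, 0.6, 0.2, 0.5, 0.3, 0.7, 0.4, 0.5]

t, df = one_sample_t(vi_self)
print(round(t, 2), df)  # → 7.94 7
```

A t value this far from zero would indicate that the playback category reliably induced a positive emotional response; the per-category p values in the article would then follow from the t distribution with the given df.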

In contrast, there were mixed results for the animal taxa. Interestingly, the tree shrew voices induced the contrary emotional valence. Mean and standard deviation of the valence index for the playback categories of the (A) self-perspective and the (B) others-perspective. This indicates that context had different effects on the subjects' ratings depending on the species. Again, results for the animal taxa were mixed.

Tree shrew voices were classified wrongly. This indicated that perspective had different effects on participants' VI depending on species and context. Therefore, we investigated the effect of perspective for each playback category separately using dependent t-tests. For agonistic human infant, chimpanzee and dog voices the VI other was significantly more negative than the VI self, whereas affiliative chimpanzee voices produced the reverse pattern. Our findings provide evidence that adult male human listeners are able to recognize the emotional valence of human and some, but not all, animal voices.

Of the investigated animal species, only the emotional valence of agonistic dog and chimpanzee voices was classified correctly. Notably, tree shrew voices were classified as the contrary emotional valence. This pattern of results is best explained by familiarity with the respective call type and context.


In almost all cases where the species of the playback category was correctly recognized, participants were also able to classify the emotional valence of the recording correctly (exception: affiliative dog voices). Based on the present findings, reflection of induced affective states (self-perspective) or degree of phylogenetic relatedness to humans seems to be less important for cross-taxa emotional recognition. Human listeners classified the emotional valence of human infant voices with the highest accuracy, in agreement with findings in the literature (e.g., Belin et al.).

However, in our study the same person selected both playback stimuli of human and animal voices based on the same criteria. For animal voices we found not only differences in the recognition accuracy of the emotional valence between species but also between agonistic and affiliative contexts within a species.

In the following we discuss our results for the others-perspective for each animal species and context separately. Agonistic dog voices were correctly recognized by participants, which is in agreement with findings in the literature for both call types, barks [31], [39], [40], [41] and growls [42]. The fact that dogs use the same call type, the bark, in both agonistic and affiliative contexts may have made it more difficult for human listeners to discriminate the emotional valence.

For primate voices, there are only a few studies investigating how humans perceive the emotional content of their vocalizations, and these provide inconsistent results.


Linnankoski and colleagues [33] showed that adults and children are able to recognize the context of macaque voices correctly. In contrast, Belin and colleagues [7] did not find correct emotional classification of rhesus monkey voices. Martin and Clark [55] played screams of chimpanzees to newborn human infants. Whereas the newborns started to cry when listening to other newborn infants' cries, they did not cry when listening to chimpanzee infant cries.

For the affiliative chimpanzee voices, Davila Ross and colleagues [24] could reconstruct the phylogenetic tree of humans and apes based on increasing similarities in acoustic features of ape laughter, which underlines the close relatedness of the human and primate voices used in this study. However, the fact that participants did not recognize the emotional valence of affiliative chimpanzee vocalizations shows that acoustic similarity is not sufficient for explicit cross-taxa emotional recognition.

For tree shrew voices, participants were not able to classify the emotional valence correctly. Instead, they classified the contrary emotional valence. A potential explanation for these results may be the different associations participants reported (see Table S1). Participants labeled agonistic tree shrew voices mainly as birds. Sometimes they also described the stimuli as a sea-gull crying at the beach.

Thus, positive associations may explain the positive valence scores for agonistic tree shrew voices. In contrast, affiliative tree shrew voices were associated with the noise of a horse-drawn carriage or of the street, sounds of machines, or a squeaking wheel. These sounds may have been perceived as unpleasant, explaining the negative valence scores for the self-perspective. Our finding that agonistic chimpanzee voices did not induce negative or positive emotions (self-perspective) but were correctly classified to the negative emotional context (others-perspective), and the fact that there was no correlation between the VI self and the ECI, contradicts our hypothesis that a simple reflection of the self-perspective alone is sufficient for voice-induced emotional recognition.

Furthermore, we found quantitative differences between the self- and the others-perspective, indicating that participants reported more negative valence scores in the others- than in the self-perspective for human and animal voices. Nevertheless, the valence indices of both perspectives were strongly correlated with each other.

This might be because human listeners perceive these voices as less behaviorally relevant for themselves than for the sender, and might be able to differentiate between how they feel when listening to the calls and how the other was feeling when calling. All in all, these results show that, at least in human men, voice-induced recognition of emotions cannot be exhaustively explained by a simple reflection of the recipient's inner state.

Further studies have to clarify to what extent cognitive processes influence the self- and others-perspective, and to what extent an initial emotional response triggered by vocalizations (self-perspective) may be overridden by other cognitive mechanisms that differentiate one's own emotional feeling (emotion or emotional intensity) from that of the sender. Our findings do support the hypothesis that familiarity has a high impact on voice-induced cross-taxa recognition (others-perspective), at least in explicit rating tasks such as the one used in the present study.

However, these studies presented only voices of one domestic animal species, which are all, to some extent, familiar to humans. Using this within-species design, they showed that even participants who were scarcely familiar with pets were able to recognize the emotional content of the voices. In contrast to these studies, the present study tested animal voices of different species which varied in their degree of familiarity to human listeners. By testing a species completely unknown to participants, the tree shrew, we showed that familiarity does play a role in emotional recognition across species. This is underlined in particular by the fact that participants classified the contrary emotional valence, which can best be explained by cognitive associations based on similarity to, or pleasantness of, more well-known sounds.

In previous studies familiarity was measured either as what we refer to here as assumed familiarity [37] or as frequency of interaction with the respective species. The fact that we found discrepancies between the assumed familiarity and the objective familiarity measurement shows that the former approach is problematic.

In our study, participants assumed themselves to be familiar with the respective acoustic stimuli, resulting in high assumed familiarity ratings, whereas they were in fact not able to identify the species correctly. Furthermore, our results for the chimpanzee showed that even within the same species familiarity can differ between contexts. Whereas participants recognized a primate voice as such when listening to agonistic chimpanzee voices, they were not able to recognize a primate species when listening to affiliative chimpanzee voices.

We suggest that this is because chimpanzee screams are very loud and frequently produced calls that may be encountered in zoo settings or in the media. In contrast, chimpanzee laughter is very soft, cannot be heard in zoo settings, and is only rarely presented in the media. Furthermore, after the experiment was finished we informed the participants about the nature of the vocalizations, and almost all of them were surprised to learn that chimpanzees can produce such laughter sounds at all.

This suggests that familiarity with the respective species alone is not sufficient for voice-induced cross-taxa emotional recognition. Human listeners also had to be familiar with the specific sound. Comparing the results of the objective familiarity index with the classification of the recording context (others-perspective) revealed that when participants recognized the species they also recognized the emotional valence of the recording context, except for affiliative dog voices. The one participant who did recognize the affiliative chimpanzee voices turned out to be a biology student who had taken part in a course investigating chimpanzee behavior one week before the experiment.

This example shows that the current results were strongly influenced by experience-based cognitive mechanisms. Altogether, the present results showed a high impact of call type familiarity on voice-induced cross-taxa emotional recognition. Given the discrepancy between assumed and objective familiarity, it can be assumed that participants based their emotional ratings on this in part wrongly assumed familiarity, which is yet another indication of experience-based recognition mechanisms.

Our data may provide little evidence of evolutionarily retained mechanisms in explicit cross-taxa emotional recognition from voice (others-perspective), at least for adult men. If phylogeny were a decisive factor, we would have expected high recognition accuracy of emotional valence for human and chimpanzee voices but not for dog and tree shrew voices. This was not the case. However, one aspect of the present data that can be linked to evolutionary mechanisms is that cross-taxa emotional recognition was most successful for contexts of negative emotional valence: agonistic animal voices were better recognized than affiliative animal voices.

This was also the case for pig [32] and dog vocalizations [31]. It could be argued that negative voices are more meaningful in cross-taxa communication since they convey information about possibly dangerous or aggressive situations. This would suggest that the acoustic structure of negative voices is evolutionarily more conserved than that of positive voices, which could explain the lack of valence recognition for affiliative dog and chimpanzee voices in contrast to agonistic dog and chimpanzee voices.

For dog vocalizations we have to keep in mind that domestication may have changed barking behavior, including its acoustic parameters. To minimize breed-specific vocal behavior, we used vocalizations from different breeds, including small- and large-bodied dog breeds. However, we cannot exclude that evolutionary mechanisms are masked by domestication.

We acknowledge that different mechanisms may account for the results for each species. However, in interpreting the results we did not focus on just one species but tried to find the most parsimonious interpretation taking all the results into account. Therefore, we argue that call type familiarity has the most important impact in explaining our results. When listening to unfamiliar stimuli, participants made erroneous context associations resulting in a wrong valence rating, or may have rated the others-perspective according to the self-perspective or to the pleasantness of the stimulus.

The present findings may be limited by the fact that we can only assume the emotional state of an animal. In the present study we chose two superordinate context categories: an affiliative context assumed to be associated with positive emotions and an agonistic context assumed to be associated with negative emotions. Based on video and audio analyses we related each vocalization to a specific behavior of the sender (Table 1) and assigned these contexts to one of the two superordinate context categories, affiliative or agonistic.

We cannot rule out that a lack of correct recognition is explained by the animal not being in the assumed emotional state, in which case the receiver has no chance of recognizing the context. To address this problem, comparative acoustic designs are necessary that test the perception of conspecific and heterospecific vocalizations in humans and animals using the same acoustic stimuli.

In conclusion, adult human male listeners showed the highest emotional recognition accuracy for conspecific voices, while recognition accuracy for animal voices depended mainly on call type familiarity. These findings suggest that, at least under explicit task conditions, cross-taxa voice-induced emotional recognition in adult men is more affected by cognitive, experience-based mechanisms than by phylogeny.

Currently, an EEG and an fMRI study are under way to investigate the temporal determinants and neuronal networks underlying cross-taxa voice-induced emotional perception. Performed the experiments: MS. Analyzed the data: MS.

Abstract Voice-induced cross-taxa emotional recognition is the ability to understand the emotional state of another species based on its voice. Introduction The recognition of affective information in human voice plays an important role in human social interaction and is linked to human empathy, which refers to the capacity to perceive, understand and respond to the unique affective state of another person.

Materials and Methods Ethical Statement The experiment was conducted with the approval of the ethics committee of the University of Leipzig and in compliance with the Declaration of Helsinki. Acoustic stimuli We used recorded acoustic stimuli of four species (human infant, dog, chimpanzee and tree shrew) in two distinct superordinate context categories (agonistic versus affiliative; for a detailed context description see Table 1, Figure 1) as playback stimuli.

Figure 1. Sonograms of examples of playback stimuli of the eight playback categories. Table 2. Acoustic characterization of the playback categories.

Experimental Set-up Each participant was tested separately in a quiet, dimmed room. Experimental Task Each participant listened to all playback stimuli in a randomized order twice in two blocks.

Supporting Information
Table S1.

References
Zimmermann E, Leliveld LMC, Schehka S. Toward the evolutionary roots of affective prosody in human acoustic communication: a comparative approach to mammalian voices. Oxford: Oxford University Press.
Juslin PN, Laukka P. Communication of emotions in vocal expression and music performance: different channels, same code?



Bastian A, Schmidt S. Affect cues in vocalizations of the bat, Megaderma lyra, during agonistic interactions. J Acoust Soc Am.

The first two categories involve the development of auditory perception and sensitivity to vocal emotion information. But in the other two categories they point to elements such as melody, harmony, counterpoint, and syntax that are fundamental to the complexity and beauty of music (see also Patel). Speech is often cited as an important domain contributing to music perception.

Speech communication in people has likely resulted in many refinements of phylogenetically older vocal production and perception abilities shared with many non-human animals (Owren et al.). Models of efficient coding of sound also suggest that any specialized auditory processes for speech could be achieved by integrating auditory filtering strategies shared by all mammalian species (Lewicki). Based on modeling work examining potential filtering strategies of peripheral auditory systems, Lewicki proposed that the representational coding of speech could be effectively instantiated using schemes specialized for broadband environmental sounds combined with schemes for encoding narrowband sounds.

That is, evolutionarily conserved auditory processes might have constrained speech production mechanisms such that speech sounds fell into frequency and temporal ranges exploiting prelinguistic perceptual sensitivities. Speech perception is quite robust in normal speakers even in cases of high degradation or interruption.

These facts hint at perceptual specialization. But a good deal of our speech processing ability is likely due to auditory abilities widely shared across mammals (Moore). Cognitive neuroscience research has shown repeatedly that music and speech share brain resources, indicating that speech perception systems accept music as input (for recent reviews see Arbib), though evidence exists for separate processing as well (Zatorre et al.). The relationship between speech and music is certainly more than a coincidence.

Amplitude peaks in the normalized speech spectrum correspond well to musical intervals of the chromatic scale and to consonance rankings (Schwartz et al.). Many parallels also exist between music and speech development (McMullen and Saffran). The physical properties of the sounds are not the only dimensions that link speech and music. The structure of various sound sequences also seems to activate the same underlying cognitive machinery. Research examining rule learning of auditory stimuli demonstrates the close connection between perceiving speech and music.
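As an illustrative aside on the chromatic-scale correspondence mentioned above (not an analysis from the article): equal-tempered chromatic intervals are powers of 2^(1/12), and the most consonant of them lie close to the small-integer frequency ratios that peaks in the speech spectrum have been argued to approximate.

```python
# Equal-tempered interval ratios: 2**(n/12) for n semitones above the tonic.
# A few consonant intervals and the small-integer ratios they approximate.
consonances = {
    "perfect fifth": (7, 3 / 2),    # 7 semitones ~ 3:2
    "perfect fourth": (5, 4 / 3),   # 5 semitones ~ 4:3
    "major third": (4, 5 / 4),      # 4 semitones ~ 5:4
    "octave": (12, 2 / 1),          # 12 semitones = 2:1 exactly
}

for name, (semitones, just_ratio) in consonances.items():
    tempered = 2 ** (semitones / 12)
    error_pct = 100 * abs(tempered - just_ratio) / just_ratio
    print(f"{name}: tempered={tempered:.4f}, "
          f"just={just_ratio:.4f}, error={error_pct:.2f}%")
```

The fifth and fourth deviate from their just ratios by barely a tenth of a percent, which is why tempered tuning can stand in for the integer ratios that consonance rankings track.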

Marcus et al., for example, found that extracting rules from sequences of non-speech stimuli was facilitated by first learning the rules with speech, suggesting that the proper domain (see below) of rule learning in sound sequences is speech, but that musical tones and other sounds satisfy the input conditions of the rule learning system once it is calibrated by spoken syllables. Studies exploring the acquisition of conditional relations between non-adjacent entities in speech or melodic sequences show similar patterns (Creel et al.).

A good deal of music perception is likely due to the activity of speech processing mechanisms, but perception is only half of the system. We should be concerned with how production and perception systems evolved together. There are clear adaptations underlying breathing processes in speech production and laryngeal and articulator control (MacLarnon and Hewitt). Moreover, we have fine cortical control over pitch, loudness, and spectral dynamics (Levelt). These production systems, as a rule of animal signaling, must have complementary adaptive response patterns in listeners.

Many perceptual biases were in place before articulated speech evolved, such as the categorical perception of continuous sounds (Kuhl). But other response biases might be new, such as sensitivity to coordinated isochronic (i.e., evenly timed) sound production. Sperber made a distinction between the proper domain of a mechanism and its actual domain.

Proper domain refers to those specific features that allow a system to solve an adaptive problem. The actual domain of a system is the range of physical variation in stimuli that will trigger that mechanism, something that is often a function of context and the evolutionary history of the cognitive trait. In these terms, the actual domain of speech processors presumably includes most music.

But how these preferences manifest themselves as social phenomena remains to be explained. One possibility is that cultural evolutionary processes act on those sound characteristics that people are motivated to produce and hear. For example, rhythmic sound that triggers spatial localization mechanisms could be preferred by listeners, and consequently be subject to positive cultural selection, resulting in the feature spreading through musical communities. Other examples include singing patterns that exaggerate the sound of affective voices, or frequency and amplitude modulations that activate systems designed to detect speech sounds.

The question becomes, of course, is any sound pattern unique to music? MacCallum et al. explored this question by letting music evolve through listeners' preferences alone. This evolutionary process resulted in several higher order structures manifesting as unquestionably musical attributes. For instance, an isochronic beat emerged. The sound of fear represents one dimension of auditory processing relevant for music which is in place because of conserved signaling incorporating arousal.

As a consequence, people are interested in sounds associated with high arousal, and cultural transmission processes perpetuate them. Consider the form and function of punk rock in western culture. The relevant cultural phenomena for a complete description of any genre of music are highly complex and not well understood. But we can clearly recognize some basic relationships between the sonic nature of certain genres of music and the behavioral associations in their listeners.

Like much music across cultures, there is a strong connection between music production and movement in listeners, epitomized by dancing, resulting in a cross-cultural convergence on isochronic beats in music traditions. The tight relationship between musical rhythm perception and associated body movement is apparent in babies as young as seven months (Phillips-Silver and Trainor). Punk rock is no exception.

Early punk is characterized by a return to fundamentals in rock music (Lentini). It began as a reaction to a variety of cultural factors and to the perceived excesses of ornate progressive music in general. The initial creative ethos was that anybody can do it, and it was more an expression of attitude than the making of cultural artifacts. The music is characterized by fast steady rhythms, overall high amplitude, and noisy sound features in all instruments, attributes that facilitate forceful dancing.

But the distortion noise is especially distinct and key to the genre. Of course, many genres of rock use noise; the punk example is simply preferred here for cultural and explanatory reasons, and the same principle applies to many variations of blues and rock music. Noisy features in rock took on a life of their own in the subsequent No Wave, post-punk, and experimental movements.

In rock music, what likely arose originally as a by-product of amplification (i.e., overdriven amplifiers) became a desired feature. Particular manifestations of noisy features (forms) were directly related to the compositional and performance goals of musicians (functions). Products were developed that harnessed particular kinds of distortion in devices such as distortion pedals. This allowed artists to achieve the desired distortion sounds without having to push amplifiers beyond their natural limit.

The use of noise quickly became a focus of a whole family of musical styles, most of them avant-garde and experimental. Continuing the trend of rejecting aspects of dominant cultural practices, artists could signal their innovation and uniqueness by using this new feature of music in ways that set them apart. The sound affordances of broadband noise provide a powerful means for artists to generate cultural attractors fueled by discontent with mass-market music.

Moreover, the creative use of distortion and other effects can result in spectrally rich and textured sounds. Cultural evolutionary forces will tap into any feature that allows socially motivated agents to differentially sort based on esthetic phenomena (Sperber; McElreath et al.).


Simple sound quality dimensions like intensity might be excellent predictors of how people are drawn to some genres and not others (Rentfrow et al.). Listeners also often find moderate incongruities, as opposed to great disparities, between established forms and newer variations the most interesting (Mandler). For example, modern noise rock with extreme distortion that is quite popular today would likely have been considered far more unlistenable in earlier decades because it is such a dramatic departure from the accepted sounds for music at the time.

But today it is only slightly noisier than its recent predecessors. What gets liked depends on what is liked. Distortion effects in contemporary music mimic in important ways the nonlinear characteristics we see in highly aroused animal signals, including human voices.


Electronic amplification, including the development of electromagnetic pickups in guitars, was arguably the most important technological innovation leading to the cultural evolution of rock music, and it afforded an incredible palette of sound-making that is still being explored well over half a century later (Poss). Early garage rock, the precursor to punk rock, was likely the first genre to systematically use this overblown amplification effect on purpose.

Specific manipulations of electronic signal pathways were developed that allowed musicians to emulate in music what is an honest feature of a vocalization: high arousal. A basic distortion pedal works as follows. The first process is typically an amplitude gain accompanied by a low-pass filter, pushing the signal toward a saturation point where nonlinear alterations occur. This saturating nonlinearity is filtered again, resulting in output that is a multi-band-passed nonlinearity (Yeh et al.).
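That signal chain can be sketched as follows. The tanh waveshaper, the one-pole filter, and all parameter values are illustrative assumptions, not the specific circuit analyzed by Yeh et al.; the sketch only shows the gain-then-saturate principle described above.

```python
import math

def soft_clip(sample, gain=8.0):
    """Waveshaping distortion: boost the signal, then saturate with tanh."""
    return math.tanh(gain * sample)

def one_pole_lowpass(samples, alpha=0.3):
    """Simple one-pole low-pass filter smoothing the clipped output."""
    out, y = [], 0.0
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

# A quiet 220 Hz sine at an 8 kHz sample rate...
sr, f = 8000, 220.0
clean = [0.2 * math.sin(2 * math.pi * f * n / sr) for n in range(400)]

# ...driven into saturation: peaks are flattened toward +/-1, adding the
# odd harmonics characteristic of an overdriven amplifier stage.
distorted = one_pole_lowpass([soft_clip(x) for x in clean])

print(round(max(clean), 3), round(max(distorted), 3))
```

The flattened, harmonically enriched waveform is the electronic analog of the spectral roughening that appears in highly aroused vocalizations, which is the point of the paragraph above.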

Recently, we produced musical stimuli to examine the role of noise in emotional perception of music, using digital models created for musicians as our noisy source (Blumstein et al.). Twelve 10 s compositions were created and then manipulated into three different versions: one with added musical distortion noise, one with a rapid frequency shift, and one unaltered control. The manipulations were added at the halfway point of each piece. These stimuli were played to listeners, who were asked to rate them for arousal and valence.

We expected that distortion effects approximating deterministic chaos would cause higher ratings of arousal and negative valence judgments, the two-dimensional description of vocalized fear (Laukka et al.). This is precisely what we found. Subjects also judged rapid pitch shifts upward as arousing, but not pitch shifts downward. Downward pitch shifts were judged as more negatively valenced, which is what we should expect given the acoustic correlates of sadness in voices (Scherer). Surprisingly, previous work had not explored the role of distortion in affective judgments of music, but an animal model of auditory sensitivity afforded a clear prediction, which was confirmed.

We were also interested in how these effects occur in the context of film. Previous work had found that horror soundtracks contain nonlinearities at a much higher rate than those of other film genres (Blumstein et al.). Of course, for the most part the direct connection is not consciously made between the ecology of fear screams in animals and the induction of fear in a human audience. But composers and music listeners have an intuitive sense of which sounds are associated with which emotions, and this intuition is rooted in our implicit understanding of form and function in nature, a principle strongly reinforced by cultural processes bringing these sounds to us repeatedly, generation after generation.

But would sound features alone be sufficient to invoke fear even in the context of an emotionally benign film sequence? We created short videos of people engaged in emotionally neutral actions, such as reading a paper or drinking a cup of coffee. Subjects viewed these videos paired with the same music as described above, and we found something interesting.

Judgments of arousal were no longer affected by the nonlinear features in the music clips when viewed in the context of a benign action, but the negative valence remained. Clearly, decision processes used in judgments of affect in multimodal stimuli will integrate these perceptual dimensions.

One obvious possibility for our result is that the visual information essentially trumped the auditory information when assessing urgency, but the emotional quality of a situation was still shaped by what people heard. Future research should explore how consistent fearful information is processed, and we should expect that auditory nonlinearities will enhance a fear effect as evidenced by the successful pairing of scary sounds and sights in movies.

As mentioned earlier, nonlinear characteristics in music represent one dimension in sound processing that plays a role in music perception and enjoyment. Our sensitivity to such features is rooted in a highly conserved mammalian vocal signaling system. I argue that much of what makes music enjoyable can be explained similarly.

But one aspect of music that is not well explained as a by-product is the conspicuous feature that it is often performed by groups — coordinated action of multiple individuals sharing a common cultural history, generating synchronized sounds in a context of ritualized group activity. Humans are animals — animals with culture, language, and a particular set of cognitive adaptations designed to interface with a complex social network of sophisticated conspecifics. Information networks and social ecologies have co-evolved with information processors, and thus, a form—fit relationship exists between the cognitive processes in the human mind and the culturally evolved environments for social information.

Humans cooperate extensively, in an extreme way when viewed zoologically, and we have many reliably developing cognitive mechanisms designed to solve problems associated with elaborate social knowledge (Barrett et al.). Because many of the adaptive problems associated with extreme sociality involve communicating intentions to cooperate, as well as recognizing cues of potential defection in conspecifics, we should expect a variety of abilities that facilitate effective signaling between cooperative agents.

Many species, ranging from primates to birds to canines, engage in coordinated signaling. By chorusing together, groups can generate a signal that honestly communicates their numbers, along with many other properties of their health and stature. Chorusing sometimes involves the ability to rhythmically coordinate signal production: when two signaling systems synchronize their periodic output (i.e., entrain), coordinated rhythmic behavior emerges. Fitch described the paradox of rhythm, the puzzle of why periodic phenomena are so ubiquitous in nature while overt rhythmic ability in animals is so exceedingly rare.

The answer, Fitch argued, lies in how we conceptualize rhythm in the first place. When we consider the component abilities that contribute to our capacity for rhythmic entrainment, the complexity of the neurocomputational underpinnings makes the capacity much less paradoxical, and instead understandably rare. The basic ability to coordinate behavior with an external stimulus requires, at a minimum, three capabilities: detecting rhythmic signals, generating rhythms through motor action, and integrating sensory information with motor output (Phillips-Silver et al.).
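As a toy illustration of how these three components could interact, consider a Kuramoto-style phase model in which an internal oscillator (rhythm generation) is nudged toward a periodic stimulus it perceives (rhythm detection), with the corrective nudge standing in for sensorimotor integration. The function name, parameter values, and initial conditions are invented for illustration; this is a generic coupled-oscillator sketch, not a model from the work cited here.

```python
import math

def entrain(stim_period, osc_period, coupling=0.2, steps=2000, dt=0.01):
    """Advance a stimulus phase and an internal oscillator phase in time;
    the oscillator is pulled toward the stimulus by a sinusoidal coupling
    term. Returns the final absolute phase difference in radians."""
    stim_phase, osc_phase = 0.0, 1.5                # start well out of phase
    for _ in range(steps):
        stim_phase += 2 * math.pi * dt / stim_period       # detection: the perceived beat
        osc_phase += 2 * math.pi * dt / osc_period         # generation: own motor tempo
        osc_phase += coupling * math.sin(stim_phase - osc_phase) * dt  # integration
    diff = (stim_phase - osc_phase) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)
```

With matched tempos and nonzero coupling the phase difference decays toward zero (entrainment); with the coupling term removed, the initial offset persists indefinitely, however good the detection and generation components are on their own.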

While many species have variations of the component abilities identified by Phillips-Silver et al., only humans seem to have a prepared learning system designed to govern coordinated action of a rhythmic nature. The ability to entrain with others develops early, and it is greatly facilitated by interaction with other social agents, but not by mechanized rhythm producers or by auditory stimuli alone (Kirschner and Tomasello). Young infants reliably develop beat induction quite early (Winkler et al.).

Most rhythmic abilities demonstrated by human infants have never been replicated in any adult nonhuman primate. Even with explicit training, a grown chimpanzee cannot entrain its rhythmic production with another agent, let alone with another chimpanzee.



African apes, including chimpanzees and gorillas, will drum alone, and this behavior is likely homologous with human drumming (Fitch), suggesting that coordinated, as opposed to solo, rhythmic production evolved after the split with the last common ancestor. So what is it about the hominin line that allowed for our unique evolutionary trajectory in the domain of coordinated action? There are other species that have the ability to entrain their behavior to rhythmic stimuli and to other agents.

Birds that engage in vocal mimicry, such as the sulfur-crested cockatoo (Cacatua galerita), have been shown to be capable of highly coordinated responses to musical and rhythmic stimuli, and will even attempt to ignore behaviors produced by nearby agents who are not in synch with the stimulus to which they are coordinated (Patel et al.). African gray parrots (Psittacus erithacus) also have this ability (Schachner et al.), and recently Cook et al. demonstrated flexible beat keeping in a California sea lion, a species with no documented vocal mimicry. Fitch pointed out that examining these analogous behaviors can quite possibly elucidate human adaptations for entrainment, but he did not address the larger question of why humans uniquely possess entrainment abilities among terrestrial mammals.

Hagen and Bryant proposed that music and dance constitute a coalition signaling system. Signals of coalition strength might have evolved from territorial displays seen in other primates, including chimpanzees (Hagen and Hammerstein). The ideal signal of coalition quality should be easily and rapidly decoded by a target audience, and only plausibly generated by stable coalitions able to engage in complex, coordinated action.

A coordinated performance affords an opportunity to signal honest information about time invested with fellow performers, individual skills reflecting practice time, and creative ability indicating cognitive competence. In short, individuals can signal about themselves (which could be subject to sexual selection), and the group can signal about its quality as well. To test these ideas, original music was recorded, and versions were made that contained different kinds of performance errors (Hagen and Bryant). As expected, compositions with introduced errors that disrupted the synchrony between the performers were judged by listeners as lower in musical quality.
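The synchrony manipulation is easiest to picture as a statistic over note-onset times: introduced timing errors raise the average asynchrony between matched onsets of the two performers. The sketch below (function name and onset data invented for illustration) is not the actual stimulus-construction procedure from Hagen and Bryant, just one plausible way to quantify what the disrupted versions degrade.

```python
def mean_asynchrony(onsets_a, onsets_b):
    """Mean absolute difference (in seconds) between corresponding note
    onsets of two performers; lower values mean tighter synchrony.
    Assumes the two onset lists are already matched note-for-note."""
    if len(onsets_a) != len(onsets_b):
        raise ValueError("onset lists must be matched note-for-note")
    return sum(abs(a - b) for a, b in zip(onsets_a, onsets_b)) / len(onsets_a)

# A well-rehearsed pair vs. the same part with injected timing errors
tight = mean_asynchrony([0.0, 0.5, 1.0, 1.5], [0.01, 0.49, 1.02, 1.50])
loose = mean_asynchrony([0.0, 0.5, 1.0, 1.5], [0.12, 0.38, 1.15, 1.60])
```

Under this toy metric the error-injected version scores an order of magnitude worse, matching the listeners' lower quality judgments for the desynchronized stimuli.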

We also asked the listeners to judge the relationships between the performers, including how long they had known each other and whether they liked each other. The ethnographic record clearly reveals the importance of music and dance displays to traditional societies throughout history (Hagen and Bryant). Initial meetings in which groups introduce one another to their cultures, including these coordinated displays, can have crucial adaptive significance in contexts of cooperation and conflict. The potential for selection on such display behaviors is clear, as is the important interface with cultural evolutionary processes (McElreath et al.).

Cultural traditions that underlie the nature of specific coordinated displays are revealed in contemporary manifestations of the role of music in social identity, and in early markers of friendship preferences and alliances (Mark; Giles et al.). Mark proposed an ecological theory of music preference suggesting that music can act as a proxy for making judgments about social similarity.

According to the theory, musical preferences spread through social network ties unified by principles of social similarity and history. Investment of time in one preference necessarily imposes time constraints on other preferences. Music can also function to increase coalition strength within groups (McNeill), and this effect has been documented in children. Kirschner and Tomasello had pairs of 4-year-old children partake in one of two matched play activities that differed only in the inclusion of song and dance.

The musical condition involved singing along to a prerecorded song with a periodic pulse while striking a wooden toy with a stick and walking in time. The non-musical condition involved only walking together in a similar manner, with non-synchronized utterances. Pairs of children who had participated together in the musical condition spontaneously helped their partner more in a set-up scenario immediately following the play activity in which one child needed assistance, and they engaged in more joint problem solving in that scenario as well.

Our proximate experiences of pleasure in engaging with other social agents in musical activity might serve to bolster within-group relationships, and provide a motivating force for generating a robust signal of intragroup solidarity that can be detected by out-group members.

Patterns of cultural transmission occur through different channels. Many cultural traits are passed not only vertically, from older members of a culture to their offspring, but also horizontally, across peers. For instance, children typically adopt the dialect and accent of their same-aged peers rather than of their parents (Chambers), illustrating how language learning and communicative-pragmatic mechanisms are quite sensitive to the source of their input. Similarly, peers should be an important source of musical taste development if that esthetic is important for social assortment (Selfhout et al.).

Variations of forms in any cultural domain will typically cluster around particular attractors, but the nature of the attraction depends on the type of artifact. For instance, artifacts such as tools that have a specific functional use will be selected largely (though not completely) on the basis of their physical affordances.

For example, people prefer landscape portrayals with water over those without because of an evolved foraging psychology (Orians and Heerwagen). As described earlier, music exploits many auditory mechanisms that were designed for adaptive auditory problems like speech processing, sound source localization, and vocal emotion signaling. Many proposals describe potential factors that might contribute to the spread of any kind of cultural product, and theorists debate the nature of the representations involved (including whether they need to be conceived as representations at all) and which particular dynamics are most important for the successful transmission of various cultural phenomena (Henrich and Boyd; McElreath et al.).

In the case of music, some aspects seem relatively uncontroversial. For example, the status of an individual composer or a group of individual music makers likely plays an important role in whether musical ideas get perpetuated. A coordinated display by the most prestigious and influential members of a group was likely to be an important factor in whether the musical innovations by these people were learned and perpetuated by the next generation. Subsequent transmission can be facilitated by conformity-based processes.

A combination of factors related to the physical properties of the music, the social intentions and status of the producers, and the social network dynamics of the group at large will all interact in the cultural evolution of musical artifacts (McElreath et al.). There are many possible evolutionary paths for the perpetuation of musical forms, and even for the propensity for musical ability in the first place.

But how does emotion play into the process? Little research has directly explored the affective impact of group performances aside from the evocative nature of the music itself. The feelings associated with experiencing coordinated action between groups of people might not fit into a traditional categorical view of emotions, and may instead be better characterized as something like profundity or awe (Davies; Keltner and Haidt). According to the coalition signaling perspective, elaborate coordinated performances are an honest signal causally linked to the group of signalers.

This view does not require any specific affective component, at least not in the traditional approach of studies on emotion and music. The affect-inducing qualities of music facilitate its function in that the generated product is inherently interesting to listeners and relevant to the context-specific emotional intentions of the participants.

The surface features of the signals satisfy the input conditions of a variety of perceptual systems, a proximate-level description. But the ultimate explanation addresses how coordinated displays provide valuable information about the group producing them. A form-function approach can again illuminate the nature of the signaling system and how it operates.

Musical features such as predictable isochronic beats and fixed pitches facilitate coordinated production by multiple individuals and afford a platform for inducing intended affect in listeners. Our perceptual sensitivity to rhythm and pitch, also important for human speech and other auditory adaptations, allows listeners to make fine-grained judgments about relationships between performers. We can tell whether people have practiced, whether they have skill that requires time, talent, and effort, and whether they have spent time with the other performers.

Hatfield et al. described emotional contagion as the tendency to automatically mimic and synchronize expressions with those of another person and, consequently, to converge emotionally. Contagion effects in groups are likely connected to a variety of non-human animal behaviors. Several primate species seem to experience some version of contagious affect, most notably in the pant hoots of chimpanzees, which could be phylogenetically related to music behavior in humans (Fritz and Koelsch). While rhythmic entrainment is zoologically rare, other acoustic features can be coordinated in non-human animal signals, a phenomenon Brown calls contagious heterophony and believes played a crucial role in the evolution of human music.

In the case of people, Spoor and Kelly proposed that emotions experienced in groups might assist in communicating affect between group members and help build social bonds. Recent work shows that the transmission of emotion across crowds can act like an unconscious cascade (Dezecache et al.). While all of these ideas are likely part of the human music puzzle, scholars have neglected to develop the idea that coordinated musical action might constitute a collective signal to people outside of the action.

Many of the claimed benefits of coordinated action, such as increased social cohesion and alignment of affect, might be proximate mechanisms serving ultimate communicative functions. As is common in the social sciences, proximate mechanisms are often treated as ultimate functions, or function is not considered at all. Evidence is mounting that affect is not necessarily tied to synchronous movement or to the benefits associated with it. A variety of studies have shown that positive affect is not needed for successful coordination, and that explicit instruction to coordinate action can result in cooperative interactions without participants experiencing any associated positive emotions.

Language style matching was also not related to cooperative moves in the prisoner's dilemma game, suggesting that coordinated action can influence future interaction behavior without mediating emotions or behavior matching that lacks temporal structure. The role of emotions in group musical performances is not clear, but what is intuitively obvious is that the experience of a group performance is often associated with feelings of exhilaration, along with a whole range of other emotions.

But such emotional experiences are necessarily tied up in the complexities of the social interaction, and the cultural evolutionary phenomena that contribute to the transmission of the musical behavior. Researchers should examine more closely how specific emotions are conjured during group performances: in players, dancers, and audience members alike. Moreover, how much of the impact of the emotional experience is due to the particular structural features of the music, independent of the coordinated behavioral components?

Flow can be thought of as an experiential pleasure derived from certain moderately difficult activities, and it can facilitate continued motivation to engage in those activities. One study examined flow in piano players and found that several physiological variables, such as blood pressure, facial muscle movements, and heart rate measures, were positively correlated with self-reported flow experiences (de Manzano et al.).

The psychological constructs of groove and flow speak both to the motivational mechanisms underlying music and to the high degree of processing shared between musical and non-musical phenomena. In many cultures, the concept of music as separate from the social contexts and rituals in which it manifests is non-existent (Fritz and Koelsch). The western perspective has potentially isolated music as a phenomenon divorced from the broader repertoire of behaviors in which it typically occurs, and this situation might have important consequences for understanding it as an evolved behavior (McDermott). Music moves us, both emotionally and physically.

The physical characteristics of music are often responsible, such as the wailing sound of a guitar that is reminiscent of a human emotional voice, or the solid beat that unconsciously causes us to tap our foot. The reasons music has these effects are related in important ways to the information-processing mechanisms it engages, most of which did not evolve for the purposes of listening to music.

Music sounds like voices, or approaching objects, or the sounds of animals. Cognitive processes of attraction, and cultural transmission mechanisms, have cumulatively shaped an enormous variety of genres and innovations that help people define themselves socially. Music is an inherently social phenomenon, a fact often lost on scientists studying its structure and effects.

The social nature of music and the complex cultural processes that have led to its important role in most human lives strongly suggest an evolutionary function: signaling social relationships. Evidence of adaptive design is there: people are especially susceptible to the isochronic beats so common across cultures, we are skilled like no other animal at coordinating our actions with others in a rhythmic way, and the ability develops early and reliably across cultures.

Group performances in music and dance are universal across all known cultures, and they are usually inextricably tied to central cultural traditions. Several predictions emerge from this theoretical perspective. Subjects should be able to readily judge coalition quality through music and dance production (Hagen and Bryant). Kirschner and Tomasello have begun work in this area that I believe will prove quite fruitful for understanding the nature of group-level social signaling. The current approach also makes predictions about the culturally evolved sound of music.

We should expect musical elements to exploit pre-existing sensory biases, including sensitivity to prosodic signals conveying vocal emotion in humans and non-human animals (Juslin and Laukka; Blumstein et al.). These characteristics should be stable properties of otherwise variable musical traditions across cultures, and persistent across cultural evolutionary time.

One obvious case described earlier is the perpetuation of electronically generated nonlinearities across a broad range of musical styles today, traceable to fairly recent technological innovations. In a matter of a few decades, most popular music has come to include nonlinear features of one sort or another that were previously confined to experimental avant-garde music. Indeed, sound features present in the vocal emotions of mammalian species are reflected in the most sophisticated instrumentation of modern classical music and jazz.

Following Snowdon and Teie, we should also expect to find predictable responses in many non-human animals to musical creations based on the structural features of their emotional vocal signals. The question of why humans have evolved musical behavior while other social animals have not can only be answered by understanding the nature of culture itself, which is no small task.

Comparative analyses provide crucial insights into evolutionary explanations for any behavioral trait in a given species. In the case of human music there is clear uniqueness, but we recognize traits common across many species that play into the complex behavior (Fitch). Convergent evolutionary processes lead to structural similarities across diverse taxa, such as the relationships between birdsong and human music.

Many animals signal in unison, or at least simultaneously, for a variety of reasons related to territorial behavior and mating. These kinds of behaviors might be the most important ones to examine in our effort to identify any adaptive function of human musical activity, as the structural forms and typical manifestations of human music seem particularly well-suited for effective and efficient communication between groups.

This is especially interesting considering the fact that music often co-occurs with many other coordinated behaviors such as dancing, and themes in artifacts like clothing and food. Music should be viewed as one component among many across cultures that allows groups to effectively signal their social identity in the service of large scale cooperation and alliance building. The beautiful complexity that emerges stands as a testament to the power of biological and cultural evolution.

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
