Papers selected by Yoko Okumiya

 ■Memory for Music: Effect of Melody on Recall of Text
 ■Physical Interaction and Association by Contiguity in Memory for the Words and Melodies of Songs
 ■Expectancy Effects in Memory for Melodies
 ■The Influence of Expectancy on Melodic Perception
 ■Expectancies Generated by Melodic Intervals: Evaluation of Principles of Melodic Implication in a Melody-Completion Task
 ■Schema-Driven Properties in Melody Cognition: Experiments on Final Tone Extrapolation by Music Experts
 ■Measuring Melodic Expectancies with Children
 ■Reaction Time and Musical Expectancy: Priming of Chords
 ■Effects of Background Music on the Remembering of Filmed Events
 ■The Processing of Structured and Unstructured Tonal Sequences
 ■Encoding Strategies for Tonal and Atonal Melodies
 ■Memory for Melodies among Subjects Differing in Age and Experience in Music
 ■Emergence of Thematic Concepts in Repeated Listening to Music
 ■Memorising Two Melodies of Different Style

Memory for Music: Effect of Melody on Recall of Text
W. T. Wallace
Journal of Experimental Psychology: Learning, Memory, and Cognition, 1994, 20(6), 1471-1485.

The melody of a song, in some situations, can facilitate learning and recall. The experiments in this article demonstrate that text is better recalled when it is heard as a song rather than as speech, provided the music repeats so that it is easily learned. When Ss heard 3 verses of a text sung with the same melody, they had better recall than when the same text was spoken. However, the opposite occurred when Ss heard a single verse of a text sung or when Ss heard different melodies for each verse of a song; in these instances, Ss had better recall when the text was spoken. Furthermore, the experiments indicate that the melody contributes more than just rhythmical information. Music is a rich structure that chunks words and phrases, identifies line lengths, identifies stress patterns, and adds emphasis as well as focuses listeners on surface characteristics. The musical structure can assist in learning, in retrieving, and if necessary, in reconstructing a text.


Physical Interaction and Association by Contiguity in Memory for the Words and Melodies of Songs
R. G. Crowder, M. L. Serafine & B. Repp
Memory and Cognition, 1990, 18(5), 469-476.

Three experiments were designed to investigate two explanations for the integration effect in memory for songs (Serafine, Crowder & Repp, 1984; Serafine, Davidson, Crowder, & Repp, 1986). The integration effect is the finding that recognition of the melody (or text) of a song is better in the presence of the text (or melody) with which it had been heard originally than in the presence of a different text (or melody). One explanation for this finding is the physical interaction hypothesis, which holds that one component of a song exerts subtle but memorable physical changes on the other component, making the latter different from what it would be with a different companion. In Experiments 1 and 2, we investigated the influence that words could exert on the subtle musical character of a melody. A second explanation for the integration effect is the association-by-contiguity hypothesis, which holds that any two events experienced in close temporal proximity may become connected in memory such that each acts as a recall cue for the other. In Experiment 3, we investigated the degree to which simultaneous presentations of spoken text with a hummed melody would induce an association between the two components. The results gave encouragement for both explanations and are discussed in terms of the distinction between encoding specificity and independent associative bonding.


Expectancy Effects in Memory for Melodies
M. A. Schmuckler
Canadian Journal of Experimental Psychology, 1997, 51(4), 292-305.

Two experiments explored the relation between melodic expectancy and melodic memory. In Experiment 1, listeners rated the degree to which different endings confirmed their expectations for a set of melodies. After providing these expectancy ratings, listeners received a recognition memory test in which they discriminated previously heard melodies from new melodies. Recognition memory in this task positively correlated with perceived expectancy, and was related to the estimated tonal coherence of these melodies. Experiment 2 extended these results, demonstrating better recognition memory for high expectancy melodies, relative to medium and low expectancy melodies. This experiment also observed asymmetrical memory confusions as a function of perceived expectancy. These findings fit with a model of musical memory in which schematically central events are better remembered than schematically peripheral events.


The Influence of Expectancy on Melodic Perception
A. M. Unyk & J. C. Carlsen
Psychomusicology, 1987, 7(1), 3-23.

The fulfillment and violation of melodic expectancies influence musicians' ability to perceive, identify, and recall melodic patterns, as measured by transcription accuracy. Twenty-seven musicians registered their melodic continuation expectancies by singing. Those expectancies were used to generate six types of brief melodies that varied in their relationships to the individual musician's expectancies: fulfillment of strong expectancies, fulfillment of weak expectancies, interval-size violation of strong or weak expectancies, and contour violation of strong or weak expectancies. The test melodies were presented aurally for transcription. Analysis of variance revealed that violations of strong expectancies led to more errors than expectancy fulfillment. Contour violations did not lead to more errors than mere interval-size violations. Analysis of the pattern of errors suggests that the salience of contour in melody strongly resists the influence of expectancy upon perception, but does not completely overcome it.


Expectancies Generated by Melodic Intervals: Evaluation of Principles of Melodic Implication in a Melody-Completion Task
W. F. Thompson, L. L. Cuddy & C. Plaus
Perception & Psychophysics, 1997, 59(7), 1069-1076.

Bottom-up principles of melodic implication (Narmour, 1990) were evaluated in a melody-completion task. One hundred subjects (50 with low training and 50 with high training in music) were presented with each of eight melodic intervals. For each interval, the subjects were asked to compose a short melody on a piano keyboard, treating the interval provided as the first two notes of the melody. For each melody, the first response - the note immediately following the initial interval - was analyzed. Multinomial log-linear analyses were conducted to assess the extent to which responses could be predicted by Narmour's (1990, 1992) bottom-up principles. Support was found for all of Narmour's principles, and for two additional predictors based on implied tonal structure. Responses of the low- and high-training groups were similar.


Schema-Driven Properties in Melody Cognition: Experiments on Final Tone Extrapolation by Music Experts
J. Abe & E. Hoshino
Psychomusicology, 1990, 9(2), 161-172.

Two experiments were conducted to investigate the final-tone extrapolating behavior of two music experts, one of Western classical music and the other of Japanese traditional music. The first experiment verified schema-driven properties of the experts' melody cognition. Both experts' final tone extrapolation was highly rule-governed, but the underlying properties of their melodic schemata seemed quite different. The Western classical music expert tended to process given melodies within the Western diatonic tonal frame, while the Japanese traditional music expert tended to operate "bimusically." That is, the latter processed some stimulus tone sequences based on the Japanese traditional tonal frame, but processed others based on the diatonic tonal frame. It was also confirmed that the melodic cognition of the Western music expert was more harmony-oriented, while that of the Japanese music expert was relatively contour-oriented. The second experiment conducted a more detailed investigation of the Western music expert's rules for extrapolating final tones, focusing only on the relationship between this expert's responses and Western diatonic scale structures. The response structures of this expert were analyzed in contrast to her subjective tonal structures for stimulus melodies and are discussed in terms of her tonal schema.


Measuring Melodic Expectancies with Children
M. Adachi & J. C. Carlsen
Bulletin of the Council for Research in Music Education, 1995, 127, 1-7.

The development of melodic expectancy (i.e., an ability to anticipate upcoming melodic events) has been discussed theoretically in the literature. Unfortunately, empirical investigations of melodic expectancy have been conducted only with adult musicians; no experimental tool has been established for obtaining expectancies other than from adult musicians. To establish a valid measure of melodic expectancy in children, we conducted four studies in which a total of 64 children ranging in age from 3:8 to 12:2 participated. Through these studies, a screening test was developed that could effectively identify children who were able to reveal melodic expectancy information through an adapted version of the "sung continuation procedure", the melodic expectancy measure originally used with adult musicians. In this paper, we summarize the process of developing the screening test and describe modifications of the original melodic expectancy measure so as to fit children's cognitive needs. Both the adequacy and limitations of our melodic expectancy measure are discussed.


Reaction Time and Musical Expectancy: Priming of Chords
J. J. Bharucha & K. Stoeckig
Journal of Experimental Psychology: Human Perception and Performance, 1986, 12(4), 403-410.

The cognitive processes underlying musical expectation were explored by measuring reaction time in a priming paradigm. Subjects made a speeded true/false decision about a target chord following a prime chord to which it was either closely or distantly related harmonically. Using a major/minor decision task in Experiment 1, we found that major targets were identified faster, and with fewer errors, when they were related than when unrelated. An apparent absence (and possible reversal) of this effect for minor targets can be attributed to the prime's biasing effect on the target's stability. In Experiments 2 and 3 we tested this hypothesis by employing an in-tune/out-of-tune decision for major and minor targets separately. Both major and minor in-tune targets were identified faster when related than when unrelated. We outline a spreading activation model which consists of a network of harmonic relations. Priming results from the indirect activation of chord nodes linked through the network.

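To make the spreading activation account concrete, here is a minimal sketch of the mechanism. The chord nodes, link weights, decay value, and function names below are invented for illustration; they are not Bharucha and Stoeckig's actual network, which covers the full space of major and minor chords.

# Illustrative sketch of spreading activation over a toy network of harmonic
# relations (hypothetical nodes and weights, not the authors' actual model).

links = {
    ("C", "G"): 0.9,   # closely related (dominant)
    ("C", "F"): 0.9,   # closely related (subdominant)
    ("C", "Am"): 0.7,  # relative minor
    ("G", "D"): 0.9,
    ("F", "Bb"): 0.9,
    ("Bb", "Eb"): 0.9,
}

def neighbors(node):
    """Yield (chord, weight) pairs linked to the given chord node."""
    for (a, b), w in links.items():
        if a == node:
            yield b, w
        elif b == node:
            yield a, w

def spread_activation(prime_chord, steps=2, decay=0.5):
    """Spread activation outward from the prime chord for a few steps."""
    activation = {prime_chord: 1.0}
    frontier = {prime_chord: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for nbr, w in neighbors(node):
                a = act * w * decay
                if a > activation.get(nbr, 0.0):
                    activation[nbr] = a
                    next_frontier[nbr] = a
        frontier = next_frontier
    return activation

# A more highly activated target chord would be primed, i.e. judged faster.
act = spread_activation("C")
print(act.get("G", 0.0))   # closely related target: substantial activation
print(act.get("Eb", 0.0))  # distantly related target: little or none

Under this reading, the graded activation reaching a target chord is what the reaction-time differences are assumed to index.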
Effects of Background Music on the Remembering of Filmed Events
M. Boltz, T. Schulkind & S. Kantra
Memory & Cognition, 1991, 19(6), 593-606.

The use of background music within films provides a naturalistic setting in which to investigate certain issues of schematic processing. Here, the relative placement of music was manipulated such that music either accompanied a scene's outcome, and thereby accentuated its affective meaning, or foreshadowed the same scene, and thereby created expectancies about the future course of events. In addition, background music was either congruent or incongruent with the affect of an episode's outcome. When subjects were later asked to recall the series of filmed episodes, results showed that expectancy violations arising from mood-incongruent relations led to better memory in the foreshadowing condition, while mood-congruent relations led to better performance in the accompanying condition. Results from a recognition task further revealed that scenes unavailable for recall could be recognized when cued by background music. These results are discussed in terms of selective-attending processes that are differentially directed as a function of background music.


The Processing of Structured and Unstructured Tonal Sequences
D. Deutsch
Perception & Psychophysics, 1980, 28(5), 381-389.

The recall of hierarchically organized tonal sequences was investigated in two experiments. An adaptation of the technique of melodic dictation was employed, in which musically trained listeners notated each sequence after it was presented. Strong effects of sequence structure were obtained. Sequences whose tonal structure could be parsimoniously encoded in hierarchical fashion were recalled with a high level of accuracy. Sequences that could not be parsimoniously encoded produced substantially more errors in recall. Temporal segmentation was found to have a substantial effect on performance, which reflected grouping by temporal proximity regardless of tonal structure. The results provide evidence for the hypothesis that we encode tonal materials by inferring sequence structures and alphabets at different hierarchical levels, together with their rules of combination.


Encoding Strategies for Tonal and Atonal Melodies
M. Mikumo
Music Perception, 1992, 10(1), 73-82.

In this experiment, strategies of pitch encoding in the processing of melodies were investigated. Twenty-six students who were highly trained in music and twenty-six who were less well trained were instructed to make recognition judgments concerning melodies after a 12-sec retention interval. During each retention interval, subjects were exposed to one of four conditions (pause, listening to an interfering melody, shadowing nonsense syllables, and shadowing note names). Both the standard and the comparison melodies were six-tone series that had either a high-tonality structure ("tonal melody") or a low-tonality structure ("atonal melody"). The results (obtained by the Newman-Keuls method) showed that recognition performance for the musically highly trained group was severely disrupted by the note names for the tonal melodies, while it was disrupted by the interfering melody for the atonal melodies. On the other hand, for the musically less well trained group, whose recognition performance was significantly worse than that of the highly trained group even in the pause condition, there were no significant differences in disruptive effects between the different types of interfering materials. These findings suggest that the highly trained group could use a verbal (note name) encoding strategy for the pitches in the tonal melodies, and also rehearsal strategies (such as humming and whistling) for the atonal melodies, but that subjects in the less well trained group were unable to use any effective strategies to encode the melodies.


Memory for Melodies among Subjects Differing in Age and Experience in Music
Y. Oura & G. Hatano
Psychology of Music, 1988, 16, 91-109.

An unfamiliar commercial song of 12 measures was learned by four musically experienced college students and four 4th-grade children, who each had had about 5 years of piano training, and by eight musically inexperienced college students. Each subject was presented the melody auditorily and required to reproduce it by singing it, ten times. The experienced students were far superior to the inexperienced ones, regardless of age, in the speed of acquisition as well as the eventual level of mastery of the melody. Additional experiments revealed that both musically experienced and inexperienced college students could learn a poem significantly faster than the 4th graders, and that these three groups of subjects were comparable in acquiring a non-tonal (modal) Japanese folk song. Tonal melodic memory of the experienced students seemed to be facilitated primarily by knowledge and strategies specific to tonal music.


Emergence of Thematic Concepts in Repeated Listening to Music
L. Pollard-Gott
Cognitive Psychology, 1983, 15, 66-94.

A repeated listening procedure was designed to monitor changes in listeners' appreciation of thematic categories in musical compositions. Subjects listened to a recorded musical composition. Passages selected from the composition were then played in pairs, and listeners rated their similarity. The similarity data were submitted to INDSCAL, a multidimensional scaling procedure, which located the passages in an n-dimensional space. This procedure was repeated in three separate sessions, so that changes in the perceived musical structure could be observed. In Study 1, subjects heard Liszt's Sonata in B minor, and the target passages were Theme A, Theme B, and three variations of each theme. While extrathematic dimensions dominated early acquaintance, a theme dimension emerged in the second and third sessions. Musicians gave higher weight to the theme dimension than did nonmusicians, and theme was the only dimension for experts on this sonata. Musicians were also more accurate in a final classification test, but only after repeated listening. The effect of repeated exposure on transfer to new theme exemplars was considered in Study 2. It is hoped this work will foster more naturalistic approaches to musical cognition.

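The scaling step can be illustrated with ordinary metric MDS on a pooled similarity matrix; this is a simplified stand-in for INDSCAL, which additionally estimates a weight for each listener on each dimension. The passage labels and ratings below are invented.

# Simplified illustration: ordinary metric MDS on a pooled similarity matrix.
# The passage labels and ratings are invented; INDSCAL (used in the paper)
# also fits individual listener weights for each dimension.
import numpy as np
from sklearn.manifold import MDS

passages = ["A", "A1", "A2", "B", "B1", "B2"]  # two themes, two variations each

# Hypothetical averaged similarity ratings (1 = maximally similar).
similarity = np.array([
    [1.0, 0.8, 0.7, 0.2, 0.3, 0.2],
    [0.8, 1.0, 0.7, 0.3, 0.2, 0.3],
    [0.7, 0.7, 1.0, 0.2, 0.3, 0.2],
    [0.2, 0.3, 0.2, 1.0, 0.8, 0.7],
    [0.3, 0.2, 0.3, 0.8, 1.0, 0.7],
    [0.2, 0.3, 0.2, 0.7, 0.7, 1.0],
])

# Convert similarities to dissimilarities and embed the passages in 2-D.
dissimilarity = 1.0 - similarity
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for label, (x, y) in zip(passages, coords):
    print(f"{label}: ({x:+.2f}, {y:+.2f})")
# If a theme dimension has emerged, the A and B passages should separate
# along one axis of the recovered configuration.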

Memorising Two Melodies of Different Style
H. Zielinska & K. Miklaszewski
Psychology of Music, 1992, 20, 95-111.

Thirty-one freshmen of the Academy of Music, classified either as possessors of absolute pitch (N=5) or non-possessors (N=26), memorised two melodies: one having a clear tonal structure, the other based on a modal scale. Generally, the culturally remote (modal) melody was more difficult for both groups of subjects, and the rhythm of the melodies caused more trouble in reproductions than pitch. Qualitative analysis of the measures sung during reproductions suggested that the strategies of learning applied during the experiment were influenced by the musical material and by individual characteristics of the subjects not controlled in the study, rather than by the possession of AP per se. Results of a MANOVA showed no significant indication that the possession of AP modifies the proportions of contour, interval and rhythm errors during melodic memorisation of both melodies, though AP subjects memorised both melodies faster than non-APs. They also scored significantly better in retaining the key and storing pitch information when they recalled the melodies from long-term memory.

