77 results for "musical features"
Search Results
2. Groove as a multidimensional participatory experience.
- Author
-
Duman, Deniz, Snape, Nerdinga, Danso, Andrew, Toiviainen, Petri, and Luck, Geoff
- Abstract
Groove is a popular and widely used concept in the field of music. Yet, its precise definition remains elusive. Upon closer inspection, groove appears to be used as an umbrella term with various connotations depending on the musical era, the musical context, and the individual using the term. Our aim in this article was to explore different definitions and connotations of the term groove so as to reach a more detailed understanding of it. Consequently, in an online survey, 88 participants provided free-text descriptions of the term groove. A thematic analysis revealed that groove is a multifaceted phenomenon, and participants' descriptions fit into two main categories: music- and experience-related aspects. Based on this analysis, we propose a contemporary working definition of the term groove as used in the field of music psychology: "Groove is a participatory experience (related to immersion, movement, positive affect, and social connection) resulting from subtle interaction of specific music- (such as time- and pitch-related features), performance-, and/or individual-related factors." Importantly, this proposed definition highlights the participatory aspect of the groove experience, which participants frequently mentioned, for example describing it as an urge to be "involved in" the music physically and/or psychologically. Furthermore, we propose that being immersed in music might be a prerequisite for other experiential qualities of groove, whereas the social aspect could be a secondary quality that comes into play as a consequence of musical activity. Overall, we anticipate that these findings will encourage a greater variety of research on this significant yet still not fully elucidated aspect of the musical experience. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Time-of-day practices echo circadian physiological arousal: An enculturated embodied practice in Hindustani classical music.
- Author
-
Agrawal, Tanushree, Shanahan, Daniel, Huron, David, and Keller, Hannah
- Abstract
Traditionally, various Hindustani (North Indian) ragas have been performed at specific times of day, such as dawn, dusk, midday, and evening. Human physiology also exhibits common circadian patterns, with reduced arousal at night, rising during the morning, culminating in peak arousal, and then declining arousal towards the end of the day. This raises the question of how and whether the musical features of ragas for each time of day are related to these circadian patterns of arousal. We formally examined associations between traditionally designated time-of-day classifications and musical features from 65 Hindustani raga performances. Our results showed that only pitch-related features are predictive of time-of-day classifications. Surprisingly, non-pitch factors known to correlate with arousal, such as tempo, did not covary with raga time-of-day practices. In general, the results are consistent with rules for North Indian raga performances described by Vishnu Narayan Bhatkhande (1860–1936) that emphasize the presence or prevalence of particular tones in the raga. The results point to a combination of enculturated and embodied influences in conveying musical arousal. Specifically, they suggest that while time-of-day-related raga listening practices may have been initially influenced by embodied processes, they have ultimately been reshaped by pitch-related cultural norms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Exploring Variational Auto-encoder Architectures, Configurations, and Datasets for Generative Music Explainable AI
- Author
-
Bryan-Kinns, Nick, Zhang, Bingyuan, Zhao, Songyan, and Banar, Berker
- Published
- 2024
- Full Text
- View/download PDF
5. Affective algorithmic composition of music: A systematic review.
- Author
-
Wiafe, Abigail and Fränti, Pasi
- Subjects
MUSICAL composition, AFFECT (Psychology), EMOTIONS, ENVIRONMENTAL music, AFFECTIVE computing, MUSIC software
- Abstract
Affective music composition systems are known to trigger emotions in humans. However, designing such systems to stimulate users' emotions continues to be a challenge because studies that aggregate existing literature in the domain to help advance research and knowledge are limited. This study presents a systematic literature review on affective algorithmic composition systems. Eighteen primary studies were selected from IEEE Xplore, ACM Digital Library, SpringerLink, PubMed, ScienceDirect, and Google Scholar databases following a systematic review protocol. The findings revealed that there is no single definition that encapsulates the various types of affective algorithmic composition systems. Accordingly, a unifying definition is provided. The findings also show that most affective algorithmic composition systems are designed for games to provide background music. The generative composition method was the most used compositional approach. Overall, there was relatively little research in the domain. Possible reasons for these trends are the lack of a common definition for affective music composition systems and the lack of detailed documentation of the design, implementation, and evaluation of existing systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Dual-Function Integrated Emotion-Based Music Classification System Using Features From Physiological Signals.
- Author
-
Kim, Hyoung-Gook, Lee, Gi Yong, and Kim, Min-Soo
- Subjects
MUSICAL perception, AUTOMATIC classification, MULTIPLE Signal Classification
- Abstract
In this paper, we propose an emotion-based music classification system using features from physiological signals. The proposed system integrates two functions: the first uses physiological sensors to recognize the emotions of users listening to music, and the second classifies music according to the feelings evoked in the listeners, without using physiological sensors. Moreover, to directly predict the user's emotions from sensor data acquired through wearable physiological sensors, we developed and implemented a hierarchical inner-attention-mechanism-based deep neural network. To spare users the discomfort of wearing physiological sensors every time they receive content recommendations, the relation between emotion-specific features extracted from previously generated physiological signals and musical features extracted from music is learned through a regression neural network. Based on these models, the proposed system classifies input music automatically according to users' emotional reactions without measuring human physiological signals. The experimental results not only demonstrate the accuracy of the proposed automatic music classification framework, but also provide a new perspective in which human experience-based characteristics related to emotion are applied to artificial-intelligence-based content classification. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
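The second function this abstract describes — predicting emotion-specific features from musical features alone, once the mapping has been learned — can be sketched with a plain ridge regression on synthetic data. The paper itself uses a regression neural network; every array, dimension, and value below is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row pairs musical features extracted
# from a track (tempo, spectral centroid, ...) with emotion-specific
# features derived from physiological recordings of listeners.
musical = rng.normal(size=(200, 8))      # 200 tracks, 8 musical features
emotion = musical @ rng.normal(size=(8, 3)) + 0.1 * rng.normal(size=(200, 3))

# Closed-form ridge regression: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
X = np.hstack([musical, np.ones((len(musical), 1))])  # add a bias column
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ emotion)

def predict_emotion(track_features):
    """Predict emotion-specific features for a new track, no sensors needed."""
    x = np.append(track_features, 1.0)
    return x @ W

pred = predict_emotion(musical[0])
```

Once `W` is fitted, classification can proceed from audio alone, which is the point the abstract makes about removing the wearable sensors at recommendation time.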
7. Music with Concurrent Saliences of Musical Features Elicits Stronger Brain Responses.
- Author
-
Tardón, Lorenzo J., Rodríguez-Rodríguez, Ignacio, Haumann, Niels T., Brattico, Elvira, and Barbancho, Isabel
- Subjects
MUSICAL perception, MUSICALS, EVOKED potentials (Electrophysiology), ELECTROENCEPHALOGRAPHY, PRICE deflation, LOUDNESS
- Abstract
Brain responses are often studied under strictly experimental conditions in which electroencephalograms (EEGs) are recorded to reflect reactions to short and repetitive stimuli. However, in real life, aural stimuli are continuously mixed and cannot be found isolated, such as when listening to music. In this audio context, the acoustic features in music related to brightness, loudness, noise, and spectral flux, among others, change continuously; thus, significant values of these features can occur nearly simultaneously. Such situations are expected to give rise to an increased brain reaction with respect to a case in which they would appear in isolation. To test this, EEG signals recorded while listening to a tango piece were considered. The focus was on the amplitude and latency of the negative deflection (N100) and positive deflection (P200) after the stimuli, defined on the basis of the selected musical feature saliences, in order to perform a statistical analysis intended to test the initial hypothesis. Differences in brain reactions can be identified depending on the concurrence (or not) of such significant values of different features, showing that coterminous increments in several qualities of music influence and modulate the strength of brain responses. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
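The N100/P200 analysis described above follows a standard ERP recipe: cut epochs around feature-salience events, baseline-correct, average, and read off the negative and positive deflections. A minimal NumPy sketch on synthetic data — sampling rate, window lengths, and event times are illustrative assumptions, not taken from the paper:

```python
import numpy as np

fs = 250  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)

# Synthetic single-channel EEG and hypothetical salience event times (s)
eeg = rng.normal(scale=2.0, size=fs * 60)           # one minute of data
events = np.array([5.0, 12.3, 33.7, 48.2])          # feature-salience onsets

def erp_components(eeg, events, fs, pre=0.1, post=0.4):
    """Average epochs around events, then read off N100/P200 peaks."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[int(t*fs)-n_pre : int(t*fs)+n_post] for t in events])
    epochs -= epochs[:, :n_pre].mean(axis=1, keepdims=True)  # baseline-correct
    erp = epochs.mean(axis=0)
    t = (np.arange(len(erp)) - n_pre) / fs
    n100 = erp[(t >= 0.08) & (t <= 0.15)].min()   # negative deflection ~100 ms
    p200 = erp[(t >= 0.15) & (t <= 0.25)].max()   # positive deflection ~200 ms
    return n100, p200

n100, p200 = erp_components(eeg, events, fs)
```

Comparing `n100`/`p200` between event sets with concurrent versus isolated feature saliences is then a matter of ordinary statistical testing over the per-condition amplitudes.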
8. Brain Computer Music Interfacing (BCMI)
- Author
-
Williams, Duncan, Lee, Newton, Series editor, and Williams, Duncan, editor
- Published
- 2018
- Full Text
- View/download PDF
9. Music with Concurrent Saliences of Musical Features Elicits Stronger Brain Responses
- Author
-
Lorenzo J. Tardón, Ignacio Rodríguez-Rodríguez, Niels T. Haumann, Elvira Brattico, and Isabel Barbancho
- Subjects
event-related potentials (ERP), electroencephalography (EEG), musical features, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
Brain responses are often studied under strictly experimental conditions in which electroencephalograms (EEGs) are recorded to reflect reactions to short and repetitive stimuli. However, in real life, aural stimuli are continuously mixed and cannot be found isolated, such as when listening to music. In this audio context, the acoustic features in music related to brightness, loudness, noise, and spectral flux, among others, change continuously; thus, significant values of these features can occur nearly simultaneously. Such situations are expected to give rise to an increased brain reaction with respect to a case in which they would appear in isolation. To test this, EEG signals recorded while listening to a tango piece were considered. The focus was on the amplitude and latency of the negative deflection (N100) and positive deflection (P200) after the stimuli, defined on the basis of the selected musical feature saliences, in order to perform a statistical analysis intended to test the initial hypothesis. Differences in brain reactions can be identified depending on the concurrence (or not) of such significant values of different features, showing that coterminous increments in several qualities of music influence and modulate the strength of brain responses.
- Published
- 2021
- Full Text
- View/download PDF
10. Groove as a multidimensional participatory experience
- Author
-
Duman, Deniz, Snape, Nerdinga, Danso, Andrew, Toiviainen, Petri, and Luck, Geoff
- Subjects
immersion, participation, groove, music, social connection, music psychology, thematic analysis, movement, positive affect, musical features
- Abstract
Groove is a popular and widely used concept in the field of music. Yet, its precise definition remains elusive. Upon closer inspection, groove appears to be used as an umbrella term with various connotations depending on the musical era, the musical context, and the individual using the term. Our aim in this article was to explore different definitions and connotations of the term groove so as to reach a more detailed understanding of it. Consequently, in an online survey, 88 participants provided free-text descriptions of the term groove. A thematic analysis revealed that groove is a multifaceted phenomenon, and participants’ descriptions fit into two main categories: music- and experience-related aspects. Based on this analysis, we propose a contemporary working definition of the term groove as used in the field of music psychology: “Groove is a participatory experience (related to immersion, movement, positive affect, and social connection) resulting from subtle interaction of specific music- (such as time- and pitch-related features), performance-, and/or individual-related factors.” Importantly, this proposed definition highlights the participatory aspect of the groove experience, which participants frequently mentioned, for example describing it as an urge to be “involved in” the music physically and/or psychologically. Furthermore, we propose that being immersed in music might be a prerequisite for other experiential qualities of groove, whereas the social aspect could be a secondary quality that comes into play as a consequence of musical activity. Overall, we anticipate that these findings will encourage a greater variety of research on this significant yet still not fully elucidated aspect of the musical experience.
- Published
- 2023
11. On application of kernel PCA for generating stimulus features for fMRI during continuous music listening.
- Author
-
Tsatsishvili, Valeri, Burunat, Iballa, Cong, Fengyu, Toiviainen, Petri, Alluri, Vinoo, and Ristaniemi, Tapani
- Subjects
KERNEL functions, BRAIN imaging, AUDITORY perception, FUNCTIONAL magnetic resonance imaging, DIAGNOSTIC imaging
- Abstract
Background: There has been growing interest towards naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of the acoustic descriptors computationally extracted from the stimulus audio.
New method: fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined.
Results: The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas.
Comparison with existing method: Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA for generating stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study.
Conclusions: Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
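The core of the "new method" — kernel PCA over acoustic descriptors — can be sketched directly in NumPy with an RBF kernel. The descriptors, kernel choice, and parameters below are illustrative assumptions, not the study's actual settings, and the data is synthetic:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Project rows of X onto the top principal axes in RBF-kernel space."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    K = np.exp(-gamma * sq)
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one           # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                       # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]           # pick the largest
    alphas = vecs[:, idx] / np.sqrt(vals[idx])            # normalized coefficients
    return Kc @ alphas                                    # projected features

# Hypothetical acoustic descriptors sampled frame-by-frame from the stimulus
rng = np.random.default_rng(2)
descriptors = rng.normal(size=(100, 6))   # 100 time frames, 6 descriptors
features = rbf_kernel_pca(descriptors, n_components=2, gamma=0.1)
```

Each column of `features` is then a nonlinear stimulus feature time series that can be correlated against fMRI responses, in the same role the linear PCA components played in the earlier study.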
12. Modalities and causal routes in music-induced mental imagery.
- Author
-
Küssner, Mats B. and Taruffi, Liila
- Subjects
MENTAL imagery
- Published
- 2023
- Full Text
- View/download PDF
13. NCM-Based Raga Classification Using Musical Features.
- Author
-
Anitha, Raghunathan and Gunavathi, Kandasamy
- Subjects
MUSICAL notation, RAGAS, MUSICAL intervals & scales, VOCAL music, REED organ, NEUTROSOPHIC logic
- Abstract
This paper studies Carnatic raga identification using musical features. In Carnatic music, there are 72 melakartha ragas, each denoted by musical notes. The musical features of the 72 main ragas are extracted. Features such as pitch, timbre, tonal, and rhythmic features are discussed with reference to their ability to distinguish different ragas. Due to the intricate nature of Carnatic music, the concept of neutrosophic logic is used to identify each raga, because neutrosophic logic captures the neutralities present between truth and falsity. This introduces a component of indeterminacy, which makes raga identification more accurate and smooth. Neutrosophic Cognitive Maps (NCMs) are drawn based on the musical features and solved. Using neutrosophic logic, a reduced set of musical features is arrived at for each raga, which can be thought of as features characterizing the raga. Each raga is classified using a set of musical features which are solutions of NCMs. This paper represents one of the first attempts to classify all 72 melakartha ragas using neutrosophic logic. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. REAL-TIME RESPONSES TO STRAVINSKY'S SYMPHONIES OF WIND INSTRUMENTS: PERCEPTION OF INTERNAL REPETITION AND MUSICAL INTEREST.
- Author
-
XIN WEN, OLIVIA and KRUMHANSL, CAROL LYNNE
- Subjects
SYMPHONY, WIND instruments, AESTHETICS, MUSICAL style
- Abstract
This experiment was designed to address factors that make repetition of musical themes within a piece recognizable, and to explore the relationship between internal repetition and musical interest. Thirty-seven participants of varied levels of music training listened to Stravinsky's Symphonies of Wind Instruments twice and responded to the music in real time. During the first listening, they continuously rated their level of interest and at the same time mentally identified the major themes. During the second listening, they indicated when they heard the major themes repeating. One theme was especially well recognized when repeated. It was relatively short, slow, began and ended with a predictable pattern, occurred relatively early in the piece, and was interspersed with other themes. Another theme stood out in the interest ratings, which was relatively long, fast, sometimes repeated immediately with a build-up of instrumentation and dynamics, and occurred later in the piece. In general, themes judged interesting were not those that were easily identified when repeated, suggesting these are independent aspects of this composition. No effect of music training was found. Extensive analyses of Stravinsky's Symphonies consider how the themes are repeated and interwoven. The experimental results confirmed the musical attributes considered in these analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
15. Global Sensory Qualities and Aesthetic Experience in Music.
- Author
-
Brattico, Pauli, Brattico, Elvira, and Vuust, Peter
- Subjects
AESTHETIC experience, VISUAL perception, DISSONANCE (Music theory)
- Abstract
A well-known tradition in the study of visual aesthetics holds that the experience of visual beauty is grounded in global computational or statistical properties of the stimulus, for example, scale-invariant Fourier spectrum or self-similarity. Some approaches rely on neural mechanisms, such as efficient computation, processing fluency, or the responsiveness of the cells in the primary visual cortex. These proposals are united by the fact that the contributing factors are hypothesized to be global (i.e., they concern the percept as a whole), formal or non-conceptual (i.e., they concern form instead of content), computational and/or statistical, and based on relatively low-level sensory properties. Here we consider that the study of aesthetic responses to music could benefit from the same approach. Thus, along with local features such as pitch, tuning, consonance/dissonance, harmony, timbre, or beat, also global sonic properties could be viewed as contributing toward creating an aesthetic musical experience. Several such properties are discussed and their neural implementation is reviewed in the light of recent advances in neuroaesthetics. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
16. MULTI-SCALE MODELLING OF SEGMENTATION: EFFECT OF MUSIC TRAINING AND EXPERIMENTAL TASK.
- Author
-
HARTMANN, MARTIN, LARTILLOT, OLIVIER, and TOIVIAINEN, PETRI
- Subjects
MUSICAL performance, REAL-TIME computing, KERNEL functions, CHORAL societies, CHORAL music
- Abstract
While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects of experimental task (i.e., real-time vs. annotated segmentation), nor of musicianship on boundary perception are clear. Our study assesses musicianship effects and differences between segmentation tasks. We conducted a real-time experiment to collect segmentations by musicians and nonmusicians from nine musical pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary indication density, although this might be contingent on stimuli and other factors. In line with other studies, no musicianship effects were found: our results showed high agreement between groups and similar inter-subject correlations. Also consistent with previous work, time scales between one and two seconds were optimal for combining boundary indications. In addition, we found effects of task on the number of indications, and a time lag between tasks dependent on beat length. Also, the optimal time scale for combining responses increased when the pulse clarity or event density decreased. Implications for future segmentation studies are raised concerning the selection of time scales for modelling boundary density, and time alignment between models. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
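The kernel density estimation step this abstract mentions can be sketched as follows: pool boundary indications across listeners and smooth them with a Gaussian kernel whose bandwidth is the chosen time scale. The indication times below are invented; the study found time scales of one to two seconds optimal:

```python
import numpy as np

def boundary_density(indications, timescale, t_grid):
    """Gaussian kernel density over pooled boundary indications (seconds).
    `timescale` is the kernel bandwidth, i.e. the model's time scale."""
    d = t_grid[:, None] - np.asarray(indications)[None, :]
    k = np.exp(-0.5 * (d / timescale) ** 2) / (timescale * np.sqrt(2 * np.pi))
    return k.sum(axis=1) / len(indications)

# Hypothetical boundary indications pooled across listeners (seconds)
indications = [30.1, 30.4, 29.8, 61.0, 60.5, 61.2, 90.3]
t = np.linspace(0, 120, 1201)
density = boundary_density(indications, timescale=1.5, t_grid=t)
peak_time = t[np.argmax(density)]
```

Varying `timescale` produces the multi-scale family of segmentation models; peaks of `density` are the model's predicted section boundaries.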
17. Event-related brain responses while listening to entire pieces of music.
- Author
-
Poikonen, H., Alluri, V., Brattico, E., Lartillot, O., Tervaniemi, M., and Huotilainen, M.
- Subjects
EVOKED potentials (Electrophysiology), ELECTROENCEPHALOGRAPHY, MUSIC, INTERSTIMULUS interval, FINITE impulse response filters, INDEPENDENT component analysis
- Abstract
Brain responses to discrete short sounds have been studied intensively using the event-related potential (ERP) method, in which the electroencephalogram (EEG) signal is divided into epochs time-locked to stimuli of interest. Here we introduce and apply a novel technique which enables one to isolate ERPs in humans elicited by continuous music. The ERPs were recorded during listening to a Tango Nuevo piece, a deep techno track, and an acoustic lullaby. Acoustic features related to timbre, harmony, and dynamics of the audio signal were computationally extracted from the musical pieces. Negative deflection occurring around 100 milliseconds after stimulus onset (N100) and positive deflection occurring around 200 milliseconds after stimulus onset (P200) ERP responses to peak changes in the acoustic features were distinguishable and were often largest for Tango Nuevo. In addition to large changes in these musical features, long phases of low feature values – which we will call Preceding Low-Feature Phases – followed by a rapid increase also enhanced the amplitudes of the N100 and P200 responses. These ERP responses resembled those to simpler sounds, making it possible to utilize the tradition of ERP research with naturalistic paradigms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
18. The reliability of continuous brain responses during naturalistic listening to music.
- Author
-
Burunat, Iballa, Toiviainen, Petri, Alluri, Vinoo, Bogert, Brigitte, Ristaniemi, Tapani, Sams, Mikko, and Brattico, Elvira
- Subjects
BRAIN physiology, BRAIN imaging, BRAIN anatomy, COGNITIVE ability, STIMULUS & response (Biology), FUNCTIONAL magnetic resonance imaging
- Abstract
Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied by functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of previous findings. Participants’ fMRI responses during continuous listening to a tango Nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. The replicability of previous results and the present study was assessed by two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was more replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features as compared to high-level features. The processing of such high-level features is probably more sensitive to the states and traits of the listeners, as well as their background in music. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
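The voxelwise correlation analysis this abstract describes reduces to a Pearson correlation between one musical-feature time series and every voxel's BOLD time series. A minimal sketch with synthetic data; the dimensions and the planted signal are illustrative assumptions:

```python
import numpy as np

def voxelwise_correlation(bold, feature):
    """Pearson r between one musical-feature time series and every voxel.
    bold: (time, voxels) array; feature: (time,) array."""
    b = bold - bold.mean(axis=0)
    f = feature - feature.mean()
    return (b * f[:, None]).sum(axis=0) / (
        np.sqrt((b ** 2).sum(axis=0)) * np.sqrt((f ** 2).sum())
    )

rng = np.random.default_rng(3)
feature = rng.normal(size=240)           # e.g. brightness over 240 volumes
bold = rng.normal(size=(240, 500))       # 500 voxels of noise
bold[:, 0] += 2 * feature                # one voxel tracks the feature
r = voxelwise_correlation(bold, feature)
```

Thresholding `r` across voxels yields the activation map for that feature; comparing two such maps (by correlation or voxel overlap, as in the abstract) gives the replicability measures.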
19. Temporal dynamics of musical emotions examined through intersubject synchrony of brain activity.
- Author
-
Trost, Wiebke, Frühholz, Sascha, Cochrane, Tom, Cojan, Yann, and Vuilleumier, Patrik
- Subjects
MUSICOLOGY, BRAIN research, FUNCTIONAL magnetic resonance imaging, AMYGDALOID body, COGNITIVE dissonance
- Abstract
To study emotional reactions to music, it is important to consider the temporal dynamics of both affective responses and underlying brain activity. Here, we investigated emotions induced by music using functional magnetic resonance imaging (fMRI) with a data-driven approach based on intersubject correlations (ISC). This method allowed us to identify moments in the music that produced similar brain activity (i.e. synchrony) among listeners under relatively natural listening conditions. Continuous ratings of subjective pleasantness and arousal elicited by the music were also obtained for the music outside of the scanner. Our results reveal synchronous activations in left amygdala, left insula and right caudate nucleus that were associated with higher arousal, whereas positive valence ratings correlated with decreases in amygdala and caudate activity. Additional analyses showed that synchronous amygdala responses were driven by energy-related features in the music such as root mean square and dissonance, while synchrony in insula was additionally sensitive to acoustic event density. Intersubject synchrony also occurred in the left nucleus accumbens, a region critically implicated in reward processing. Our study demonstrates the feasibility and usefulness of an approach based on ISC to explore the temporal dynamics of music perception and emotion in naturalistic conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
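The intersubject correlation (ISC) measure underlying this study can be sketched per voxel as the mean pairwise Pearson correlation of subjects' time series. The data below is synthetic; the actual study computed ISC over fMRI volumes with additional preprocessing:

```python
import numpy as np

def intersubject_correlation(voxel_ts):
    """Mean pairwise Pearson correlation across subjects for one voxel.
    voxel_ts: (subjects, time) array of that voxel's activity."""
    z = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
    z /= np.sqrt((z ** 2).sum(axis=1, keepdims=True))
    r = z @ z.T                                   # subject-by-subject correlations
    n = len(voxel_ts)
    return r[np.triu_indices(n, k=1)].mean()      # average the upper triangle

rng = np.random.default_rng(4)
shared = rng.normal(size=300)                     # stimulus-driven component
subjects = shared + rng.normal(size=(12, 300))    # 12 listeners, private noise
isc = intersubject_correlation(subjects)
```

Voxels with high `isc` are those where the music drives similar activity across listeners; the study then relates the timing of that synchrony to continuous arousal and valence ratings.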
20. INTEGRATED ACTIVITIES IN INSTRUMENT LESSONS.
- Author
-
FORTUNA, SANDRA
- Subjects
MUSIC education, BODY movement, TRANSPOSITION (Music theory)
- Abstract
Multimodal experience of music and its transposition into movement, speech, and visual codes is very strong among children engaged in spontaneous play. The aim of this work was to enable children playing an instrument to explore a sound's length, volume, energy, and articulation by using voice, movement, and finally graphic symbols. My hypothesis was that these integrated activities would improve the children's understanding of the elements of sound, enabling them to be more expressive on the instrument. The study was carried out with children aged 5 to 7. During violin lessons the children were offered musical features, both emotional (cheerful/sad, firm/uncertain, energetic/delicate, etc.) and dynamic and rhythmic (long/short, heavy/light, fast/slow, legato/staccato, etc.). While the teacher was playing the instrument, children were encouraged to accompany the music with movement and voice. Afterwards they were invited to reproduce with the violin the features of the sounds they had experienced themselves. Finally, the children were asked to draw their own invented graphic notation indicating the sounds experienced during the various stages of the process and to perform it with the violin again. This study reveals that turning music into body movement and singing voice helped to improve the children's psychomotor awareness of the sounds they want to achieve. Hence, translating sounds into body movements, singing voice, and graphic representation can be a means of improving children's understanding of sound features and of re-expressing/re-performing them with the instrument. [ABSTRACT FROM AUTHOR]
- Published
- 2015
21. Emotional Expression in Music: Contribution, Linearity, and Additivity of Primary Musical Cues
- Author
-
Tuomas Eerola, Anders Friberg, and Roberto Bresin
- Subjects
Perception, emotion, self-report, Factorial design, musical features, lens model, Psychology, BF1-990
- Abstract
The aim of this study was to manipulate musical cues systematically to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and the ranked importance of these was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of the variance in ratings. Quadratic encoding of cues did lead to minor but significant improvements of the models (0–8%). Finally, the interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin & Lindström, 2010).
- Published
- 2013
- Full Text
- View/download PDF
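The linear-versus-quadratic cue encoding comparison reported above amounts to fitting nested regression models and comparing explained variance. A small sketch on fabricated factorial data — the cue coding, effect sizes, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical factorial design: one cue coded at 5 levels (-2..2, e.g. tempo
# steps); ratings are mostly linear in the cue with a small quadratic bend.
levels = rng.integers(-2, 3, size=400).astype(float)
ratings = 1.5 * levels + 0.2 * levels**2 + rng.normal(scale=0.5, size=400)

def r_squared(X, y):
    """Ordinary least squares fit; returns the coefficient of determination."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

ones = np.ones_like(levels)
r2_linear = r_squared(np.column_stack([ones, levels]), ratings)
r2_quad = r_squared(np.column_stack([ones, levels, levels**2]), ratings)
gain = r2_quad - r2_linear   # modest increase, as in the study's 0-8% range
```

The full study does this jointly for six cues and ranks their importance from the standardized regression weights; the sketch shows only the linear-vs-quadratic contrast for a single cue.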
22. Detecting the mood of a music track using deep learning (Otkrivanje raspoloženja glazbene pjesme korištenjem dubokog učenja)
- Author
-
Avdić, Dino and Bagić Babac, Marina
- Subjects
deep neural network, classification, musical features, model accuracy, loss function, Adam optimizer, Dropout technique, TECHNICAL SCIENCES. Computing
- Abstract
Music is a powerful language for expressing emotions and moods, as well as a means of influencing the emotions of a listener. With the growth of huge digital music libraries over the past decade, music moods have become a desirable access point to music databases, and automatic mood recognition and classification have received increasing attention in recent years. Compared to traditional characteristics of music such as genre or artist, mood is more subjective and harder to quantify, which makes recognizing it more challenging. This thesis presents classification models based on deep learning that can determine the mood of a specific track. An example mood detector is designed and implemented using currently available datasets.
- Published
- 2021
23. Robustness of musical features on deep learning models for music genre classification.
- Author
-
Singh, Yeshwant and Biswas, Anupam
- Subjects
- *
POPULAR music genres , *DEEP learning , *MUSICAL analysis , *MUSICALS , *INFORMATION retrieval , *MACHINE learning - Abstract
Music information retrieval (MIR) has witnessed rapid advances in tasks such as musical similarity, music genre classification (MGC), and audio tagging. Many researchers approach MGC and audio tagging with a variety of features through traditional machine learning and deep learning (DL) techniques. DL-based models require a large amount of data to generalize well to new samples. Unfortunately, the lack of sizeable open music datasets makes analyzing the robustness of musical features on DL models all the more necessary. This paper therefore assesses and compares the robustness of several commonly used musical and non-musical features on DL models for the MGC task, evaluating the performance of selected models on features extracted from various datasets comprising billions of segmented data samples. In our evaluation, Mel-scale based features and Swaragram showed high robustness across the datasets and DL models for the MGC task. • Robustness of musical features analyzed for music classification using DL models. • Mel-based features and Swaragram generally perform well across various music styles. • Large frame and small sliding window length envelopes are suitable for DL models. • Small DL models suffer from underfitting, whereas large ones overfit on musical features. [ABSTRACT FROM AUTHOR]
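The highlights above mention frame and sliding-window (hop) lengths; a hedged sketch of the framing step that precedes any per-frame feature extraction (the sizes below are illustrative examples, not the paper's settings):

```python
def frame_signal(signal, frame_len, hop_len):
    """Split a sample sequence into overlapping analysis frames; features
    such as Mel-scale filterbank energies are then computed per frame."""
    if len(signal) < frame_len:
        return []
    n_frames = 1 + (len(signal) - frame_len) // hop_len
    return [signal[i * hop_len : i * hop_len + frame_len] for i in range(n_frames)]

# One second of a hypothetical 22,050 Hz signal with a large frame
# (2048 samples) and a small hop (512 samples), in the spirit of the
# paper's highlight about frame and window lengths.
frames = frame_signal(list(range(22050)), frame_len=2048, hop_len=512)
```

A larger frame gives finer frequency resolution per feature vector, while a smaller hop yields more (overlapping) samples per track, which is one plausible reading of why that combination suits data-hungry DL models.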
- Published
- 2022
- Full Text
- View/download PDF
24. Melodic similarity among folk songs: An annotation study on similarity-based categorization in music.
- Author
-
Volk, Anja and van Kranenburg, Peter
- Abstract
In this article we determine the role of different musical features for the human categorization of folk songs into tune families in a large collection of Dutch folk songs. Through an annotation study we investigate the relation between musical features, perceived similarity and human categorization in music. We introduce a newly developed annotation method which is used to create an annotation data set for 360 folk song melodies in 26 tune families. This dataset delivers valuable information on the contribution of musical features to the process of categorization which is based on assessing the similarity between melodies. The analysis of the annotation data set reveals that the importance of single musical features for assessing similarity varies both between and within tune families. In general, the recurrence of short characteristic motifs is most relevant for the perception of similarity between songs belonging to the same tune family. Global melodic features often used for the description of melodies (such as melodic contour) play a less important role. The annotation data set is a valuable resource for further research on melodic similarity and can be used as enriched “ground truth” to test various kinds of retrieval algorithms in Music Information Retrieval. Our annotation study exemplifies that assessing similarity is crucial for human categorization processes, which has been questioned within Cognitive Science in the context of rule-based approaches to categorization. [ABSTRACT FROM PUBLISHER]
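The finding that short characteristic motifs matter more than global contour can be illustrated with a toy comparison; representing motifs as n-grams of pitch intervals is a common simplification, not the authors' annotation method:

```python
def interval_ngrams(pitches, n=3):
    """Short characteristic motifs, approximated as n-grams of pitch intervals."""
    ivs = [b - a for a, b in zip(pitches, pitches[1:])]
    return {tuple(ivs[i:i + n]) for i in range(len(ivs) - n + 1)}

def motif_similarity(p1, p2, n=3):
    """Jaccard overlap of the interval n-gram sets of two melodies."""
    a, b = interval_ngrams(p1, n), interval_ngrams(p2, n)
    return len(a & b) / len(a | b) if a | b else 0.0

def contour(pitches):
    """Coarse global contour: +1 up, 0 flat, -1 down for each melodic step."""
    return [(b > a) - (b < a) for a, b in zip(pitches, pitches[1:])]

# An invented melody (MIDI pitches) and a transposed variant: the variant
# keeps every interval motif and the same contour, as one would expect
# for members of the same tune family.
tune = [60, 62, 64, 62, 60, 67, 65, 64]
variant = [p + 5 for p in tune]
```

Motif overlap is transposition-invariant here, whereas contour discards interval sizes entirely, which hints at why contour alone is a weaker similarity cue.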
- Published
- 2012
- Full Text
- View/download PDF
25. Understanding the relationship between user emotion and latent musical features
- Author
-
Shastry, Aishwarya
- Abstract
With the advent of the Internet and the resulting data boom, recommender systems filter the information available online and surface relevant items. These systems come in handy when one wants to listen to songs, watch movies, or buy products on the Internet. Early recommender systems used content-based or collaborative filtering techniques. More recent research has studied the importance of contextual features in recommender systems, and music preference has long been associated with the contextual feature of emotion. However, few studies examine the mood-congruence effect in the domain of music recommender systems, and the field of music emotion recognition remains under-explored, with recommendations being made from limited features. This master's thesis analyses the relationship between several latent musical features and user emotion through our interface MooDify, a music recommendation system that induces an emotion in the user via emotion-induction techniques and investigates the effect of their emotional state on satisfaction and unexpectedness when presented with songs curated for specific musical features. To achieve this, we analysed enjoyment and unexpectedness ratings for recommendations specific to latent musical features in a given emotional state; the interesting results obtained are discussed later in this work. Computer Science | Data Science and Technology
- Published
- 2019
26. NCM-Based Raga Classification Using Musical Features
- Author
-
Anitha, Raghunathan and Gunavathi, Kandasamy
- Published
- 2016
- Full Text
- View/download PDF
27. Réseaux de neurones convolutifs et paramètres musicaux pour la classification en genres
- Author
-
Sènac, Christine, Pellegrini, Thomas, Pinquier, Julien, and Mouret, Florian (Équipe Structuration, Analyse et MOdélisation de documents Vidéo et Audio (IRIT-SAMoVA) and Signal et Communications (IRIT-SC), Institut de recherche en informatique de Toulouse (IRIT), Université Fédérale Toulouse Midi-Pyrénées)
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing, Music genre classification, Convolutional neural networks, Musical features - Abstract
We propose using convolutional neural networks (CNNs) for music genre classification. In contrast to the classic approach of feeding a spectrogram as input, we choose a set of musical features along three musical dimensions: dynamics, timbre, and tonality. With an appropriate CNN topology, the results show that eight musical features are more effective than the 513 frequency bins of a spectrogram, and that late fusion of the systems based on the two feature types reaches a correct-classification rate of 91% on the GTZAN corpus.
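The late fusion of the two systems mentioned in the abstract is, in its simplest form, a weighted average of per-class probabilities; the sketch below uses made-up scores, not the paper's outputs:

```python
def late_fusion(probs_a, probs_b, weight=0.5):
    """Weighted average of the per-genre probabilities of two systems."""
    return [weight * a + (1 - weight) * b for a, b in zip(probs_a, probs_b)]

# Hypothetical scores over three genres from a spectrogram-based CNN and
# a musical-feature-based CNN; fusing them lets the consistent evidence
# for the middle genre win out even though the systems disagree.
spec_cnn = [0.45, 0.40, 0.15]
feature_cnn = [0.20, 0.60, 0.20]
fused = late_fusion(spec_cnn, feature_cnn)
genre = max(range(len(fused)), key=fused.__getitem__)
```

Late fusion combines decisions rather than features, so each system can keep the input representation that suits it best.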
- Published
- 2017
28. Global Sensory Qualities and Aesthetic Experience in Music
- Author
-
Pauli Brattico, Peter Vuust, and Elvira Brattico
- Subjects
Sensory system, Stimulus (physiology), Hypothesis and Theory, Journal Article, Cognitive dissonance, Processing fluency, Cognitive science, Communication, General Neuroscience, music aesthetics, naturalistic paradigm, visual aesthetics, Visual cortex, neuroaesthetics, Beauty, Percept, Psychology, Timbre, Neuroscience, musical features - Abstract
A well-known tradition in the study of visual aesthetics holds that the experience of visual beauty is grounded in global computational or statistical properties of the stimulus, for example, scale-invariant Fourier spectrum or self-similarity. Some approaches rely on neural mechanisms, such as efficient computation, processing fluency, or the responsiveness of the cells in the primary visual cortex. These proposals are united by the fact that the contributing factors are hypothesized to be global (i.e., they concern the percept as a whole), formal or non-conceptual (i.e., they concern form instead of content), computational and/or statistical, and based on relatively low-level sensory properties. Here we consider that the study of aesthetic responses to music could benefit from the same approach. Thus, along with local features such as pitch, tuning, consonance/dissonance, harmony, timbre, or beat, also global sonic properties could be viewed as contributing toward creating an aesthetic musical experience. Several such properties are discussed and their neural implementation is reviewed in the light of recent advances in neuroaesthetics.
- Published
- 2017
- Full Text
- View/download PDF
29. Musical Feature and Novelty Curve Characterizations as Predictors of Segmentation Accuracy
- Subjects
ta113 ,predictability ,ta6131 ,music ,novelty curves ,notation (music) ,keys (tone systems) ,rhythm ,ta515 ,novelty detection ,musical features - Published
- 2017
30. A fuzzy rule model for high level musical features on automated composition systems
- Author
-
Francisco Mugica, Àngela Nebot, Enrique Romero, Ivan Paz, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, and Universitat Politècnica de Catalunya. SOCO - Soft Computing
- Subjects
Informàtica::Intel·ligència artificial::Aprenentatge automàtic [Àrees temàtiques de la UPC], Computer science, Process (engineering), Rationality, Machine learning, Fuzzy logic, Musical representation, Selection (linguistics), Feature (machine learning), Algorithmic composition, Musical features, Composition (language), Fuzzy rule, Musical analysis -- Data processing, Computer composition, Artificial intelligence, Natural language processing - Abstract
Algorithmic composition systems are now well understood. However, when they are used for specific tasks, such as creating material for part of a piece, it is common to prefer, among all possible outputs, those exhibiting specific properties. Even though the number of valid outputs is huge, the selection is often performed manually, either using expertise in the algorithmic model, by means of sampling techniques, or sometimes even by chance. This process has traditionally been automated with machine learning techniques. However, whether these techniques can really capture, to a great degree, the human rationality through which the selection is made remains an open question. The present work discusses a possible approach that combines expert opinion with a fuzzy methodology for rule extraction to model high-level features. An early implementation able to explore the universe of outputs of a particular algorithm by means of the extracted rules is discussed. The rules search for objects similar to those having a desired, pre-identified feature. In this sense, the model can be seen as a finder of objects with specific properties.
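As a toy illustration of this kind of rule-based selection (not the authors' fuzzy methodology), one can score each algorithm output with fuzzy membership functions and keep those whose rule activation is high enough; the feature names and thresholds below are hypothetical:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def select(outputs, rule, threshold=0.5):
    """Keep algorithm outputs whose fuzzy rule activation meets a threshold,
    mimicking a 'finder of objects with specific properties'."""
    return [o for o in outputs if rule(o) >= threshold]

# Hypothetical rule: 'density around 0.6 AND register around 70',
# using min as the fuzzy AND.
rule = lambda o: min(triangular(o["density"], 0.3, 0.6, 0.9),
                     triangular(o["register"], 60, 70, 80))
outputs = [{"density": 0.58, "register": 71},
           {"density": 0.95, "register": 70},
           {"density": 0.60, "register": 64}]
chosen = select(outputs, rule)
```

Only the first output activates both conditions strongly enough; the others fail on one feature each, which is the behaviour one wants from a property-based finder.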
- Published
- 2017
31. Modelling and prediction of perceptual segmentation
- Author
-
Hartmann, Martín Ariel
- Subjects
Ydinestimointi ,rakenne ,musiikki ,perceptual segmentation task ,havaitseminen ,muutos ,musical structure ,kuunteleminen ,segmentointi ,musiikintutkimus ,musiikkitiede ,kernel density estimation ,novelty detection ,musical features ,musical training - Abstract
While listening to music, we somehow make sense of a multiplicity of auditory events; for example, in popular music we are often able to recognize whether the current section is a verse or a chorus, and to identify the boundaries between these segments. This organization occurs at multiple levels, since we can discern motifs, phrases, sections and other groupings. In this work, we understand segment boundaries as instants of significant change. Several studies on music perception and cognition have strived to understand what types of changes are associated with perceptual structure. However, effects of musical training, possible differences between real-time and non real-time segmentation, and the relative importance of different musical dimensions on perception and prediction of segmentation are still unsolved problems. Investigating these issues can lead to a better understanding of mechanisms used by different types of listeners in different contexts, and to gain knowledge of the relationship between perceptual structure and underlying acoustic changes in the music. In this work, we collected segmentation responses from musical pieces in two listening experiments, a real-time task and a non real-time task. Boundary data was obtained from 18 non-musicians in the real-time task and from 18 musicians in both tasks. We used kernel density estimation to aggregate boundary responses from multiple participants into a perceptual segment density curve, and novelty detection to obtain computational models based on audio musical features extracted from the musical stimuli. Overall, our findings provide evidence for an effect of experimental task on perceptual segmentation and its prediction, and clarify the contribution of local and global musical characteristics. However, the findings do not resolve discrepancies in the literature regarding musicianship. Furthermore, this investigation highlights the role of local musical change between homogeneous regions in boundary perception, the impact of boundary indication delays on segmentation, and the problem of segmentation time scales on modelling.
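The two modelling ingredients named in this abstract, kernel density estimation over listeners' boundary indications and peak (novelty-style) detection on the resulting curve, can be sketched as follows; the response times are invented for illustration:

```python
import math

def boundary_density(times, grid, bandwidth=1.0):
    """Gaussian kernel density estimate over listeners' boundary indications."""
    norm = 1.0 / (len(times) * bandwidth * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((t - b) / bandwidth) ** 2)
                       for b in times) for t in grid]

def peaks(grid, density):
    """Local maxima of the density curve, read as perceived segment boundaries."""
    return [grid[i] for i in range(1, len(density) - 1)
            if density[i - 1] < density[i] >= density[i + 1]]

# Hypothetical responses (in seconds) from several listeners, clustered
# around two underlying boundaries near 10 s and 30 s.
responses = [9.6, 10.1, 10.4, 9.9, 29.8, 30.2, 30.5]
grid = [x * 0.5 for x in range(0, 81)]   # 0-40 s in 0.5 s steps
curve = boundary_density(responses, grid, bandwidth=1.0)
boundaries = peaks(grid, curve)
```

The bandwidth controls how much individual indication delays are smoothed away, which connects directly to the abstract's point about boundary indication delays and time scales.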
- Published
- 2017
34. Assessment of urban soundscapes with the focus on an architectural installation with musical features
- Author
-
Hrvoje Domitrović, Kristian Jambrošić, and Marko Horvat
- Subjects
Soundscape, geography, Acoustics and Ultrasonics, Computer science, Acoustics, urban soundscapes, architectural installation, musical features, Context (language use), Monaural, Sound installation, Musical acoustics, Sound recording and reproduction, Arts and Humanities (miscellaneous), Perception, Binaural recording, Sound (geography), Cognitive psychology - Abstract
Urban soundscapes at five locations in the city of Zadar were perceptually assessed through on-site surveys and objectively evaluated from monaural and binaural recordings. All locations were chosen to display as much auditory and visual diversity as possible. The unique sound installation known as the Sea Organ was included as an atypical, music-like environment. Typical objective parameters related to the amount of acoustic energy, the spectral properties of sound, the amount of fluctuation, and tonal properties were calculated from the recordings. The subjective assessment was done on-site using a common survey for evaluating the properties of the sound and visual environment. The results revealed the importance of introducing context into soundscape research, because the objective parameters did not correlate significantly with the responses obtained from interviewees. Excessive values of certain objective parameters can indicate that a sound environment will be perceived as unpleasant or annoying, but its overall perception depends on how well it agrees with people's expectations. This was clearly seen in the case of the Sea Organ, for which the highest values of the objective parameters were obtained but which, at the same time, was evaluated as the most positive sound environment in every aspect.
- Published
- 2013
35. Emotional expression in music: contribution, linearity, and additivity of primary musical cues
- Author
-
Anders Friberg, Roberto Bresin, and Tuomas Eerola
- Subjects
emotion, Musical, Mode (music), Perception, Discrete emotion ratings, Psychology, Emotional expression, Original Research Article, General Psychology, Articulation (music), lens model, self-report, factorial design, music cues, Social psychology, Register (music), Dynamics (music), Timbre, musical features - Abstract
The aim of this study was to manipulate musical cues systematically to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and their ranked importance was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of the variance in ratings; quadratic encoding of cues led to minor but significant increases in model fit (0–8%). Finally, interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
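The additivity result can be illustrated with a toy linear cue model: under purely additive contributions, the effect of one cue is the same at every level of the others (no interaction). The weights below are invented, not the study's regression coefficients:

```python
def rating(mode, tempo, register, weights=(0.9, 0.6, 0.3)):
    """Hypothetical additive cue model of a perceived emotion rating:
    each cue level contributes linearly and independently."""
    w_mode, w_tempo, w_register = weights
    return w_mode * mode + w_tempo * tempo + w_register * register

# No interaction: the effect of switching mode (0 = minor, 1 = major)
# is identical at every tempo level, which is what an additive model
# predicts and what the study largely observed.
effect_at_slow_tempo = rating(1, 0, 0) - rating(0, 0, 0)
effect_at_fast_tempo = rating(1, 1, 0) - rating(0, 1, 0)
```

In an interactive model the two differences would diverge; equality of such contrasts is exactly what a regression without interaction terms encodes.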
- Published
- 2013
36. Move the way you feel : effects of musical features, perceived emotions, and personality on music-induced movement
- Author
-
Burger, Birgitta
- Subjects
liikeoppi ,tanssi ,musiikki ,emotion ,havaitseminen ,liikkeenkaappaus ,perception ,liike ,persoonallisuus ,kuunteleminen ,personality ,tunteet ,tanssijat ,motion capture ,liikkuminen ,music-induced movement ,dynamiikka ,musical features - Published
- 2013
37. Feature selection for classification of music according to expressed emotion
- Author
-
Saari, Pasi
- Subjects
ominaisuudet ,feature selection ,overfitting ,tunteet ,musiikki ,musical emotions ,wrapper selection ,cross-indexing ,musical features ,luokitus - Published
- 2009
38. Určování období vzniku interpretace za pomoci metod parametrizace hudebního signálu
- Author
-
Kiska, Tomáš, Mucha, Ján, and Král, Vítězslav
- Abstract
The aim of this diploma thesis is to summarize existing knowledge in the area of comparing music recordings and to implement an evaluation system for determining the period in which a performance originated, using music signal parameterization methods. The first part of this work describes the representations that music can take. Next, there is a cross-section of the parameters that can be extracted from music recordings, providing information on the dynamics, tempo, timbre, and temporal development of a recording. The second part describes the evaluation system and its individual sub-blocks. The input data for this evaluation system is a purpose-built database of 56 sound recordings of the first movement of Beethoven's 5th Symphony. The last chapter is dedicated to a summary of the achieved results.
39. Porovnávání zvukových nahrávek za pomoci parametrů popisující barvu zvuku
- Author
-
Kiska, Tomáš, Schimmel, Jiří, and Miklánek, Štěpán
- Abstract
This thesis deals with research into musical features that describe music recordings in terms of timbre. The first chapter covers the historical development of and modern approaches to the discipline called Music Information Retrieval (MIR); music processing is then described from the perspectives of both music theory and digital signal processing. A description of signal pre-processing follows, a step that is important for the parameterization of a music signal. The chapter on feature retrieval summarizes the features commonly used when retrieving information from music recordings, with the main focus on timbral features. A database of music recordings and a feature retrieval system are introduced. The last chapter deals with an individual analysis of timbral features.
42. Analýza původu vzniku interpretace za pomoci metod parametrizace hudebního signálu
- Author
-
Kiska, Tomáš, Schimmel, Jiří, and Brada, Tomáš
- Abstract
This thesis summarizes existing findings on the comparison of music recordings in the field of Music Information Retrieval. Its goals are to analyze multiple performances of the same work in terms of tempo, dynamics, and timbre and, from the data obtained, to select the musical parameters with the greatest ability to differentiate the origin of the individual performances.
43. Analýza původu vzniku interpretace za pomoci metod parametrizace hudebního signálu
- Author
-
Kiska, Tomáš, Schimmel, Jiří, Brada, Tomáš, Kiska, Tomáš, Schimmel, Jiří, and Brada, Tomáš
- Abstract
Tato práce shrnuje dosavadní poznatky z oblasti srovnávání zvukových nahrávek v oblasti tzv. Music information retrieval. Dále je cílem této práce analyzovat interpretace z hlediska tempa, dynamiky a barvy zvuku. Ze získaných dat provedené analýzy je pak cílem vybrat takové hudební parametry, které mají největší schopnost diferencovat původ vzniku jednotlivých interpretací., Presented thesis summarizes existing findings in the problematics of music recording comparison, in the field of so-called Music Information Retrieval. One of the goals of the thesis is to perform analysis of multiple musical renditions from the point of view of the tempo, dynamics and sound timbre. From the obtained data, such musical parameters will be chosen which exhibit the biggest potential to differentiate the origine of the individual renditions.
44. Určování období vzniku interpretace za pomoci metod parametrizace hudebního signálu
- Author
-
Kiska, Tomáš, Mucha, Ján, Král, Vítězslav, Kiska, Tomáš, Mucha, Ján, and Král, Vítězslav
- Abstract
Cílem této diplomové práce je shrnout dosavadní poznatky z oblasti srovnávání zvukových nahrávek a implementovat vyhodnocovací systém pro určení období vzniku za pomoci metod parametrizace hudebního signálu. V první části této práce jsou popsány reprezentace, jakých hudba může nabývat. Dále je uveden průřez parametrů, které mohou být z hudebních nahrávek extrahovány a poskytují informaci o dynamice, tempu, barvě či časovém vývoji hudební nahrávky. V části druhé je popsán vyhodnocovací systém a jeho jednotlivé dílčí bloky. Vstupními daty pro tento vyhodnocovací systém je vytvořená databáze, čítající 56 zvukových nahrávek první věty Beethovenovy páté symfonie. Poslední kapitola je věnována shrnutí dosažených výsledků., The aim of this semestral work is to summarize the existing knowledge from the area of comparison of musical recordings and to implement an evaluation system for determining the period of creation using the music signal parameterization. In the first part of this work are describe representations which can music take. Next, there is a cross-section of parameters that can be extracted from music recordings provides information on the dynamics, tempo, color, or time development of the music’s recording. In the second part is described evaluation system and its individual sub-blocks. The input data for this evaluation system is a database of 56 sound recordings of the first movement of Beethoven’s 5th Symphony. The last chapter is dedicated to a summary of the achieved results.
45. Porovnávání zvukových nahrávek za pomoci parametrů popisující barvu zvuku [Comparing audio recordings using features describing timbre]
- Author
-
Kiska, Tomáš, Schimmel, Jiří, and Miklánek, Štěpán
- Abstract
This thesis investigates features that describe music recordings in terms of timbre. It first outlines the historical development and current state of the discipline of Music Information Retrieval (MIR), then describes music signal processing from the perspectives of both music theory and digital signal processing. A description of signal pre-processing follows, an important step for parameterizing the music signal. The chapter on feature extraction summarizes the features commonly used to retrieve information from music recordings, with particular emphasis on timbral features. A database of recordings for analysis and the design of an evaluation system to analyse them are also introduced. Finally, an individual analysis of the timbral features describing the recordings is presented.
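Of the timbral features emphasized above, the spectral centroid is among the most common. A minimal NumPy sketch (an illustration of the standard definition, not the thesis's evaluation system), demonstrated on synthetic tones:

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral centroid: the 'centre of mass' of the magnitude spectrum.
    Brighter sounds (more high-frequency energy) yield a higher centroid."""
    windowed = frame * np.hanning(len(frame))  # window to reduce spectral leakage
    mag = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

sr = 22050
t = np.arange(2048) / sr
dull = np.sin(2 * np.pi * 220 * t)                   # a single low partial
bright = dull + 0.8 * np.sin(2 * np.pi * 3520 * t)   # add a strong high partial
print(spectral_centroid(dull, sr) < spectral_centroid(bright, sr))  # True
```

Frame-wise centroid trajectories, averaged per recording, are one way such timbral parameters can be compared across renditions.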
46. Analýza původu vzniku interpretace za pomoci metod parametrizace hudebního signálu [Analysing a rendition's origin using music signal parameterization methods]
- Author
-
Kiska, Tomáš, Schimmel, Jiří, and Brada, Tomáš
- Abstract
This thesis summarizes existing findings on the comparison of audio recordings within the field of Music Information Retrieval. It further aims to analyse musical renditions in terms of tempo, dynamics, and timbre and, from the data obtained, to select the musical features with the greatest power to differentiate the origin of the individual renditions.