Advanced techniques for management of personal digital music libraries
Jukka Rauhala
TKK, Laboratory of Acoustics and Audio Signal Processing

Abstract

In this paper, advanced techniques based on music information retrieval (MIR) are reviewed in the context of personal digital music library management. There is a growing need for improved search services, as the size of personal digital music libraries is growing rapidly. MIR-based approaches offer advanced methods for extracting metadata, such as genre and mood information, directly from audio files. Moreover, audio features extracted with MIR can be used to form a statistics-based profile of the user's musical taste, which can in turn be used to generate user-tailored playlists. In addition, query by humming (QBH) is a MIR technique that offers a new way to search for a known music track in a large library effectively. Finally, the introduced methods are discussed in the personal digital music library context.

1 INTRODUCTION

Nowadays many people have a personal digital music library on their desktop or in their mobile player due to the increasing popularity of MP3 players. As such a library can easily reach a size equivalent to hundreds of CD records, there is a huge need for efficient library management techniques, more specifically, for techniques that improve and speed up the search for music. Currently, a typical MP3 player offers ways to search for music based on different classifications, such as artist, album, and genre information, that have been predefined by the user or downloaded from an Internet server. MIR-based approaches (Downie, 2003) offer new, interesting ways to search for music. MIR is a research field that focuses on developing algorithms to extract information from, e.g., audio files or MIDI files.
This information can be used for the classification of data and, hence, in digital music library management. The MIR-based approach has two major advantages over the current approach. First, an Internet connection is not needed for providing classification metadata. Second, the user can tailor the metadata according to his preferences by controlling the MIR-based methods via parameters. In this paper, four interesting techniques are looked into: automatic genre recognition, automatic mood detection, intelligent recommendation, and the query by humming (QBH) system. Automatic genre recognition means that the algorithm detects
the genre of the music from audio data. Tzanetakis and Cook (2002) have introduced an algorithm that is examined in this work. Another way to enhance digital library management is to extract emotional information from audio files and to offer a way to create playlists on the fly based on emotions. There are several existing approaches for this, e.g., by van Breemen and Bartneck (2003), Tolos et al. (2004), and Li and Ogihara (2004). Moreover, intelligent recommendation methods use some kind of intelligent logic to form a profile of the user's musical taste. They offer a way for the user to tell the system whether he likes or dislikes the song proposed by the system. The system then uses this feedback, together with the obtained MIR information, to improve the profile. The system can also extract information from the environment in order to make the profile dependent on environmental variables. A number of systems, both commercial (e.g., Last.fm (2006)) and academic (e.g., Lifetrak (Reddy and Mascia, 2006)), exist nowadays. From the digital music library point of view, learning-based methods provide ways to develop profiles that can be used for creating playlists. The fourth advanced technique, QBH (McNab et al., 1997), is one of the most popular research topics related to MIR at the moment. In QBH, the user can search the digital music library by humming a short melody from a certain song. The algorithm then searches the database and suggests the songs that best match the input signal. The existing QBH methods are still in their infancy, as they do not reach the desired robustness due to several challenges, such as the analysis of the humming. There has been a lot of research in the area of managing digital music libraries, mostly focusing on public libraries located on the Internet or in online music stores.
For instance, most of the QBH research projects, such as the MELody index (McNab et al., 1997), incorporate a public online digital music library. However, most of the research results can be applied to personal digital music libraries as well. Additionally, Pachet et al. (2004) have presented a personal digital music library that uses advanced techniques based on MIR. This paper is organized as follows. First, personal digital music libraries are introduced in Section 2. Then, current systems for managing digital music libraries are presented in Section 3. In Section 4, the four MIR-based management techniques are shown. The introduced techniques are discussed from the personal digital music library point of view in Section 5, followed by the conclusions in Section 6.

2 PERSONAL DIGITAL MUSIC LIBRARIES

In the past few years, many people have acquired their own personal digital music library. This has been made possible by the rapid spread of devices that combine the capability to play digital music with large storage memory. Two examples of these devices, the Apple iPod MP3 player and the Nokia N91 mobile phone, are shown in Figure 1. Since most of these devices are portable, it is common to have a large digital music library stored on a PC and a copy of the library (or a limited version of it) on the portable device.
Figure 1. Pictures of the Apple iPod MP3 player (left) and the Nokia N91 mobile phone (right).

There are three main reasons that have enabled the rapid growth of the market for devices with a built-in digital music library. First, the introduction of advanced auditory-based audio codecs has reduced the need for memory. The most important milestone was the MPEG-1 Layer 3 codec, which is the most popular audio codec for digital music at the moment. Second, the processing power of portable devices, not to speak of PCs, has increased over the years. Hence, almost all portable devices are able to decode MP3 data in real time. Third, the size and price of hard disks have decreased, which means that nowadays it is possible to store digital music equivalent to hundreds of CDs on a portable device. A very important part of the digital music library is the user interface and the management system. The importance of these components has only grown as the size of digital music libraries has increased rapidly due to cheap storage memory. Moreover, portable digital music players place major restrictions on the user interface, which further limit the management system. Hence, there is a growing need for advanced digital library management techniques.

3 DIGITAL MUSIC LIBRARY MANAGEMENT SYSTEMS

In general, there are two main actions in digital music library management when it is considered from the user's point of view: the addition and removal of music tracks, and the playing of music tracks. An important part of playing is the search for the music tracks to be played. In this work, we concentrate on the search part of the management process, as it is the most challenging part. The search for music tracks can be divided into two approaches: a known-item search and a general search. In the known-item search, the user wants to play, for example, a certain song or album.
The user might not remember all the details about the particular track(s), but the management system should provide a fast way to search for the tracks. The other option is the general search, where the user just wants to play some music without having anything particular in mind. However, the user might have priorities concerning, for example, mood, genre, or artist
etc., which the management system should be able to take into account.

At the moment, the most popular personal digital music library programs are Microsoft's Windows Media Player, Apple's iTunes, and Winamp. The majority of the available systems rely on metadata classifying the music tracks, which is obtained from an Internet server or entered by the user. An example of the metadata format is ID3v2 (ID3v2, 2006), which is supported by all three programs mentioned above. The most important classifications specified in ID3v2 are shown in Table 1.

Table 1. A selection of ID3v2 tags (ID3v2, 2006): album title, composer, BPM (beats per minute), lyricist, language, mood, lead performer, band, conductor, publisher, track number, album, performer, track, and initial key.

Moreover, Microsoft Windows Media Player version 10 provides automatic playlists, such as "Music tracks I dislike", "Favorites: listen to at night", and "Favorites: listen to on weekends" (Microsoft, 2005). An interesting new standard, which will be included in future digital music libraries, is MPEG-7 (Martínez, 2002). MPEG-7 is not an audio coding standard but a metadata standard encapsulating a large variety of audio and video features. In addition to ordinary metadata features, such as the tags shown in Table 1, MPEG-7 specifies 17 low-level audio descriptors, which are listed in Table 2. These low-level features are common in MIR, and many MIR-based algorithms take advantage of them. Hence, MPEG-7 could be used in the future to assist MIR-based management algorithms by providing the data required in the process. There has been some research on advanced management systems for personal digital music libraries. An example is the Sony Music Browser (Pachet et al., 2004), which uses MIR-based methods. A screenshot of the application is shown in Figure 2.
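The tag-based search that these players provide can be sketched in a few lines. The following is a minimal illustration, not any player's actual implementation: the `Track` class and the example library are hypothetical stand-ins for metadata that a real system would read from the ID3v2 tags of the audio files.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str
    genre: str
    mood: str

# Hypothetical in-memory library; a real player would populate these
# fields from the ID3v2 tags of each audio file.
library = [
    Track("Song A", "Artist X", "Rock", "Happy"),
    Track("Song B", "Artist Y", "Jazz", "Sad"),
    Track("Song C", "Artist X", "Rock", "Sad"),
]

def search(tracks, **criteria):
    """Return the tracks whose metadata matches every given tag/value pair."""
    return [t for t in tracks
            if all(getattr(t, tag).lower() == value.lower()
                   for tag, value in criteria.items())]

playlist = search(library, artist="Artist X", genre="Rock")
```

This kind of exact matching is precisely what breaks down when the tags are missing or inconsistent, which is the gap the MIR-based techniques of Section 4 aim to fill.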
Table 2. MPEG-7 low-level audio descriptors.

Basic descriptors: audio waveform, audio power.
Spectral descriptors: audio spectrum envelope, audio spectrum centroid, audio spectrum spread, audio spectrum flatness.
Signal parameter descriptors: harmonic ratio, upper limit of harmonicity, audio fundamental frequency.
Timbral descriptors: log attack time, temporal centroid, harmonic spectral centroid, harmonic spectral deviation, harmonic spectral spread, harmonic spectral variation, spectral centroid.

Figure 2. A screenshot of the Sony Music Browser (Pachet et al., 2004).

4 ADVANCED MANAGEMENT TECHNIQUES

In this section, four advanced management techniques based on MIR are introduced. First, the basics of MIR are presented. Then, the four techniques, namely automatic genre recognition, automatic mood detection, the intelligent recommendation method, and QBH, are explained.
Table 3. Seven classes of music information according to Downie (2003).

Pitch: perceived frequencies of pitched tones, intervals, keys.
Temporal: duration of musical events: tempo, meter, pitch duration, harmonic duration, and accents.
Harmonic: relations between multiple pitched notes.
Timbral: everything related to tone color.
Editorial: fingerings, dynamic instructions, articulations, etc.
Textual: lyrics.
Bibliographic: music metadata: title, composer, performers, etc.

4.1 Introduction to music information retrieval (MIR)

MIR is a wide research field that includes the retrieval of all kinds of information from, for example, audio signals. Downie (2003) has defined seven classes, or facets, of information considered in MIR, as shown in Table 3. Of these seven classes, all except the textual and bibliographic classes can, at least in theory, be extracted directly from an audio signal. The bibliographic class further differs from the other classes in that it cannot be derived from the content at all. The audio features used in MIR algorithms can be divided into low-level and high-level audio features: low-level features can be extracted in a straightforward manner from the signal, whereas high-level features are usually determined based on a group of low-level features. For instance, a common way to detect the key of a music signal (a high-level feature) is to determine a histogram of the pitches occurring in the signal (low-level features) and to compare it with predefined histograms determined for all keys. The low-level audio features are typically extracted from audio signals with common audio signal processing methods, such as the autocorrelation function (ACF), the short-term Fourier transform (STFT), and mel-frequency cepstral coefficients (MFCC).

4.2 Automatic genre recognition

A musical genre can be defined as a categorical label created by humans to classify music tracks (Tzanetakis and Cook, 2002).
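The key-detection example from Section 4.1, where a pitch histogram is compared against predefined histograms for each key, can be sketched as follows. This is a deliberately simplified version using binary major-scale templates and a toy note list; real profile-matching key detectors use weighted key profiles and estimate the pitches from audio rather than taking MIDI notes as input.

```python
# Semitone offsets of a major scale and the 12 pitch-class names.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(midi_notes):
    """Fold all octaves into 12 pitch-class bins (a 'folded' histogram)."""
    hist = [0] * 12
    for note in midi_notes:
        hist[note % 12] += 1
    return hist

def estimate_major_key(midi_notes):
    """Pick the tonic whose major-scale template covers the most notes."""
    hist = pitch_class_histogram(midi_notes)
    def score(tonic):
        return sum(hist[(tonic + step) % 12] for step in MAJOR_SCALE)
    return NOTE_NAMES[max(range(12), key=score)]

# Toy input: a C-major scale played over two octaves.
notes = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 77, 79]
```

The same folded-histogram idea reappears below in the pitch content features of the genre recognition algorithm.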
Genres can be organized in a tree format: for example, heavy music is a top-level genre that can be divided into white metal and black metal. Current digital music library software includes genre classifications, usually obtained from the Internet, that can be used for making searches and playlists. An automatic genre recognition algorithm would offer interesting enhancements. First, it could provide a genre classification for a large music library without requiring an Internet connection. Second, it would be possible to control the genre classification with parameters, which is not possible with current software. Humans are usually very good at recognizing genres, while it is a difficult task for computers, which usually fail to reach the accuracy of humans.
Table 4. List of the features used in the genre recognition algorithm by Tzanetakis and Cook (2002).

Timbral texture features: mean of spectral centroid, variance of spectral centroid, spectral rolloff, spectral flux, zero crossings over the texture window, low energy, means of mel-frequency cepstral coefficients (5 parameters), variances of mel-frequency cepstral coefficients (5 parameters).
Rhythm content features: relative amplitudes of the first two peaks in the beat histogram (2 parameters), ratio of these amplitudes, periods of the first two peaks in beats per minute (2 parameters), overall sum of the beat histogram.
Pitch content features: amplitude of the maximum peak of the folded histogram, period of the maximum peak of the unfolded histogram, period of the maximum peak of the folded histogram, pitch interval between the two most prominent peaks of the folded histogram, overall sum of the histogram.

Tzanetakis and Cook (2002) have proposed a musical genre recognition system that uses a 30-dimensional feature vector including timbral texture, rhythm content, and pitch content features. Table 4 shows the full list of features used in the algorithm. The timbral texture features are determined using the STFT and MFCC, whereas the rhythm content features are obtained by calculating a beat histogram with a wavelet transform (WT). In the extraction of the pitch content features, the multi-pitch estimation algorithm by Tolonen and Karjalainen (2000) is used to determine the pitches in the signal, which are then rounded to the nearest MIDI note numbers. The resulting note data is used to form two kinds of histograms: an unfolded histogram and a folded histogram. In the unfolded histogram, each MIDI note corresponds to a single histogram bin, whereas in the folded histogram all notes of the same pitch class (i.e., notes an octave apart) share a bin. Finally, standard statistical pattern recognition (SPR) methods are used to determine the genre classification from the feature vector.
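To make the timbral texture features in Table 4 concrete, two of them can be computed on a single analysis frame with plain Python. This is only an illustrative sketch: the frame length, sample rate, and naive discrete Fourier transform are choices made here for brevity, whereas the actual system uses windowed STFTs and aggregates means and variances over a longer texture window.

```python
import cmath
import math

def zero_crossings(frame):
    """Count sign changes in the frame; noisy or percussive signals score high."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency of the frame's DFT ('brightness')."""
    n = len(frame)
    # Naive O(n^2) DFT over the positive-frequency bins; fine for a sketch.
    mags = [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]
    freqs = [k * sample_rate / n for k in range(n // 2)]
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

# A 1 kHz sine sampled at 8 kHz: its centroid should sit at 1000 Hz.
sr = 8000
frame = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(64)]
centroid = spectral_centroid(frame, sr)
```

A bright, distorted guitar frame would yield a higher centroid and more zero crossings than this pure tone, which is exactly the kind of contrast the classifier exploits.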
Tzanetakis and Cook have developed a graphical user interface called GenreGram for their genre classification algorithm. The software graphically indicates the results of the genre detection process, as seen in Figure 3. This software is part of MARSYAS, a freely available musical signal analysis package (MARSYAS, 2006).
Figure 3. A screenshot of GenreGram, a genre recognition software.

4.3 Automatic mood detection

Mood is strongly connected to musical performances: humans perceive emotions through listening to music. The perception varies from person to person, not to speak of people in non-Western countries, but some general rules can be determined. For instance, roughly speaking, a song played in a major key with a fast tempo is perceived as happy, whereas a song played in a minor key with a slow tempo is perceived as sad. Table 5 shows one proposed mapping (Mancini et al., 2006) of acoustic cues to emotions. Emotional classification is not yet in wide use, even though it is included in, for example, ID3v2 as a mood classifier. Again, an automatic mood detection algorithm would provide a way to control emotion classification without requiring an Internet connection. Automatic mood detection algorithms have been presented by, for example, Friberg et al. (2002), van Breemen and Bartneck (2003), Li and Ogihara (2004), Tolos et al. (2004), and Lu et al. (2006).
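The rough "major and fast is happy, minor and slow is sad" rule above can be written down as a toy classifier. The 100 BPM threshold and the rule separating anger from happiness are illustrative assumptions of this sketch only; real detectors combine many cues, typically with learned weights, rather than two hand-set rules.

```python
def classify_mood(tempo_bpm, mode):
    """Toy rule-based mood detector.

    mode is 'major' or 'minor'; returns one of three emotion labels.
    The threshold and rules are illustrative, not from any cited system.
    """
    fast = tempo_bpm >= 100  # assumed cut-off between slow and fast tempo
    if fast and mode == "major":
        return "happiness"
    if fast and mode == "minor":
        return "anger"
    return "sadness"
```

Even this crude rule set shows why tempo and mode alone are insufficient: a fast, major-key but harshly articulated track would still be labeled "happiness", which motivates the richer cue set in Table 5.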
Table 5. Mapping of acoustic cues to emotions according to Mancini et al. (2006). Each cue is listed as sadness / anger / happiness.

Mean tempo: slow / fast / fast.
Tempo and timing variations: large timing variations / small tempo variability / small tempo and timing variability.
Sound level: low / high / high, with little sound level variability.
Articulation: legato / staccato / staccato, with large articulation variability.
Duration contrasts: soft / sharp / sharp.
Timbre: dull / sharp / bright.
Tone attacks: slow / abrupt / fast.
Micro-intonation: flat / accent on unstable notes / rising.
Vibrato: slow / large vibrato extent / -.
Ritardando: final ritardando / no ritardando / -.

4.4 Intelligent recommendation

Intelligent recommendation is another advanced approach to digital music library management. Intelligent recommendation systems use a number of features obtained from the audio signal to form a user profile based on user feedback. Web-based services use the feedback from every user to improve the user profiles, but a statistics-based search in a personal digital music library can rely purely on the feedback of one user. Based on the user profile, the system is able to recommend music tracks that it considers to be in line with the user's musical taste. In addition, the system can use environmental variables to make the user profile take the environmental conditions into account. A block diagram of a typical intelligent recommendation system is shown in Figure 4. Commercial web-based systems utilizing a statistics-based approach have been launched recently, for example Last.fm (2006), which is shown in Figure 5. Reddy and Mascia (2006) have proposed a statistics-based recommendation system that examines five environmental variables: space, time, kinetic, entropic, and meteorological.
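The feedback loop of such a system can be sketched in its simplest possible form: the profile is a single point in feature space that is nudged toward liked tracks and away from disliked ones, and recommendations are the nearest tracks to it. The two-dimensional features, the learning rate, and the toy library below are all assumptions of this sketch; real systems maintain far richer statistical models.

```python
import math

def update_profile(profile, features, liked, rate=0.2):
    """Move the profile toward (liked) or away from (disliked) a track."""
    sign = 1 if liked else -1
    return [p + sign * rate * (f - p) for p, f in zip(profile, features)]

def recommend(profile, tracks, n=1):
    """Return the n tracks whose features are closest to the profile."""
    return sorted(tracks, key=lambda item: math.dist(profile, item[1]))[:n]

# Toy 2-D features, e.g. (tempo, brightness), normalized to [0, 1].
library = [("calm song", [0.2, 0.3]),
           ("rock song", [0.9, 0.8]),
           ("pop song", [0.7, 0.6])]

profile = [0.5, 0.5]
# The user gives positive feedback on the rock song's features.
profile = update_profile(profile, [0.9, 0.8], liked=True)
```

Environmental variables could be handled in the same way, simply by appending them as extra dimensions of the feature vectors and the profile.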
Figure 4. A block diagram of an intelligent recommendation system (blocks: digital music library, feature extraction, statistical engine, user profile, recommendations, user, feedback, environmental variables).

Figure 5. A screenshot of the Last.fm application (2006). The user can give feedback by clicking the heart icon (positive) or the cancel icon (negative), or can categorize the song with a user-defined tag by clicking the tag icon.

4.5 Query by humming (QBH)

QBH is an interesting MIR application for searching for music in large libraries. The user hums a short piece of music, and the QBH system performs a search to find the particular track. A simplified block diagram of a typical QBH system is shown in Figure 6. The QBH system consists of four blocks: humming transcription, music track transcription, a database of the symbolic data of the music tracks, and a search algorithm. First, QBH requires a predefined database of the symbolic data of the music tracks. When the user hums or sings a line, the humming transcription block converts it into the symbolic form used in the database. Then, the search algorithm makes a search and returns the results to the user.
Figure 6. Block diagram of a typical QBH system (blocks: digital music library, transcription, symbolic data, humming, transcription, search, list of music tracks).

The first task in implementing a QBH system is to construct a database of the music data. The transcription from the audio signal to symbolic data can be done either manually, which is extremely laborious, or automatically. The goal is to transcribe all pitched notes in the music signal into a symbolic format, such as MIDI. There is a lot of ongoing research on automatic music transcription algorithms, for example by Klapuri (2004). However, even the best algorithms at the moment fail to produce reliable results. The use of an automatic transcription algorithm would allow the database to be included inside the QBH software without requiring an Internet connection. The other option is to use an external database located on the Internet. The second part of the QBH system is the transcription of the user input, which can be humming, singing, whistling, etc. Even though this part is much easier than the transcription of the music signal due to the monophonic nature of the input, there are still major challenges. First, it is very difficult to make the transcription independent of the user, so that anyone can use the system without calibration. Second, the system should be robust enough that using the service does not require any musical skills. Finally, the QBH system takes the transcribed input signal and performs a search to find the best-matching tracks. The search should be robust to notes that are out of tune or not in the correct rhythm. Moreover, it should be independent of the key and tempo. A number of database search algorithms have been proposed, for example by Wiggins et al. (2002). In addition, one solution for improving robustness is to examine the pitch differences between adjacent notes and to consider only the signs of these differences. Hence, the audio signals can be coded as a sequence of + and - symbols.
As a result, the database search does not take into account whether the notes are input in tune and in the correct rhythm; what matters is whether the note relations are input correctly. The disadvantage of this solution is that the search criteria are loosened, and the algorithm might return a large number of candidates.
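The sign-only contour coding described above can be sketched directly: each melody becomes a string over {+, -, 0} (with 0 marking a repeated note), and matching reduces to a substring test, so a humming that is off-pitch or transposed but has the correct up-down shape still matches. The two database entries below are toy note sequences assumed for illustration.

```python
def contour(midi_notes):
    """Encode a note sequence as '+', '-', '0' signs of adjacent pitch differences."""
    out = []
    for prev, cur in zip(midi_notes, midi_notes[1:]):
        out.append("+" if cur > prev else "-" if cur < prev else "0")
    return "".join(out)

def qbh_search(hummed_notes, database):
    """Return the titles whose melody contour contains the hummed contour.

    database maps track titles to their (toy) MIDI note sequences.
    """
    query = contour(hummed_notes)
    return [title for title, notes in database.items()
            if query in contour(notes)]

db = {"Ode to Joy":   [64, 64, 65, 67, 67, 65, 64, 62],
      "Rising scale": [60, 62, 64, 65, 67]}
# A humming transposed a fifth up still matches, since only signs are kept.
matches = qbh_search([71, 71, 72, 74], db)
```

The loosening mentioned above is visible here: the shorter the hummed fragment, the more tracks its contour will be a substring of.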
Figure 7. A screenshot of New York University's Query by Humming service (2006).

QBH systems have been proposed by, for example, McNab et al. (1997). Moreover, there are a number of web-based QBH interfaces for large digital music libraries, such as Muspedia (2006) and New York University's Query by Humming (2006). A screenshot of the New York University service is shown in Figure 7. There are also variations of QBH, for example query by tapping (QBT), which uses only the rhythmic content. The advantage of this approach is that it is simpler than QBH. On the other hand, it might be difficult to narrow the results down to the correct song.

5 DISCUSSION

Figure 8 displays how the advanced management techniques presented in the previous section can be applied to a personal digital music library. Automatic genre recognition and mood detection algorithms can provide genre and mood classification data, which can be used for general searches. In other words, these algorithms, as well as the intelligent recommendation method, can be applied to generating a playlist when the user just wants to listen to a certain type of music instead of specific tracks. On the other hand, when the user wants to play a specific track, QBH is a powerful method for making efficient searches in large databases. These new features would require small changes to the user interface. First, genre recognition and mood detection can be thought of as background processes that are not visible to the user; however, if the user is to control them via parameters, the corresponding controls need to be implemented in the user interface. Second, QBH requires a recording control for capturing the hummed input. Third, the intelligent recommendation system needs feedback controls implemented in the user interface, as well as the optional environmental detectors.
Figure 8. A block diagram of a personal digital music library management system that uses the advanced techniques (blocks: digital music library, metadata, genre recognition, mood detection, intelligent recommendation, conventional search, transcription, QBH database, query by humming, user).

Table 6. Comparison of the introduced techniques.

Genre recognition: maturity: under development; search type: general; advantages: genre information without an online connection, parameterization; challenges: accuracy and robustness.
Mood detection: maturity: under development; search type: general; advantages: mood information without an online connection, parameterization; challenges: accuracy, robustness, detection of a large number of moods.
Intelligent recommendation: maturity: good; search type: general; advantages: automatic generation of tailored playlists; challenges: determining the user profile.
Query by humming: maturity: under development; search type: known-item; advantages: powerful search with large databases; challenges: robustness against inaccuracies in the input signal.

6 CONCLUSION

In this paper, four advanced techniques for managing a personal digital music library have been presented: automatic genre recognition, automatic mood detection, intelligent recommendation, and QBH. Genre recognition, mood detection, and intelligent recommendation can be used for generating tailored playlists for the user, whereas QBH provides a fast way to search for a known track. These methods offer significant improvements to the management of current personal digital music libraries, especially with large databases. Moreover, MPEG-7, a new metadata standard incorporating the audio features used in MIR, is promising and will most likely be implemented in future personal digital music libraries. Hence,
it can be suggested that in the future, personal digital music libraries will take advantage of the introduced advanced techniques as well as other MIR-based methods. Future work includes implementing software that incorporates the presented features, which can then be used in usability testing.

REFERENCES

Downie, J.S. 2003. Music information retrieval (Chapter 7). Annual Review of Information Science and Technology 37, ed. Blaise Cronin. Medford, NJ.
Friberg, A. 2002. A fuzzy analyzer of emotional expression in music performance and body motion. In J. Sundberg and B. Brunson (Eds.), Proceedings of Music and Music Science. Stockholm, Sweden.
Klapuri, A.P. 2004. Signal processing methods for the automatic transcription of music. Ph.D. dissertation, Tampere University of Technology.
Last.fm. 2006. [Online].
Li, T. and Ogihara, M. 2004. Content-based music similarity search and emotion detection. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. Montreal, Canada.
Mancini, M.; Bresin, R.; Pelachaud, C. 2006. From acoustic cues to an expressive agent. In Gibet, S.; Courty, N.; Kamp, J.-F. (Eds.), Gesture in Human-Computer Interaction and Simulation: 6th International Gesture Workshop. Berlin, Heidelberg: Springer.
MARSYAS. 2006. [Online].
Martínez, J.M.; Koenen, R.; Pereira, F. 2002. MPEG-7: The generic multimedia content description standard, part 1. IEEE Multimedia, Vol. 9, No. 2.
McNab, R.J.; Smith, L.A.; Bainbridge, D.; Witten, I.H. 1997. The New Zealand Digital Library MELody index. D-Lib Magazine.
Microsoft. 2005. Mix your music in playlists. [Online].
Muspedia. 2006. [Online].
New York University Query by Humming. 2006. [Online].
Pachet, F.; La Burthe, A.; Zils, A.; Aucouturier, J.-J. 2004. Popular music access: The Sony Music Browser. Journal of the American Society for Information Science and Technology, Vol. 55, No. 12.
Reddy, S. and Mascia, J. 2006. Lifetrak: Music in tune with your life. In Proceedings of the 1st ACM International Workshop on Human-centered Multimedia. Santa Barbara, USA.
Tolonen, T. and Karjalainen, M. 2000. A computationally efficient multipitch analysis model. IEEE Transactions on Speech and Audio Processing, Vol. 8, No. 6.
Tolos, M.; Tato, R.; Kemp, T. 2004. Mood-based navigation through large collections of musical data. In Proceedings of the 2nd IEEE Consumer Communications and Networking Conference. Las Vegas, USA.
Tzanetakis, G. and Cook, P. 2002. Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, Vol. 10, No. 5.
van Breemen, A. and Bartneck, C. 2003. An emotional interface for a music gathering application. In Proceedings of the 8th International Conference on Intelligent User Interfaces. Miami, USA.
Wiggins, G.A.; Lemström, K.; Meredith, D. 2002. SIA(M)ESE: An algorithm for transposition invariant, polyphonic content-based music retrieval. In Proceedings of the ISMIR 2002 Third International Conference on Music Information Retrieval. Paris, France.
Toni Heittola 1, Annamaria Mesaros 1, Tuomas Virtanen 1, Antti Eronen 2 1 Department of Signal Processing, Tampere University of Technology Korkeakoulunkatu 1, 33720, Tampere, Finland toni.heittola@tut.fi,
More informationTararira. version 0.1 USER'S MANUAL
version 0.1 USER'S MANUAL 1. INTRODUCTION Tararira is a software that allows music search in a local database using as a query a hummed, sung or whistled melody fragment performed by the user. To reach
More informationAn Approach to Automatically Tracking Music Preference on Mobile Players
An Approach to Automatically Tracking Music Preference on Mobile Players Tim Pohle, 1 Klaus Seyerlehner 1 and Gerhard Widmer 1,2 1 Department of Computational Perception Johannes Kepler University Linz,
More informationExperiments in computer-assisted annotation of audio
Experiments in computer-assisted annotation of audio George Tzanetakis Computer Science Dept. Princeton University en St. Princeton, NJ 844 USA +1 69 8 491 gtzan@cs.princeton.edu Perry R. Cook Computer
More informationA Document-centered Approach to a Natural Language Music Search Engine
A Document-centered Approach to a Natural Language Music Search Engine Peter Knees, Tim Pohle, Markus Schedl, Dominik Schnitzer, and Klaus Seyerlehner Dept. of Computational Perception, Johannes Kepler
More informationMultimedia Databases
Multimedia Databases Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de Previous Lecture Audio Retrieval - Low Level Audio
More informationAudio Classification and Content Description
2004:074 MASTER S THESIS Audio Classification and Content Description TOBIAS ANDERSSON MASTER OF SCIENCE PROGRAMME Department of Computer Science and Electrical Engineering Division of Signal Processing
More informationGarageband Basics. What is GarageBand?
Garageband Basics What is GarageBand? GarageBand puts a complete music studio on your computer, so you can make your own music to share with the world. You can create songs, ringtones, podcasts, and other
More informationRECOMMENDATION ITU-R BS Procedure for the performance test of automated query-by-humming systems
Rec. ITU-R BS.1693 1 RECOMMENDATION ITU-R BS.1693 Procedure for the performance test of automated query-by-humming systems (Question ITU-R 8/6) (2004) The ITU Radiocommunication Assembly, considering a)
More informationAffective Music Video Content Retrieval Features Based on Songs
Affective Music Video Content Retrieval Features Based on Songs R.Hemalatha Department of Computer Science and Engineering, Mahendra Institute of Technology, Mahendhirapuri, Mallasamudram West, Tiruchengode,
More informationGETTING STARTED WITH DJCONTROL COMPACT AND DJUCED 18
GETTING STARTED WITH DJCONTROL COMPACT AND DJUCED 18 INSTALLATION Connect the DJControl Compact to your computer Install the DJUCED 18 software Launch the DJUCED 18 software More information (forums, tutorials,
More informationAvailable online Journal of Scientific and Engineering Research, 2016, 3(4): Research Article
Available online www.jsaer.com, 2016, 3(4):417-422 Research Article ISSN: 2394-2630 CODEN(USA): JSERBR Automatic Indexing of Multimedia Documents by Neural Networks Dabbabi Turkia 1, Lamia Bouafif 2, Ellouze
More informationMultimedia Databases. 8 Audio Retrieval. 8.1 Music Retrieval. 8.1 Statistical Features. 8.1 Music Retrieval. 8.1 Music Retrieval 12/11/2009
8 Audio Retrieval Multimedia Databases Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 8 Audio Retrieval 8.1 Query by Humming
More informationHIGH-LEVEL AUDIO FEATURES: DISTRIBUTED EXTRACTION AND SIMILARITY SEARCH
HIGH-LEVEL AUDIO FEATURES: DISTRIBUTED EXTRACTION AND SIMILARITY SEARCH François Deliège, Bee Yong Chua, and Torben Bach Pedersen Department of Computer Science Aalborg University ABSTRACT Today, automatic
More informationCHROMA AND MFCC BASED PATTERN RECOGNITION IN AUDIO FILES UTILIZING HIDDEN MARKOV MODELS AND DYNAMIC PROGRAMMING. Alexander Wankhammer Peter Sciri
1 CHROMA AND MFCC BASED PATTERN RECOGNITION IN AUDIO FILES UTILIZING HIDDEN MARKOV MODELS AND DYNAMIC PROGRAMMING Alexander Wankhammer Peter Sciri introduction./the idea > overview What is musical structure?
More information1 Introduction. 3 Data Preprocessing. 2 Literature Review
Rock or not? This sure does. [Category] Audio & Music CS 229 Project Report Anand Venkatesan(anand95), Arjun Parthipan(arjun777), Lakshmi Manoharan(mlakshmi) 1 Introduction Music Genre Classification continues
More informationHow To Add Songs To Ipod Without Syncing >>>CLICK HERE<<<
How To Add Songs To Ipod Without Syncing Whole Library Create a playlist, adding all the songs you want to put onto your ipod, then under the How to add music from ipod to itunes without clearing itunes
More informationA MUSICAL WEB MINING AND AUDIO FEATURE EXTRACTION EXTENSION TO THE GREENSTONE DIGITAL LIBRARY SOFTWARE
12th International Society for Music Information Retrieval Conference (ISMIR 2011) A MUSICAL WEB MINING AND AUDIO FEATURE EXTRACTION EXTENSION TO THE GREENSTONE DIGITAL LIBRARY SOFTWARE Cory McKay Marianopolis
More informationRepeating Segment Detection in Songs using Audio Fingerprint Matching
Repeating Segment Detection in Songs using Audio Fingerprint Matching Regunathan Radhakrishnan and Wenyu Jiang Dolby Laboratories Inc, San Francisco, USA E-mail: regu.r@dolby.com Institute for Infocomm
More informationWorking with Apple Loops
7 Working with Apple Loops So you want to create a complete song, but you don t know how to play every instrument? An Apple Loop is a short piece of music that you can add to your song. It can be either
More informationA Brief Overview of Audio Information Retrieval. Unjung Nam CCRMA Stanford University
A Brief Overview of Audio Information Retrieval Unjung Nam CCRMA Stanford University 1 Outline What is AIR? Motivation Related Field of Research Elements of AIR Experiments and discussion Music Classification
More informationSpectral modeling of musical sounds
Spectral modeling of musical sounds Xavier Serra Audiovisual Institute, Pompeu Fabra University http://www.iua.upf.es xserra@iua.upf.es 1. Introduction Spectral based analysis/synthesis techniques offer
More informationA Simulated Annealing Optimization of Audio Features for Drum Classification
A Simulated Annealing Optimization of Audio Features for Drum Classification Sven Degroeve 1, Koen Tanghe 2, Bernard De Baets 1, Marc Leman 2 and Jean-Pierre Martens 3 1 Department of Applied Mathematics,
More informationWolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig
Multimedia Databases Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 6 Audio Retrieval 6 Audio Retrieval 6.1 Basics of
More informationFUSING BLOCK-LEVEL FEATURES FOR MUSIC SIMILARITY ESTIMATION
FUSING BLOCK-LEVEL FEATURES FOR MUSIC SIMILARITY ESTIMATION Klaus Seyerlehner Dept. of Computational Perception, Johannes Kepler University Linz, Austria klaus.seyerlehner@jku.at Gerhard Widmer Dept. of
More informationCepstral Analysis Tools for Percussive Timbre Identification
Cepstral Analysis Tools for Percussive Timbre Identification William Brent Department of Music and Center for Research in Computing and the Arts University of California, San Diego wbrent@ucsd.edu ABSTRACT
More informationjaudio: Towards a standardized extensible audio music feature extraction system
jaudio: Towards a standardized extensible audio music feature extraction system Cory McKay Faculty of Music, McGill University 555 Sherbrooke Street West Montreal, Quebec, Canada H3A 1E3 cory.mckay@mail.mcgill.ca
More informationMusic Genre Classification
Music Genre Classification Matthew Creme, Charles Burlin, Raphael Lenain Stanford University December 15, 2016 Abstract What exactly is it that makes us, humans, able to tell apart two songs of different
More informationSoftware Design Document Portable Media Player
Software Design Document Portable Media Player Prepared by: Michelle Chang CPSC 655 Sep 20, 2007 1. Introduction 1.1. Goals and Requirements This document addresses the following goals and functional requirements
More informationA Miniature-Based Image Retrieval System
A Miniature-Based Image Retrieval System Md. Saiful Islam 1 and Md. Haider Ali 2 Institute of Information Technology 1, Dept. of Computer Science and Engineering 2, University of Dhaka 1, 2, Dhaka-1000,
More informationFPDJ. Baltazar Ortiz, Angus MacMullen, Elena Byun
Overview FPDJ Baltazar Ortiz, Angus MacMullen, Elena Byun As electronic music becomes increasingly prevalent, many listeners wonder how to make their own music. While there is software that allows musicians
More informationGarageBand 3 Getting Started Includes a complete tour of the GarageBand window, plus step-by-step lessons on working with GarageBand
GarageBand 3 Getting Started Includes a complete tour of the GarageBand window, plus step-by-step lessons on working with GarageBand 1 Contents Chapter 1 7 Welcome to GarageBand 8 What s New in GarageBand
More informationMobile-Tuner is a musical instrument tuner for java enabled mobile phones. In addition Mobile-Tuner has a built in metronome and a synthesizer:
Mobile-Tuner is a musical instrument tuner for java enabled mobile phones. In addition Mobile-Tuner has a built in metronome and a synthesizer: Keep your instruments in tune with Mobile-Tuner. Mobile-Tuner
More informationThe MUSART Testbed for Query-by-Humming Evaluation
Roger B. Dannenberg,* William P. Birmingham, George Tzanetakis, Colin Meek, Ning Hu,* and Bryan Pardo *School of Computer Science Carnegie Mellon University Pittsburgh, Pennsylvania 15213 USA {roger.dannenberg,
More informationA Novel Approach of Automatic Music Genre Classification based on Timbral Texture and Rhythmic Content Features
A ovel Approach of Automatic Music Genre Classification based on Timbral Texture and Rhythmic Content Features Babu Kaji Baniya*, Deepak Ghimire*, Joonwhoan Lee* *Division of Computer Enigneering, Chonbuk
More informationQueST: Querying Music Databases by Acoustic and Textual Features
QueST: Querying Music Databases by Acoustic and Textual Features Bin Cui 1 Ling Liu 2 Calton Pu 2 Jialie Shen 3 Kian-Lee Tan 4 1 Department of Computer Science & National Lab on Machine Perception, Peking
More informationHello, I am from the State University of Library Studies and Information Technologies, Bulgaria
Hello, My name is Svetla Boytcheva, I am from the State University of Library Studies and Information Technologies, Bulgaria I am goingto present you work in progress for a research project aiming development
More informationMUSIC CLUSTERING WITH CONSTRAINTS
MUSIC CLUSTERING WITH CONSTRAINTS Wei Peng Tao Li School of Computer Science Florida International University {wpeng002,taoli}@cs.fiu.edu Mitsunori Ogihara Department of Computer Science University of
More informationA GENERIC SYSTEM FOR AUDIO INDEXING: APPLICATION TO SPEECH/ MUSIC SEGMENTATION AND MUSIC GENRE RECOGNITION
A GENERIC SYSTEM FOR AUDIO INDEXING: APPLICATION TO SPEECH/ MUSIC SEGMENTATION AND MUSIC GENRE RECOGNITION Geoffroy Peeters IRCAM - Sound Analysis/Synthesis Team, CNRS - STMS Paris, France peeters@ircam.fr
More informationUVO SYSTEM USER'S MANUAL
UVO SYSTEM USER'S MANUAL Congratulations on the Purchase of your new UVO system! Your new UVO system allows you to enjoy various audio and multimedia features through the main audio system. For the latest
More informationMSc Project Report. Automatic Playlist Generation and Music Library Visualisation with Timbral Similarity Measures. Name: Steven Matthew Lloyd
MSc Project Report Automatic Playlist Generation and Music Library Visualisation with Timbral Similarity Measures Name: Steven Matthew Lloyd Student No.: 089555161 Supervisor: Professor Mark Sandler 25
More informationConnecting your smartphone or tablet to the HDD AUDIO PLAYER through a Wi-Fi (wireless LAN) network [6]
A specialized application for HDD AUDIO PLAYER HDD Audio Remote About the HDD Audio Remote Features of HDD Audio Remote [1] System requirements [2] Compatible HDD AUDIO PLAYER models [3] Trademarks [4]
More informationHow To Manually Change Album Artwork On
How To Manually Change Album Artwork On Windows Media Player Sep 14, 2014. Windows 8.1 Media Player 12.0 - Unable to Change Album Art If you've manually edited the media information for that album in your
More informationLesson 11. Media Retrieval. Information Retrieval. Image Retrieval. Video Retrieval. Audio Retrieval
Lesson 11 Media Retrieval Information Retrieval Image Retrieval Video Retrieval Audio Retrieval Information Retrieval Retrieval = Query + Search Informational Retrieval: Get required information from database/web
More informationContend Based Multimedia Retrieval
Contend Based Multimedia Retrieval CBIR Query Types Semantic Gap Features Segmentation High dimension IBMS QBIC GIFT, MRML Blobworld CLUE SIMPLIcity CBMR Multimedia Automatic Video Analysis 1 CBIR Contend
More informationAudio-Text Synchronization inside mp3 files: A new approach and its implementation
Audio-Text Synchronization inside mp3 files: A new approach and its implementation Marco Furini and Lorenzo Alboresi Computer Science Department University of Piemonte Orientale Spalto Marengo 33, 15100
More informationChapter 11 Representation & Description
Chapter 11 Representation & Description The results of segmentation is a set of regions. Regions have then to be represented and described. Two main ways of representing a region: - external characteristics
More informationterminal rasa - every music begins with silence
terminal rasa - every music begins with silence Frank EICKHOFF Media-Art, University of Arts and Design Lorenz 15 76135 Karlsruhe, Germany, feickhof@hfg-karlsruhe.de Abstract An important question in software
More informationCOS 116 The Computational Universe Laboratory 4: Digital Sound and Music
COS 116 The Computational Universe Laboratory 4: Digital Sound and Music In this lab you will learn about digital representations of sound and music, especially focusing on the role played by frequency
More informationMultimedia Information Retrieval
Multimedia Information Retrieval Prof Stefan Rüger Multimedia and Information Systems Knowledge Media Institute The Open University http://kmi.open.ac.uk/mmis Why content-based? Actually, what is content-based
More informationConvention Paper Presented at the 120th Convention 2006 May Paris, France
Audio Engineering Society Convention Paper Presented at the 12th Convention 26 May 2 23 Paris, France This convention paper has been reproduced from the author s advance manuscript, without editing, corrections,
More informationSumantra Dutta Roy, Preeti Rao and Rishabh Bhargava
1 OPTIMAL PARAMETER ESTIMATION AND PERFORMANCE MODELLING IN MELODIC CONTOUR-BASED QBH SYSTEMS Sumantra Dutta Roy, Preeti Rao and Rishabh Bhargava Department of Electrical Engineering, IIT Bombay, Powai,
More informationContents. Overview...3. Song Editor Clip Editor Browser and Rytmik Cloud Keyboard Controls Support Information...
User Manual Contents Overview...3 Song Editor...4 Clip Library...4 Song Playback...4 Tracks...5 Export...5 Clip Editor...6 Note Sequence...6 Instrument...7 Instrument Effects...7 Tempo Setting...8 Other
More informationMARSYAS SUBMISSIONS TO MIREX 2010
MARSYAS SUBMISSIONS TO MIREX 2010 George Tzanetakis University of Victoria Computer Science gtzan@cs.uvic.ca ABSTRACT Marsyas is an open source software framework for audio analysis, synthesis and retrieval
More informationFACTORSYNTH user manual
FACTORSYNTH user manual www.jjburred.com - software@jjburred.com J.J. Burred, 2018-2019 factorsynth version: 1.4, 28/2/2019 Introduction Factorsynth is a Max For Live device that uses machine learning
More informationChapter 5.5 Audio Programming
Chapter 5.5 Audio Programming Audio Programming Audio in games is more important than ever before 2 Programming Basic Audio Most gaming hardware has similar capabilities (on similar platforms) Mostly programming
More informationINDEXING AND RETRIEVAL OF MUSIC DOCUMENTS THROUGH PATTERN ANALYSIS AND DATA FUSION TECHNIQUES
INDEXING AND RETRIEVAL OF MUSIC DOCUMENTS THROUGH PATTERN ANALYSIS AND DATA FUSION TECHNIQUES Giovanna Neve University of Padova Department of Information Engineering Nicola Orio University of Padova Department
More informationInteractive Video Retrieval System Integrating Visual Search with Textual Search
From: AAAI Technical Report SS-03-08. Compilation copyright 2003, AAAI (www.aaai.org). All rights reserved. Interactive Video Retrieval System Integrating Visual Search with Textual Search Shuichi Shiitani,
More informationA TIMBRE ANALYSIS AND CLASSIFICATION TOOLKIT FOR PURE DATA
A TIMBRE ANALYSIS AND CLASSIFICATION TOOLKIT FOR PURE DATA William Brent University of California, San Diego Center for Research in Computing and the Arts ABSTRACT This paper describes example applications
More informationTranscribing and Coding Audio and Video Files
Transcribing and Coding Audio and Video Files Contents TRANSCRIBING AND CODING AUDIO AND VIDEO FILES... 1 GENERAL INFORMATION ABOUT THE ANALYSIS OF AUDIO AND VIDEO FILES... 1 THE MEDIA PLAYER TOOLBAR...
More informationBringing Mobile Map Based Access to Digital Audio to the End User
Bringing Mobile Map Based Access to Digital Audio to the End User Robert Neumayer, Jakob Frank, Peter Hlavac, Thomas Lidy and Andreas Rauber Vienna University of Technology Department of Software Technology
More informationBISHWA PRASAD SUBEDI AUDIO-BASED RETRIEVAL OF MUSICAL SCORE DATA
- BISHWA PRASAD SUBEDI AUDIO-BASED RETRIEVAL OF MUSICAL SCORE DATA Master of Science Thesis Supervisor: Associate Professor Anssi Klapuri Examiners: Associate Professor Anssi Klapuri Adjunct Professor
More informationCOS 116 The Computational Universe Laboratory 4: Digital Sound and Music
COS 116 The Computational Universe Laboratory 4: Digital Sound and Music In this lab you will learn about digital representations of sound and music, especially focusing on the role played by frequency
More informationEUROPEAN COMPUTER DRIVING LICENCE. Multimedia Audio Editing. Syllabus
EUROPEAN COMPUTER DRIVING LICENCE Multimedia Audio Editing Syllabus Purpose This document details the syllabus for ECDL Multimedia Module 1 Audio Editing. The syllabus describes, through learning outcomes,
More informationColor-Based Classification of Natural Rock Images Using Classifier Combinations
Color-Based Classification of Natural Rock Images Using Classifier Combinations Leena Lepistö, Iivari Kunttu, and Ari Visa Tampere University of Technology, Institute of Signal Processing, P.O. Box 553,
More informationThe MUSART Testbed for Query-By-Humming Evaluation
The MUSART Testbed for Query-By-Humming Evaluation Roger B. Dannenberg, William P. Birmingham, George Tzanetakis, Colin Meek, Ning Hu, Bryan Pardo School of Computer Science Department of Electrical Engineering
More informationA NEW DCT-BASED WATERMARKING METHOD FOR COPYRIGHT PROTECTION OF DIGITAL AUDIO
International journal of computer science & information Technology (IJCSIT) Vol., No.5, October A NEW DCT-BASED WATERMARKING METHOD FOR COPYRIGHT PROTECTION OF DIGITAL AUDIO Pranab Kumar Dhar *, Mohammad
More informationConvention Paper Presented at the 120th Convention 2006 May Paris, France
Audio Engineering Society Convention Paper Presented at the 12th Convention 26 May 2 23 Paris, France This convention paper has been reproduced from the author s advance manuscript, without editing, corrections,
More informationMusicBox: Navigating the space of your music. Anita Lillie November 19, 2007
MusicBox: Navigating the space of your music Anita Lillie November 19, 2007 The Problem Navigating large music libraries describing preferences with words selecting music that fits those preferences recommending
More informationAN AUDIO PROCESSING LIBRARY FOR MIR APPLICATION DEVELOPMENT IN FLASH
11th International Society for Music Information Retrieval Conference (ISMIR 2010) AN AUDIO PROCESSING LIBRARY FOR MIR APPLICATION DEVELOPMENT IN FLASH Jeffrey Scott, Raymond Migneco, Brandon Morton,Christian
More information_APP B_549_10/31/06. Appendix B. Producing for Multimedia and the Web
1-59863-307-4_APP B_549_10/31/06 Appendix B Producing for Multimedia and the Web In addition to enabling regular music production, SONAR includes a number of features to help you create music for multimedia
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ ICA 213 Montreal Montreal, Canada 2-7 June 213 Engineering Acoustics Session 2pEAb: Controlling Sound Quality 2pEAb1. Subjective
More informationEffectiveness of HMM-Based Retrieval on Large Databases
Effectiveness of HMM-Based Retrieval on Large Databases Jonah Shifrin EECS Dept, University of Michigan ATL, Beal Avenue Ann Arbor, MI 489-2 jshifrin@umich.edu William Birmingham EECS Dept, University
More informationDESIGN AND ARCHITECTURE OF A DIGITAL MUSIC LIBRARY ON THE WEB
DESIGN AND ARCHITECTURE OF A DIGITAL MUSIC LIBRARY ON THE WEB ABSTRACT In this paper, a Web-based digital music library based on a threetier architecture is presented. The digital library s primary goal
More informationClick Freegal Music from the surreylibraries.ca (hover over the blue Research and Downloads tab and select Downloads.
Freegal Quick Facts Freegal gives Surrey residents with a valid Surrey Libraries card 3 free songs per week. Residents can download and KEEP the songs. You simply log into Freegal with your library card
More informationComputer Vesion Based Music Information Retrieval
Computer Vesion Based Music Information Retrieval Philippe De Wagter pdewagte@andrew.cmu.edu Quan Chen quanc@andrew.cmu.edu Yuqian Zhao yuqianz@andrew.cmu.edu Department of Electrical and Computer Engineering
More informationConnecting your smartphone or tablet to the HDD AUDIO PLAYER through a Wi- Fi (wireless LAN) network [6]
A specialized application for HDD AUDIO PLAYER HDD Audio Remote About the HDD Audio Remote Features of HDD Audio Remote [1] System requirements [2] Compatible HDD AUDIO PLAYER models [3] Trademarks [4]
More informationMusic, Radio & Podcasts
Music, Radio & Podcasts *Buying Music *Streaming Music *Radio Online *Podcasts Buying Music (downloading): itunes Store, Amazon. Single tracks are mostly $1.29. Older music is less. Album prices vary.
More informationContent-based retrieval of music using mel frequency cepstral coefficient (MFCC)
Content-based retrieval of music using mel frequency cepstral coefficient (MFCC) Abstract Xin Luo*, Xuezheng Liu, Ran Tao, Youqun Shi School of Computer Science and Technology, Donghua University, Songjiang
More informationCS3242 assignment 2 report Content-based music retrieval. Luong Minh Thang & Nguyen Quang Minh Tuan
CS3242 assignment 2 report Content-based music retrieval Luong Minh Thang & Nguyen Quang Minh Tuan 1. INTRODUCTION With the development of the Internet, searching for information has proved to be a vital
More informationmemory product Doesn t play videos like the ipod Comes in 2, 4, and 8 Cost ranges from $135 to $225
The Apple ipod Is basically a hard drive with special software and a display Comes in 30, 60 and 80 GB sizes Price is about $230 to $330 Apple has sold over 100 million units 1 The Apple Nano Nano line
More information