About this Research Topic
AI in the production of new music
Composers have experimented with a variety of computational techniques for music composition. Some works use mathematically inspired techniques, such as stochastic processes or mathematical sequences, to create melodic, harmonic, and rhythmic structures. Others explore evolutionary approaches, using the notions of crossover, mutation, and population fitness to evolve new musical structures. Still others use learning-based approaches. Learning can be knowledge-based, drawing on music theory and on composer or period style to create new musical compositions. It can also be example-based, where algorithms learn to imitate a style from examples of music performed by others; the task of the algorithm is then to learn the underlying structures and patterns found in musical excerpts. Such composition can also happen live, allowing algorithms to respond in real time to a human performer's improvisations. Learning approaches need not consider only musical pieces: music has also been created from patterns learned from the behavior of living organisms.
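As a toy illustration of the stochastic techniques mentioned above, a first-order Markov chain can generate melodic sequences by sampling note-to-note transitions. The transition table and note names here are invented purely for the example, not taken from any particular work:

```python
import random

# Invented transition probabilities between note names, for illustration only.
TRANSITIONS = {
    "C": [("D", 0.4), ("E", 0.3), ("G", 0.3)],
    "D": [("C", 0.3), ("E", 0.4), ("F", 0.3)],
    "E": [("D", 0.3), ("F", 0.4), ("G", 0.3)],
    "F": [("E", 0.5), ("G", 0.5)],
    "G": [("C", 0.4), ("E", 0.3), ("F", 0.3)],
}

def generate_melody(start="C", length=8, seed=None):
    """Sample a melody by walking the transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        notes, weights = zip(*TRANSITIONS[melody[-1]])
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(generate_melody(seed=1))
```

In practice such transition tables are not hand-written but estimated from corpora of existing music, which is what the example-based learning described above amounts to in its simplest form.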
AI to create music as an accompaniment
AI can also fill the role of accompanist, allowing a musician to play with an automated musical partner. Computer accompaniment entails monitoring one or more human performers and keeping the accompaniment synchronized despite tempo variations or even performance errors. Music can also be created to accompany a narrative or film; in this case, algorithms are expected to compose or alter musical content and performance to reflect the narrative or visual content.
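One small ingredient of such synchronization is tempo tracking. A minimal sketch (the smoothing scheme and parameter values are assumptions for illustration, not a description of any particular accompaniment system) updates a tempo estimate from the performer's inter-onset intervals:

```python
def update_tempo(estimate_bpm, onset_intervals, alpha=0.2):
    """Exponentially smooth a tempo estimate (beats per minute)
    from observed inter-onset intervals (seconds per beat)."""
    for ioi in onset_intervals:
        observed_bpm = 60.0 / ioi
        # Blend the new observation into the running estimate;
        # alpha controls how quickly the accompanist follows the performer.
        estimate_bpm = (1 - alpha) * estimate_bpm + alpha * observed_bpm
    return estimate_bpm
```

A real accompaniment system would combine tempo tracking with score following, so that the accompaniment stays aligned even when the performer skips or repeats notes.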
AI to create expressive music
Music must be performed expressively to be engaging. Musical expression models can be learned from examples of human performances. Musical expression algorithms then need to adapt these models to the phrasing of the music at hand to create credible performances. Computational musical expression can also use external effects such as visuals or even scents to heighten the emotional impact of the music.
AI to provide tools that support learning
AI is also changing the way musicians learn music. Such methods range from sound-analysis algorithms that monitor the pitch and rhythmic qualities of a performance to provide feedback, to optimization techniques that determine suitable fingerings. Researchers are also exploring alternative music notation systems, adapting scores, for example, to facilitate music reading for people with visual impairments.
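To make the fingering idea concrete, a hedged sketch: fingering can be cast as a shortest-path problem over finger assignments, here solved with a small dynamic program. The movement-cost heuristic (penalizing a mismatch between pitch interval and finger span) is invented for illustration and far simpler than the models used in actual research:

```python
def best_fingering(pitches, fingers=(1, 2, 3, 4, 5)):
    """Toy dynamic program: assign one finger per note (MIDI pitches),
    minimizing an assumed movement-cost heuristic."""
    def cost(p1, f1, p2, f2):
        # Penalize fingerings whose span differs from the pitch interval.
        return abs((p2 - p1) - (f2 - f1))

    # best[f] = (total cost so far, fingering ending with finger f)
    best = {f: (0, [f]) for f in fingers}
    for prev, cur in zip(pitches, pitches[1:]):
        best = {
            f2: min(
                ((c + cost(prev, f1, cur, f2), path + [f2])
                 for f1, (c, path) in best.items()),
                key=lambda t: t[0],
            )
            for f2 in fingers
        }
    return min(best.values(), key=lambda t: t[0])[1]
```

Published fingering models add many more constraints (hand position, black keys, biomechanics), but the optimization skeleton is the same.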
AI in the marketing of music
Music providers are also using AI to learn the musical preferences of consumers and provide customized playlists based on listening patterns.
Robotics and Music
Advances in the automated interpretation of musical scores, the enhancement of musical expression, and human-machine interaction allow for musical robots, that is, robotic platforms that can listen to, interpret, play, and interact with music. Such platforms vary in shape and form, ranging from automated digital instruments to humanoid robots. This changes the way humans interact with, listen to, and perceive the music generated through such platforms.
Call for original contributions
Music and AI seeks original research papers on the contribution of AI to musical composition, production, expression, notation, and analysis. It welcomes contributions that apply novel techniques to these areas, as well as work that advances the field by addressing its known problems.
Keywords: Optical Music Recognition, Computational Expression, Music Information Retrieval, Music Visualisation, Computational Music Composition
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.