Low price
CHF200.80
Print on Demand - a copy will be procured for you.
This book features original papers from the 27th International Symposium on Frontiers of Research in Speech and Music (FRSM 2023), jointly organized by Sardar Vallabhbhai National Institute of Technology, Surat, India, and the Sir C.V. Raman Centre for Physics and Music, Jadavpur University, Kolkata, India, during 4-5 August 2023. The book is organized into four main sections, reflecting both the technological advances and the interdisciplinary nature of speech, music, language and their applications. The first section includes chapters on the computational, modelling and cognitive aspects of the speech signal. The second section contains chapters covering the foundations of both vocal and instrumental music processing, addressing signal, computational and cognitive aspects. The third section covers the variety of research being done in the peripheral areas of languages and linguistics, with a special focus on the regional languages of India. A great deal of research is being carried out in speech and music information retrieval, a domain of potential interest to most users of computers and the Internet; the fourth and final section is therefore dedicated to chapters on multidisciplinary applications of speech and music signal processing.
About the Authors
Dr. Keikichi Hirose received his Ph.D. degree in Electronic Engineering from the University of Tokyo in 1977. He was a professor at the University of Tokyo from 1994 until his retirement in 2015, when he received the title of Professor Emeritus. He has also been a visiting professor at the National Institute of Informatics since 2015, and served there as a project professor from June 2019 to March 2023. From March 1987 to January 1988, he was a Visiting Scientist at the Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, U.S.A. He has been engaged in a wide range of research on spoken language processing, including analysis, synthesis, recognition, dialogue systems and computer-assisted language learning. From 2000 to 2004, he was a principal investigator of the national project "Realization of advanced spoken language information processing utilizing prosodic features," supported by the Japanese Government. He served as the general chair of INTERSPEECH 2010, Makuhari, Japan. From 2010 to 2018, he chaired the ISCA Special Interest Group on Speech Prosody (SProSIG), and he served as an ISCA board member from 2009 to 2017. He has been a member of the International Advisory Council of ISCA since January 2021. He was awarded ISCA Fellow grade in 2018 and became an honorary member of the Polish Phonetic Association in 2013. For his long-term and remarkable contributions to spoken language processing, he received the Achievement Award from the Acoustical Society of Japan in 2020. In 2015, he was honored as a Named Person of Merit in Science and Technology by the Mayor of Tokyo. He has published more than 100 journal papers and more than 380 conference papers.

Dr. Deepak Joshi is currently an assistant professor in the Department of Electronics Engineering at Sardar Vallabhbhai National Institute of Technology (SVNIT), Surat, Gujarat, India. He obtained his doctoral degree from IIT Guwahati in the area of Microelectronics and VLSI Design. His areas of research include the design and optimization of analog circuits, CAD for VLSI Design, and Metaheuristics.

Dr. Shankha Sanyal holds a Ph.D. in the field of Cognitive Neurophysics from the Sir C.V. Raman Centre for Physics and Music, Jadavpur University. He completed his graduation and post-graduation in Physics at Jadavpur University, Kolkata. He is currently a post-doctoral research associate at the School of Languages and Linguistics, Jadavpur University. His primary research interest revolves around studying the acoustical and neural manifestations of speech and music signals using state-of-the-art signal processing algorithms. He has published more than 25 peer-reviewed international journal papers and more than 100 conference papers, including a book from Springer. He is a member of the Acoustical Society of America and the Asiatic Society, Kolkata, as well as a reviewer for several international journals such as Frontiers in Neuroscience and Cognitive Neurodynamics. He has been part of the Organizing and Program Committees of the FRSM conference series since 2015.
Contents
Framework for Detecting Toxic Speech Using BERT and Deep Learning
Speech Emotion Classification Using Deep Learning
ML-Based Application for Enhanced Communication with Specially Abled Children
Acoustic Phonetic Analysis of Vowels of Native Bengali Speakers with Parkinson's Disease
Recent Advancement in Accent Conversion Using Deep Learning Techniques: A Comprehensive Review
Speech Enhancement: Traditional and Deep Learning Techniques
Continuing CompMusic: New Approaches in the Computational Analysis of Carnatic Music
An Ethnomusicological Study of Darpawngi's Mizo Folksongs: A Cantometric Experiment
An Approach of Similarity Measure in Hindustani Music
Ornamentation in Hindustani Vocal Music
A Comparative Study of the Non-nasal and Nasal Voice in Hindustani Vocal Music
Regression Approach for Shruthi Identification from Indian Classical Instrumental Polyphonic Audio
Music-Evoked Emotion Classification from EEG: An Image-Based CNN Approach
Perception of Devotion and Happiness in Indian Spiritual Music: An Acoustical and Audience Response Exploration
Identifying Correlations Between Hindustani Music and the Brain: A Nonlinear EEG-Based Exploration
Representation and Analysis of Dynamics for Automatic Music Assessment in Hindustani Vocal Music
Emotion and Instrument Audio Classification of Indian Classical Music Recordings Using Transformers
Styles and Rhythms of Musical Transitions in Indian Ragas: An Acoustical Exploration
A Statistical Approach to the Acoustical Analysis of Harmonics and Timbre of Tabla Strokes