
Computational Models for Predicting Sound Quality

Date posted: April 20, 2026

Speaker: Professor Brian C.J. Moore

Time: 14:30, April 24, 2026

Location: A507, School of Mechanical and Electrical Engineering

Host: School of Mechanical and Electrical Engineering, China University of Mining and Technology

Speaker's Profile: Professor Brian C.J. Moore is a Fellow of the Royal Society of London, a Fellow of the Acoustical Society of America, and a Fellow of the Audio Engineering Society. He is a professor at the University of Cambridge, UK. His research focuses on loudness perception, noise assessment, and artificial hearing. He has led over 40 research projects, including those funded by the UK Engineering and Physical Sciences Research Council. The Moore loudness model he proposed has been adopted in international standards (ISO 532-2:2017, ISO 532-3:2023, ANSI/ASA S3.4-2007) and is widely used by industry (e.g., Boeing, Samsung, Bose) and military sectors for evaluating the acoustic performance of mechanical and electrical equipment and for developing noise control technology. He has published over 600 academic papers in top international journals such as Nature, with over 60,000 citations. He has served as an associate editor for top international journals including the Journal of the Acoustical Society of America. He is the recipient of the Gold Medal of the Acoustical Society of America, the Hugh Knowles Prize, and the Thomas Simm Littler Award, among other international honors.

Report Overview: This talk will introduce computational models for predicting the perceived sound quality of mechanical systems, and discuss how these models are built upon the functional principles of the peripheral auditory system. Sound analysis in the cochlea can be represented by a bank of bandpass auditory filters and the resulting excitation patterns, which serve as an internal representation of a sound's spectrum. A critical distinction must be made between linear distortion, which alters the timbre or tone quality, and nonlinear distortion, which adds new frequency components perceived as harshness or noise. While linear effects are modeled using differences between steady-state excitation patterns, predicting nonlinear distortion requires a time-domain analysis using gammatone filters and cross-correlation of the temporal fine structure and envelopes. Finally, I will show how these two approaches are combined into a unified quality score that achieves exceptionally high correlations with human subjective ratings, and how companies have applied these models to ensure their devices meet performance specifications.
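To illustrate the kind of time-domain analysis described in the overview (this is a minimal sketch, not the speaker's actual model), the snippet below builds a small gammatone filterbank using the Glasberg and Moore ERB bandwidth formula, passes a clean and a hard-clipped test tone through it, and compares per-channel envelopes by correlation. The centre frequencies, filter order, and rectification-based envelope are illustrative assumptions.

```python
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth in Hz (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4):
    """Finite impulse response of a gammatone filter centred at fc (Hz)."""
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * erb(fc)  # common bandwidth scaling for a 4th-order gammatone
    ir = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return ir / np.max(np.abs(ir))

def filterbank_output(signal, fs, centre_freqs):
    """Filter the signal through one gammatone channel per centre frequency."""
    return np.array([np.convolve(signal, gammatone_ir(fc, fs), mode="same")
                     for fc in centre_freqs])

fs = 16000
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 1000 * t)        # reference: clean 1 kHz tone
dist = np.clip(ref, -0.5, 0.5)            # hard clipping adds nonlinear distortion

freqs = [500, 1000, 2000]                 # illustrative channel centre frequencies
out_ref = filterbank_output(ref, fs, freqs)
out_dist = filterbank_output(dist, fs, freqs)

for fc, r, d in zip(freqs, out_ref, out_dist):
    # Crude envelope via rectification; real models use finer envelope extraction.
    env_r, env_d = np.abs(r), np.abs(d)
    corr = np.corrcoef(env_r, env_d)[0, 1]
    print(f"{fc} Hz channel: envelope correlation = {corr:.3f}")
```

A drop in per-channel envelope correlation between the reference and distorted signals is one way such models flag nonlinear distortion; the full approach described in the talk also cross-correlates the temporal fine structure within each channel.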