Mokhtari, Parham; Kato, Hiroaki; Takemoto, Hironori; Nishimura, Ryouichi; Enomoto, Seigo; Adachi, Seiji; Kitamura, Tatsuya

Title: Further observations on a principal components analysis of head-related transfer functions
Type: journal article
Date issued: 2019
Date available: 2022-03-05
Handle: https://publica.fraunhofer.de/handle/publica/258515
DOI: 10.1038/s41598-019-43967-0
Language: en
Subject classification: 690

Abstract: Humans can externalise and localise sound sources in three-dimensional (3D) space because approaching sound waves interact with the head and external ears, adding auditory cues by (de-)emphasising the level in different frequency bands depending on the direction of arrival. While virtual audio systems reproduce these acoustic filtering effects with signal processing, huge memory-storage capacity would be needed to cater for many listeners because the filters are as unique as the shape of each person's head and ears. Here we use a combination of physiological imaging and acoustic simulation methods to confirm and extend previous studies that represented these filters by a linear combination of a small number of eigenmodes. Based on previous psychoacoustic results we infer that more than 10, and as many as 24, eigenmodes would be needed in a virtual audio system suitable for many listeners. Furthermore, the frequency profiles of the top five eigenmodes are robust across different populations and experimental methods, and the top three eigenmodes encode familiar 3D spatial contrasts: along the left-right, top-down, and a tilted front-back axis, respectively. These findings have implications for virtual 3D-audio systems, especially those requiring high energy efficiency and low memory usage such as on personal mobile devices.
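
The abstract describes representing head-related transfer functions (HRTFs) as a linear combination of a small number of eigenmodes obtained by principal components analysis. The sketch below is not the authors' code; it is a minimal illustration, using synthetic placeholder data and hypothetical names (hrtf_db, n_directions, n_freq_bins, k), of how such a decomposition and low-rank reconstruction can be computed with NumPy.

```python
# Illustrative sketch only: PCA of HRTF log-magnitude spectra and
# reconstruction from the top k eigenmodes. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_directions, n_freq_bins = 500, 128                     # hypothetical measurement grid
hrtf_db = rng.normal(size=(n_directions, n_freq_bins))   # placeholder log-magnitude spectra (dB)

# Centre the data on the mean spectrum across directions.
mean_spectrum = hrtf_db.mean(axis=0)
centred = hrtf_db - mean_spectrum

# Eigenmodes (principal components) via singular value decomposition.
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
eigenmodes = Vt        # rows: frequency profiles of the eigenmodes
weights = U * s        # per-direction weights of each eigenmode

# Reconstruct each spectrum from the top k eigenmodes only
# (the paper infers that more than 10, up to 24, modes may be needed).
k = 10
approx = mean_spectrum + weights[:, :k] @ eigenmodes[:k, :]

# Fraction of variance captured by the top k modes.
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"Top {k} eigenmodes capture {explained:.1%} of the variance")
```

On real HRTF data the leading eigenmodes capture most of the variance, which is what makes a compact, low-memory representation attractive for personal mobile devices; with the random placeholder data above the explained-variance figure is of course meaningless.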