Title: Multi-view based estimation of human upper-body orientation
Authors: Rybok, L.; Voit, M.; Ekenel, H.K.; Stiefelhagen, R.
Type: conference paper
Year: 2010 (ICPR 2010)
DOI: 10.1109/ICPR.2010.385
URL: https://publica.fraunhofer.de/handle/publica/367441
Record dates: 2022-03-11; 2012-06-26
Rights: Under Copyright
Language: English
Keywords: body orientation; multi-view fusion; bag-of-features
Classification: 004670

Abstract: Knowledge of a person's body orientation can improve the speed and performance of many service components in a smart room. Since many such components run in parallel, an estimator providing this knowledge must have very low computational complexity. In this paper we address both points with a fast and efficient algorithm that uses the smart room's multiple camera outputs. The estimation is based on silhouette information only and is performed for each camera view separately. The single-view results are fused within a Bayesian filter framework. We evaluate our system on a subset of videos from the CLEAR 2007 dataset [1] and achieve an average correct classification rate of 87.8 %, while the estimation itself takes only 12 ms when four cameras are used.
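The fusion step described in the abstract (per-view silhouette-based estimates combined in a Bayesian filter) can be sketched as a discrete Bayes filter over orientation bins. This is only an illustrative sketch, not the paper's actual design: the bin count, the transition model, and the shape of the per-camera likelihoods are all assumptions made for the example.

```python
import numpy as np

def bayes_filter_step(prior, cam_likelihoods, trans):
    """One discrete Bayes-filter update fusing per-camera likelihoods.

    prior           : (K,) belief over K orientation bins from the last frame
    cam_likelihoods : list of (K,) likelihood vectors, one per camera view
    trans           : (K, K) transition matrix (column-stochastic motion model)
    """
    # Predict: propagate the previous belief through the motion model.
    pred = trans @ prior
    # Update: multiply in each camera's silhouette-based likelihood,
    # treating the views as conditionally independent given the orientation.
    post = pred.copy()
    for lik in cam_likelihoods:
        post = post * lik
    # Normalize so the belief remains a probability distribution.
    return post / post.sum()

# Toy example: 8 orientation bins of 45 degrees each, 4 cameras.
K = 8
trans = np.full((K, K), 0.01)          # small probability of switching bins
np.fill_diagonal(trans, 1.0)           # strong preference to stay in place
trans /= trans.sum(axis=0, keepdims=True)
prior = np.full(K, 1.0 / K)            # uninformative initial belief
# Hypothetical likelihoods: every camera weakly favors bin 2.
cams = [np.eye(K)[2] * 0.9 + 0.1 / K for _ in range(4)]
belief = bayes_filter_step(prior, cams, trans)
print(int(belief.argmax()))  # most likely orientation bin (here: 2)
```

With a uniform prior, the predicted belief stays uniform, so the posterior is driven entirely by the product of the four camera likelihoods; agreement across views sharpens the belief far more than any single view could.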