Authors: Gehrig, D.; Krauthausen, P.; Rybok, L.; Kuehne, H.; Hanebeck, U.D.; Schultz, T.; Stiefelhagen, R.
Date accessioned: 2022-03-11
Date available: 2022-03-11
Date issued: 2011
Handle: https://publica.fraunhofer.de/handle/publica/374455
DOI: 10.1109/IROS.2011.6048716
Abstract: In this paper, a multi-level approach to intention, activity, and motion recognition for a humanoid robot is proposed. Our system processes images from a monocular camera and combines this information with domain knowledge. The recognition works online and in real time and is independent of the test person, but it is limited to predefined viewpoints. The main contributions of this paper are the extensible, multi-level modeling of the robot's vision system, the efficient activity and motion recognition, and the asynchronous information fusion based on generic processing of mid-level recognition results. The complementarity of the activity and motion recognition renders the approach robust against misclassifications. Experimental results on a real-world data set of complex kitchen tasks, e.g., Prepare Cereals or Lay Table, demonstrate the performance and robustness of the multi-level recognition approach.
Language: en
DDC: 004
Title: Combined intention, activity, and motion recognition for a humanoid household robot
Type: conference paper