Authors: Zhong, Zeyun; Schneider, David; Voit, Michael; Stiefelhagen, Rainer; Beyerer, Jürgen
Date: 2023-02-16
Year: 2023
Handle: https://publica.fraunhofer.de/handle/publica/436052
DOI: 10.1109/wacv56688.2023.00601
Abstract: Although human action anticipation is an inherently multi-modal task, state-of-the-art methods on well-known action anticipation datasets exploit this data only by applying ensemble methods and averaging the scores of uni-modal anticipation networks. In this work we introduce transformer-based modality fusion techniques that unify multi-modal data at an early stage. Our Anticipative Feature Fusion Transformer (AFFT) proves superior to popular score fusion approaches and achieves state-of-the-art results, outperforming previous methods on EpicKitchens-100 and EGTEA Gaze+. Our model is easily extensible and allows new modalities to be added without architectural changes. Consequently, we extracted audio features on EpicKitchens-100, which we add to the set of features commonly used in the community.
Language: en
Title: Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation
Type: conference paper
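
The abstract contrasts late score fusion with early, transformer-based feature fusion. The following PyTorch sketch illustrates the latter idea under stated assumptions: per-modality features are projected into a shared space, tagged with a learned modality embedding, and fused jointly by self-attention. All class names, dimensions, and the pooling scheme are illustrative assumptions, not the authors' AFFT implementation.

```python
# Minimal sketch of transformer-based early fusion of multi-modal features,
# in the spirit of AFFT. All names, dimensions, and design details below are
# illustrative assumptions -- NOT the paper's actual implementation.
import torch
import torch.nn as nn

class EarlyFusionTransformer(nn.Module):
    def __init__(self, modality_dims, d_model=512, nhead=8, num_layers=4):
        super().__init__()
        # One linear projection per modality (e.g. RGB, flow, objects, audio)
        # mapping its features into a shared embedding space.
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in modality_dims)
        # Learned embedding telling the transformer which modality a token
        # came from.
        self.modality_emb = nn.Parameter(torch.zeros(len(modality_dims), d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, feats):
        # feats: list of tensors, one per modality, each (batch, time, dim_m).
        tokens = [p(x) + self.modality_emb[m]
                  for m, (p, x) in enumerate(zip(self.proj, feats))]
        # Concatenate all modality tokens into one sequence and let
        # self-attention fuse them jointly ("early" feature fusion), instead
        # of averaging per-modality scores ("late"/score fusion).
        fused = self.encoder(torch.cat(tokens, dim=1))
        # Pool over the fused sequence to obtain a single anticipation feature.
        return fused.mean(dim=1)

# Usage: fuse RGB (2048-d), flow (1024-d) and audio (512-d) clip features.
model = EarlyFusionTransformer([2048, 1024, 512])
rgb, flow, audio = (torch.randn(2, 8, d) for d in (2048, 1024, 512))
out = model([rgb, flow, audio])  # (2, 512) fused representation
```

Note how this design supports the extensibility claim in the abstract: adding a modality only means appending one more projection to `modality_dims`, with no architectural change to the transformer itself.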