2007
Conference Paper
Title
GPU-accelerated affordance cueing based on visual attention
Abstract
This work focuses on the relevance of visual attention in affordance-inspired robotics. Among the robotics approaches related to Gibson's concept of affordances, the treatment of attention cues is only rudimentary. We introduce this concept within the perception layer of our affordance-inspired robotic framework. In this context we present a high-performance visual attention system that handles invariants in the optical array. This layer forms the basis of more sophisticated tasks, such as a "curiosity drive" that helps a robotic agent explore its environment. Our attention system, derived from VOCUS, exploits the parallel design of the graphics processing unit (GPU) and reaches real-time performance for processing online video streams at VGA resolution on a single computer platform. GPU-VOCUS is currently the fastest known visual attention system running on standard personal computers.
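VOCUS-style attention systems typically derive a saliency map from center-surround contrasts, i.e. differences between fine-scale and coarse-scale blurred versions of a feature map; it is this per-pixel, per-scale work that maps well onto a GPU. The following is a minimal NumPy sketch of that idea only, not the authors' implementation; the scale pairs and the `center_surround_saliency` helper are illustrative assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur: convolve a 1-D kernel along each axis.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)
    return out

def center_surround_saliency(intensity, scale_pairs=((1, 4), (2, 8))):
    # Sum |center - surround| over several (fine, coarse) sigma pairs,
    # then normalize the result to [0, 1]. The scale pairs are illustrative.
    saliency = np.zeros_like(intensity, dtype=float)
    for c_sigma, s_sigma in scale_pairs:
        center = gaussian_blur(intensity, c_sigma)
        surround = gaussian_blur(intensity, s_sigma)
        saliency += np.abs(center - surround)
    return saliency / saliency.max()

# A bright blob on a dark background pops out in the saliency map.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = center_surround_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

On a GPU, each pixel of each blur and difference map is computed by an independent thread, which is why such systems can reach real-time rates on VGA streams.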