The underlying assumption of our approach is that the textures in a video scene can be labeled as subjectively relevant or irrelevant. Relevant textures are defined as containing subjectively meaningful detail, while irrelevant textures are image content whose subjective detail is less important. We apply this idea to video coding using a texture analyzer and a texture synthesizer. The texture analyzer (encoder side) identifies texture regions with unimportant subjective detail and generates side information for the texture synthesizer (decoder side), which in turn inserts synthetic textures at the specified locations. The focus of this paper is the texture analyzer, which uses MPEG-7 descriptors for similarity estimation. It is based on a split-and-merge segmentation approach and also provides solutions for the temporal mapping of identified detail-relevant textures. The current implementation of the texture analyzer achieves an identification rate of up to 99.64%.
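The split-and-merge idea behind the texture analyzer can be illustrated with a minimal sketch. This is not the paper's implementation: simple intensity statistics (`texture_feature`, `is_homogeneous`) stand in for the MPEG-7 descriptors, and the thresholds are illustrative assumptions.

```python
import numpy as np

def texture_feature(block):
    # Stand-in for an MPEG-7 texture descriptor: mean and spread of intensities.
    return np.array([block.mean(), block.std()])

def is_homogeneous(block, thresh=10.0):
    # A block is treated as a single texture if its intensity spread is small.
    return block.std() < thresh

def split(img, x, y, size, min_size=8):
    # Split phase: quadtree decomposition of heterogeneous blocks.
    block = img[y:y + size, x:x + size]
    if size <= min_size or is_homogeneous(block):
        return [(x, y, size)]
    half = size // 2
    regions = []
    for dy in (0, half):
        for dx in (0, half):
            regions += split(img, x + dx, y + dy, half, min_size)
    return regions

def merge(img, regions, sim_thresh=5.0):
    # Merge phase: greedily join regions whose descriptors are similar,
    # returning one label per region.
    labels = list(range(len(regions)))
    feats = [texture_feature(img[y:y + s, x:x + s]) for (x, y, s) in regions]
    for i in range(len(regions)):
        for j in range(i):
            if labels[j] == labels[i]:
                continue
            if np.linalg.norm(feats[i] - feats[j]) < sim_thresh:
                old, new = labels[i], labels[j]
                labels = [new if l == old else l for l in labels]
    return labels
```

On a frame composed of two flat regions, the split phase isolates homogeneous blocks and the merge phase groups blocks belonging to the same texture, yielding one label per texture region; a real analyzer would then classify each region as detail-relevant or not.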