Paaß, G.; Vries, H. de
Evaluating the performance of text mining systems on real-world press archives
Conference paper, 2006
Handle: https://publica.fraunhofer.de/handle/publica/350886
DOI: 10.1007/3-540-31314-1_50
Language: en
Keywords: text mining; classification; Named Entities; user interface

Abstract: We investigate the performance of text mining systems for annotating press articles in two real-world press archives. Seven commercial systems are tested which recover the categories of a document as well as named entities and catchphrases. Using cross-validation we evaluate the precision-recall characteristic. Depending on the depth of the category tree, a breakeven of 39-79% is achieved. For one corpus, 45% of the documents can be classified automatically, based on the system's confidence estimates. In a usability experiment the formal evaluation results are confirmed. It turns out that with respect to some features human annotators exhibit a lower performance than the text mining systems. This establishes a convincing argument for using text mining systems to support the indexing of large document collections.
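
The abstract reports results as precision-recall breakeven points. As a minimal sketch of what that metric means (not the paper's actual evaluation code; the systems, thresholds, and data below are purely illustrative assumptions), one can rank documents by a classifier's confidence score and find the cutoff where precision and recall coincide:

```python
# Minimal sketch: precision-recall breakeven from per-document confidence scores.
# Illustrative only; the commercial systems and exact protocol from the paper
# are not reproduced here.

def breakeven_point(scores, labels):
    """Return the precision (= recall) at the point where the two curves cross,
    sweeping the decision threshold down the sorted confidence scores."""
    ranked = sorted(zip(scores, labels), key=lambda x: -x[0])  # highest score first
    total_pos = sum(labels)
    tp = 0
    for k, (_, y) in enumerate(ranked, start=1):
        tp += y
        precision = tp / k          # fraction of accepted documents that are correct
        recall = tp / total_pos     # fraction of relevant documents accepted so far
        if precision <= recall:     # precision falls to meet rising recall: breakeven
            return precision
    return 0.0

# Hypothetical confidence scores for one category in one corpus.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1, 1, 0, 1, 0, 1, 0, 0]   # 1 = document truly belongs to the category
print(f"breakeven precision/recall: {breakeven_point(scores, labels):.2f}")
```

In this toy example the curves cross at 0.75; the paper's reported 39-79% breakeven values would come from applying the same kind of analysis per category, with the spread depending on the depth of the category tree.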