License: CC BY 4.0
Authors: Alkhoury, Fouad; Horváth, Tamás; Bauckhage, Christian; Wrobel, Stefan
Date issued: 2025-07-02 (published 2025)
DOI: 10.1007/s10994-025-06815-z (publisher); 10.24406/publica-4837 (repository)
Handle: https://publica.fraunhofer.de/handle/publica/489079
Scopus ID: 2-s2.0-105008798804
Title: Improving graph neural networks through feature importance learning
Type: journal article
Language: en
Keywords: Graph neural networks; Node classification; Node feature selection

Abstract: Graph neural networks (GNNs) are among the most widely used methods for node classification in graphs. A common strategy to improve their predictive performance is to enrich nodes with additional features. A weakness of this approach is that the set of appropriate features can vary from graph to graph. We address this shortcoming by proposing a novel method. In a preprocessing step, a first GNN is trained on a set of graphs with varying structural properties, using a candidate set of node features fixed in advance. The resulting GNN model is then used to predict, for unseen target graphs, the most relevant features from the candidate set; these graphs are subsequently processed for node classification. For each target graph, a second GNN is trained on the graph enriched with the node feature vectors computed for the features selected by the first GNN. A key advantage of the proposed method is that the features are selected without computing the candidate features for the target graph. Our experimental results on synthetic and real-world graphs show that even a few features selected in this way are sufficient to significantly improve the predictive performance of GNNs that use either none or all of the candidate features. Moreover, the time needed to learn the second GNN for the target graph can be reduced by up to two orders of magnitude.
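The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal, stdlib-only illustration, not the authors' implementation: the candidate feature set, the `rank_features` scoring heuristic (standing in for the first, pre-trained GNN), and all function names are hypothetical placeholders. The key property it mirrors is that feature relevance for a target graph is predicted from cheap graph-level summaries, without computing any candidate feature on the target graph itself.

```python
# Hypothetical sketch of the two-stage feature-selection pipeline.
# All names and the scoring heuristic are illustrative assumptions.

CANDIDATE_FEATURES = ["degree", "pagerank", "clustering_coeff", "eccentricity"]

def rank_features(graph_summary):
    """Stand-in for the first GNN: scores each candidate feature for a
    target graph from structural summaries alone, without computing the
    candidate features on that graph."""
    # Toy heuristic: weight features by the graph's average degree proxy.
    density = graph_summary["edges"] / max(graph_summary["nodes"], 1)
    return {
        "degree": 1.0,
        "pagerank": 0.8 if density > 2 else 0.4,
        "clustering_coeff": 0.6,
        "eccentricity": 0.2,
    }

def select_top_k(scores, k=2):
    """Pick the k highest-scoring candidate features."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def enrich_and_train(graph_summary, k=2):
    """Stand-in for the second stage: only the selected features would be
    computed on the target graph before training the second GNN on the
    enriched node feature vectors."""
    selected = select_top_k(rank_features(graph_summary), k)
    # ... compute `selected` on the target graph, then train the second GNN ...
    return selected

print(enrich_and_train({"nodes": 100, "edges": 450}))
```

Because only the few selected features are ever computed on the target graph, the expensive candidates (e.g. eccentricity) are skipped entirely, which is the source of the training-time savings the abstract reports.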