Title: Inductive Learning of Concept Representations from Library-Scale Bibliographic Corpora
Authors: Galke, Lukas; Melnychuk, Tetyana; Seidlmayer, Eva; Trog, Steffen; Förstner, Konrad U.; Schultz, Carsten; Tochtermann, Klaus
Editors: David, Klaus; Geihs, Kurt; Lange, Martin; Stumme, Gerd
Date issued: 2019 (accessioned/available: 2019-08-27)
ISBN: 978-3-88579-688-6
ISSN: 1617-5468
DOI: 10.18420/inf2019_26
URI: https://dl.gi.de/handle/20.500.12116/24973
Type: Text/Conference Paper
Language: en
Keywords: machine learning; representation learning; neural networks; graph mining

Abstract: Automated research analyses are becoming increasingly important as the volume of research items grows at an accelerating pace. We pursue a new direction for the analysis of research dynamics with graph neural networks. So far, graph neural networks have been applied only to small-scale datasets and primarily to supervised tasks such as node classification. We propose an unsupervised training objective for concept representation learning that is tailored to bibliographic data with millions of research papers and thousands of concepts from a controlled vocabulary. We evaluated the learned representations on downstream clustering and classification tasks, and additionally conducted nearest-concept queries in the representation space. Our results show that the representations learned by graph convolution with our training objective are comparable to those learned by the DeepWalk algorithm. Our findings suggest that concept embeddings can be derived solely from the text of associated documents, without a lookup-table embedding. Thus, graph neural networks can operate on arbitrary document collections without re-training. This property makes graph neural networks useful for the analysis of research dynamics, which is often conducted on time-based snapshots of bibliographic data.
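The abstract's central claim, that concept embeddings can be computed from the text of associated documents via graph convolution rather than read from a per-concept lookup table, can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the paper: the toy bipartite concept-document adjacency, the TF-IDF-like document feature matrix, the single mean-aggregating convolution layer, and the random weight matrix W standing in for parameters that the authors train with their unsupervised objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; in the paper's setting there are millions of documents
# and thousands of controlled-vocabulary concepts.
n_docs, n_concepts, n_features, n_dims = 6, 3, 8, 4

# Document text features (e.g., TF-IDF rows); random placeholders here.
X_docs = rng.random((n_docs, n_features))

# Bipartite concept-document adjacency: A[c, d] = 1 if concept c is
# assigned to document d in the controlled vocabulary (hypothetical).
A = np.array([
    [1, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
], dtype=float)

# Row-normalize so each concept averages over its documents
# (mean-aggregation graph convolution).
A_norm = A / A.sum(axis=1, keepdims=True)

# One graph-convolution layer: aggregate neighboring document features,
# then apply a linear map and a ReLU nonlinearity. W is random here;
# in practice it would be learned with an unsupervised objective.
W = rng.standard_normal((n_features, n_dims))
H_concepts = np.maximum(A_norm @ X_docs @ W, 0.0)

# Nearest-concept query by cosine similarity in the embedding space.
def nearest_concepts(query_idx, H, k=2):
    H_unit = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    sims = H_unit @ H_unit[query_idx]
    order = np.argsort(-sims)
    return [i for i in order if i != query_idx][:k]

print(nearest_concepts(0, H_concepts))
```

Because each concept representation is a function of document features and shared weights rather than a stored vector, the same trained weights can be applied to a new time-based snapshot of the corpus without re-training, which is the inductive property the abstract highlights.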