Listing by author "Mai, Florian"
1 - 2 of 2
- Text document: Reranking-based Recommender System with Deep Learning (INFORMATIK 2017, 2017) Saleh, Ahmed; Mai, Florian; Nishioka, Chifumi; Scherp, Ansgar
  An enormous volume of scientific content is published every year, far exceeding what a scientist can read in her entire life. To address this problem, we have developed and empirically evaluated a recommender system for scientific papers based on Twitter postings. In this paper, we improve on the previous work with a reranking approach using Deep Learning: after a list of top-k recommendations is computed, we rerank the results with a neural network to improve the output of the existing recommender system. We present the design of the deep reranking approach and a preliminary evaluation. Our results show that in most cases the recommendations are improved by our Deep Learning reranking approach.
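The reranking step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual model: the candidate list, the 4-dimensional features, and the tiny feed-forward scorer are all hypothetical stand-ins for the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def rerank(candidates, features, w1, b1, w2, b2):
    """Re-score a top-k candidate list with a small feed-forward
    network and return the candidates sorted by the new score.
    Weights here are random placeholders, not learned parameters."""
    h = np.maximum(features @ w1 + b1, 0.0)   # hidden ReLU layer
    scores = (h @ w2 + b2).ravel()            # one scalar score per candidate
    order = np.argsort(-scores)               # descending by score
    return [candidates[i] for i in order], scores[order]

# Hypothetical top-k list from the base recommender, with 4-dim features
candidates = ["paper_a", "paper_b", "paper_c"]
features = rng.normal(size=(3, 4))
w1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

reranked, scores = rerank(candidates, features, w1, b1, w2, b2)
```

In the paper's setting, the features would come from the base recommender and the network would be trained so that higher-quality recommendations receive higher scores.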
- Conference paper: What If We Encoded Words as Matrices and Used Matrix Multiplication as Composition Function? (INFORMATIK 2019: 50 Jahre Gesellschaft für Informatik – Informatik für Gesellschaft, 2019) Galke, Lukas; Mai, Florian; Scherp, Ansgar
  We summarize our contribution to the International Conference on Learning Representations, "CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model", 2019. We construct a text encoder that learns matrix representations of words from unlabeled text, while using matrix multiplication as the composition function. We show that our text encoder outperforms continuous bag-of-words representations on 9 out of 10 linguistic probing tasks and argue that the learned representations are complementary to those of vector-based approaches. Hence, we construct a hybrid model that jointly learns a matrix and a vector for each word. This hybrid model yields higher scores than purely vector-based approaches on 10 out of 16 downstream tasks in a controlled experiment with the same capacity and training data. Across all 16 tasks, the hybrid model achieves an average improvement of 1.2%. These results are promising insofar as they open up new opportunities to efficiently incorporate order awareness into word embedding models.
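The core idea of composing word matrices by matrix multiplication can be illustrated in a few lines. This is a hedged sketch: the dimension, the tiny vocabulary, and the random near-identity matrices are illustrative assumptions, not the learned representations from the paper. The point it demonstrates is that, unlike a bag of words, the composed representation is order-sensitive because matrix multiplication does not commute.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # matrix dimension (illustrative; the paper learns these representations)

# Each word maps to a d x d matrix; random values near the identity
# stand in for the matrices a trained encoder would learn.
vocab = {w: np.eye(d) + rng.normal(scale=0.5, size=(d, d))
         for w in ["not", "all", "you", "need"]}

def encode(words):
    """Compose word matrices left-to-right by matrix multiplication.
    The empty sequence maps to the identity matrix."""
    m = np.eye(d)
    for w in words:
        m = m @ vocab[w]
    return m

a = encode(["not", "all"])
b = encode(["all", "not"])
order_sensitive = not np.allclose(a, b)  # matmul is not commutative
```

The hybrid model in the abstract would additionally maintain a vector per word (composed by averaging, as in CBOW) alongside this matrix representation.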