Author: Shwartz, Vered
Date issued: 2021
Date available: 2021-12-16
DOI: 10.1007/s13218-021-00709-7 (http://dx.doi.org/10.1007/s13218-021-00709-7)
URI: https://dl.gi.de/handle/20.500.12116/37807
ISSN: 1610-1987
Type: Text/Journal Article
Title: Dissertation Abstract: Learning High Precision Lexical Inferences

Abstract: The fundamental goal of natural language processing is to build models capable of human-level understanding of natural language. One of the obstacles to building such models is lexical variability, i.e., the ability to express the same meaning in various ways. Existing text representations excel at capturing relatedness (e.g., blue/red), but they lack the fine-grained distinction of the specific semantic relation between a pair of words. This article is a summary of a Ph.D. dissertation submitted to Bar-Ilan University in 2019, under the supervision of Professor Ido Dagan of the Computer Science Department. The dissertation explored methods for recognizing and extracting semantic relationships between concepts (cat is a type of animal), the constituents of noun compounds (baby oil is oil for babies), and verbal phrases ("X died at Y" means the same as "X lived until Y" in certain contexts). The proposed models outperform highly competitive baselines and improve the state of the art on several benchmarks. The dissertation concludes by discussing two challenges on the way to human-level language understanding: developing more accurate text representations and learning to read between the lines.

Keywords: Computational linguistics; Lexical inference; Lexical semantics; Natural language processing