Listing by author "Heinrich, Gregor"
1 - 3 of 3
- Conference paper: Actors–media–qualities: a generic model for information retrieval in virtual communities (10th International Conference on Innovative Internet Community Systems (I2CS) – Jubilee Edition 2010 –, 2010) Heinrich, Gregor
  The article presents a model of the structural properties of virtual communities and the information they can access. It argues that a large part of the information – and indeed knowledge – present in virtual communities can be captured by a graph structure consisting of three node types – actors, media and qualities – together with the relations that connect them. Based on these relations, information retrieval and other inference mechanisms can be mapped into the model.
- Conference paper: Entwicklung von Testkollektionen für P2P Information Retrieval (INFORMATIK 2006 – Informatik für Menschen, Band 1, 2006) Heinrich, Gregor; Teresniak, Sven; Witschel, Hans Friedrich
  This contribution presents ongoing work on test collections that can be used for the realistic simulation of semantic search in P2P networks. It develops a general procedure for creating such simulation data. A probabilistic model is used to cover the semantic distribution of peer document collections and queries, and the documents used require no prior classification.
- Conference paper: An evaluation framework for semantic search in P2P networks (10th International Conference on Innovative Internet Community Systems (I2CS) – Jubilee Edition 2010 –, 2010) Holz, Florian; Witschel, Hans-Friedrich; Heinrich, Gregor; Heyer, Gerhard; Teresniak, Sven
  We address the problem of evaluating peer-to-peer information retrieval (P2PIR) systems with semantic overlay structure. The P2PIR community lacks a commonly accepted testbed, such as TREC provides for the classic IR community. The problem with using classic test collections in a P2P scenario is that they provide no realistic distribution of documents and queries over peers, which is, however, crucial for realistically simulating and evaluating semantic overlays. On the other hand, document collections that can be easily distributed (e.g. by exploiting categories or author information) lack both queries and relevance judgments. We therefore propose an evaluation framework that provides a strategy for constructing a P2PIR testbed: a prescription for content distribution, query generation and measuring effectiveness without the need for human relevance judgments. It can be used with any document collection that contains author information and document relatedness (e.g. references in scientific literature). Author information is used to assign documents to peers; relatedness is used to generate queries from related documents. The ranking produced by the P2PIR system is evaluated by comparing it to the ranking of a centralised IR system using a new evaluation measure related to mean average precision. The combination of these three elements – realistic content distribution, realistic and automated query generation and distribution, and a meaningful and flexible evaluation measure for rankings – offers an improvement over existing P2PIR evaluation approaches.
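The evaluation idea in the last abstract – scoring a P2P ranking against a centralised reference ranking instead of human relevance judgments – can be sketched as follows. This is a minimal illustration, not the paper's actual measure (which is only described as "related to mean average precision"); the assumption here is that the top-k results of the centralised system are treated as pseudo-relevant, and all function names are hypothetical.

```python
def average_precision(ranking, relevant):
    """Average precision of `ranking` against a set of relevant doc IDs."""
    hits = 0
    precision_sum = 0.0
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / i
    return precision_sum / len(relevant) if relevant else 0.0

def compare_to_centralised(p2p_ranking, central_ranking, k=10):
    """Score a P2P ranking by treating the centralised top-k as pseudo-relevant.

    This pseudo-relevance assumption is an illustration only; the framework in
    the paper defines its own measure related to mean average precision.
    """
    relevant = set(central_ranking[:k])
    return average_precision(p2p_ranking, relevant)

# A P2P ranking identical to the centralised one scores 1.0; pushing a
# non-relevant document to the top lowers the score.
central = ["d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7"]
perfect = compare_to_centralised(central, central, k=5)
degraded = compare_to_centralised(["d7", "d0", "d1", "d2", "d3", "d4"], central, k=5)
```

The appeal of this setup, as the abstract argues, is that no human relevance judgments are needed: the centralised system acts as the gold standard against which the distributed system's ranking quality is measured.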