Datenbank Spektrum 13(3) - November 2013


1 - 10 of 11
  • Journal Article
    Möglichkeiten und Konzepte zur XML-Schemavalidierung am Beispiel von DB2 for z/OS V9.1
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Koch, Christoph
    IBM's relational database management system (DBMS) DB2 for z/OS V9.1 supports the native storage and processing of Extensible Markup Language (XML) data through its implementation of the pureXML technology (IBM Corporation 2012). This includes mechanisms for handling XML schemas, the de facto standard instrument for data modeling and integrity enforcement in the XML context. Following (Koch 2012), this article reviews the DB2 facilities for XML schema validation against a previously developed requirements profile. Building on this, two concepts for XML schema validation are presented, retroactive and automatic schema validation, whose implementation allows the DB2 facilities to be complemented in a targeted way according to the requirements. Finally, the article's findings are summarized with respect to the question of to what extent database-side XML schema validation is already worthwhile with DB2 for z/OS V9.1.
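The two validation concepts named in this abstract can be illustrated with a short, purely conceptual Python sketch. None of this is DB2 code; the validator below only checks well-formedness with the standard library as a stand-in for real XML Schema validation, and all names are invented for illustration:

```python
# Conceptual sketch of the two strategies: automatic validation (reject
# invalid documents at insert time) vs. retroactive validation (scan rows
# that were stored without a check). Well-formedness stands in for XSD.
import xml.etree.ElementTree as ET

def is_valid(doc: str) -> bool:
    """Stand-in validator; true XML Schema validation would check an XSD."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

class XmlTable:
    def __init__(self, auto_validate: bool = False):
        self.auto_validate = auto_validate  # automatic validation on insert
        self.rows: list[str] = []

    def insert(self, doc: str) -> None:
        if self.auto_validate and not is_valid(doc):
            raise ValueError("document rejected at insert time")
        self.rows.append(doc)

    def validate_retroactively(self) -> list[int]:
        """Retroactive validation: report indices of invalid stored rows."""
        return [i for i, d in enumerate(self.rows) if not is_valid(d)]

t = XmlTable(auto_validate=False)
t.insert("<order><item/></order>")
t.insert("<order><item></order>")   # not well-formed
print(t.validate_retroactively())   # → [1]
```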
  • Journal Article
    Dissertationen
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013)
  • Journal Article
    An Interactive System for Visual Analytics of Dynamic Topic Models
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Günnemann, Nikou; Derntl, Michael; Klamma, Ralf; Jarke, Matthias
    The vast amount and rapid growth of data on the Web and in document repositories make knowledge extraction and trend analysis a challenging task. A well-proven approach for the unsupervised analysis of large text corpora is dynamic topic modeling. While there is a solid body of research on the fundamentals and applications of this technique, visual-interactive analysis systems that allow end-users to perform analysis tasks using topic models are still rare. In this paper, we present D-VITA, an interactive text analysis system that exploits dynamic topic modeling to detect the latent topic structure and dynamics in a collection of documents. D-VITA supports end-users in understanding and exploiting the topic modeling results by providing interactive visualizations of the topic evolution in document collections and by browsing documents based on keyword search and on the similarity of their topic distributions. The system was evaluated by a scientific community that used D-VITA for trend analysis in their data sources. The results indicate high usability of D-VITA and its usefulness for productive analysis tasks.
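Browsing documents by the similarity of their topic distributions, one of the mechanisms this abstract mentions, can be sketched in a few lines of Python. This is not D-VITA code; the documents and per-topic probabilities below are invented for illustration:

```python
# Rank documents by cosine similarity of their topic distributions.
# Each vector holds hypothetical per-topic probabilities from a topic model.
import math

def cosine(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm

docs = {
    "doc_a": [0.7, 0.2, 0.1],
    "doc_b": [0.6, 0.3, 0.1],
    "doc_c": [0.1, 0.1, 0.8],
}

def most_similar(query: str) -> str:
    """Return the other document whose topic mix is closest to the query's."""
    others = [(d, cosine(docs[query], v)) for d, v in docs.items() if d != query]
    return max(others, key=lambda x: x[1])[0]

print(most_similar("doc_a"))  # → doc_b
```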
  • Journal Article
    On the Integration of Electrical/Electronic Product Data in the Automotive Domain
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Tiedeken, Julian; Reichert, Manfred; Herbst, Joachim
    The recent innovation of modern cars has mainly been driven by the development of new, as well as the continuous improvement of existing, electrical and electronic (E/E) components, including sensors, actuators, and electronic control units. This trend has been accompanied by an increasing complexity of E/E components and their numerous interdependencies. In addition, external impact factors (e.g., changes of regulations, product innovations) demand more sophisticated E/E product data management (E/E-PDM). Since E/E product data is usually scattered over a large number of distributed, heterogeneous IT systems, application-spanning use cases are difficult to realize (e.g., ensuring the consistency of artifacts corresponding to different development phases, or the plausibility of logical connections between electronic control units). To tackle this challenge, the partial integration of E/E product data, as well as of the corresponding schemas, becomes necessary. This paper presents the properties of a typical IT system landscape related to E/E-PDM, reveals challenges emerging in this context, and elicits requirements for E/E-PDM. Based on this, insights into our framework, which targets the partial integration of E/E product data, are given. Such an integration will foster application-spanning use of E/E product data and hence contribute to improved E/E product quality.
  • Journal Article
    Editorial
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Härder, Theo
  • Journal Article
    JEPC: The Java Event Processing Connectivity
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Hoßbach, Bastian; Glombiewski, Nikolaus; Morgen, Andreas; Ritter, Franz; Seeger, Bernhard
    Today, event processing (EP) is the first-choice technology for analyzing massive event streams in a timely manner. EP makes it possible to detect user-defined situations of interest, for example in streams of position events, in near real-time, so that actions can be taken immediately. Unfortunately, each EP system has its very own API and query language, because there are no standards. Exchanging EP systems, as well as using them within a federation, is therefore challenging, error-prone, and expensive. To overcome these problems, we introduce the Java Event Processing Connectivity (JEPC), a middleware that provides uniform EP functionality in Java. JEPC always offers the same API and query language for EP, completely independent of the EP system beneath. Furthermore, we show in detail how JEPC can integrate database systems besides EP systems, and we evaluate the performance of EP powered by database systems.
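The middleware idea behind JEPC, one API regardless of which engine runs underneath, is the classic adapter pattern. The sketch below illustrates that pattern in Python with invented names and a toy query language; it does not reproduce the actual JEPC API:

```python
# A uniform facade over heterogeneous event-processing backends: client
# code writes one query once and stays unchanged when the engine is swapped.
from abc import ABC, abstractmethod

class EPBackend(ABC):
    @abstractmethod
    def run(self, query: str, events: list) -> list: ...

class NativeEngine(EPBackend):      # stand-in for a dedicated EP system
    def run(self, query, events):
        threshold = int(query.split(">")[1])
        return [e for e in events if e["value"] > threshold]

class DatabaseEngine(EPBackend):    # stand-in for a database-backed engine
    def run(self, query, events):
        threshold = int(query.split(">")[1])
        return [e for e in events if e["value"] > threshold]

class UniformEP:
    """One API and query language, independent of the backend beneath."""
    def __init__(self, backend: EPBackend):
        self.backend = backend

    def query(self, q: str, events: list) -> list:
        return self.backend.run(q, events)

events = [{"value": 3}, {"value": 9}]
for backend in (NativeEngine(), DatabaseEngine()):
    print(UniformEP(backend).query("value>5", events))  # → [{'value': 9}]
```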
  • Journal Article
    Cloud Data Management for Online Games: Potentials and Open Issues
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Diao, Ziqiang; Schallehn, Eike; Wang, Shuo; Mohammad, Siba
    The number of players of massively multiplayer online role-playing games (MMORPGs) typically reaches millions of people, geographically distributed throughout the world. Worldwide revenues for these games increase by billions of dollars each year. Unfortunately, their complex architecture makes them hard to maintain, resulting in considerable costs and development risks. For normal operation, MMORPGs have to access huge amounts of diverse data. With increasing numbers of players, managing the growing volume of data in a relational database becomes a big challenge, which cannot be overcome by simply adding new servers. Cloud storage systems are emerging solutions focusing on providing scalability and high performance for Cloud applications, social media, etc. However, Cloud storage systems are in general not designed for processing transactions or providing high levels of consistency. In this paper, we present our current work in progress by analyzing the existing architecture of MMORPGs and classifying the relevant data. Based on this, we highlight the design requirements, identify the major research challenges, and propose a Cloud-based model for MMORPGs that we are currently implementing as a testbed for further evaluation.
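The classification idea in this abstract, routing data with different consistency needs to different stores, can be sketched briefly. The categories and rules below are illustrative assumptions, not the paper's actual taxonomy:

```python
# Route MMORPG data categories to a storage tier based on how much
# consistency they need: transactional data stays in the relational DB,
# high-volume tolerant data goes to a scalable Cloud storage system.
CONSISTENCY_RULES = {
    "account": "transactional",    # e.g., payments need ACID guarantees
    "inventory": "transactional",  # item trades must not be lost
    "position": "eventual",        # frequent updates, staleness tolerable
    "chat_log": "eventual",
}

def route(kind: str) -> str:
    """Pick a storage tier for a data category (default: Cloud store)."""
    level = CONSISTENCY_RULES.get(kind, "eventual")
    return "relational_db" if level == "transactional" else "cloud_store"

print(route("inventory"))  # → relational_db
print(route("position"))   # → cloud_store
```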
  • Journal Article
    Vorstellung des Lehrstuhls für Datenbanksysteme der TUM
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Kemper, Alfons; Neumann, Thomas
    The Database Systems Group at Technische Universität München conducts research on the development, optimization, and application of "modern" database technology. In recent years, its focus has been on multi-tenancy-capable databases, eScience database applications, workload management for heterogeneous applications, RDF databases, and, in particular, main-memory databases for hybrid OLTP & OLAP applications.
  • Journal Article
    „Gib mir so viel Gold, wie die Metzger im Nachbardorf zusammen besitzen und ich lasse den Piloten frei!“ – Spielbasiertes Lernen von SQL-Grundlagen
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013) Schildgen, Johannes; Deßloch, Stefan
    Imagine being stranded on a deserted island whose inhabitants only understand the language SQL. The game SQL Island (http://www.sql-island.de) teaches and exercises SQL basics and is controlled by entering SQL queries. Its purpose is to teach the player, in an entertaining way, how data in relational databases can be queried and manipulated. No prior SQL knowledge is required.
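The game's title quotes a task of the kind a player must solve in SQL ("give me as much gold as the butchers in the neighboring village own together"). A query of that shape can be tried with the standard-library sqlite3 module; the table and data below are invented for illustration, not taken from the game:

```python
# Toy version of an SQL Island-style task: aggregate the gold owned by
# all inhabitants with a particular profession.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bewohner (name TEXT, beruf TEXT, gold INTEGER)")
con.executemany(
    "INSERT INTO bewohner VALUES (?, ?, ?)",
    [("Anna", "metzger", 120), ("Bert", "baecker", 80), ("Carla", "metzger", 60)],
)

# "How much gold do the butchers own in total?"
total = con.execute(
    "SELECT SUM(gold) FROM bewohner WHERE beruf = 'metzger'"
).fetchone()[0]
print(total)  # → 180
```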
  • Journal Article
    News
    (Datenbank-Spektrum: Vol. 13, No. 3, 2013)