Listing by Author "Plattner, Hasso"
1 - 10 of 11
- Conference Paper: Available-to-promise on an in-memory column store (Datenbanksysteme für Business, Technologie und Web (BTW), 2011)
  Authors: Tinnefeld, Christian; Müller, Stephan; Kaltegärtner, Helen; Hillig, Sebastian; Butzmann, Lars; Eickhoff, David; Klauck, Stefan; Taschik, Daniel; Wagner, Björn; Xylander, Oliver; Tosun, Cafer; Zeier, Alexander; Plattner, Hasso
  Abstract: Available-To-Promise (ATP) is an application in the context of Supply Chain Management (SCM) systems that provides a checking mechanism to calculate whether the desired products of a customer order can be delivered on the requested date. Modern SCM systems store the relevant data records as aggregated numbers, which implies the disadvantages of maintaining redundant data as well as inflexibility in querying the data. Our approach omits aggregates by storing all individual data records in an in-memory column store and scans through all relevant records on the fly for each check. We contribute a description of the novel data organization and a locking-free, highly concurrent ATP checking algorithm. Additionally, we explain how new business functionality such as instant rescheduling of orders can be realized with our approach. All concepts are implemented within a prototype and benchmarked using an anonymized SCM dataset of a Fortune 500 consumer products company. The paper closes with a discussion of the results and an outlook on how this approach can help companies find the right balance between low inventory costs and high order fulfillment rates.
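  A minimal sketch of the paper's core idea: checking availability by scanning individual stock-movement records on the fly rather than maintaining aggregates. The schema, names, and cumulative-scan rule below are illustrative assumptions, not the paper's locking-free, highly concurrent algorithm:

  ```python
  from datetime import date

  # Individual stock movements instead of aggregates:
  # positive quantity = goods receipt, negative = confirmed order.
  records = [
      ("p1", date(2011, 3, 1), 100),
      ("p1", date(2011, 3, 2), -30),
      ("p1", date(2011, 3, 4),  50),
      ("p1", date(2011, 3, 5), -60),
  ]

  def atp_check(records, product, qty, requested_date):
      """Scan all movements for the product and verify that confirming
      `qty` on `requested_date` keeps the running stock non-negative
      on that date and on every later one."""
      movements = sorted((d, q) for p, d, q in records if p == product)
      movements = sorted(movements + [(requested_date, -qty)])
      running = 0
      for d, q in movements:
          running += q
          if d >= requested_date and running < 0:
              return False  # some date would be over-committed
      return True

  print(atp_check(records, "p1", 40, date(2011, 3, 3)))  # True
  print(atp_check(records, "p1", 90, date(2011, 3, 3)))  # False
  ```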
- Conference Paper: Generic Business Simulation Using an In-Memory Column Store (Datenbanksysteme für Business, Technologie und Web (BTW 2015), 2015)
  Authors: Butzmann, Lars; Klauck, Stefan; Müller, Stephan; Uflacker, Matthias; Sinzig, Werner; Plattner, Hasso
  Abstract: Value driver trees are a well-known methodology for modeling dependencies such as the definition of key performance indicators. While the models have well-known semantics, they lack adequate tool support for business simulations, because a flexible implementation that supports multidimensional, hierarchical value driver trees and data bindings is very complex and computationally challenging. This paper tackles this problem by proposing an approach for generic enterprise simulations based on value driver trees. Our approach is two-fold: we present the definition of a simulation meta model at design time, and the run-time simulation tool. The simulation meta model describes the structure of the dependency graph, the data binding, and the parametrization of the model to simulate data changes. The simulation tool can then be used to create and edit simulation model instances and run simulations in real time by leveraging an in-memory column store. Besides the formal description of the approach, this work presents a prototypical implementation of the simulation tool and an evaluation using data of a consumer packaged goods company.
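  As a rough illustration of the value-driver-tree idea: a node is either bound to an input value or derived from its children by an operator, and changing one leaf re-evaluates the path to the root. A minimal Python sketch under these assumptions; the class and node names are invented for illustration and are not the paper's meta model:

  ```python
  class DriverNode:
      """A node in a value driver tree: either a bound input value
      (leaf) or a value derived from its children by an operator."""
      def __init__(self, name, children=(), op=sum, value=None):
          self.name, self.children = name, list(children)
          self.op, self.value = op, value

      def evaluate(self):
          if not self.children:  # leaf: bound to (simulated) data
              return self.value
          return self.op([c.evaluate() for c in self.children])

  # Profit = Revenue - Costs, Revenue = Price * Volume.
  price  = DriverNode("price",  value=9.5)
  volume = DriverNode("volume", value=1000)
  costs  = DriverNode("costs",  value=7000)
  revenue = DriverNode("revenue", [price, volume],
                       op=lambda v: v[0] * v[1])
  profit  = DriverNode("profit", [revenue, costs],
                       op=lambda v: v[0] - v[1])

  print(profit.evaluate())   # 2500.0
  price.value = 10.0         # simulate a parameter change ...
  print(profit.evaluate())   # ... and re-evaluate: 3000.0
  ```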
- Journal Article: Hauptspeicherdatenbanken für Unternehmensanwendungen [In-Memory Databases for Enterprise Applications] (Datenbank-Spektrum: Vol. 10, No. 3, 2010)
  Authors: Krueger, Jens; Grund, Martin; Tinnefeld, Christian; Eckart, Benjamin; Zeier, Alexander; Plattner, Hasso
  Abstract: Enterprise applications are traditionally divided into OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing). While much research in recent years has focused on optimizing this separation, both databases and hardware have evolved, particularly over the last decade. On the one hand, there are data management systems that organize data in a column-oriented fashion and thus ideally match the access profile of analytical queries. On the other hand, applications today have considerably more main memory at their disposal, which, combined with equally increased computing power, makes it possible to keep entire enterprise databases in memory in compressed form. Both developments enable complex analytical queries to be processed in fractions of a second and thus make entirely new business processes and applications possible. Consequently, the question arises whether the artificially introduced separation of OLTP and OLAP can be abolished so that all queries operate on a single, unified data set. To this end, this article examines the characteristics of data processing in enterprise applications and shows how selected technologies can optimize it. A further trend is the use of cloud computing, i.e., outsourcing the data center to optimize costs. This entails requirements on data management with respect to dynamic growth and scaling in order to live up to the cloud computing concept. The properties of column-oriented in-memory databases offer advantages here, also with regard to more effective utilization of the available hardware resources. An important aspect is that all queries complete within a defined response time even though the load can fluctuate strongly; experience shows that the load on existing database systems rises sharply, particularly at the end of a quarter. Cloud computing is well suited to always having exactly the right hardware resources available for this. The desired elasticity gives rise to data management requirements that are examined in the article.
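  The article's central technique, column-oriented storage kept compressed in main memory, can be sketched briefly. A simplified Python illustration of dictionary encoding, a standard compression scheme in such systems; real systems additionally bit-pack the value IDs and use optimized scan operators:

  ```python
  class DictionaryColumn:
      """Column-oriented storage with dictionary encoding: each distinct
      value is stored once; rows hold only small integer value IDs."""
      def __init__(self):
          self.dictionary = []   # distinct values, append-only
          self.ids = {}          # value -> value ID
          self.data = []         # one value ID per row

      def append(self, value):
          if value not in self.ids:
              self.ids[value] = len(self.dictionary)
              self.dictionary.append(value)
          self.data.append(self.ids[value])

      def scan_equals(self, value):
          """Analytical scan: compares integer IDs, not strings."""
          vid = self.ids.get(value)
          return [row for row, v in enumerate(self.data) if v == vid]

  country = DictionaryColumn()
  for c in ["DE", "US", "DE", "FR", "DE"]:
      country.append(c)

  print(country.dictionary)          # ['DE', 'US', 'FR']
  print(country.scan_equals("DE"))   # rows [0, 2, 4]
  ```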
- Journal Article: Mapping Terms in Application and Implementation Domains (Softwaretechnik-Trends Band 31, Heft 2, 2011)
  Authors: Panchenko, Oleksandr; Plattner, Hasso; Zeier, Alexander
  Abstract: One problem of reengineering is the gap between terms from the application domain and terms from the implementation domain. This is particularly observable when searching source code: in search queries, maintainers often use terms from the application domain that differ from the implementation domain terms chosen by the original developers. Therefore, many queries fail. This paper proposes utilizing code structure to automatically map application domain terms to implementation domain terms. In this way, implementation domain terms can be tagged with the identified terms from the application domain. Such a redistribution of existing terms in the search index allows matching even those relevant documents which do not contain the exact terms from the query. The feasibility of the proposed method was estimated by collecting statistics in an open-source project.
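  The mapping idea, propagating application-domain terms along structural links in the code so that identifiers become findable under vocabulary they do not literally contain, can be sketched as follows. A hypothetical Python illustration: the one-hop propagation rule, the link extraction, and all identifiers are simplified assumptions, not the paper's method:

  ```python
  from collections import defaultdict

  # Structural links extracted from code: (identifier, related identifier),
  # e.g. a class and a method it declares, or a caller and its callee.
  links = [
      ("InvoiceProcessor", "computeVAT"),
      ("InvoiceProcessor", "BillingRecord"),
      ("computeVAT", "TaxRate"),
  ]

  # Application-domain terms already attached to a few identifiers
  # (e.g. mined from documentation or commit messages).
  domain_terms = {"InvoiceProcessor": {"billing", "invoice"}}

  def propagate(links, domain_terms):
      """Tag each identifier with the domain terms of its structural
      neighbours, so a query like 'billing' also matches computeVAT."""
      index = defaultdict(set)
      for ident, terms in domain_terms.items():
          index[ident] |= terms
      for a, b in links:
          index[b] |= domain_terms.get(a, set())
          index[a] |= domain_terms.get(b, set())
      return index

  index = propagate(links, domain_terms)
  print(index["computeVAT"])   # {'billing', 'invoice'}
  ```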
- Conference Paper: A new architecture for enterprise application software based on in-memory databases (INFORMATIK 2015, 2015)
  Authors: Plattner, Hasso
- Conference Paper: Providing high-availability, elasticity for an in-memory database system with RAMCloud (INFORMATIK 2013 – Informatik angepasst an Mensch, Organisation und Umwelt, 2013)
  Authors: Tinnefeld, Christian; Taschik, Daniel; Plattner, Hasso
  Abstract: Stanford's RAMCloud is a large-scale storage system that keeps all data in DRAM and provides high availability as well as a great degree of elasticity. These properties make it desirable as the persistence layer for an in-memory database system. In this paper, we experimentally demonstrate the high availability and elasticity RAMCloud can provide when it is used as a storage system for a relational in-memory database system: a) we utilize RAMCloud's fast crash-recovery mechanism and measure its impact on database query processing performance; b) we evaluate the elasticity by executing a sine-shaped, a plateau, and an exponential database workload. Based on our experiments, we show that an in-memory database running on top of RAMCloud can adapt to changing workloads within seconds and recover data from a crashed node, both without an interruption of the ongoing query processing.
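  The three workload shapes named in the evaluation can be reproduced with a small generator. A sketch of hypothetical request-rate curves; the concrete rates, periods, and durations in the paper differ, this only illustrates the shapes:

  ```python
  import math

  def request_rate(shape, t, period=60.0, base=100.0):
      """Queries per second at time t (seconds) for the three
      workload shapes: sine, plateau, and exponential."""
      if shape == "sine":
          return base * (1.5 + math.sin(2 * math.pi * t / period))
      if shape == "plateau":
          return 3 * base if period / 3 < t < 2 * period / 3 else base
      if shape == "exponential":
          return base * math.exp(t / period)
      raise ValueError(shape)

  for t in range(0, 61, 15):
      print(t, [round(request_rate(s, t))
                for s in ("sine", "plateau", "exponential")])
  ```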
- Conference Paper: SanssouciDB: an in-memory database for processing enterprise workloads (Datenbanksysteme für Business, Technologie und Web (BTW), 2011)
  Authors: Plattner, Hasso
  Abstract: In this paper, we present SanssouciDB: a database system designed for serving ERP transactions and analytics out of the same data store. It consists of a column-store engine for high-speed analytics and transactions on sparse tables, as well as an engine for so-called combined columns, i.e., column groups which are used for materializing result sets and intermediates, and for processing transactions on tables touching many attributes at the same time. Our analysis of SAP customer data showed that the vast majority of transactions in an ERP system are of an analytical nature. We describe the key concepts of SanssouciDB's architecture: concurrency control, techniques for compression and parallelization, and logging. To illustrate the potential of combining OLTP and OLAP processing in the same database, we give several examples of new applications which have been built on top of an early version of SanssouciDB and discuss the speedup achieved when running these applications at SAP customer sites.
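  The notion of combined columns (column groups) can be illustrated with a toy layout: attributes that are frequently co-accessed are materialized together, so a transactional access touches one container instead of one per attribute. A simplified Python sketch, not SanssouciDB's actual storage layer:

  ```python
  # Pure column store: one container per attribute.
  single_columns = {
      "order_id": [1, 2, 3],
      "status":   ["open", "paid", "open"],
      "amount":   [120.0, 80.0, 42.0],
  }

  # Combined column: a group of co-accessed attributes materialized
  # together, so a transaction touching all of them reads one container.
  combined = list(zip(single_columns["order_id"],
                      single_columns["status"],
                      single_columns["amount"]))

  def read_order_columnar(row):
      # OLTP-style access in a pure column store: one lookup per attribute.
      return {name: col[row] for name, col in single_columns.items()}

  def read_order_combined(row):
      # The same access against the column group: a single lookup.
      return combined[row]

  print(read_order_columnar(1))  # 3 container accesses
  print(read_order_combined(1))  # 1 container access
  ```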
- Conference Paper: Simulating multi-tenant OLAP database clusters (Datenbanksysteme für Business, Technologie und Web (BTW), 2011)
  Authors: Schaffner, Jan; Eckart, Benjamin; Schwarz, Christian; Brunnert, Jan; Jacobs, Dean; Zeier, Alexander; Plattner, Hasso
  Abstract: Simulation of parallel database machines was used in many database research projects during the 1990s. One of the main reasons why simulation approaches were popular at that time was that clusters with hundreds of nodes were not as readily available for experimentation as they are today. At the same time, the simulation models underlying these systems were fairly complex, since they needed to capture queuing processes both in hardware (e.g., CPU contention or disk I/O) and in software (e.g., processing distributed joins). Today's trend towards more specialized database architectures removes large parts of this complexity from the modeling task. As the main contribution of this paper, we discuss how we developed a simple simulation model of such a specialized system: a multi-tenant OLAP cluster based on an in-memory column database. The original infrastructure and testbed were built using SAP TREX, an in-memory column database that is part of SAP's business warehouse accelerator, which we ported to run on the Amazon EC2 cloud. Although we employ a simple queuing model, we achieve good accuracy. Similar to some of the parallel systems of the 1990s, we are interested in studying different replication and high-availability strategies with the help of simulation. In particular, we study the effects of mirrored vs. interleaved replication on throughput and load distribution in our cluster of multi-tenant databases. We show that the better load distribution inherent to the interleaved replication strategy is exhibited both on EC2 and in our simulation environment.
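  The difference between the two replication strategies is easiest to see in the placement function itself. A minimal sketch with invented node counts and placement rules, not the paper's configuration: mirrored replication pairs nodes, so a failed node's entire load falls on its mirror, while interleaved replication scatters each node's replicas across many nodes:

  ```python
  def mirrored_placement(tenant, n_nodes):
      """Primary on node p, replica on p's fixed mirror partner.
      If p fails, its entire load shifts to one node."""
      p = tenant % n_nodes
      mirror = p + 1 if p % 2 == 0 else p - 1
      return p, mirror

  def interleaved_placement(tenant, n_nodes):
      """Primary on node p, replica offset by a per-tenant stride,
      so the load of a failed node spreads over many nodes."""
      p = tenant % n_nodes
      stride = 1 + tenant % (n_nodes - 1)
      return p, (p + stride) % n_nodes

  n = 8
  for t in range(6):
      print(t, mirrored_placement(t, n), interleaved_placement(t, n))
  ```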
- Conference Paper: TAMEX: A task-based query execution framework for mixed enterprise workloads on in-memory databases (INFORMATIK 2013 – Informatik angepasst an Mensch, Organisation und Umwelt, 2013)
  Authors: Wust, Johannes; Grund, Martin; Plattner, Hasso
  Abstract: In-memory database management systems (DBMS) have been proposed to run transactional and analytical applications on a single database instance and to reduce the execution time of complex analytical queries to seconds. The two main reasons for this dramatic performance increase are massive intra-query parallelism on many-core CPUs and primary data storage in main memory. The benefits of these in-memory DBMS for enterprises are huge: analytical applications become largely independent of data staging delays, opening the way for real-time analytics. However, this promising approach will only be adopted if the DBMS can execute dynamically arriving transactional queries in a timely manner, even while complex analytical queries are executed. We believe that two system properties are key to achieving this objective: (1) splitting queries into fine-granular atomic tasks and (2) efficiently assigning these tasks to a large number of processing units while considering the priorities of query classes. In this paper, we propose TAMEX, a framework for the execution of multiple query classes, designed for executing queries of heterogeneous enterprise application workloads on in-memory databases. The basic idea is to generate a task graph for each query during query compilation and to assign these tasks to processing units by a user-level scheduler based on priorities. We evaluate the concept using a mix of transactional and join-heavy queries and focus on the impact of task sizes on load balancing and the responsiveness of the system.
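  The two key properties, splitting a query into atomic tasks and dispatching them by query-class priority, can be sketched with a tiny user-level scheduler. A Python illustration under simplified assumptions (a sequential loop over ready tasks); the real framework runs tasks on worker threads against the task graph produced at query compilation:

  ```python
  class Task:
      def __init__(self, query, name, priority, deps=()):
          self.query, self.name, self.priority = query, name, priority
          self.deps = set(deps)   # tasks that must finish first
          self.done = False

  def run(tasks):
      """Priority-based user-level scheduling: among all ready tasks
      (dependencies finished), always run the highest-priority one,
      so short transactional queries overtake long analytical ones."""
      pending = list(tasks)
      order = []
      while pending:
          ready = [t for t in pending if all(d.done for d in t.deps)]
          t = min(ready, key=lambda t: t.priority)  # 0 = highest priority
          t.done = True
          pending.remove(t)
          order.append(f"{t.query}:{t.name}")
      return order

  # An analytical query split into scan and join tasks (priority 1) ...
  scan_a = Task("olap", "scan_a", 1)
  scan_b = Task("olap", "scan_b", 1)
  join   = Task("olap", "join", 1, deps=[scan_a, scan_b])
  # ... and a transactional query arriving as one short task (priority 0).
  oltp   = Task("oltp", "lookup", 0)

  print(run([scan_a, scan_b, join, oltp]))
  # ['oltp:lookup', 'olap:scan_a', 'olap:scan_b', 'olap:join']
  ```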
- Journal Article: User, Use & Utility Research (Wirtschaftsinformatik: Vol. 56, No. 1, 2014)
  Authors: Brenner, Walter; Karagiannis, Dimitris; Kolbe, Lutz; Krüger, Jens; Leifer, Larry; Lamberti, Hermann-Josef; Leimeister, Jan Marco; Österle, Hubert; Petrie, Charles; Plattner, Hasso; Schwabe, Gerhard; Uebernickel, Falk; Winter, Robert; Zarnekow, Rüdiger
  Abstract: Business and Information Systems Engineering (BISE) is at a turning point. Planning, designing, developing, and operating IT used to be a management task of a few elites in public administrations and corporations. But the continuous digitization of nearly all areas of life is changing the IT landscape fundamentally. Success in this new era requires putting the human perspective, the digital user, at the very heart of the new digitized service-led economy. BISE faces not just a temporary trend but a complex socio-technical phenomenon with far-reaching implications. The challenges are manifold and have major consequences for all stakeholders, both in information systems and management research as well as in practice. Corporate processes have to be re-designed from the ground up, starting with the user's perspective, thus putting the usage experience and utility of the individual center stage. The digital service economy leads to highly personalized application systems while organizational functions are being fragmented. Entirely new ways of interacting with information systems, in particular beyond desktop IT, are being invented and established. These fundamental challenges require novel approaches with regard to innovation and development methods as well as adequate concepts for enterprise and service system architectures. Gigantic amounts of data are being generated at an accelerating rate by an increasing number of devices, data that need to be managed. In order to tackle these extraordinary challenges, we introduce "user, use & utility" as a new field of BISE that focuses primarily on the digital user, his or her usage behavior, and the utility associated with system usage in the digitized service-led economy. The research objectives encompass the development of theories, methods, and tools for systematic requirements elicitation, systems design, and business development for successful Business and Information Systems Engineering in a digitized economy: information systems that digital users enjoy using. This challenge calls for leveraging insights from various scientific disciplines such as design, engineering, computer science, psychology, and sociology. BISE can provide an integrated perspective, thereby assuming a pivotal role within the digitized service-led economy.