Listing by keyword "Scalability"
1 - 10 of 10
- Conference paper: Benchmarking Scalability of Cloud-Native Applications (Software Engineering 2023, 2023) Henning, Sören; Hasselbring, Wilhelm. This contribution has been published in the journal Empirical Software Engineering (Springer Nature) in 2022 [HH22], https://doi.org/10.1007/s10664-022-10162-1.
- Conference paper: Benchmarking Scalability of Stream Processing Frameworks Deployed as Microservices in the Cloud (Software Engineering 2024 (SE 2024), 2024) Henning, Sören; Hasselbring, Wilhelm
- Conference paper: Benchmarking Univariate Time Series Classifiers (Datenbanksysteme für Business, Technologie und Web (BTW 2017), 2017) Schäfer, Patrick; Leser, Ulf. Time series are collections of values sequentially recorded over time. Nowadays, sensors for recording time series are omnipresent, appearing in RFID chips, wearables, smart homes, and event-based systems. Time series classification aims at predicting a class label for a time series whose label is unknown; to do so, a classifier has to train a model using labeled samples. Classification time is a key challenge for new applications like event-based monitoring, real-time decision-making, or streaming systems. This paper is the first benchmark that compares 12 state-of-the-art time series classifiers based on their training and classification times. We observed that most of the state-of-the-art classifiers require extensive training and classification times and might not be applicable to these new applications.
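The training-time vs. classification-time comparison the abstract describes can be sketched with a toy 1-nearest-neighbour classifier (illustrative only; the paper's 12 classifiers and its datasets are far more elaborate, and `OneNN` is a made-up name):

```python
import time

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class OneNN:
    def train(self, series, labels):
        # "Training" a 1-NN classifier is just storing the labelled samples,
        # so its train time is tiny ...
        self.samples = list(zip(series, labels))

    def classify(self, query):
        # ... but classification scans every stored series, which is why
        # classification time can dominate for instance-based methods.
        return min(self.samples, key=lambda s: euclidean(s[0], query))[1]

train_set = [([0.0, 0.1, 0.2], "flat"), ([0.0, 1.0, 2.0], "rising")]
clf = OneNN()

t0 = time.perf_counter()
clf.train(*zip(*train_set))
train_time = time.perf_counter() - t0

t0 = time.perf_counter()
label = clf.classify([0.1, 0.9, 2.1])
classify_time = time.perf_counter() - t0
print(label)  # -> rising
```

Benchmarks like the one above report both timings per classifier, since a model that trains quickly may still be too slow to classify streams in real time.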
- Journal article: Erfolgsfaktor Skalierbarkeit – Digitales Kompetenzmanagement, das mitwächst (HMD Praxis der Wirtschaftsinformatik: Vol. 54, No. 6, 2017) Melzer, Annegret; Bullinger, Angelika C. Abstract: The digitalization of value-creating and supporting processes within the company is a key driver towards the Digital Workplace. Furthermore, a company's innovation capability and resilience, widely seen as major rationales for economic success, are strongly tied to the skills, knowledge, and abilities – that is, the competencies – of its employees. These competence requirements are changing faster than ever before. To support Digital Workers in developing their competencies, companies must strategically rely on agile competence management and replace traditional, content-bound, and rigid concepts of further education. Existing approaches to implementing competence management digitally and flexibly primarily address large enterprises and are rarely scalable to SMEs. The presented study addresses this gap. The paper shows a process-based approach for developing and testing a competence management tool for small, medium, and large companies that can be adapted to varying needs, evaluated in a field study with three partner companies. The result of this inductive approach is the competence management system KM3, which is flexible at the process and tooling levels so that it can be adapted to industrial and user requirements and enables individualized, workplace-integrated learning "anyplace, anytime" to increase employees' capacity to act. The generalized KM3 approach is based on real application scenarios within the three partner companies. These scenarios show that KM3, with its processes and tools, can support both the initial introduction of strategic (digital) competence management in companies (application scenario "new") and the integration and digitization of existing competence management processes (application scenario "digitization") and systems (application scenario "integration").
- Journal article: Google: Das weltweit erste „Information Utility“? (Wirtschaftsinformatik: Vol. 51, No. 1, 2009) Chen, Rex; Kraemer, Kenneth L.; Sharma, Prakul. Abstract: In only ten years, Google has achieved remarkable success from online search-based advertising. Its search engine is dominant, and its IT infrastructure is the most powerful computing system in the world, running on over one million computers and serving more than one billion users globally. Google makes money by using its search engine to deliver online advertising alongside responses to user searches for information, goods, maps, directions, and a host of other services. Its capabilities make it likely to become the world's first information utility – a concept similar to electric utilities, which provide services to many corporations and individuals alike. Constant innovation is the key to Google's success and offers lessons for other companies: hire talented people, have them work in small teams, and give them the freedom to excel, but use a rigorous data-based approach to evaluating results and making course adjustments.
- Conference paper: Improving the Scalability and Security of Execution Environments for Auto-Graders in the Context of MOOCs (Proceedings of the Fifth Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2021), virtual event, October 28-29, 2021, 2021) Serth, Sebastian; Köhler, Daniel; Marschke, Leonard; Auringer, Felix; Hanff, Konrad; Hellenberg, Jan-Eric; Kantusch, Tobias; Paß, Maximilian; Meinel, Christoph. Learning a programming language requires learners to write code themselves, execute their programs interactively, and receive feedback about the correctness of their code. Many approaches with so-called auto-graders exist to grade students' submissions and provide feedback automatically. University classes with hundreds of students or Massive Open Online Courses (MOOCs) with thousands of learners often use these systems. Assessing the submissions usually includes executing the students' source code and thus imposes requirements on the scalability and security of the systems. In this paper, we evaluate different execution environments and orchestration solutions for auto-graders. We compare the most promising open-source tools regarding their usability in a scalable environment required for MOOCs. According to our evaluation, Nomad, in conjunction with Docker, fulfills most requirements. We derive implications for the productive use of Nomad for an auto-grader in MOOCs.
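The core requirement, executing untrusted student code in isolation with a time limit, can be sketched as follows (a hypothetical `grade` helper, not the paper's Nomad/Docker setup, which adds container-level isolation and orchestration on top of process isolation):

```python
import subprocess
import sys
import tempfile

def grade(source_code: str, timeout_s: float = 2.0) -> dict:
    """Run a Python submission in a separate interpreter with a timeout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source_code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return {"stdout": proc.stdout, "ok": proc.returncode == 0}
    except subprocess.TimeoutExpired:
        # An endless loop in a submission must not block the grader.
        return {"stdout": "", "ok": False}

result = grade("print(6 * 7)")
print(result["stdout"].strip())  # -> 42
```

A subprocess timeout alone is not sufficient for MOOC scale; production graders also need resource limits, filesystem and network isolation, and a scheduler that distributes runs across machines, which is where container orchestrators such as Nomad come in.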
- Text document: NoSQL & Real-Time Data Management in Research & Practice (BTW 2019 – Workshopband, 2019) Wingerath, Wolfram; Gessert, Felix; Ritter, Norbert. Users have come to expect reactivity from mobile and web applications, i.e. they assume that changes made by other users become visible immediately. However, developers are challenged with building reactive applications on top of traditional pull-oriented databases, because such databases are ill-equipped to push new information to the client. Systems for data stream management and processing, on the other hand, are natively push-oriented and thus facilitate reactive behavior, but they do not follow the same collection-based semantics as traditional databases: instead of database collections, stream-oriented systems are based on a notion of potentially unbounded sequences of data items. In this tutorial, we survey and categorize the system space between pull-oriented databases and push-oriented stream management systems, using their respectively facilitated means of data retrieval as a reference point. We start with an in-depth survey of the most relevant NoSQL databases to provide a comparative classification and highlight open challenges. To this end, we analyze the approach of each system to derive its scalability, availability, consistency, data modeling, and querying characteristics. We present how each system’s design is governed by a central set of trade-offs over irreconcilable system properties. We then cover recent research results in distributed data management to illustrate that some shortcomings of NoSQL systems could already be solved in practice, whereas other NoSQL data management problems pose interesting and unsolved research challenges. A particular emphasis lies on the novel system class of real-time databases, which combine the push-based access paradigm of stream-oriented systems with the collection-based query semantics of traditional databases.
We explore why real-time databases deserve distinction in a separate system class and dissect their different architectures to highlight issues, derive open challenges, and discuss avenues for addressing them.
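The contrast between pull-oriented and push-oriented access described above can be illustrated with a toy collection that supports both one-shot queries and continuous subscriptions (a sketch of the concept only; `RealTimeCollection` is a made-up name, not any real system's API):

```python
class RealTimeCollection:
    def __init__(self):
        self.docs = []
        self.subscriptions = []  # (predicate, callback) pairs

    def find(self, predicate):
        # Pull: evaluate the query once against the current collection.
        return [d for d in self.docs if predicate(d)]

    def subscribe(self, predicate, callback):
        # Push: notify the subscriber whenever a matching doc arrives.
        self.subscriptions.append((predicate, callback))

    def insert(self, doc):
        self.docs.append(doc)
        for predicate, callback in self.subscriptions:
            if predicate(doc):
                callback(doc)

events = []
coll = RealTimeCollection()
coll.subscribe(lambda d: d["temp"] > 30, events.append)
coll.insert({"temp": 25})   # no notification
coll.insert({"temp": 35})   # pushed to the subscriber
print(events)  # -> [{'temp': 35}]
```

Real-time databases keep the collection-based semantics of `find` while adding the push behavior of `subscribe`, which is precisely what makes them hard to scale: every write must be matched against every active subscription.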
- Conference paper: Scalable Data Management: An In-Depth Tutorial on NoSQL Data Stores (Datenbanksysteme für Business, Technologie und Web (BTW 2017) - Workshopband, 2017) Gessert, Felix; Wingerath, Wolfram; Ritter, Norbert. The unprecedented scale at which data is consumed and generated today has shown a large demand for scalable data management and given rise to non-relational, distributed “NoSQL” database systems. Two central problems triggered this process: 1) vast amounts of user-generated content in modern applications and the resulting request loads and data volumes, as well as 2) the desire of the developer community to employ problem-specific data models for storage and querying. To address these needs, various data stores have been developed by both industry and research, arguing that the era of one-size-fits-all database systems is over. The heterogeneity and sheer number of these systems – now commonly referred to as NoSQL data stores – make it increasingly difficult to select the most appropriate system for a given application. Therefore, these systems are frequently combined in polyglot persistence architectures to leverage each system in its respective sweet spot. This tutorial gives an in-depth survey of the most relevant NoSQL databases to provide a comparative classification and highlight open challenges. To this end, we analyze the approach of each system to derive its scalability, availability, consistency, data modeling, and querying characteristics. We present how each system’s design is governed by a central set of trade-offs over irreconcilable system properties. We then cover recent research results in distributed data management to illustrate that some shortcomings of NoSQL systems could already be solved in practice, whereas other NoSQL data management problems pose interesting and unsolved research challenges.
Going beyond earlier tutorials, we explicitly address how the quickly emerging challenge of processing and storing massive amounts of data in real time can be solved by different types of real-time data management systems.
- Journal article: Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit (Datenbank-Spektrum: Vol. 16, No. 1, 2016) Frommholz, Ingo; Roelleke, Thomas. Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and an elegant conceptual approach to modeling Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view on the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.
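The probability combination behind rules over probabilistic facts can be sketched under an independence assumption (all relation names, documents, and probabilities below are made up for illustration; HySpirit's actual semantics, estimation, and indexing are far richer):

```python
from functools import reduce

# Probabilistic facts, e.g. indexed(doc, term) with a term-weight probability.
indexed = {
    ("d1", "nosql"): 0.9,
    ("d1", "scalability"): 0.4,
    ("d2", "scalability"): 0.7,
}

def retrieve(query_terms):
    """Rule: retrieve(D) :- query(T) & indexed(D, T).

    A conjunction multiplies probabilities; alternative derivations of
    the same head are combined disjunctively as 1 - prod(1 - p_i),
    assuming independent events.
    """
    derivations = {}
    for (doc, term), p in indexed.items():
        if term in query_terms:
            derivations.setdefault(doc, []).append(p * query_terms[term])
    return {doc: 1 - reduce(lambda acc, p: acc * (1 - p), ps, 1.0)
            for doc, ps in derivations.items()}

scores = retrieve({"nosql": 1.0, "scalability": 1.0})
print(scores)  # d1 scores 1 - 0.1 * 0.6 = 0.94, d2 scores 0.7
```

Evaluating such rules naively scans all facts per query, which is why scaling to large collections requires the probabilistic relational indexes and distributed processing the paper describes.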
- Text document: Twoogle: Searching Twitter With MongoDB Queries (BTW 2019, 2019) Wingerath, Wolfram; Gessert, Felix; Ritter, Norbert. Modern real-time databases follow the same collection-based querying semantics as traditional database systems. Targeting interactive workloads, real-time databases do not only deliver a query’s result upon request, but also produce a continuous stream of informational updates thereafter. In theory, building interactive, reactive, or collaborative applications should thus be simple with collection-based real-time queries, as they bridge the gap between traditional database queries over static collections and continuous queries over dynamic data streams. In practice, though, building real-time applications is still considered challenging, since most real-time databases today provide only poor scalability, confusing interfaces for real-time data access, and reduced query expressiveness in comparison to their pull-based counterparts. In this demo, we illustrate that scalability, query expressiveness, and simplicity can go hand in hand for modern real-time databases. To this end, we present the social media search app Twoogle, which is built on top of Baqend’s real-time query API.
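A flavor of the MongoDB-style query expressiveness mentioned above can be shown with a tiny matcher for a small subset of MongoDB's query operators (a hypothetical sketch, not Baqend's API; only `$gt` and `$in` are handled here):

```python
def matches(doc, query):
    """Check a document against a MongoDB-style filter (subset)."""
    for field, cond in query.items():
        value = doc.get(field)
        if isinstance(cond, dict):       # operator form, e.g. {"$gt": 100}
            for op, arg in cond.items():
                if op == "$gt" and not (value is not None and value > arg):
                    return False
                elif op == "$in" and value not in arg:
                    return False
        elif value != cond:              # equality shorthand
            return False
    return True

tweets = [
    {"user": "alice", "retweets": 120, "lang": "en"},
    {"user": "bob", "retweets": 3, "lang": "de"},
]
query = {"retweets": {"$gt": 100}, "lang": {"$in": ["en", "de"]}}
print([t["user"] for t in tweets if matches(t, query)])  # -> ['alice']
```

In a real-time database, a filter like `query` would be registered as a subscription, so every newly ingested tweet is matched against it and pushed to the client if it qualifies.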