Listing by keyword "empirical software engineering"
1 - 3 of 3
- Conference Paper: git2net: Mining Time-Stamped Co-Editing Networks from Large git Repositories (INFORMATIK 2019: 50 Jahre Gesellschaft für Informatik – Informatik für Gesellschaft, 2019) Gote, Christoph; Scholtes, Ingo; Schweitzer, Frank (see the illustrative mining sketch after this listing)
- Conference Paper: “Jumping Through Hoops”: Why do Java Developers Struggle With Cryptography APIs? (Software Engineering 2017, 2017) Nadi, Sarah; Krüger, Stefan; Mezini, Mira; Bodden, Eric
  To protect sensitive data processed by current applications, developers, whether security experts or not, have to rely on cryptography. While cryptography algorithms have become increasingly advanced, many data breaches occur because developers do not use the corresponding APIs correctly. To guide future research into practical solutions to this problem, we perform an empirical investigation into the obstacles developers face while using the Java cryptography APIs, the tasks they use the APIs for, and the kind of (tool) support they desire. We triangulate data from four separate studies that include the analysis of 100 StackOverflow posts, 100 GitHub repositories, and survey input from 48 developers. We find that while developers find it difficult to use certain cryptographic algorithms correctly, they feel surprisingly confident in selecting the relevant cryptography concepts (e.g., encryption vs. signatures). We also find that the APIs are generally perceived to be too low-level and that developers prefer more task-based solutions. (See the illustrative API sketch after this listing.)
- Conference Paper: On the Feasibility of Automated Prediction of Bug and Non-Bug Issues (Software Engineering 2021, 2021) Herbold, Steffen; Trautsch, Alexander; Trautsch, Fabian
  The article "On the feasibility of automated prediction of bug and non-bug issues", published in Empirical Software Engineering in 2020, considers the application of machine learning for the automated classification of issue types, e.g., for research purposes or as a recommendation system. Issue tracking systems are used to track and describe tasks in the development process, e.g., requested feature improvements or reported bugs. However, past research has shown that the reported issue types often do not match the description of the issue. Within our work, we evaluate whether state-of-the-art issue type prediction systems can accurately identify bugs. We also investigate if manually specified knowledge can improve such systems. While we found that manually specified knowledge about contents is not useful, respecting structural aspects can be valuable. Our experiments show that issue type prediction systems can be trained on large amounts of unvalidated data and still be sufficiently accurate to be useful. Overall, the misclassifications of the automated system are comparable to the misclassifications made by developers. (See the illustrative baseline sketch after this listing.)
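
The first entry concerns mining time-stamped co-editing networks from git repositories. The following is a minimal sketch of that idea, not git2net's implementation: it parses `git log` output and links each author to the previous editor of every file they touch, keeping the commit timestamps. All function names here are illustrative.

```python
"""Illustrative sketch: a time-stamped co-editing network from a git repo.

This is NOT git2net; it is a minimal approximation of the underlying idea.
Run it inside any git working copy.
"""
import subprocess
from collections import defaultdict

def commits_with_files(repo_dir="."):
    """Yield (author, unix_timestamp, [changed files]) for every commit."""
    # %x1e emits a record separator byte so records can be split reliably;
    # author names are assumed not to contain it.
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:%x1e%an|%at"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    for record in log.split("\x1e")[1:]:
        lines = [line for line in record.splitlines() if line.strip()]
        author, _, timestamp = lines[0].rpartition("|")
        yield author, int(timestamp), lines[1:]

def coediting_network(repo_dir="."):
    """Connect each author to the previous editor of every file they change."""
    last_editor = {}           # file -> (author, timestamp)
    edges = defaultdict(list)  # (editor, previous editor) -> [timestamps]
    for author, ts, files in sorted(commits_with_files(repo_dir),
                                    key=lambda c: c[1]):  # chronological order
        for f in files:
            if f in last_editor and last_editor[f][0] != author:
                edges[(author, last_editor[f][0])].append(ts)
            last_editor[f] = (author, ts)
    return edges

if __name__ == "__main__":
    for (src, dst), timestamps in coediting_network().items():
        print(f"{src} -> {dst}: {len(timestamps)} co-edits")
```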
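The second entry reports that crypto APIs are perceived as too low-level and that developers prefer task-based solutions. The study examined the Java cryptography APIs; purely as a language-agnostic illustration of that low-level vs. task-based contrast, the sketch below uses Python's `cryptography` package, where the high-level Fernet recipe makes the choices that the low-level layer leaves to the developer.

```python
"""Illustration of 'task-based' vs 'low-level' crypto API design.

Uses Python's `cryptography` package (pip install cryptography), not the
Java APIs the study examined.
"""
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Task-based: the Fernet recipe picks the algorithm, mode, IV handling,
# and authentication for you. Hard to misuse.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"sensitive data")
assert Fernet(key).decrypt(token) == b"sensitive data"

# Low-level: the developer must select the mode and manage the key and
# nonce themselves. Mistakes at this level (e.g., reusing a nonce or
# choosing an insecure mode) are the kind of misuse the study observed.
aes_key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"sensitive data") + encryptor.finalize()
decryptor = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"sensitive data"
```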
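The third entry evaluates machine-learned issue type prediction. The sketch below is a minimal baseline showing the shape of the task, not the model evaluated in the paper: train a text classifier on (possibly noisy) labelled issues, then predict whether a new issue reports a bug. The training examples are made up for illustration.

```python
"""Minimal baseline sketch for bug vs. non-bug issue classification.

NOT the paper's model; a small scikit-learn pipeline for illustration.
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: issue texts with their (noisy) reported types.
issues = [
    "NullPointerException when saving an empty project",
    "App crashes on startup after the last update",
    "Add dark mode to the settings page",
    "Please document the new REST endpoints",
]
labels = ["bug", "bug", "non-bug", "non-bug"]

# TF-IDF features over unigrams and bigrams, linear classifier on top.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(issues, labels)

print(model.predict(["Crash with stack trace when importing a large file"]))
# -> ['bug']; with real data one would evaluate on a held-out, validated set.
```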