Authors: Hermann, Ben; Winter, Stefan; Siegmund, Janet; Engels, Gregor; Hebig, Regina; Tichy, Matthias
Date: 2023-01-18
Year: 2023
ISBN: 978-3-88579-726-5
URI: https://dl.gi.de/handle/20.500.12116/40084
Abstract: Artifact evaluation was introduced into the software engineering and programming languages research community with a pilot at ESEC/FSE 2011 and has since enjoyed healthy adoption throughout the conference landscape. We conducted a survey of all members of artifact evaluation committees of major conferences in the software engineering and programming languages field from 2011 to 2019 and compared their answers to the expectations set by calls for artifacts and reviewing guidelines. While we find that some reviewer expectations exceed those expressed in calls and reviewing guidelines, there is no consensus on a general quality threshold for artifacts. We observe very specific quality expectations for particular artifact types, both for review and for later usage, but also a lack of communication of these expectations in calls. We also find problematic inconsistencies in the terminology used to express artifact evaluation’s most important purpose. We derive several actionable suggestions that can help mature artifact evaluation in the inspected community and aid its introduction into other communities in computer science.
Language: en
Keywords: Research Artifacts; Artifact Evaluation; Replicability; Reproducibility; Study
Title: Community Expectations for Research Artifacts and Evaluation Processes
Type: Text/Conference Paper
ISSN: 1617-5468