Title: Detection of Generated Text Reviews by Leveraging Methods from Authorship Attribution: Predictive Performance vs. Resourcefulness
Authors: Moosleitner, Manfred; Specht, Günther; Zangerle, Eva
Editors: König-Ries, Birgitta; Scherzinger, Stefanie; Lehner, Wolfgang; Vossen, Gottfried
Date issued: 2023-02-23
Date available: 2023-02-23
Year: 2023
ISBN: 978-3-88579-725-8
URI: https://dl.gi.de/handle/20.500.12116/40315
Language: en
Keywords: Text Classification; Stylometric Text Features; Generated Text Detection
Type: Text/Conference Paper
DOI: 10.18420/BTW2023-11

Abstract: Textual reviews are an integral part of online shopping and a source of information for potential customers. However, a prerequisite is that the reviews are authentic. Pre-trained large language models have been shown to generate convincing text reviews at scale. A critical task is therefore the automatic detection of reviews not composed by a human, framed as a generated-review classification task. State-of-the-art approaches to detecting generated texts use pre-trained large language models, which exhibit hefty hardware requirements to run and fine-tune. Related work has shown that texts generated by language models often differ from human-written texts in writing style and choice of words. These two properties, which are unique per author, can be leveraged to identify whether a text was generated by such models. In this paper, we investigate the performance of features prominently used in authorship attribution tasks, using robust classifiers that require substantially fewer computational resources. We show that features and methods from authorship attribution can be successfully applied to the task of detecting generated text reviews, leveraging the consistent writing style exhibited by large language models like GPT-2. We argue that our approach achieves performance similar to that of state-of-the-art approaches while providing shorter training times and lower hardware requirements, necessary for, e.g., detection on the fly.
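The abstract does not list the concrete feature set used in the paper, but the kind of stylometric features common in authorship attribution can be sketched as follows. This is a minimal illustration under assumed features (average word length, type-token ratio, sentence length, punctuation density), not the authors' implementation:

```python
# Illustrative sketch only: simple stylometric features of the sort
# used in authorship attribution. The paper's actual feature set is
# not specified in the abstract; feature names here are assumptions.
import re

def stylometric_features(text):
    """Extract a few lightweight writing-style features from a text."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    return {
        # average word length in characters
        "avg_word_len": sum(len(w) for w in words) / n_words,
        # lexical richness: distinct words / total words
        "type_token_ratio": len(set(words)) / n_words,
        # average sentence length in words
        "avg_sent_len": n_words / max(len(sentences), 1),
        # punctuation marks per word
        "punct_ratio": sum(text.count(c) for c in ",.;:!?") / n_words,
    }

feats = stylometric_features(
    "The camera is great. I love it! Battery life is great, too."
)
```

Feature vectors of this kind can then be fed to a lightweight classifier (e.g., logistic regression or an SVM), which trains in seconds on CPU, in contrast to fine-tuning a pre-trained language model.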