ABSTRACT
Software teams increasingly adopt different tools and communication channels to support the collaborative software development model and coordinate tasks. Among such resources, software development forums have become widely used by developers, since they enable developers to get and share technical information quickly. In line with this trend, GitHub announced GitHub Discussions, a native forum to facilitate collaborative discussions between users and members of communities hosted on the platform. Since GitHub Discussions is a software development forum, it faces challenges similar to those of other systems used for asynchronous communication, including the problems caused by related posts (duplicate and near-duplicate posts). Such related posts add noise to the platform and compromise project knowledge sharing. Hence, this article addresses the problem of detecting related posts on GitHub Discussions. To this end, we propose RD-Detector, an approach based on a Sentence-BERT pre-trained general-purpose model. We evaluated RD-Detector using data from three communities hosted on GitHub, in a dataset comprising 16,048 discussion posts. Three maintainers and three Software Engineering (SE) researchers manually evaluated the RD-Detector results, which achieved 77-100% precision and 66% recall. In addition, maintainers pointed out practical applications of the approach, such as providing knowledge to support merging discussion posts and converting posts into comments on other related posts. Maintainers can benefit from RD-Detector in the labor-intensive task of manually detecting related posts.
ABSTRACT
User experience (UX) is a quality aspect that considers the emotions evoked by a system, extending the usability concept beyond effectiveness, efficiency, and satisfaction. Practitioners and researchers are aware of the importance of evaluating UX, and UX evaluation is thus a growing field with diverse approaches. Most of these approaches, however, produce only a general indication of the experience and do not seek to capture the problems that gave rise to a bad UX. This lack of information makes it difficult to obtain results that are relevant for improving the application, since it is challenging to identify what caused a negative user experience. To address this gap, we developed a UX evaluation technique called UX-Tips. This paper presents UX-Tips and reports two empirical studies, performed in an academic and an industrial setting, to evaluate it. Our results show that UX-Tips performed well in terms of efficiency and effectiveness, made it possible to identify the causes that led to a negative user experience, and was easy to use. In this sense, we present a new technique suitable for both academic and industrial settings, supporting UX evaluation and the identification of problems that may lead to a negative experience.