Publication: A Requirements Engineering Perspective to AI-Based Systems Development: A Vision Paper (2023)
Franch, Xavier; Martínez-Fernández, Silverio
Context and motivation: AI-based systems (i.e., systems integrating some AI model or component) are becoming pervasive in society. A number of characteristics of AI-based systems challenge classical requirements engineering (RE) and raise questions yet to be answered. Question: This vision paper inquires into the role that RE should play in the development of AI-based systems, with a focus on three areas: the roles involved, the scope of requirements, and non-functional requirements. Principal ideas: The paper builds upon the vision that RE shall become the cornerstone of AI-based system development and proposes some initial ideas and a roadmap for these three areas. Contribution: Our vision is a step towards clarifying the role of RE in the context of AI-based systems development. The different research lines outlined in the paper call for further research in this area.
Publication: Data-Driven Technical Debt Management: Software Engineering or Data Science Challenge? (2021)
Software technical debt (TD) is a relevant software engineering problem. Only if properly managed can TD provide benefits while avoiding risks. Current TD management (TDM) support is limited. Recent advances in software engineering (SE) and data science (DS) promote data-driven TDM. In this paper, we summarize experiences concerning data-driven TDM gained in several research projects with industry. We report challenges and their consequences, propose solutions, and sketch improvement directions.
Publication: Tackling consistency-related design challenges of distributed data-intensive systems: an action research study (2021)
Deßloch, Stefan; Wolff, Eberhard
Background: Distributed data-intensive systems are increasingly designed to be only eventually consistent. Persistent data is no longer processed with serialized and transactional access, exposing applications to a range of potential concurrency anomalies that need to be handled by the application itself. Controlling concurrent data access in monolithic systems is already challenging, but the problem is exacerbated in distributed systems. To make matters worse, the software architecture community provides little systematic engineering guidance on this issue. Aims: In this paper, we report on our study of the effectiveness and applicability of the novel design guidelines we are proposing in this regard. Method: We used action research, conducted in the context of the software architecture design process of a multi-site platform development project. Results: Our hypotheses regarding effectiveness and applicability were accepted in the context of the study. The initial design guidelines were refined throughout the study. Thus, we also contribute concrete guidelines for architecting distributed data-intensive systems with eventually consistent data. The guidelines are an advancement of Domain-Driven Design and provide additional patterns for the tactical design part. Conclusions: Based on our results, we recommend using the guidelines to architect safe eventually consistent systems. Because of the relevance of distributed data-intensive systems, we will drive this research forward and evaluate it in further domains.
Publication: Developing and Operating Artificial Intelligence Models in Trustworthy Autonomous Systems (2021)
Martínez-Fernández, Silverio; Franch, Xavier; Oriol, Marc
Companies dealing with Artificial Intelligence (AI) models in Autonomous Systems (AS) face several problems, such as users' lack of trust in adverse or unknown conditions, gaps between software engineering and AI model development, and operation in a continuously changing operational environment. This work-in-progress paper aims to close the gap between the development and operation of trustworthy AI-based AS by defining an approach that coordinates both activities. We synthesize the main challenges of AI-based AS in industrial settings. We reflect on the research efforts required to overcome these challenges and propose a novel, holistic DevOps approach to put it into practice. We elaborate on four research directions: (a) increased users' trust by monitoring operational AI-based AS and identifying self-adaptation needs in critical situations; (b) integrated agile process for the development and evolution of AI models and AS; (c) continuous deployment of different context-specific instances of AI models in a distributed setting of AS; and (d) holistic DevOps-based lifecycle for AI-based AS.
Publication: Deutsche Normungsroadmap Künstliche Intelligenz (DIN, 2020)
Adler, R.; Marko, Angelina; Nagel, Tobias; Ruf, Miriam; Schneider, Martin A.; Tcholtchev, Nikolay et al.
The German Standardization Roadmap on Artificial Intelligence (AI) aims to provide recommendations for action on AI standardization, since AI is regarded in Germany and Europe as one of the key technologies for future competitiveness in almost all industries. The EU expects the economy to grow strongly in the coming years with the help of AI. This makes the roadmap's recommendations all the more important: they are intended to strengthen German industry and research in the international AI competition, create innovation-friendly conditions, and build trust in the technology.
Publication: Continuously Assessing and Improving Software Quality With Software Analytics Tools: A Case Study (2019)
Martínez-Fernández, Silverio; Franch, Xavier; López, Lidia; Ram, Prabhat; Rodríguez, Pilar; Aaramaa, Sanja; Bagnato, Alessandra; Choras, Michal; Partanen, Jari
In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem to be particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality has also been a key target for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This paper aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product, and whether practitioners intend to use it. Over the course of more than a year, the four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. The quantitative and qualitative analyses provided positive results, i.e., the practitioners' perception of the tool's understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings, and constructive feedback can be used for future improvements. We conclude that the potential for future adoption of quality models within software analytics tools definitely exists, and we encourage other practitioners to use the presented seven challenges and seven lessons learned and adopt them in their companies.
Publication: Four commentaries on the use of students and professionals in empirical software engineering experiments (2018)
Feldt, Robert; Zimmermann, Thomas; Bergersen, Gunnar R.; Falessi, Davide; Juristo, Natalia; Münch, Jürgen; Oivo, Markku; Runeson, Per; Shepperd, Martin; Sjøberg, Dag I.K.; Turhan, Burak
The relative pros and cons of using students or practitioners in experiments in empirical software engineering have been discussed for a long time and continue to be an important topic. Following the recent publication of "Empirical software engineering experts on the use of students and professionals in experiments" by Falessi, Juristo, Wohlin, Turhan, Münch, Jedlitschka, and Oivo (EMSE, February 2018), we received a commentary by Sjøberg and Bergersen. Given that the topic is of great methodological interest to the community and requires nuanced treatment, we invited two editorial board members, Martin Shepperd and Per Runeson, to provide additional views. Finally, we asked the authors of the original paper to respond to the three commentaries. Below you will find the result. Even though we are under no illusion that these views settle the issue, we hope you find them interesting and illuminating, and that they can help the empirical software engineering community navigate some of the subtleties involved when selecting representative samples of human subjects.
Publication: Empirical software engineering experts on the use of students and professionals in experiments (2018)
Falessi, Davide; Juristo, Natalia; Wohlin, Claes; Turhan, Burak; Münch, Jürgen; Oivo, Markku
[Context] Controlled experiments are an important empirical method to generate and validate theories. Many software engineering experiments are conducted with students. It is often claimed that the use of students as participants in experiments comes at the cost of low external validity, while using professionals does not. [Objective] We believe a deeper understanding is needed of the external validity of software engineering experiments conducted with students or with professionals. We aim to gain insight into the pros and cons of using students and professionals in experiments. [Method] We used an unconventional focus group approach and a follow-up survey. First, during a session at ISERN 2014, 65 empirical researchers, including the seven authors, argued and discussed the use of students in experiments with an open mind. Afterwards, we revisited the topic and elicited experts' opinions to foster discussions. Then we derived 14 statements and asked the ISERN attendees, excluding the authors, to provide their level of agreement with the statements. Finally, we analyzed the researchers' opinions and used the findings to further discuss the statements. [Results] Our survey results showed that, in general, the respondents disagreed with us about the drawbacks of professionals. We, on the contrary, strongly believe that no population (students, professionals, or others) can be deemed better than another in absolute terms. [Conclusion] Using students as participants remains a valid simplification of reality needed in laboratory contexts. It is an effective way to advance software engineering theories and technologies but, like any other aspect of study settings, should be carefully considered during the design, execution, interpretation, and reporting of an experiment. The key is to understand which portion of the developer population is represented by the participants in an experiment. Thus, a proposal for describing experimental participants is put forward.
Publication: A Quality Model for Actionable Analytics in Rapid Software Development (2018)
Martínez-Fernández, Silverio; Guzmán, Liliana
Background: Accessing relevant data on the product, process, and usage perspectives of software as well as integrating and analyzing such data is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that take more time to find manually and adds transparency among the perspectives of system, process, and usage.