  • Publication
    Data-Driven Technical Debt Management: Software Engineering or Data Science Challenge?
    Software technical debt (TD) is a relevant software engineering problem. Only if properly managed can TD provide benefits while avoiding risks. Current TD management (TDM) support is limited. Recent advances in software engineering (SE) and data science (DS) promote data-driven TDM. In this paper, we summarize experiences concerning data-driven TDM gained in several research projects with industry. We report challenges and their consequences, propose solutions, and sketch improvement directions.
  • Publication
    Developing and Operating Artificial Intelligence Models in Trustworthy Autonomous Systems
    (2021)
    Martínez-Fernández, Silverio; Franch, Xavier; Oriol, Marc
    Companies dealing with Artificial Intelligence (AI) models in Autonomous Systems (AS) face several problems, such as users' lack of trust in adverse or unknown conditions, gaps between software engineering and AI model development, and operation in a continuously changing operational environment. This work-in-progress paper aims to close the gap between the development and operation of trustworthy AI-based AS by defining an approach that coordinates both activities. We synthesize the main challenges of AI-based AS in industrial settings, reflect on the research efforts required to overcome them, and propose a novel, holistic DevOps approach to put these efforts into practice. We elaborate on four research directions: (a) increasing users' trust by monitoring operational AI-based AS and identifying self-adaptation needs in critical situations; (b) an integrated agile process for the development and evolution of AI models and AS; (c) continuous deployment of different context-specific instances of AI models in a distributed setting of AS; and (d) a holistic DevOps-based lifecycle for AI-based AS.
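    As a rough illustration of research direction (a), the Python sketch below shows a runtime supervisor that watches a sliding window of model confidences and flags a self-adaptation need when the average degrades. All names, thresholds, and the adaptation trigger are illustrative assumptions, not the paper's approach.

    from collections import deque
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Prediction:
        label: str
        confidence: float  # model confidence in [0, 1]

    class RuntimeMonitor:
        """Sliding-window supervisor for an AI-based autonomous system.

        Flags that self-adaptation (e.g., falling back to a safe mode or
        redeploying a model) should be considered once mean confidence
        over the window drops below a threshold. Values are examples only.
        """

        def __init__(self, window: int = 3, min_mean_conf: float = 0.8):
            self.recent = deque(maxlen=window)
            self.min_mean_conf = min_mean_conf

        def observe(self, pred: Prediction) -> bool:
            """Record one prediction; True means adaptation is advised."""
            self.recent.append(pred.confidence)
            full = len(self.recent) == self.recent.maxlen
            return full and mean(self.recent) < self.min_mean_conf

    monitor = RuntimeMonitor()
    for conf in (0.95, 0.91, 0.62, 0.58):  # simulated operation
        if monitor.observe(Prediction("obstacle", conf)):
            print("confidence degraded: consider self-adaptation")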
  • Publication
    Tackling consistency-related design challenges of distributed data-intensive systems: An action research study
    (2021)
    Deßloch, Stefan; Wolff, Eberhard
    Background: Distributed data-intensive systems are increasingly designed to be only eventually consistent. Persistent data is no longer processed with serialized and transactional access, exposing applications to a range of potential concurrency anomalies that need to be handled by the application itself. Controlling concurrent data access in monolithic systems is already challenging, but the problem is exacerbated in distributed systems. To make matters worse, the software architecture community provides little systematic engineering guidance on this issue. Aims: In this paper, we report on our study of the effectiveness and applicability of the novel design guidelines we propose to address it. Method: We conducted action research in the context of the software architecture design process of a multi-site platform development project. Results: Our hypotheses regarding effectiveness and applicability were accepted in the context of the study. The initial design guidelines were refined throughout the study. Thus, we also contribute concrete guidelines for architecting distributed data-intensive systems with eventually consistent data. The guidelines advance Domain-Driven Design with additional patterns for its tactical design part. Conclusions: Based on our results, we recommend using the guidelines to architect safe eventually consistent systems. Given the relevance of distributed data-intensive systems, we will drive this research forward and evaluate the guidelines in further domains.
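    To make the class of anomalies concrete: under eventual consistency, two replicas can update the same aggregate while partitioned, and the application itself must detect and resolve the conflict. The Python sketch below uses version vectors for detection and a domain-specific merge; it illustrates the general technique, not the study's specific guidelines.

    def concurrent(vv_a: dict, vv_b: dict) -> bool:
        """Version vectors are concurrent iff neither dominates the other."""
        keys = set(vv_a) | set(vv_b)
        a_ge = all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)
        b_ge = all(vv_b.get(k, 0) >= vv_a.get(k, 0) for k in keys)
        return not a_ge and not b_ge

    def merge_carts(a: dict, b: dict) -> dict:
        """Domain-specific merge: union the carts, keep max quantity per item."""
        items = set(a) | set(b)
        return {i: max(a.get(i, 0), b.get(i, 0)) for i in items}

    # Two replicas updated the same shopping cart while partitioned.
    vv1, cart1 = {"r1": 2, "r2": 1}, {"book": 1}
    vv2, cart2 = {"r1": 1, "r2": 2}, {"book": 2, "pen": 1}

    if concurrent(vv1, vv2):  # neither update subsumes the other
        print(merge_carts(cart1, cart2))  # {'book': 2, 'pen': 1}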
  • Publication
    Deutsche Normungsroadmap Künstliche Intelligenz
    The German Standardization Roadmap on Artificial Intelligence (Normungsroadmap Künstliche Intelligenz) aims to provide recommendations for action on standardization around AI, which is regarded in Germany and Europe, across almost all sectors, as one of the key technologies for future competitiveness. The EU expects the economy to grow strongly in the coming years with the help of AI. This makes the roadmap's recommendations all the more important: they are intended to strengthen German industry and research in international AI competition, create innovation-friendly conditions, and build trust in the technology.
  • Publication
    Continuously Assessing and Improving Software Quality With Software Analytics Tools: A Case Study
    (2019)
    Martínez-Fernández, Silverio; Franch, Xavier; López, Lidia; Ram, Prabhat; Rodríguez, Pilar; Aaramaa, Sanja; Bagnato, Alessandra; Choras, Michal; Partanen, Jari
    In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality have been key targets for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This paper aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product, and whether practitioners intend to use it. Over the course of more than a year, four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. The quantitative and qualitative analyses provided positive results, i.e., the practitioners' perception of the tool's understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings, and constructive feedback can be used for future improvements. We conclude that the potential for future adoption of quality models within software analytics tools definitely exists, and we encourage other practitioners to draw on the seven challenges and seven lessons learned presented here and adopt them in their companies.
  • Publication
    Q-Rapids tool prototype
    (2018)
    López, Lidia; Martínez-Fernández, Silverio; Gómez, Cristina; Choras, Michal; Kozik, Rafal; Guzmán, Liliana; Franch, Xavier
    Software quality is an essential competitive factor for the success of software companies today. Increasing the quality of software products and services requires adequate integration of quality requirements (QRs) into the software life-cycle, which current rapid software development (RSD) approaches still scarcely support. One of the goals of the Q-Rapids (Quality-aware Rapid Software Development) method is to provide tool support to decision-makers for QR management in RSD. The Q-Rapids method is based on gathering data from multiple heterogeneous sources, aggregating it into quality-related strategic indicators (e.g., customer satisfaction, product quality), and presenting these to decision-makers in a highly informative dashboard. The current release of the Q-Rapids tool provides four sets of functionality: (1) data gathering from source tools (e.g., GitLab, Jira, SonarQube, and Jenkins), (2) aggregation of data into three levels of abstraction (metrics, product/process factors, and strategic indicators), (3) visualization of the aggregated data, and (4) navigation through the aggregated data. The tool has been evaluated by four European companies that follow RSD processes.
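    A minimal Python sketch of the three-level aggregation (metrics into product/process factors into strategic indicators), assuming metrics are already normalized to [0, 1]; the metric names and weights are invented for illustration and do not come from the actual Q-Rapids model.

    # All metrics assumed normalized to [0, 1], higher is better.
    metrics = {"test_success_rate": 0.92, "code_coverage": 0.65,
               "fixed_issue_ratio": 0.80}

    # Product/process factors as weighted averages of metrics.
    factor_defs = {
        "testing_status": {"test_success_rate": 0.6, "code_coverage": 0.4},
        "issue_velocity": {"fixed_issue_ratio": 1.0},
    }

    # Strategic indicators as weighted averages of factors.
    indicator_defs = {
        "product_quality": {"testing_status": 0.7, "issue_velocity": 0.3},
    }

    def weighted(values: dict, weights: dict) -> float:
        return sum(values[k] * w for k, w in weights.items()) / sum(weights.values())

    factors = {f: weighted(metrics, w) for f, w in factor_defs.items()}
    indicators = {i: weighted(factors, w) for i, w in indicator_defs.items()}
    print(factors)     # testing_status = 0.6*0.92 + 0.4*0.65 = 0.812
    print(indicators)  # product_quality = 0.7*0.812 + 0.3*0.80 = 0.8084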
  • Publication
    Four commentaries on the use of students and professionals in empirical software engineering experiments
    (2018)
    Feldt, Robert; Zimmermann, Thomas; Bergersen, Gunnar R.; Falessi, Davide; Juristo, Natalia; Münch, Jürgen; Oivo, Markku; Runeson, Per; Shepperd, Martin; Sjøberg, Dag I.K.; Turhan, Burak
    The relative pros and cons of using students or practitioners in experiments in empirical software engineering have been discussed for a long time and continue to be an important topic. Following the recent publication of "Empirical software engineering experts on the use of students and professionals in experiments" by Falessi, Juristo, Wohlin, Turhan, Münch, Jedlitschka, and Oivo (EMSE, February 2018), we received a commentary by Sjøberg and Bergersen. Given that the topic is of great methodological interest to the community and requires nuanced treatment, we invited two editorial board members, Martin Shepperd and Per Runeson, to provide additional views. Finally, we asked the authors of the original paper to respond to the three commentaries. Below you will find the result. Even though we are under no illusion that these views settle the issue, we hope you find them interesting and illuminating, and that they can help the empirical software engineering community navigate some of the subtleties involved when selecting representative samples of human subjects.
  • Publication
    Quality-aware architectural model transformations in adaptive mashups user interfaces
    (2018)
    Criado, Javier; Martínez-Fernández, Silverio; Ameller, David; Iribarne, Luis; Padilla, Nicolás
    Mashup user interfaces provide their functionality through the combination of different services. The integration of such services can be achieved using reusable and third-party components. Furthermore, these interfaces must be adapted to user preferences, context changes, user interactions, and component availability. Model transformation is a useful mechanism to address this adaptation, but such operations normally focus only on functional requirements. Quality attributes should therefore be included in the adaptation process to obtain the best adapted mashup user interface. This paper proposes a generic quality-aware transformation process to support the adaptation of software architectures. The transformation process has been applied in ENIA, a geographic information system, by constructing a specific quality model for the adaptation of mashup user interfaces. This model is taken into account when evaluating the different transformation alternatives and choosing the one that maximizes the quality assessment. The approach has been validated on a set of adaptation scenarios that are intended to maximize different quality factors and therefore apply distinct combinations of metrics.
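    The selection step, evaluating each transformation alternative against a quality model and keeping the best, can be sketched in Python as weighted scoring; the attributes, weights, and candidates below are invented for illustration and are not taken from ENIA.

    weights = {"usability": 0.5, "performance": 0.3, "reusability": 0.2}

    # Quality assessment of each candidate adapted architecture, in [0, 1].
    candidates = {
        "alternative_A": {"usability": 0.9, "performance": 0.6, "reusability": 0.7},
        "alternative_B": {"usability": 0.7, "performance": 0.9, "reusability": 0.8},
    }

    def score(assessment: dict) -> float:
        return sum(weights[attr] * value for attr, value in assessment.items())

    best = max(candidates, key=lambda name: score(candidates[name]))
    print(best, round(score(candidates[best]), 2))  # alternative_B 0.78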
  • Publication
    A Quality Model for Actionable Analytics in Rapid Software Development
    (2018)
    Martínez-Fernández, Silverio; Guzmán, Liliana
    Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for obtaining reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed to create a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implement it, and evaluate its understandability and relevance. Method: We performed workshops at four companies to determine relevant metrics as well as product and process factors. We also elicited how practitioners use and interpret these metrics and factors when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. We then implemented the Q-Rapids tool to support the use of the quality model as well as the gathering, integration, and analysis of the required data. Afterwards, we installed the tool at the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the model. Results: The evaluation participants perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. They also considered the model relevant for identifying product and process deficiencies (e.g., blocking-code situations). Conclusions: Drawing on heterogeneous data sources, the Q-Rapids quality model enables the detection of problems that would take longer to find manually and adds transparency across the system, process, and usage perspectives.
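    One step such a quality model needs before any aggregation is metric interpretation: mapping a raw measurement onto a normalized [0, 1] utility using practitioner-elicited thresholds. A minimal Python sketch, with invented threshold values rather than the model's elicited ones:

    def utility(raw: float, worst: float, best: float) -> float:
        """Linear interpolation between an unacceptable and a target value,
        clamped to [0, 1]; handles higher-is-better and lower-is-better."""
        span = (raw - worst) / (best - worst)
        return max(0.0, min(1.0, span))

    print(utility(72.0, worst=40.0, best=90.0))  # coverage 72%   -> 0.64
    print(utility(3.0, worst=10.0, best=0.0))    # 3% duplication -> 0.7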
  • Publication
    Towards Automated Data Integration in Software Analytics
    (2018)
    Martínez-Fernández, Silverio; Jovanovic, Petar; Franch, Xavier
    Software organizations want to be able to base their decisions on the latest available data and the real-time analytics derived from it. To support the "real-time enterprise" for software organizations and provide information transparency to diverse stakeholders, we integrate heterogeneous software analytics data sources covering, for example, static code analysis, testing results, issue tracking systems, and network monitoring systems. To deal with the heterogeneity of the underlying data sources, we follow an ontology-based data integration approach in this paper and define an ontology that captures the semantics of the data relevant for software analytics. Furthermore, we focus on the integration of such data sources by proposing two approaches: a static and a dynamic one. We first discuss the current static approach, with a predefined set of analytic views representing software quality factors, and then envision how this process could be automated to dynamically build custom user analyses using a semi-automatic platform for managing the lifecycle of analytics infrastructures.
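    The static approach can be pictured as schema mediation: each source record is mapped into a shared vocabulary before predefined analytic views run over the unified data. In the Python sketch below, the field names and target schema are assumptions for illustration; the paper's actual ontology is richer.

    TARGET_FIELDS = ("artifact", "measure", "value", "source")

    def from_sonarqube(rec: dict) -> dict:
        return {"artifact": rec["component"], "measure": rec["metric"],
                "value": float(rec["value"]), "source": "sonarqube"}

    def from_jira(rec: dict) -> dict:
        return {"artifact": rec["project"], "measure": "open_issues",
                "value": float(rec["openIssues"]), "source": "jira"}

    unified = [
        from_sonarqube({"component": "core", "metric": "coverage", "value": "71.4"}),
        from_jira({"project": "core", "openIssues": 12}),
    ]
    assert all(tuple(r) == TARGET_FIELDS for r in unified)  # shared vocabulary

    # A predefined "analytic view" over the unified records:
    coverage = [r["value"] for r in unified if r["measure"] == "coverage"]
    print(coverage)  # [71.4]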