Software Engineering Experiments in a Systematic Way

Chin Wan*

Department of Computer and Information Science, University of Indianapolis, Indiana, USA

*Corresponding Author:
Chin Wan
Department of Computer and Information Science, University of Indianapolis, Indiana, USA
E-mail: Wan_C@edu.us

Received date: November 18, 2022, Manuscript No. IJAREEIE-22-15695; Editor assigned date: November 21, 2022, PreQC No. IJAREEIE-22-15695 (PQ); Reviewed date: December 02, 2022, QC No. IJAREEIE-22-15695; Revised date: December 12, 2022, Manuscript No. IJAREEIE-22-15695 (R); Published date: December 19, 2022, DOI: 10.36648/ijareeie.5.12.59

Citation: Wan C (2022) Software Engineering Experiments in a Systematic Way. Int J Adv Res Vol.5 No.12: 59.

Description

The effects of an experimental treatment are measured by an effect size. If effect sizes are not considered in addition to statistical significance, conclusions based on the results of hypothesis testing may be incorrect. The review examines the practice of reporting effect sizes, summarizes the standardized effect sizes found in the experiments, discusses the results, and suggests ways to improve practice. Effect sizes, either standardized or unstandardized, were reported in 29% of the experiments. Beyond a few citations to established conventions, there was no discussion of how the effect sizes should be interpreted in terms of their practical significance. The standardized effect sizes in the reviewed experiments were comparable to observations in psychology studies and slightly larger than behavioral science conventions.

Software engineering experiments examine the effect of different treatments (such as a process, method, technique, language, or tool) on measured outcomes (time, effectiveness, quality, efficiency, etc.). The magnitude of the relationship between treatment variables and outcome variables is called an effect size; it is calculated from sample data in order to draw conclusions about a population, analogous to the idea of hypothesis testing. The effect size indicates the degree to which the phenomenon under study is present in the population. Effect size measures include correlations, odds ratios, and differences between means, among others. Effect sizes can be used for comparison purposes, as well as in meta-analyses, statistical power analyses, and the analysis and reporting of experimental results. For these uses, effect sizes or sufficient data for estimating them must be reported. One example of an effect in software engineering that we may wish to investigate through experiments is the difference in the number of defects detected by one inspection method versus another. This unknown effect is called the population effect size. Because we do not have access to the entire population of subjects that fall within the scope of our research questions, it cannot be computed directly; it must be estimated from sample data.
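As an illustration of the kind of standardized effect size discussed above, the sketch below computes Cohen's d for the difference in defect counts detected under two inspection methods. The data and group labels are hypothetical and serve only to show the calculation; they are not taken from the reviewed experiments.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two samples."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation across both groups.
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical defect counts per subject for two inspection methods.
method_a = [12, 15, 9, 14, 11, 13]
method_b = [8, 10, 7, 11, 9, 8]

d = cohens_d(method_a, method_b)
print(f"Cohen's d = {d:.2f}")  # prints 1.89, a large effect by behavioral science conventions
```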

Agile Method

An emerging strategy for addressing the long timelines and high costs typically associated with software development projects is the combination of agile methods with distributed software development via remote teams. However, a number of obstacles must be taken into account and mitigated when projects are implemented using an agile model and distributed human resources. Our work aims to achieve multiple goals. To begin, we wish to understand the circumstances and motives behind the adoption of Distributed Agile Software Engineering (DASE) practices. Second, we would like to learn more about the most significant threats to the DASE method and the available measures to mitigate them. Last but not least, we would like to identify which of the various agile methodologies have been successfully adopted by the community. We intend to support our findings by examining how strong the evidence reported in the literature is. We found that software tools are scarce and that most metrics focus on reusability at the class level. Additionally, not all factors affecting reusability have the same impact on determining software component reusability. Although previous research has frequently discussed how complexity affects software reusability, we found that only a small number of complexity metrics were intended for evaluating reusability. We have identified a number of unsolved issues and gaps in the field, including a lack of quantifiable reusability measurements, of adequate software tools, and of metrics that directly measure reusability. In an effort to circumvent the drawbacks of individual estimation methods, a more recent strategy known as Ensemble Effort Estimation (EEE) has been adopted: the effort predicted by an EEE method is obtained by combining the results of several individual methods. EEE techniques can be divided into two groups, homogeneous and heterogeneous.
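To make the EEE idea concrete, the sketch below combines the predictions of several individual estimation techniques into a single ensemble estimate. The individual estimators, the project features, and the choice of the median as the combination rule are illustrative assumptions, not a specific method evaluated above.

```python
import statistics
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Hypothetical historical projects: [size in KLOC, team size] -> effort in person-months.
X_train = [[10, 3], [25, 5], [40, 8], [60, 10], [80, 12]]
y_train = [24, 55, 90, 140, 190]

# A heterogeneous ensemble: distinct individual techniques trained on the same data.
estimators = [
    LinearRegression(),
    KNeighborsRegressor(n_neighbors=2),
    DecisionTreeRegressor(random_state=0),
]
for est in estimators:
    est.fit(X_train, y_train)

def ensemble_effort(project):
    """Combine the individual predictions (here via the median) into one estimate."""
    predictions = [est.predict([project])[0] for est in estimators]
    return statistics.median(predictions)

print(ensemble_effort([50, 9]))  # ensemble effort estimate for a new project
```

A homogeneous ensemble would instead reuse a single technique with at least two different configurations (for example, several `KNeighborsRegressor` instances with different values of `n_neighbors`).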

Systematic Review

A heterogeneous ensemble is constructed from distinct individual techniques, whereas a homogeneous ensemble is constructed from a single technique with at least two distinct configurations.

We conducted a systematic review of papers that reported experiences with Systematic Reviews (SRs) or discussed strategies for enhancing the SR process. The studies were categorized according to the stage of the SR process they addressed, their relevance to education or novice issues, and whether or not they advocated the use of textual analysis tools. It therefore seems appropriate to determine the current state of such studies in software engineering and whether there is evidence supporting the revision and/or expansion of the software engineering SR guidelines. To this end, we conducted a systematic review of papers that either discuss issues with the current SR guidelines or suggest solutions. We identify the specific research questions we address, report related research, and discuss the aims of our study. We describe the method we used to search for and select the papers, as well as the fundamental limitations of our strategy, and report on the validity of our search and selection process. Additionally, we report on the reliability of our procedure for data extraction and quality assessment. We then present the information we gathered and synthesized from the included papers, discuss the limitations of our study and our findings, and close with our conclusions.

We propose omitting the suggestion that structured questions be used to create search strings and replacing it with the suggestion that a standard based on a limited manual search be used to aid in the creation of search strings and the evaluation of the search process. Tools for textual analysis may be useful for inclusion and exclusion decisions and for the construction of search strings, but they need to be evaluated more thoroughly. Tools to manage the SR process would be helpful to SE researchers, but they need to be independently validated. Quality assessment of studies using a variety of empirical methods remains a major issue. This section presents our various validity checks, including the reliability of our data extraction and quality assessment processes, as well as the results of our search and selection process.
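As an example of how a textual analysis tool might support inclusion and exclusion decisions during study selection, the sketch below ranks candidate abstracts by their TF-IDF similarity to a query derived from the research questions. The query, abstracts, and any cut-off are hypothetical; this is only one possible form such support could take, not a tool evaluated in the review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical query built from the review's research questions.
query = "systematic review guidelines search string construction software engineering"

# Hypothetical candidate abstracts returned by the database search.
abstracts = [
    "We report experiences applying systematic review guidelines in software engineering.",
    "A controlled experiment on pair programming productivity in industry.",
    "Improving search string construction for systematic literature reviews.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([query] + abstracts)

# Similarity of each abstract to the query; higher scores suggest inclusion candidates.
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for abstract, score in sorted(zip(abstracts, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {abstract}")
```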
