Which security holes and security methods do IEEE 802.11b and Bluetooth offer, and which standard provides the best security methods for companies? These are the two questions that this thesis addresses. The purpose is to give companies more information about the security aspects that come with using WLANs. An introduction to the subject of WLANs is presented in order to give an overview before the description of the two WLAN standards, IEEE 802.11b and Bluetooth. The thesis gives an overview of how IEEE 802.11b and Bluetooth work and presents an in-depth description of the security issues of the two standards; the security methods available to companies, the security flaws, and what can be done in order to create a secure WLAN are all important aspects of this thesis. In order to give guidance on which WLAN standard to choose, a comparison of the two standards, with the security issues in mind and from a company's point of view, is described. We present our conclusion, which entails a recommendation to companies to use Bluetooth over IEEE 802.11b, since it offers better security methods.
The software engineering community has recognized the importance of addressing security requirements alongside other functional requirements from the beginning of the software development life cycle, and several techniques have been developed to achieve this goal. We therefore conducted a theoretical study that focuses on reviewing and evaluating some of the techniques that are used to model and analyze security requirements. The Abuse Cases, Misuse Cases, Data Sensitivity and Threat Analyses, Strategic Modeling, and Attack Trees techniques are investigated in detail to understand and highlight the similarities and differences between them. We found that using these techniques, in general, helps requirements engineers to specify more detailed security requirements. Also, all of these techniques cover the concepts of security, but at different levels. In addition, the existence of different techniques provides a variety of levels for modeling and analyzing security requirements. This helps requirements engineers decide which technique to use in order to address security issues for the system under investigation. Finally, we found that using only one of these techniques is not sufficient to satisfy the security requirements of the system under investigation. Consequently, we consider it beneficial to combine the Abuse Cases or Misuse Cases techniques with the Attack Trees technique, or to combine the Strategic Modeling and Attack Trees techniques, in order to model and analyze the security requirements of the system under investigation. The focus on the Attack Trees technique is due to the reusability of the produced attack trees; the technique also helps in covering a wide range of attacks, and thus covers security concepts as well as security requirements in a proper way.
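The attack tree concept mentioned above lends itself to a compact illustration. The sketch below is not taken from the study; it assumes a simple cost model in which OR nodes take the cheapest child attack and AND nodes add their children's costs, and the example tree is hypothetical.

```python
# Illustrative sketch of an attack tree: OR nodes take the cheapest child attack,
# AND nodes require all child attacks, so their costs add up.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    name: str
    gate: str = "OR"              # "OR" or "AND"; leaves ignore the gate
    cost: float = 0.0             # estimated attacker cost for a leaf step
    children: List["AttackNode"] = field(default_factory=list)

    def min_cost(self) -> float:
        """Cheapest way for an attacker to achieve this (sub)goal."""
        if not self.children:
            return self.cost
        child_costs = [c.min_cost() for c in self.children]
        return sum(child_costs) if self.gate == "AND" else min(child_costs)

# Hypothetical tree for the goal "read confidential file"
root = AttackNode("read confidential file", gate="OR", children=[
    AttackNode("steal credentials", gate="AND", children=[
        AttackNode("phish user", cost=20),
        AttackNode("bypass 2FA", cost=80),
    ]),
    AttackNode("exploit unpatched server", cost=60),
])
print(root.min_cost())  # -> 60, the exploit path is the cheapest attack
```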
The goal of this master's thesis is to identify and evaluate data mining algorithms that are commonly implemented in modern Medical Decision Support Systems (MDSS). These systems are used in various healthcare units all over the world, and the institutions that host them store large amounts of medical data. This data may contain relevant medical information hidden in patterns buried among the records. Within the research, several popular MDSSs are analyzed in order to determine the most common data mining algorithms they utilize. Three algorithms have been identified: Naïve Bayes, Multilayer Perceptron and C4.5. Prior to the analyses, the algorithms are calibrated: several configurations are tested in order to determine the best settings for each algorithm. Afterwards, a final comparison of the algorithms orders them with respect to their performance. The evaluation is based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology and diabetes. The analyses have shown that it is very difficult to name a single data mining algorithm as the most suitable for medical data, since the results obtained for the algorithms were very similar. However, the final evaluation of the outcomes allowed the Naïve Bayes to be singled out as the best classifier for the given domain, followed by the Multilayer Perceptron and C4.5.
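The comparison described above was carried out in WEKA; the sketch below is only an analogous setup in Python with scikit-learn, using a bundled UCI-style medical dataset and stand-ins for the three classifiers (scikit-learn's decision tree is CART rather than C4.5).

```python
# Analogous sketch of the algorithm comparison: Naive Bayes, a multilayer
# perceptron and a decision tree evaluated with 10-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier   # CART; C4.5 is the WEKA J48 analogue
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
classifiers = {
    "Naive Bayes": GaussianNB(),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "Decision tree (C4.5-like)": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```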
Within software engineering prediction systems, experiments are undertaken primarily to investigate relationships and to measure and compare models' accuracy. This paper discusses our experience and presents useful lessons and guidelines for experimenting with software engineering prediction systems. For this purpose, we use a typical software engineering experimentation process as a baseline. We found that the typical experimentation process is supportive in developing prediction systems, and we have highlighted issues more central to the domain of software engineering prediction systems.
Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps to make the characteristics of, and relationships between, the attributes clearer, which in turn supports informed decision making. The field of software engineering is affected by infrequent, incomplete and inconsistent measurements. Software testing is an integral part of software development, providing opportunities for measurement of process attributes, and measuring the attributes of the software testing process gives management better insight into that process. The aim of this thesis is to investigate the metric support for the software test planning and test design processes. The study comprises an extensive literature study and follows a methodical approach consisting of two steps. The first step analyzes key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of the software test planning and test design processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of the software test planning and test design processes, including metric support for each of the identified attributes. The results of the literature survey showed that there are a number of different measurable attributes for the software test planning and test design processes. The study partitioned these attributes into multiple categories for the two processes and, for each attribute, studied the different existing measurements. A consolidation of these measurements is presented in this thesis, which is intended to provide an opportunity for management to consider improvements in these processes.
Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Therefore, efficient and cost-effective software verification and validation activities are both a priority and a necessity considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, when to stop testing, testing schedules and testing resource allocation need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making important decisions as outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension to the application of search-based techniques within software verification and validation, this thesis further investigates the extent of application of search-based techniques for testing non-functional system properties. Based on the research findings in this thesis it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally seek literature evidence where other search-based techniques are applied for testing of non-functional system properties, hence contributing towards the growing application of search-based techniques in diverse activities within software verification and validation.
The majority of software faults are present in a small number of modules; therefore, accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches, including a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects from the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, GP showed impressive results in comparison with the other techniques for predicting fault-prone modules at both the integration and system test levels. The use of the faults-slip-through metric in general provided good prediction results at the two test levels. In particular: (i) the accuracy of GP is statistically significant in comparison with the majority of the techniques for predicting fault-prone modules at the integration and system test levels; (ii) the faults-slip-through metric has the potential to be a generally useful predictor of fault-proneness at the integration and system test levels.
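The ROC-based evaluation described above can be illustrated with a short sketch. The data and classifier below are placeholders (synthetic data and logistic regression stand in for the eight techniques and the industrial FST data); only the AUC and (PF, PD) computation mirrors the paper's evaluation.

```python
# Sketch of the evaluation: area under the ROC curve plus the (PF, PD) operating
# point, i.e. probability of false alarm vs. probability of detection.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, weights=[0.8], random_state=1)  # 1 = fault-prone
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, scores)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
pd_rate = tp / (tp + fn)   # PD: fault-prone modules correctly flagged
pf_rate = fp / (fp + tn)   # PF: clean modules incorrectly flagged
print(f"AUC={auc:.2f}  PD={pd_rate:.2f}  PF={pf_rate:.2f}")
```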
A number of software reliability growth models (SRGMs) have been proposed in the literature. For several reasons, such as violation of the models' assumptions and the complexity of the models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth, based on weekly fault count data from three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.
An organizational level test strategy needs to incorporate metrics to make the testing activities visible and available for process improvements. The majority of testing measurements are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support the software test planning and test design processes. We have assembled metrics for these two processes to support management in carrying out evidence-based test process improvements and to incorporate suitable metrics as part of an organization-level test strategy. The study is composed of two steps. The first step creates a relevant context by analyzing key phases in the software testing lifecycle, while the second step identifies the attributes of the software test planning and test design processes along with metric support for each of the identified attributes.
Genetic programming (GP) has been found to be effective in finding a model that fits the given data points without making any assumptions about the model structure. This makes GP a reasonable choice for software reliability growth modeling. This paper discusses the suitability of using GP for software reliability growth modeling and highlights the mechanisms that enable GP to progressively search for fitter solutions.
Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, the models' inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different measures in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.
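As a concrete illustration of the symbolic-regression idea used in this and the related GP studies above, the sketch below evolves a fault count model from weekly data. It relies on the third-party gplearn library and on synthetic fault counts; neither the tool nor the data is prescribed by the papers themselves.

```python
# Minimal sketch: x is the week number, y the cumulative fault count; GP evolves
# a model y = f(x) without assuming any particular growth-curve shape.
import numpy as np
from gplearn.genetic import SymbolicRegressor

weeks = np.arange(1, 27).reshape(-1, 1)                       # 26 weeks of testing
faults = np.cumsum(np.random.RandomState(0).poisson(3, 26))   # hypothetical cumulative faults

train_X, test_X = weeks[:20], weeks[20:]
train_y, test_y = faults[:20], faults[20:]

gp = SymbolicRegressor(population_size=500, generations=30,
                       function_set=('add', 'sub', 'mul', 'div', 'log', 'sqrt'),
                       metric='mse', random_state=0)
gp.fit(train_X, train_y)

pred = gp.predict(test_X)
print("evolved model:", gp._program)
print("mean absolute error on held-out weeks:", np.abs(pred - test_y).mean())
```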
The thesis deals with the subject of Digital Rights Management (DRM), more specifically the innovation trends within DRM. The focus is on three driving forces in DRM: first, DRM technologies; second, DRM standards; and third, DRM interoperability. These driving forces are discussed and analyzed in order to explore the innovation trends within DRM. Finally, a multifaceted overview of today's DRM context is formed. One conclusion is that the aspect of Intellectual Property Rights is considered an important indicator of the direction in which DRM innovation is heading.
Requirements prioritization is an important part of developing the right product at the right time. There are different ideas about which method is best to use when prioritizing requirements. This thesis takes a closer look at five different methods and then puts them into a controlled experiment, in order to find out which of the methods would be the best to use. The experiment was designed to find out which method yields the most accurate results, each method's ability to scale up to many more requirements, how long it took to prioritize with the method, and finally how easy the method was to use. These four criteria combined indicate which method is the most suitable, i.e. the best method, to use for prioritizing requirements. The chosen methods are the well-known analytic hierarchy process, the computer algorithm binary search tree, and, from the ideas of extreme programming, the planning game. The fourth method is an old but well-used method, the 100 points method. The last method is a new method, which combines the planning game with the analytic hierarchy process. Analysis of the data from the experiment indicates that the planning game combined with the analytic hierarchy process could be a good candidate. However, the results from the experiment clearly indicate that the binary search tree yields accurate results, is able to scale up, and was the easiest method to use. For these three reasons the binary search tree is clearly the better method to use for prioritizing requirements.
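To make the binary search tree approach concrete, the sketch below builds a tree from pairwise comparisons and reads the priority order out of an in-order traversal. The requirements and the importance oracle are made up for illustration; in the experiment the comparisons were made by human subjects.

```python
# Illustrative sketch of binary search tree prioritization: each new requirement
# is compared pairwise against existing nodes, and an in-order traversal yields
# the priority order.
class Node:
    def __init__(self, req):
        self.req, self.left, self.right = req, None, None

def insert(root, req, is_more_important):
    """Place req in the tree using pairwise comparisons against existing nodes."""
    if root is None:
        return Node(req)
    if is_more_important(req, root.req):
        root.right = insert(root.right, req, is_more_important)
    else:
        root.left = insert(root.left, req, is_more_important)
    return root

def ranking(root):
    """In-order traversal from least to most important requirement."""
    return [] if root is None else ranking(root.left) + [root.req] + ranking(root.right)

# Hypothetical requirements with a hidden 'true' importance as the comparison oracle
importance = {"export to PDF": 3, "single sign-on": 9, "dark mode": 1, "audit log": 6}
root = None
for r in importance:
    root = insert(root, r, lambda a, b: importance[a] > importance[b])
print(list(reversed(ranking(root))))  # most important first
```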
There exist many methods to calculate forced response in mechanical systems. Some methods are slow and the errors introduced are unknown. The paper presents a method that uses digital filters and modal superposition. It is shown how aliasing can be avoided as well as phase errors. The parameters describing the mechanical system are residues and poles, taken from FEA models, from lumped MCK systems, from analytic solutions or from experimental modal analysis. Modal damping may be used. The error in the calculation is derived and is shown to be only a function of the sampling frequency used. When the method is applied to linear mechanical systems in MATLAB it is very fast. The method is extended to incorporate nonlinear components. The nonlinear components could be simple, like hardening or stiffening springs, but may also contain memory, like dampers with hysteresis. The simulations are used to generate test data for development and evaluation of methods for identification of non-linear systems.
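A generic sketch of the modal-superposition-with-digital-filters idea is given below: each residue/pole pair becomes a one-pole IIR filter (here via a plain impulse-invariant transform) that is run over the force signal, and the modal outputs are summed. The paper's own filter design, which also controls aliasing and phase errors, is not reproduced here, and the modal parameters are hypothetical.

```python
# Forced response by modal superposition with digital filters (generic sketch).
import numpy as np
from scipy.signal import lfilter

fs = 2000.0                      # sampling frequency [Hz]
T = 1.0 / fs
t = np.arange(0, 2.0, T)
force = np.random.RandomState(0).randn(t.size)    # hypothetical broadband force

# Hypothetical modal parameters: poles p = -sigma + j*wd, residues R
modes = [(-5 + 1j * 2 * np.pi * 40, 0.8 + 0.1j),
         (-8 + 1j * 2 * np.pi * 120, 0.3 - 0.2j)]

response = np.zeros(t.size)
for p, R in modes:
    for pole, res in ((p, R), (np.conj(p), np.conj(R))):   # include conjugate pair
        b = [T * res]                        # impulse invariance: h[n] = T*R*exp(p*n*T)
        a = [1.0, -np.exp(pole * T)]
        response += lfilter(b, a, force.astype(complex)).real
print(response[:5])
```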
Going offshore has become the norm in current software organizations due to several benefits such as the availability of competent people, cost, proximity to markets and customers, time, and so on. Although Global Software Engineering (GSE) offers many benefits to software organizations, it has also created several challenges for practitioners and researchers, such as culture, communication, coordination and collaboration, team building, and so on. Requirements Engineering (RE) is a human-intensive activity and one of the most challenging and important phases in software development, and it becomes even more challenging in a GSE context because of culture, communication, coordination, collaboration, and so on. Due to the aforementioned GSE factors, requirements understanding has become a challenge for software organizations involved in GSE. Furthermore, Knowledge Management (KM) is considered to be the most important asset of an organization, because it not only enables organizations to efficiently share and create knowledge but also helps in resolving culture, communication and coordination issues, especially in GSE. The aim of this study is to present how KM practices help globally dispersed software organizations in requirements understanding. For this purpose a thorough literature study is performed, along with interviews at two companies, with the intent of identifying useful KM practices and the challenges of requirements understanding in GSE. Based on the analysis of the challenges of requirements understanding in GSE identified both in the literature review and in the industrial interviews, useful KM practices are then presented and discussed to reduce the requirements understanding issues faced in GSE.
Developing software for highly dependable space applications and systems is a formidable task. With new political and market pressure on the space industry to deliver more software at a lower cost, the optimization of its methods and standards must be investigated. The industry has to follow standards that strictly set quality goals and prescribe engineering processes and methods to fulfil them. The overall goal of this study is to evaluate whether the current use of the ECSS standards is cost-effective, whether there are ways to make the process leaner while maintaining quality, and to analyze whether V&V activities can be optimized. This paper presents results from two industrial case studies of companies in the European space industry that follow ECSS requirements and have different V&V activities. The case studies reported here focus on how the ECSS standards are used by the companies, how this has affected their processes, and how their V&V activities can be optimized.
During the last decade, UML has had to face several tricky challenges. For instance, as a single unified, general-purpose modeling language, it should offer simple and explicit semantics applicable to a wide range of domains. Due to a significant shift of focus from software to systems, the "software-centric" attitude of UML has been exposed, so there is a need for a domain-specific language that can address the problems of systems rather than software only; this is the motivation for SysML. In this thesis SysML is evaluated to analyze its suitability for systems engineering applications. Evaluation criteria are established, through which the appropriateness of SysML is observed over the system development life cycle. The study is conducted using a real-life case example, an automobile product. The results of the research not only provide an opportunity to gain insight into the SysML architecture but also offer an idea of the appropriateness of SysML for multidisciplinary product development.
Today's software is more vulnerable to attacks due to increases in complexity, connectivity and extensibility. Securing software is usually considered a post-development activity, and not much importance is given to it during the development of the software. However, the losses that organizations have incurred over the years due to security flaws in software have led researchers to find better ways of securing software. In the light of research done by many researchers, this thesis presents how software can be secured by considering security in different phases of the software development life cycle. A number of security activities have been identified that are needed to build secure software, and it is shown how these security activities are related to the software development activities of the software development lifecycle.
The thesis addresses the following main question: will the administrative labour during football tournaments be made easier with the help of a mobile web service, or will it lead to unnecessary extra work? To be able to give a well-founded answer to this question, the thesis first describes what the technology for such a system might look like and also describes the existing administrative tasks during a football tournament; this gives the reader a deeper understanding for the further reading. The thesis then goes on to describe a concrete system, which is tested at three football tournaments. On the basis of the tests, the interviews, and the analysis of these, the thesis is able to answer our questions. The results presented in the thesis emerged through testing of the system at three chosen football tournaments. The differences in working procedures before and after the introduction of the system have been analyzed, and through interviews the value of these changes has been assessed. The results achieved in the thesis are as follows: the presented mobile web service not only decreases the total work effort of the tournament officials, it also speeds up their work. From this we can conclude that the administrative work that occurs during a football tournament is made easier and does not lead to any unnecessary extra work.
Along with continuously increasing computerization, our expectations on software and hardware reliability increase considerably. Therefore, software reliability has become one of the most important software quality attributes. Software reliability modeling based on test data is done to estimate whether the current reliability level meets the requirements for the product. Software reliability modeling also provides possibilities to predict reliability. The costs of software development and testing, together with profit considerations in relation to software reliability, are among the main motivations for software reliability prediction. Software reliability prediction currently uses different models for this purpose, and parameters have to be set in order to tune a model to fit the test data. A slightly different prediction model, Time Invariance Estimation (TIE), is developed to challenge the models used today. An experiment is set up to investigate whether TIE could be found useful in a software reliability prediction context. The experiment is based on a comparison between the ordinary reliability prediction models and TIE.
This thesis will analyse the pros and cons of a module-based approach versus the currently existing certificate schemes and the proposed requirements for a module-based certificate scheme to serve as a plausible identity verification system. We will present a possible model and evaluate it with respect to the existing solutions and our set of identified requirements.
Organizations are taking computer security more seriously every day, investing huge amounts of money in creating stronger defenses including firewalls, anti-virus software, biometrics and identity access badges. These measures have made the business world more effective at blocking threats from the outside, and made it increasingly difficult for hackers or viruses to penetrate systems. But there are still threats that put organizations at risk, and these threats do not necessarily come from external attackers. In this paper we analyze what the internal threats to organizations are, why we are vulnerable, and the best methods to protect our organizations from insider threats.
This thesis compares the performance of Web Services when hosted on either the J2EE or the .NET platform. The thesis investigates which platform should be chosen to host Web Services, mainly based on performance.
In this study, an attempt is made to apply Decision Support Systems (DSS) in planning for the expansion of energy generation infrastructure in Nigeria. There is an increasing demand for energy in the country, and the study tries to show that DSS modelling, using A Mathematical Programming Language (AMPL) as the modelling tool, can offer satisficing results that would be a good decision support resource when deciding how to allocate investment in energy generation.
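As a flavour of the kind of optimization model such a DSS builds (the study itself uses AMPL), the toy capacity-expansion linear program below is written in Python with SciPy; the technologies, costs and demand figures are hypothetical and not taken from the study.

```python
# Toy capacity-expansion LP: choose new capacity per technology to meet annual
# demand at minimum annualized investment plus operating cost.
from scipy.optimize import linprog

# Decision variables: new capacity in MW for [gas, hydro, solar]
invest_cost = [800, 1500, 1000]      # $ per kW of new capacity (annualized)
oper_cost   = [45, 5, 0]             # $ per MWh generated
cap_factor  = [0.85, 0.50, 0.20]     # fraction of the year each technology can run
hours = 8760
demand_mwh = 2_000_000               # annual energy demand to be met

# Objective: investment per MW + operating cost for the energy each MW produces
c = [invest_cost[i] * 1000 + oper_cost[i] * cap_factor[i] * hours for i in range(3)]
# Demand constraint: sum_i x_i * cap_factor_i * hours >= demand  ->  -A x <= -demand
A_ub = [[-cap_factor[i] * hours for i in range(3)]]
b_ub = [-demand_mwh]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("new capacity (MW) for [gas, hydro, solar]:", res.x)
```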
In market-driven product development, large numbers of requirements flow in continuously. It is critical for product management to select the requirements aligned with the overall business goals and discard others as early as possible. It has been suggested in the literature to utilize product strategies for early requirements triage and selection; however, no explicit method, model or framework has been suggested for how to do this. This thesis presents a model for early requirements triage and selection utilizing product strategies, based on a literature study and interviews with people at two organizations about their requirements triage and selection processes and product strategy formulation. The model is validated statically within the same two organizations.
As software development continues to increase in complexity, involving far-reaching consequences, there is a need for decision support to improve the decision making process in requirements engineering (RE) activities. This research begins with a detailed investigation of the complexity of decision making during RE activities on organizational, product and project levels. Secondly, it presents a conceptual model which describes the RE decision making environment in terms of stakeholders, information requirements, decision types and business objectives. The purpose of this model is to facilitate the development of decision support systems in RE and to help further structure and analyse the decision making process in RE.
Since its introduction into software engineering, software inspection has been viewed as a cost-effective way of increasing software quality. Despite this, many questions remain unanswered regarding, for example, ideal team size or cost effectiveness. This paper addresses some of these questions by performing an analysis using 30 published data sets from empirical experiments on software inspections. The main question concerns determining a suitable team size for software inspections. The effectiveness of different team sizes is also studied. Furthermore, the differences in mean effectiveness between different team sizes are investigated based on the inspection environmental context, document types and reading technique. It is concluded that it is possible to choose a suitable team size based on the effectiveness of inspections, which can be used as a tool to assist in the planning of inspections. A particularly interesting result is that the variation in effectiveness between different teams is considerably higher for certain types of documents than for others. Our findings contain important information for anyone planning, controlling or managing software inspections.
Many software projects run over budget or face failures during operation. One major reason for this is that software companies develop the wrong software due to misinterpretation of requirements. Requirements engineering (RE) is the well-known discipline within software engineering that deals with this problem: it is the process of eliciting, analyzing and specifying requirements so that there is no ambiguity between the development company and the customers. An emerging sub-discipline within requirements engineering is requirements engineering for market-driven projects, which deals with the requirements engineering of a product targeting a mass market. In this thesis, a maturity model is developed which can be used to assess the maturity of the requirements engineering process for market-driven projects. The objective of this model is to provide a quick assessment tool through which a company can determine the strengths and weaknesses of its requirements engineering process.
The importance of business process improvements aimed at increasing product quality in the software industry is steadily growing. Quality managers in industry need effective and tangible tools for decision making in development projects and for locating improvement areas. Measurement programs are a widespread approach to quality improvement in software processes, but the use of comprehensive quality models is resource intensive. These models do not primarily focus on measurements, which exposes the need for a quick and direct technique for identifying and defining measurements in projects that lack the resources for, or the need of, comprehensive measurement programs. This thesis examines and compares current quality assurance models with respect to measurements, resulting in the Measurement Discovery Process, built from selected parts of the PSM and GQM models. The process is applied to an industrial project with the aforementioned constraints, which produces a set of measurements that are then evaluated. This also forms the basis for an evaluation of the Measurement Discovery Process. The application and evaluation of the process show its general applicability to projects with similar constraints, as well as the importance of formal processes in the target project and of extensive domain knowledge among those who implement the measurements. The Measurement Discovery Process is subject to future improvements, but it is nevertheless a clear step towards the rapid derivation of concrete performance measurements for quality improvement in business processes.
Software vulnerabilities are introduced into programs during their development: architectural flaws are introduced during planning and design, while implementation faults are created during coding. Penetration testing is often used to detect these vulnerabilities. This approach is expensive because it is performed late in development, and any correction would increase lead time. An alternative is to detect and correct vulnerabilities in the phase of development where they are the least expensive to detect and correct. Source code audits have often been suggested and used to detect implementation vulnerabilities. However, manual audits are time consuming and require extensive expertise to be efficient. A static code analysis tool could achieve the same results as a manual audit but in a fraction of the time. Through a set of case studies and experiments at Ericsson AB, this thesis investigates the technical capabilities and limitations of using a static analysis tool as an early vulnerability detector. The investigation is extended to the human factor by examining how the developers interact with and use the static analysis tool. The contributions of this thesis include the identification of the tool's capabilities, so that further security improvements can focus on other types of vulnerabilities. By using static analysis early in development, possible cost savings are identified. Additionally, the thesis presents the limitations of static code analysis, the most important being the incorrect warnings reported by static analysis tools. In addition, a development process overhead was deemed necessary to successfully use static analysis in an industry setting.
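To give a feel for what a static analysis tool does, the toy checker below walks the abstract syntax tree of a source file and flags a few calls commonly regarded as risky. It is only an illustration of the principle; the commercial tool studied in the thesis performs far more elaborate data-flow analyses, and the list of flagged calls is made up.

```python
# Toy static analysis: inspect source files without running them and warn on
# calls that are known to be risky.
import ast
import sys

RISKY_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

def call_name(node: ast.Call) -> str:
    f = node.func
    if isinstance(f, ast.Name):
        return f.id
    if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
        return f"{f.value.id}.{f.attr}"
    return ""

def scan(path: str) -> None:
    tree = ast.parse(open(path).read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and call_name(node) in RISKY_CALLS:
            print(f"{path}:{node.lineno}: warning: call to {call_name(node)}")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        scan(p)
```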
Automated static code analysis is an efficient technique to increase the quality of software during early development. This paper presents a case study in which mature software with known vulnerabilities is subjected to a static analysis tool. The value of the tool is estimated based on reported failures from customers. An average of 17% cost savings would have been possible if the static analysis tool had been used. The tool also had a 30% success rate in detecting known vulnerabilities and at the same time found 59 new vulnerabilities in the three examined products.
Cohesion and coupling are considered among the most important properties for evaluating the quality of a design. In the context of OO software development, cohesion means the relatedness of the public functionality of a class, whereas coupling stands for the degree of dependence of a class on other classes in an OO system. In this thesis, a new metric is proposed that measures class cohesion on the basis of the relative relatedness of the public methods to the overall public functionality of a class, using a new concept of a subset tree. A set of metrics is also proposed for measuring class coupling based on three types of UML relationships, namely association, inheritance and dependency. Reasonable metrics for measuring cohesion and coupling are expected to share the same set of input data, and this sharing of input data suggests the existence of mutual relationships between them. Based on these potential relationships, research questions have been formulated, and an attempt is made to answer them with the help of an experiment on the OO system FileZilla. The mutual relationships between class cohesion and class coupling have been analyzed statistically while also considering OO metrics for size and reuse. Relationships among the pairs of metrics have been discussed, and results are drawn in accordance with the observed correlation coefficients. A study of software evolution with the help of the class cohesion and class coupling metrics has also been performed, and the observed trends have been analyzed.
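The proposed subset-tree metric is not reproduced here, but the sketch below illustrates the general idea of measuring class cohesion from source code: it estimates cohesion as the fraction of public method pairs that share at least one instance attribute (an LCOM-style measure), using a Python class purely as a convenient example.

```python
# LCOM-style cohesion illustration: public methods are 'related' if they touch
# a common instance attribute.
import ast
from itertools import combinations

def method_attribute_use(class_src: str):
    """Map each public method name to the set of self.<attr> names it touches."""
    cls = ast.parse(class_src).body[0]
    uses = {}
    for m in cls.body:
        if isinstance(m, ast.FunctionDef) and not m.name.startswith("_"):
            uses[m.name] = {n.attr for n in ast.walk(m)
                            if isinstance(n, ast.Attribute)
                            and isinstance(n.value, ast.Name) and n.value.id == "self"}
    return uses

def cohesion(class_src: str) -> float:
    uses = method_attribute_use(class_src)
    pairs = list(combinations(uses.values(), 2))
    if not pairs:
        return 1.0
    shared = sum(1 for a, b in pairs if a & b)
    return shared / len(pairs)      # 1.0 = every pair of public methods is related

example = """
class Account:
    def deposit(self, x): self.balance += x
    def withdraw(self, x): self.balance -= x
    def set_owner(self, o): self.owner = o
"""
print(cohesion(example))   # 1 of 3 method pairs share state -> ~0.33
```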
This thesis provides a software prototype of a Container Terminal (CT) management system built with Multi-Agent Systems technology. The goal of this research work was to address the management issues in a CT. The software prototype can be used as simulation software that helps terminal managers take the necessary decisions for better CT productivity. CTs struggle with making proper management decisions: many policies are implemented, but applying the right policy at the right time is the main issue. With simulation software it is possible to visualize the effects of decisions taken by applying a policy and to see the expected output, which can really improve the performance of a CT. The management decision problem is addressed by modeling the whole CT in a computer modeling language. The prototype represents all the actors appearing in a CT as agents, and these agents are responsible for carrying out certain tasks. The prototype, along with a partial implementation, is the final contribution. The model is proposed as a web-based system, which removes the platform dependency problem and provides online availability.
In today's fast-moving market, companies need to be able to react quickly and effectively to changes both in their competitors' behaviour and in their own technical environment. In this thesis I have examined the agile attributes of a number of companies in Stockholm, focusing on three agile concepts: Scrum, eXtreme Programming and Test Driven Development. The work began with a pre-study, in which I identified the criteria a company needs to fulfil in order to be considered agile. This resulted in four categories of attributes: Quality, Flexibility, Communication and Competence. After the pre-study, these attributes were examined through a combination of a questionnaire survey and a study of a single company. The results mostly point towards agile behaviour at the examined companies, but I have also been able to show that much work remains, for example in the area of communication, and also regarding how the observed companies apply the three examined agile concepts.
It is important for a software company to maximize value creation for a given investment. The purpose of requirements engineering activities is to add business value that is accounted for in terms of return on investment of a software product. This paper provides insight into the release planning processes used in the software industry to create software product value, by presenting three case studies. It examines how IT professionals perceive value creation through requirements engineering and how the release planning process is conducted to create software product value. It also presents to what degree the major stakeholders' perspectives are represented in the decision-making process. Our findings show that the client and market base of the software product represents the most influential group in the decision to implement specific requirements. This is reflected both in terms of deciding the processes followed and the decision-making criteria applied when selecting requirements for the product. Furthermore, the management of software product value is dependent on the context in which the product exists. Factors, such as the maturity of the product, the marketplace in which it exists, and the development tools and methods available, influence the criteria that decide whether a requirement is included in a specific project or release.
Software qualities are in many cases tacit and hard to measure. Thus, there is a potential risk that they get lower priority than deadlines, cost and functionality. Yet software qualities impact customers, profits and even developer efficiency. This paper presents a method to evaluate the priority of software qualities in an industrial context. The method is applied in an exploratory case study, where the ISO 9126 model for software quality is combined with Theory-W to create a process for evaluating the alignment between success-critical stakeholder groups in the area of software product quality. The results of the case study using this method are then presented and discussed. It is shown that the method provides valuable information about software qualities.
Applying the Rational Unified Process (RUP) in a project means developing a set of models before the system can be implemented. The models depict the essentials of the system, from requirements to detailed design. They facilitate getting a system that has appropriate and rich documentation (and is therefore highly maintainable) and that addresses user needs. However, creation of the models may cause overhead, since a lot of work has to be put into elaborating the artefacts. In this paper a method that makes RUP more efficient is proposed. The method makes use of the fact that every subsequent model is developed based on the previous model; in other words, models are successively transformed from requirements up to executable code. In particular, the design model is based on the analysis model. The proposed method applies automatic model transformation from an analysis model to a design model. Firstly, an approach for performing automatic transformation is chosen. Secondly, a tool applying this approach is implemented. Finally, the transformation tool is tested and evaluated in an empirical study. The results show that automation of model transformation may be beneficial, and can therefore help in getting better systems in a shorter time.
When teaching material is translated, text-based examples can lose their meaning, analogies become meaningless, alphabets incompatible, etc. We focus on some principles for making teaching examples work across very different languages and cultures. We present a case study based on material that is freely available from CSunplugged.org. The English material has been adapted to Japanese, Chinese, Korean and Swedish conditions. (Computer science education, kinesthetic learning, translation)
The quality of a product is commonly defined by its ability to satisfy stakeholder needs and expectations. Therefore, it is important to find, select, and plan the content of a software product to maximize the value for internal and external stakeholders. This process is traditionally referred to as requirements engineering in the software industry, while it is often referred to as product management in industries with a larger market focus. As an increasing number of software products are delivered to a market instead of single customers, the need for product management in software companies is increasing. As a side effect, the need for mechanisms supporting decisions regarding the content of software products also increases. While decision support within requirements engineering and product management is a broad area, requirements prioritization together with release planning and negotiation are considered some of the most important decision activities. This is particularly true because these activities support decisions regarding the content of products, and are hence drivers for quality. At the same time, requirements prioritization is seen as an integral and important component in both requirements negotiation (with single customers) and release planning (with markets) in incremental software development. This makes requirements prioritization a key component in software engineering decision support, in particular as input to more sophisticated approaches for release planning and negotiation, where decisions about what and when to develop are made. This thesis primarily focuses on evolving the current body of knowledge in relation to release planning in general and requirements prioritization in particular. The research is carried out by performing qualitative and quantitative studies in industrial and academic environments with an empirical focus. Each of the presented studies has its own focus and scope while together contributing to the research area. Together they answer questions about why and how requirements prioritization should be conducted, as well as what aspects should be taken into account when making decisions about the content of products. The primary objective of the thesis is to give guidelines on how to evolve requirements prioritization to better facilitate decisions regarding the content of software products. This is accomplished by giving suggestions on how to perform research to evolve the area, by evaluating current approaches and suggesting ways in which they can be improved, and by giving directions on how to align and focus future research to be more successful in the development of decision-support approaches. This means that the thesis solves problems with requirements prioritization today, and gives directions and support on how to evolve the area in a successful way.
In everyday life, humans confront situations where different decisions have to be made. Such decisions can be non-trivial even though they often are relatively simple, such as which bus to take or which flavor of a soft drink to buy. When facing decisions of more complex nature, and when more is at stake, they tend to get much harder. It is often possible to deal with such decisions by prioritizing different alternatives to find the most suitable one. In software engineering, decision-makers are often confronted with situations where complex decisions have to be made, and where the concept of prioritization can be utilized. Traditionally in software engineering, discussions about prioritization have focused on the software product. However, when defining or improving software processes, complex decisions also have to be made. In fact, software products and software processes have many characteristics in common which invite thoughts about using prioritization when developing and evolving software processes as well. The results presented in this thesis indicate that it is possible to share results and knowledge regarding prioritization between the two areas. In this thesis, the area of prioritization of software products is investigated in detail and a number of studies where prioritizations are performed in both process and product settings are presented. It is shown that it is possible to use prioritization techniques commonly used in product development also when prioritizing improvement issues in a software company. It is also shown that priorities between stakeholders of a software process sometimes differ, just as they do when developing software products. The thesis also presents an experiment where different prioritization techniques are evaluated with regard to ease of use, time consumption, and accuracy. Finally, an investigation of the suitability of students as subjects when evaluating prioritization techniques is presented.
This chapter provides an overview of techniques for the prioritization of requirements for software products. Prioritization is a crucial step towards making good decisions regarding product planning for single and multiple releases. Various aspects of functionality are considered, such as importance, risk, and cost. Prioritization decisions are made by stakeholders, including users, managers, developers, or their representatives. Methods are given for combining individual prioritizations based on overall objectives and constraints. A range of different techniques and aspects are applied to an example to illustrate their use. Finally, limitations and shortcomings of current methods are pointed out, and open research questions in the area of requirements prioritization are discussed.
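As a small sketch of one technique commonly covered in such overviews, the analytic hierarchy process (AHP) derives priority weights from a pairwise comparison matrix via its principal eigenvector. The requirements and judgements below are hypothetical, not taken from the chapter's example.

```python
# AHP sketch: requirements compared pairwise on a 1-9 scale; weights come from
# the principal eigenvector of the reciprocal comparison matrix.
import numpy as np

reqs = ["offline mode", "export to PDF", "single sign-on"]
# A[i, j] = how much more important reqs[i] is than reqs[j]
A = np.array([[1.0, 3.0, 1/5],
              [1/3, 1.0, 1/7],
              [5.0, 7.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio: how self-contradictory the pairwise judgements are
n = len(reqs)
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.58                      # 0.58 is Saaty's random consistency index for n = 3
for r, w in sorted(zip(reqs, weights), key=lambda x: -x[1]):
    print(f"{r}: {w:.2f}")
print(f"consistency ratio: {cr:.2f}  (values below 0.10 are usually acceptable)")
```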