51 - 100 of 5497
  • 51. Afzal, Wasif
    et al.
    Torkar, Richard
    A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data, 2008. Conference paper (Refereed)
    Abstract [en]

    There have been a number of software reliability growth models (SRGMs) proposed in the literature. Due to several reasons, such as violation of the models' assumptions and the complexity of the models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.

  • 52.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Torkar, Richard
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Incorporating Metrics in an Organizational Test Strategy, 2008. Conference paper (Refereed)
    Abstract [en]

    An organizational level test strategy needs to incorporate metrics to make the testing activities visible and available to process improvements. The majority of testing measurements that are done are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support software test planning and test design processes. We have assembled metrics in these two process types to support management in carrying out evidence-based test process improvements and to incorporate suitable metrics as part of an organization level test strategy. The study is composed of two steps. The first step creates a relevant context by analyzing key phases in the software testing lifecycle, while the second step identifies the attributes of software test planning and test design processes along with metric(s) support for each of the identified attributes.

  • 53. Afzal, Wasif
    et al.
    Torkar, Richard
    On the application of genetic programming for software engineering predictive modeling: A systematic review, 2011. In: Expert Systems with Applications, ISSN 0957-4174, Vol. 38, no. 9, pp. 11984-11997. Review article (Refereed)
    Abstract [en]

    The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of the literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.

  • 54. Afzal, Wasif
    et al.
    Torkar, Richard
    Suitability of Genetic Programming for Software Reliability Growth Modeling, 2008. Conference paper (Refereed)
    Abstract [en]

    Genetic programming (GP) has been found to be effective in finding a model that fits the given data points without making any assumptions about the model structure. This makes GP a reasonable choice for software reliability growth modeling. This paper discusses the suitability of using GP for software reliability growth modeling and highlights the mechanisms that enable GP to progressively search for fitter solutions.

  • 55. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards benchmarking feature subset selection methods for software fault prediction, 2016. In: Studies in Computational Intelligence, Springer, 2016, Vol. 617, pp. 33-58. Book chapter (Refereed)
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve (the AUC value averaged over 10-fold cross-validation runs) was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve the classification accuracy of NB and C4.5. There is no single best FSS method for all datasets, but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries. © Springer International Publishing Switzerland 2016.
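The AUC measure used in the study above can be computed directly from ranked scores without any ML library; a minimal sketch, where the labels and scores are made-up illustrations rather than the paper's data:

```python
def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive instance is scored higher than a
    randomly chosen negative one, counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one instance of each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# A perfect ranking scores 1.0; an uninformative one scores 0.5.
print(auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # -> 1.0
```

In a benchmark like the one above, this value would be averaged over the test folds of each cross-validation run.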

  • 56. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A Systematic Mapping Study on Non-Functional Search-Based Software Testing, 2008. Conference paper (Refereed)
  • 57. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A systematic review of search-based testing for non-functional system properties, 2009. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, no. 6, pp. 957-976. Journal article (Refereed)
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising because exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work on non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process and published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants, including linear genetic programming) and swarm intelligence methods. The review reports on different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.

  • 58. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Prediction of fault count data using genetic programming, 2008. Conference paper (Refereed)
    Abstract [en]

    Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, the models' inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different measures in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.

  • 59.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Torkar, Richard
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Feldt, Robert
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Resampling Methods in Software Quality Classification, 2012. In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 22, no. 2, pp. 203-223. Journal article (Refereed)
    Abstract [en]

    In the presence of a number of algorithms for classification and prediction in software engineering, there is a need for a systematic way of assessing their performance. The performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on which modeling technique or set of predictor variables is most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. Therefore, it is desirable to examine the influence of various resampling methods and to quantify possible differences. Objective and method: This study empirically compares five common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation and non-parametric bootstrapping) using 8 publicly available data sets with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. Location of (PF, PD) pairs in the ROC (receiver operating characteristics) space and area under an ROC curve (AUC) are used as accuracy indicators. Results: The results show that in terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results are in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the different resampling methods using GP and MLR. Conclusion: There can be certain data set properties responsible for insignificant differences between the resampling methods based on AUC. These include imbalanced data sets, insignificant predictor variables and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is a preferred method based on the location of (PF, PD) pair data in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
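Two of the resampling schemes compared above, k-fold cross-validation and non-parametric bootstrapping, can be sketched with the standard library alone; the sizes and seeds below are arbitrary illustrations, not the study's setup:

```python
import random

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint test folds (k-fold CV).
    Each instance appears in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def bootstrap_sample(n, seed=0):
    """Draw n training indices with replacement; the out-of-bag
    indices (on average ~36.8% of the data) serve as the test set."""
    rng = random.Random(seed)
    train = [rng.randrange(n) for _ in range(n)]
    oob = [i for i in range(n) if i not in set(train)]
    return train, oob

folds = kfold_indices(100, 10)
train, oob = bootstrap_sample(100)
```

A classifier would be trained once per fold (or bootstrap replicate) and its accuracy indicators averaged, as in the AUC-based comparison above.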

  • 60. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Search-based prediction of fault count data, 2009. Conference paper (Refereed)
    Abstract [en]

    Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, like matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantage that the evolution is not dependent on a particular structure of the model and is also independent of any assumptions, which are common in traditional time-domain parametric software reliability growth models. This research applies genetic programming to fault count prediction experiments and compares the results with traditional approaches to assess efficiency gains.
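The distinction drawn above, searching over function structure rather than just coefficients, can be sketched with a toy fitness function; the expression trees and target data here are hypothetical illustrations, not the paper's models:

```python
# An expression tree is 'x', a numeric constant, or (op, left, right).
def evaluate(tree, x):
    """Recursively evaluate an expression tree at a point x."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    if op == '+': return a + b
    if op == '-': return a - b
    if op == '*': return a * b
    raise ValueError("unknown operator: %r" % op)

def fitness(tree, data):
    """Mean squared error over (x, y) samples; a GP run would minimise
    this while varying the tree structure itself, not just constants."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

# Hypothetical target: y = x*x + 1, sampled at a few points.
data = [(x, x * x + 1) for x in range(-3, 4)]
exact = ('+', ('*', 'x', 'x'), 1)   # fits perfectly (MSE 0)
worse = ('+', 'x', 1)               # wrong structure, higher MSE
print(fitness(exact, data), fitness(worse, data))
```

Selection then favours trees like `exact` over `worse`, which is how GP evolves the model structure directly from the data.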

  • 61.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola.
    Torkar, Richard
    Blekinge Tekniska Högskola.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Genetic programming for cross-release fault count predictions in large and complex software projects, 2010. In: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica, IGI Global, Hershey, USA, 2010. Book chapter (Refereed)
    Abstract [en]

    Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs due to faults. Predicting faults might be even more important with the emergence of short-timed and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated that there is a need i) to improve the validity of results by having comparisons among a number of data sets from a variety of software, ii) to use appropriate model evaluation measures and iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet received sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together represent several years of development and are from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets. It also has comparatively less model bias. Qualitatively, ease of configuration and complexity are weaker points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria while remaining average on most of the qualitative measures.

  • 62. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Feldt, Robert
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Prediction of faults-slip-through in large software projects: an empirical evaluation, 2014. In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 22, no. 1, pp. 51-86. Journal article (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software test process. Therefore, determination of which software test phases to focus improvement work on has considerable industrial interest. We evaluate a number of prediction techniques for predicting the number of faults slipping through to unit, function, integration, and system test phases of a large industrial project. The objective is to quantify improvement potential in different test phases by striving toward finding the faults in the right phase. The results show that a range of techniques are found to be useful in predicting the number of faults slipping through to the four test phases; however, the group of search-based techniques (genetic programming, gene expression programming, artificial immune recognition system, and particle swarm optimization-based artificial neural network) consistently give better predictions, having a representation at all of the test phases. Human predictions are consistently better at two of the four test phases. We conclude that the human predictions regarding the number of faults slipping through to various test phases can be well supported by the use of search-based techniques. A combination of human and an automated search mechanism (such as any of the search-based techniques) has the potential to provide improved prediction results.

  • 63.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola.
    Torkar, Richard
    Blekinge Tekniska Högskola.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wikstrand, Greger
    KnowIT YAHM Sweden AB, SWE.
    Search-based prediction of fault-slip-through in large software projects, 2010. In: Proceedings - 2nd International Symposium on Search Based Software Engineering, SSBSE 2010, IEEE, 2010, pp. 79-88. Conference paper (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determination of which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.

  • 64.
    Agardh, Johannes
    et al.
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Johansson, Martin
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Pettersson, Mårten
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Designing Future Interaction with Today's Technology, 1999. Independent thesis, Advanced level (Master's degree). Student thesis (Degree project)
    Abstract [sv]

    During our master's thesis work we accompanied and studied three truck drivers on a number of occasions. The purpose of the study was, among other things, to gain an understanding of how they find their way to the right address, and thereby to see whether they could be helped by a navigation aid. We produced a design proposal inspired by what emerged when we analysed the material from the field studies, and by design ideas such as Calm Technology and Tacit Interaction. In the thesis we describe our design proposal and discuss, among other things, how the design paradigms Calm Technology and Tacit Interaction can be used in the design of IT artefacts. We conclude that the new design concepts Calm Technology and Tacit Interaction concern the relationship between technology, people and human action. Keywords: Human-Computer Interaction (HCI), Work Practice, IT design, Calm Technology, Tacit Interaction, interaction design

  • 65. Agbesi, Collinson Colin Mawunyo
    Promoting Accountable Governance Through Electronic Government, 2016. Independent thesis, Advanced level (Master's degree), 10 credits / 15 HE credits. Student thesis (Degree project)
    Abstract [en]

    Electronic government (e-Government) is a purposeful system of organized delegation of power, control, management and resource allocation, in a harmonized centralized or decentralized way via networks, assuring efficiency, effectiveness and transparency of processes and transactions. This new phenomenon is changing the way governments all over the world do business and deliver services. The betterment of service to citizens and other groups, and the efficient management of scarce resources, have meant that governments seek alternative ways of rendering services and managing processes efficiently. Analog and mechanical processes of governing and management have proved inefficient and unproductive in recent times. The search for alternative and better ways of governing and control has revealed that digital and electronic governing is the better alternative, more beneficial than mechanical processes of governing. The Internet and information and communication technology (ICT/IT) have brought significant change to governments. There has also been increased research in the area of electronic government, but the field still lacks a sound theoretical framework, which is necessary for a better understanding of the factors influencing the adoption of electronic government systems and the integration of various electronic government applications.

    The efficient and effective allocation and distribution of scarce resources has also become an issue, and there has been a concerted global effort to improve the use and management of scarce resources in the last decade. The purpose of this research is to gain an in-depth understanding of how electronic government can be used to provide accountability, security and transparency in government decision-making processes for the allocation and distribution of resources in the educational sector of Ghana. Research questions have been developed to help achieve this aim. The study also provides a detailed literature review, which helped to answer the research questions and guide data collection. A combined quantitative and qualitative research method was chosen to collect vital information and better understand the study area. Both self-administered questionnaires and interviews were used to collect data relevant to the study, and a thorough analysis of related work was conducted.

    Finally, the research concludes by addressing the research questions, discussing the results and providing some vital recommendations. It was found that electronic government is a fast, reliable, accountable and transparent means of communication and interaction between governments, public institutions and citizens; thus electronic government is crucial in transforming the educational sector of Ghana for better management of resources. It has also been noted that information and communication technology (ICT) is the enabling force that helps electronic government communicate with its citizens, support e-government operations and provide efficiency, effectiveness and better services within the educational sector of Ghana.

  • 66.
    Agushi, Camrie
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Innovation inom Digital Rights Management, 2005. Independent thesis, Advanced level (Master's degree). Student thesis (Degree project)
    Abstract [sv]

    The thesis deals with the subject of Digital Rights Management (DRM), more specifically the innovation trends within DRM. The focus is on three driving forces in DRM: first, DRM technologies; second, DRM standards; and third, DRM interoperability. These driving forces are discussed and analysed in order to explore the innovation trends within DRM. Finally, a multifaceted overview of today's DRM context is formed. One conclusion is that the aspect of Intellectual Property Rights is considered an important indicator of the direction in which DRM innovation is heading.

  • 67.
    Ahl, Viggo
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    An experimental comparison of five prioritization methods: Investigating ease of use, accuracy and scalability, 2005. Independent thesis, Advanced level (Master's degree). Student thesis (Degree project)
    Abstract [en]

    Requirements prioritization is an important part of developing the right product at the right time. There are different ideas about which method is best to use when prioritizing requirements. This thesis takes a closer look at five different methods and then puts them into a controlled experiment, in order to find out which of the methods would be the best to use. The experiment was designed to find out which method yields the most accurate result, the method's ability to scale up to many more requirements, the time it took to prioritize with the method, and finally how easy the method was to use. These four criteria combined indicate which method is more suitable, i.e. the best method, to use in prioritizing requirements. The chosen methods are the well-known analytic hierarchy process, the computer algorithm binary search tree, and, from the ideas of extreme programming, the planning game. The fourth method is an old but well-used method, the 100 points method. The last method is new and combines the planning game with the analytic hierarchy process. Analysis of the data from the experiment indicates that the planning game combined with the analytic hierarchy process could be a good candidate. However, the result from the experiment clearly indicates that the binary search tree yields accurate results, is able to scale up, and was the easiest method to use. For these three reasons the binary search tree is clearly the better method to use for prioritizing requirements.
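The binary-search-tree prioritization evaluated above can be sketched as follows; the requirement names and the score table (standing in for a stakeholder's pairwise judgements) are hypothetical:

```python
class Node:
    def __init__(self, req):
        self.req, self.left, self.right = req, None, None

def insert(root, req, higher_priority):
    """Place a requirement via pairwise comparisons against existing
    nodes; each insertion takes O(log n) comparisons on average."""
    if root is None:
        return Node(req)
    if higher_priority(req, root.req):
        root.left = insert(root.left, req, higher_priority)
    else:
        root.right = insert(root.right, req, higher_priority)
    return root

def in_order(root):
    """In-order traversal yields requirements, highest priority first."""
    if root is None:
        return []
    return in_order(root.left) + [root.req] + in_order(root.right)

# Hypothetical scores standing in for a human's pairwise judgements.
score = {'login': 3, 'export': 1, 'search': 2, 'audit': 4}
prefer = lambda a, b: score[a] > score[b]
root = None
for r in score:
    root = insert(root, r, prefer)
print(in_order(root))  # highest-scored requirement comes out first
```

The appeal measured in the experiment comes from the comparison count: the stakeholder answers only the pairwise questions along one root-to-leaf path per requirement rather than comparing every pair.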

  • 68.
    Ahlberg, Mårten
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknokultur, humaniora och samhällsbyggnad.
    Liedstrand, Peter
    Blekinge Tekniska Högskola, Sektionen för teknokultur, humaniora och samhällsbyggnad.
    24-timmarsmyndighetens användbarhet, 2004. Independent thesis, Basic level (Bachelor's degree). Student thesis (Degree project)
    Abstract [sv]

    Communication with government and municipalities through the Internet has increased during the last couple of years. Therefore we have chosen to focus our bachelor thesis on this particular area and the need for usable web services for citizens. In this bachelor thesis we study an increasing group of users, namely elderly citizens. During the study we analysed the usability of e-government services through usability tests. The combination of conversations and meetings with individuals, observations of interactions, and literature studies gave us the opportunity to explore the users' needs. The users' needs are central to how they understand and interact with the e-government. The web sites we used during our user tests are all connected with the e-government. Through an analytic study of the information we could make five important design proposals and guidelines that we suggest are required when e-services are developed for the e-government.

  • 69. Ahlgren, Filip
    Comparing state-of-the-art machine learning malware detection methods on Windows, 2021. Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Background. Malware has been a major issue for years, and old signature-scanning methods for detecting malware are outdated and can be bypassed by most advanced malware. With the help of machine learning, patterns of malware behavior and structure can be learned to detect the more advanced threats that are active today.

    Objectives. In this thesis, the literature is surveyed to find state-of-the-art machine learning methods for detecting malware. A dataset collection method is identified in the research and used in an experiment. Three selected methods are re-implemented in an experiment to compare which has the best performance. All three algorithms are trained and tested on the same dataset.

    Methods. A literature review with the snowballing technique was used to find the state-of-the-art detection methods. The malware was collected through the malware database VirusShare, and the total number of samples was 14924. The algorithms were re-implemented, trained, tested, and compared by accuracy, true positives, true negatives, false positives, and false negatives.

    Results. The results showed that the best-performing approaches in the available research are image detection, N-Gram combined with meta-data, and Function Call Graphs. However, a new method called Running Window Entropy was also considered; it has received little research attention yet can still achieve decent accuracy. The selected methods for comparison were image detection, N-Gram, and Running Window Entropy, which achieved an accuracy of 94.64%, 96.45%, and 93.71% respectively.

    Conclusions. On this dataset, the N-Gram method had the best performance of all three. The other two methods showed that, depending on the use case, either can be applicable.
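
The Running Window Entropy method named above has little published detail; as a rough illustration of the underlying idea, a sliding Shannon-entropy profile can be computed over a binary's bytes. This is a minimal sketch, not the thesis implementation; the window size and step are arbitrary choices here.

```python
import math
from collections import Counter

def window_entropy(data: bytes, window: int = 256, step: int = 1) -> list[float]:
    """Shannon entropy (bits per byte) of each window sliding over the data.

    Sketch of the running-window-entropy idea: encrypted or packed regions
    of a binary tend toward 8 bits/byte, while plain code scores lower, so
    the resulting profile can serve as a feature vector for a classifier.
    """
    scores = []
    for start in range(0, len(data) - window + 1, step):
        counts = Counter(data[start:start + window])
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        scores.append(h)
    return scores
```

A uniform window (all 256 byte values once) scores the maximum 8.0 bits/byte, while a constant window scores 0.0.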

    Download full text (pdf)
    Comparing state-of-the-art machine learning malware detection methods on Windows
  • 70.
    Ahlgren, Filip
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Local And Network Ransomware Detection Comparison, 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Background. Ransomware is a malicious application encrypting important files on a victim's computer. The ransomware will ask the victim for a ransom to be paid through cryptocurrency. After the system is encrypted there is virtually no way to decrypt the files other than using the encryption key that is bought from the attacker.

    Objectives. In this practical experiment, we will examine how machine learning can be used to detect ransomware on a local and network level. The results will be compared to see which one has a better performance.

    Methods. Data is collected through malware and goodware databases and then analyzed in a virtual environment to extract system information and network logs. Different machine learning classifiers will be built from the extracted features in order to detect the ransomware. The classifiers will go through a performance evaluation and be compared with each other to find which one has the best performance.

    Results. According to the tests, local detection was both more accurate and stable than network detection. The local classifiers had an average accuracy of 96% while the best network classifier had an average accuracy of 89.6%.

    Conclusions. In this case the results show that local detection has better performance than network detection. However, this can be because the network features were not specific enough for a network classifier. The network performance could have been better if the ransomware samples consisted of fewer families so better features could have been selected.
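
The evaluation used above (accuracy compared across classifiers) reduces to simple ratios over confusion-matrix counts. A minimal sketch follows; the counts in the usage example are invented for illustration and are not the thesis's data.

```python
def rates(tp: int, tn: int, fp: int, fn: int) -> dict[str, float]:
    """Accuracy and true/false-positive rates from raw confusion counts.

    tp/tn/fp/fn are counts of true positives, true negatives, false
    positives and false negatives from a binary ransomware classifier.
    """
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,   # fraction of all samples classified correctly
        "tpr": tp / (tp + fn),           # detection rate on ransomware samples
        "fpr": fp / (fp + tn),           # false-alarm rate on goodware samples
    }

# Hypothetical counts: 48 of 50 ransomware and 48 of 50 goodware samples correct.
r = rates(tp=48, tn=48, fp=2, fn=2)
```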

    Download full text (pdf)
    BTH2019Ahlgren
  • 71.
    Ahlgren, Johan
    et al.
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Karlsson, Robert
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    En studie av inbyggda brandväggar: Microsoft XP och Red Hat Linux, 2003. Independent thesis Basic level (degree of Bachelor), Student thesis
    Abstract [sv]

    This bachelor thesis investigates how well the built-in firewalls of two operating systems work in symbiosis with a user's most common use of Internet services, and how similar they are in their protection against threats. The two operating systems we started from were Microsoft Windows XP and Red Hat Linux 8.0. The hypothesis we worked around reads as follows: the two built-in firewalls are largely similar regarding protection against threats on the Internet and satisfy the users' service needs. The methods we used to answer our research question were divided into a functionality test and a security test. In the functionality test, the most common Internet services were tried with the built-in firewall enabled, to see whether any complications arose. In the security test, the two built-in firewalls underwent scanning and vulnerability checks using several tools. From the results we can conclude that the built-in firewalls handle the most common services on the Internet, but that they differ in their exposure towards the Internet. Windows XP is completely invisible to the outside, while Red Hat's built-in firewall reveals a wealth of information about the host computer that could be used for malicious purposes. In conclusion, we falsified our hypothesis, since the two built-in firewalls were not equal in their protection against external threats on the Internet.

    Download full text (pdf)
    FULLTEXT01
  • 72.
    Ahlstrand, Jim
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap. Telenor Sverige AB, Sweden.
    Boldt, Martin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Borg, Anton
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Preliminary Results on the use of Artificial Intelligence for Managing Customer Life Cycles, 2023. In: 35th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2023 / [ed] Håkan Grahn, Anton Borg and Martin Boldt, Linköping University Electronic Press, 2023, pp. 68-76. Conference paper (Refereed)
    Abstract [en]

    During the last decade we have witnessed how artificial intelligence (AI) has changed businesses all over the world. The customer life cycle framework is widely used in businesses, and AI plays a role in each stage. However, implementing and generating value from AI in the customer life cycle is not always simple. When evaluating AI against business impact and value it is critical to consider both the model performance and the policy outcome. Proper analysis of AI-derived policies must not be overlooked in order to ensure ethical and trustworthy AI. This paper presents a comprehensive analysis of the literature on AI in customer life cycles (CLV) from an industry perspective. The study included 31 of 224 analyzed peer-reviewed articles from the Scopus search results. The results show a significant research gap regarding outcome evaluations of AI implementations in practice. This paper proposes that policy evaluation is an important tool in the AI pipeline and emphasizes the significance of validating both policy outputs and outcomes to ensure reliable and trustworthy AI.

    Download full text (pdf)
    fulltext
  • 73.
    Ahlström, Catharina
    et al.
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Fridensköld, Kristina
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    How to support and enhance communication: in a student software development project, 2002. Independent thesis Basic level (degree of Bachelor), Student thesis
    Abstract [sv]

    In this report, which is based on a student project carried out in the spring of 2002, we focus on the word communication. We describe how the use of design tools can play a key role in supporting communication in group activities, and to what extent communication can be supported and enhanced by tools such as mock-ups and metaphors. We also describe a design process from initial sketches to a finished mock-up of a graphical user interface for a demo application of a postcard service.

    Download full text (pdf)
    FULLTEXT01
    Download full text (pdf)
    FULLTEXT02
    Download full text (pdf)
    FULLTEXT03
    Download full text (pdf)
    FULLTEXT04
  • 74.
    Ahlström, Eric
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Holmqvist, Lucas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Goswami, Prashant
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Comparing Traditional Key Frame and Hybrid Animation, 2017. In: SCA '17 Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, ACM Digital Library, 2017, article id a20. Conference paper (Refereed)
    Abstract [en]

    In this research the authors explore a hybrid approach which uses the basic concept of key frame animation together with procedural animation to reduce the number of key frames needed for an animation clip. The two approaches are compared by conducting an experiment where the participating subjects were asked to rate them based on their visual appeal.
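
Key frame animation, the baseline the paper builds on, stores poses only at selected times and interpolates between them; the procedural half of a hybrid approach would replace some in-between frames with generated motion. A minimal sketch of linear key-frame interpolation for a single animated value (times and values below are hypothetical):

```python
def lerp_keyframes(keys: list[tuple[float, float]], t: float) -> float:
    """Value at time t, linearly interpolated between surrounding key frames.

    `keys` holds (time, value) pairs; times outside the keyed range clamp
    to the first/last key frame, as most animation systems do.
    """
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keys[-1][1]
```

Fewer key frames mean cheaper authoring but coarser motion, which is the trade-off the hybrid approach targets.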

    Download full text (pdf)
    fulltext
  • 75.
    Ahlström, Frida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Karlsson, Janni
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Utvecklarens förutsättningar för säkerställande av tillgänglig webb, 2022. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [sv]

    Since 2019, public-sector websites in Sweden have been legally required to meet a certain level of digital accessibility. When this study is published, further EU directives are to become national law, which will mean that private actors, including banking services and e-commerce, are also covered by corresponding requirements. This will raise the demands that suppliers and their developers need to be able to meet.

    The goals of the study are to create awareness of digital accessibility and to clarify, from the developer's perspective, how one works to achieve this degree of accessibility and what is needed to apply digital accessibility more efficiently.

    To accomplish this, a qualitative interview study was carried out. A total of eight interviews were conducted, then transcribed and organized into themes in the results section. An inductive thematic analysis was performed based on the research questions. It compares previous results with the findings of the study and clearly shows similarities, but also differences and new discoveries.

    The study shows that developers have access to evaluation tools and guidelines that provide good support in their work, but that the responsibility often rests on individual developers rather than on the organization as a whole. This is one of the biggest challenges, together with the fact that inaccessible development still happens in parallel and that time pressure can cause accessibility to be deprioritized. The respondents agree, however, that developing accessibly does not take longer than developing inaccessibly, provided it is taken into account from the start. Success factors in the work are selling accessibility to the customer, working in a structured way with knowledge sharing, and documenting solutions to save time. Beyond this, the accessibility question would benefit from ownership being raised to a higher decision level and competence being broadened in the supplier's organization, and from developers being given access to specialist expertise and user tests to support their work. Basic knowledge of accessibility could be included in web development educations to a greater extent, and an extension of the legal requirements could create further incentives for customers.

    Download full text (pdf)
    fulltext
  • 76. Ahmad, A
    et al.
    Shahzad, Aamir
    Padmanabhuni, Kumar
    Mansoor, Ali
    Joseph, Sushma
    Arshad, Zaki
    Requirements prioritization with respect to Geographically Distributed Stakeholders, 2011. Conference paper (Refereed)
    Abstract [en]

    The selection of requirements for software releases can play a vital role in the success of a software product. This selection of requirements is done with different requirements prioritization techniques. This paper discusses limitations of two such techniques, the 100$ method and Binary Search Tree, with respect to geographical distribution of stakeholders. We conducted two experiments in order to analyze the variations among the results of these requirements prioritization techniques. This paper also discusses attributes that can affect requirements prioritization when dealing with geographically distributed stakeholders. The first experiment was conducted with the 100$ method and the Binary Search Tree technique, and the second experiment with a modified 100$ method and the Binary Search Tree technique. The results of these experiments are discussed in the paper. The paper also provides a framework that can be used to identify those requirements that can play an important role in a product's success during distributed development.
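
The 100$ method referenced above is cumulative voting: each stakeholder distributes 100 dollars across the requirements, and summing the allocations yields the priority order. A minimal sketch, with stakeholder and requirement names invented for illustration:

```python
def hundred_dollar_ranking(votes: dict[str, dict[str, int]]) -> list[str]:
    """Rank requirements by total dollars assigned across all stakeholders.

    `votes` maps stakeholder -> {requirement: dollars}; each stakeholder's
    allocation must sum to exactly 100, per the method's rule.
    """
    totals: dict[str, int] = {}
    for stakeholder, allocation in votes.items():
        assert sum(allocation.values()) == 100, f"{stakeholder} must spend exactly $100"
        for req, dollars in allocation.items():
            totals[req] = totals.get(req, 0) + dollars
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical distributed stakeholders disagreeing on priorities:
votes = {"s1": {"R1": 60, "R2": 30, "R3": 10},
         "s2": {"R1": 10, "R2": 60, "R3": 30}}
```

With geographically distributed stakeholders, this aggregation is exactly where diverging perceptions collide, which is the limitation the paper examines.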

  • 77.
    Ahmad, Al Ghaith
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Abd ULRAHMAN, Ibrahim
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model, 2023. Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE credits, Student thesis
    Abstract [en]

    Background: As the demand for cybersecurity professionals continues to rise, it is crucial to identify the key skills necessary to thrive in this field. This research project sheds light on the cybersecurity skills landscape by analyzing the recommendations provided by the European Cybersecurity Skills Framework (ECSF), examining the most required skills in the Swedish job market, and investigating the common skills identified through the findings. The project utilizes the large language model, ChatGPT, to classify common cybersecurity skills and evaluate its accuracy compared to human classification.

    Objective: The primary objective of this research is to examine the alignment between the European Cybersecurity Skills Framework (ECSF) and the specific skill demands of the Swedish cybersecurity job market. This study aims to identify common skills and evaluate the effectiveness of a Language Model (ChatGPT) in categorizing jobs based on ECSF profiles. Additionally, it seeks to provide valuable insights for educational institutions and policymakers aiming to enhance workforce development in the cybersecurity sector.

    Methods: The research begins with a review of the European Cybersecurity Skills Framework (ECSF) to understand its recommendations and methodology for defining cybersecurity skills, as well as delineating the cybersecurity profiles along with their corresponding key cybersecurity skills as outlined by the ECSF. Subsequently, a Python-based web crawler was implemented to gather data on cybersecurity job announcements from the Swedish Employment Agency's website. This data is analyzed to identify the most frequently required cybersecurity skills sought by employers in Sweden. The Language Model (ChatGPT) is utilized to classify these positions according to ECSF profiles. Concurrently, two human agents manually categorize the jobs to serve as a benchmark for evaluating the accuracy of the Language Model. This allows for a comprehensive assessment of its performance.
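
The classification step described above maps free-text job ads to ECSF profiles; the thesis delegates this to ChatGPT. As a transparent baseline such a model could be compared against, a naive keyword matcher might look like the sketch below. The keyword map is invented for illustration and is far coarser than the real ECSF skill lists.

```python
# Hypothetical, heavily abbreviated keyword map; the real ECSF profiles
# each define a much richer set of key skills.
ECSF_PROFILE_KEYWORDS = {
    "Cybersecurity Implementer": ["secure coding", "devsecops", "penetration test"],
    "Cybersecurity Architect": ["security architecture", "zero trust"],
}

def match_profiles(ad_text: str) -> list[str]:
    """Return every ECSF profile whose keywords appear in the job-ad text."""
    text = ad_text.lower()
    return [profile for profile, keywords in ECSF_PROFILE_KEYWORDS.items()
            if any(k in text for k in keywords)]
```

Comparing such a baseline, the language model, and the two human annotators on the same ads is one way to ground the accuracy evaluation.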

    Results: The study thoroughly reviews and cites the recommended skills outlined by the ECSF, offering a comprehensive European perspective on key cybersecurity skills (Tables 4 and 5). Additionally, it identifies the most in-demand skills in the Swedish job market, as illustrated in Figure 6. The research reveals the matching between ECSF-prescribed skills in different profiles and those sought after in the Swedish cybersecurity market. The skills of the profiles 'Cybersecurity Implementer' and 'Cybersecurity Architect' emerge as particularly critical, representing over 58% of the market demand. This research further highlights shared skills across various profiles (Table 7).

    Conclusion: This study highlights the matching between the European Cybersecurity Skills Framework (ECSF) recommendations and the evolving demands of the Swedish cybersecurity job market. Through a review of ECSF-prescribed skills and a thorough examination of the Swedish job landscape, this research identifies crucial areas of alignment. Significantly, the skills associated with 'Cybersecurity Implementer' and 'Cybersecurity Architect' profiles emerge as central, collectively constituting over 58% of market demand. This emphasizes the urgent need for educational programs to adapt and harmonize with industry requisites. Moreover, the study advances our understanding of the Language Model's effectiveness in job categorization. The findings hold significant implications for workforce development strategies and educational policies within the cybersecurity domain, underscoring the pivotal role of informed skills development in meeting the evolving needs of the cybersecurity workforce.

    Download full text (pdf)
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model
  • 78.
    Ahmad, Arshad
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Khan, Hashim
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    The Importance of Knowledge Management Practices in Overcoming the Global Software Engineering Challenges in Requirements Understanding, 2008. Independent thesis Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    Going offshore has become a norm in current software organizations due to several benefits like availability of competent people, cost, proximity to market and customers, time and so on. Although Global Software Engineering (GSE) offers many benefits to software organizations, it has also created several challenges/issues for practitioners and researchers, like culture, communication, co-ordination and collaboration, team building and so on. Requirements Engineering (RE) is a human-intensive activity and one of the most challenging and important phases in software development. RE therefore becomes even more challenging in the GSE context because of culture, communication, coordination, collaboration and so on. Due to the aforementioned GSE factors, requirements understanding has become a challenge for software organizations involved in GSE. Furthermore, Knowledge Management (KM) is considered to be the most important asset of an organization because it not only enables organizations to efficiently share and create knowledge but also helps in resolving culture, communication and co-ordination issues, especially in GSE. The aim of this study is to present how KM practices help globally dispersed software organizations in requirements understanding. For this purpose a thorough literature study was performed, along with interviews in two industrial organizations, with the intent to identify useful KM practices and challenges of requirements understanding in GSE. Then, based on the analysis of the challenges of requirements understanding in GSE identified both from the literature review and the industrial interviews, useful KM practices are shown and discussed to reduce the requirements understanding issues faced in GSE.

    Download full text (pdf)
    FULLTEXT01
  • 79. Ahmad, Azeem
    et al.
    Göransson, Magnus
    Shahzad, Aamir
    Limitations of the analytic hierarchy process technique with respect to geographically distributed stakeholders, 2010. In: Proceedings of World Academy of Science, Engineering and Technology, ISSN 2010-376X, Vol. 70, no. Sept., pp. 111-116. Article in journal (Refereed)
    Abstract [en]

    The selection of appropriate requirements for product releases can make a big difference in a product's success. The selection of requirements is done with different requirements prioritization techniques. These techniques are based on pre-defined and systematic steps to calculate the requirements' relative weights. Prioritization is complicated by new development settings, shifting from traditional co-located development to geographically distributed development, where the stakeholders connected to a project are distributed all over the world. This geographical distribution of stakeholders makes it hard to prioritize requirements, as each stakeholder has their own perception and expectations of the requirements in a software project. This paper discusses limitations of the Analytic Hierarchy Process with respect to geographically distributed stakeholders' (GDS) prioritization of requirements. This paper also provides a solution, in the form of a modified AHP, in order to prioritize requirements for GDS. We conduct two experiments and analyze the results in order to discuss the AHP limitations with respect to GDS. The modified AHP variant is also validated in this paper.
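
AHP, as discussed above, derives relative requirement weights from a matrix of pairwise importance judgments; a common approximation of the priority vector averages the normalized columns of that matrix. A minimal sketch of this step (the 2x2 judgment matrix in the example is hypothetical, and this is the approximation, not the full eigenvector method):

```python
def ahp_weights(matrix: list[list[float]]) -> list[float]:
    """Approximate AHP priority vector by averaging the normalized columns.

    matrix[i][j] states how much more important requirement i is than
    requirement j (so matrix[j][i] should be its reciprocal).
    """
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    # Normalize each column to sum to 1, then average across each row.
    return [sum(matrix[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# Hypothetical judgment: requirement 0 is twice as important as requirement 1.
weights = ahp_weights([[1, 2], [0.5, 1]])
```

With n requirements this demands n(n-1)/2 pairwise judgments per stakeholder, one reason AHP scales poorly when many distributed stakeholders are involved.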

  • 80.
    Ahmad, Azeem
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Kolla, Sushma Joseph
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Effective Distribution of Roles and Responsibilities in Global Software Development Teams, 2012. Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Context. Industry is moving from a co-located form of development to distributed development in order to achieve different benefits such as cost reduction, access to skilled labor and around-the-clock working. This transition requires industry to face different challenges, such as communication, coordination and monitoring problems. The risk of project failure can increase if industry does not address these problems. This thesis is about providing solutions to these problems in terms of effective roles and responsibilities that may have a positive impact on a GSD team.

    Objectives. In this study we have developed a framework for suggesting roles and responsibilities for GSD teams. This framework consists of problems, and causal dependencies between them, which are related to a team's ineffectiveness; suggestions in terms of roles and responsibilities are then presented in order to have an effective team in GSD. This framework has further been validated in industry through a survey that determines which are the effective roles and responsibilities in GSD.

    Methods. We used two research methods in this study: 1) a systematic literature review and 2) a survey. The complete protocol for planning, conducting and reporting the review as well as the survey is described in the respective sections of this thesis. The systematic review is used to develop the framework, whereas the survey is used for framework validation. We have done a static validation of the framework.

    Results. Through the SLR, we identified 30 problems and 33 chains of problems. We identified 4 different roles and 40 different responsibilities to address these chains of problems. During the validation of the framework, we validated the links between the suggested roles and responsibilities and the chains of problems. In addition, through the survey, we identified 20 suggestions that represent a strong positive impact on chains of problems in GSD in relation to a team's effectiveness.

    Conclusions. We conclude that the implementation of effective roles and responsibilities in a GSD team, to avoid different problems, requires considerable attention from researchers and practitioners, which can guarantee a team's effectiveness. Implementation of proper roles and responsibilities has been mentioned as one of the successful strategies for increasing a team's effectiveness in the literature, but which particular roles and responsibilities should be implemented still needs to be addressed. We also conclude that there must be basic responsibilities associated with any particular role. Moreover, we conclude that there is a need for further development and empirical validation of different frameworks for suggesting roles and responsibilities in full-scale industry trials.

    Download full text (pdf)
    FULLTEXT01
  • 81.
    Ahmad, Ehsan
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Raza, Bilal
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Towards Optimization of Software V&V Activities in the Space Industry [Two Industrial Case Studies], 2009. Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [sv]

    Developing software for highly dependable space applications and systems is a formidable task. With new political and market pressures on the space industry to deliver more software at a lower cost, optimization of its methods and standards must be investigated. The industry has to follow standards that strictly set quality goals and prescribe engineering processes and methods to fulfil them. The overall goal of this study is to evaluate whether the current use of the ECSS standards is cost effective, whether there are ways to make the process leaner while maintaining quality, and to analyze whether V&V activities can be optimized. This thesis presents results from two industrial case studies of companies in the European space industry that follow ECSS requirements and have different V&V activities. The case studies reported here focus on how the ECSS standards are used by the companies, how this has affected their processes, and how their V&V activities can be optimized.

    Download full text (pdf)
    FULLTEXT01
  • 82. Ahmad, Ehsan
    et al.
    Raza, Bilal
    Feldt, Robert
    Assessment and support for software capstone projects at the undergraduate level: A survey and rubrics, 2011. Conference paper (Refereed)
    Abstract [en]

    Software engineering and computer science students conduct a capstone project during the final year of their degree programs. These projects are essential in validating that students have gained the required knowledge and can synthesize and use that knowledge to solve real-world problems. However, the external requirements on educational programs often do not provide detailed guidelines for how to conduct or support these capstone projects, which may lead to variations among universities. This paper presents the results from a survey, conducted at 19 different Pakistani universities, of the current management practices and assessment criteria used for capstone project courses at the undergraduate level. Based upon the results of this survey and similar work on Master's thesis capstone projects in Sweden, we present assessment rubrics for software-related undergraduate capstone projects. We also present recommendations for the continuous improvement of capstone projects.

  • 83.
    AHMAD, MUHAMMAD ZEESHAN
    Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap.
    Comparative Analysis of Iptables and Shorewall, 2012. Student thesis
    Abstract [en]

    The use of the Internet has increased over the past years. Many users may not have good intentions; some people use the Internet to gain access to unauthorized information. Although absolute security of information is not possible for any network connected to the Internet, firewalls make an important contribution to network security. A firewall is a barrier placed between the network and the outside world to prevent unwanted and potentially damaging intrusions into the network. This thesis compares the performance of two Linux packet filtering firewalls, i.e. iptables and shorewall. Firewall performance testing helps in selecting the right firewall as needed; in addition, it highlights the strengths and weaknesses of each firewall. Both firewalls were tested using identical parameters. During the experiments, the recommended benchmarking methodology for firewall performance testing described in RFC 3511 was taken into account. The comparison process includes experiments performed using different tools. To validate the effectiveness of the firewalls, several performance metrics such as throughput, latency, connection establishment and teardown rate, HTTP transfer rate and system resource consumption are used. The experimental results indicate that the iptables firewall performs worse than shorewall in all the aspects taken into account. All the selected metrics show that large numbers of filtering rules have a negative impact on the performance of both firewalls; however, UDP throughput is not affected by the number of filtering rules. The experimental results also indicate that traffic sent with different packet sizes does not affect the performance of the firewalls.

    Download full text (pdf)
    FULLTEXT01
  • 84.
    Ahmad, Nadeem
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Habib, M. Kashif
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Analysis of Network Security Threats and Vulnerabilities by Development & Implementation of a Security Network Monitoring Solution, 2010. Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [sv]

    Sending confidential data over the Internet is becoming more common every day. Individuals and organizations send their confidential data electronically. It is also common for hackers to attack these networks. In the present day, protection of data, software and hardware from viruses is, now more than ever, a necessity and not just a concern. What do you need to know about networks these days? How is security implemented to secure a network? How is security managed? In this paper we try to address these questions and give an idea of where we now stand with network security.

    Download full text (pdf)
    FULLTEXT01
  • 85.
    Ahmad, Raheel
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    On the Scalability of Four Multi-Agent Architectures for Load Control Management in Intelligent Networks, 2003. Independent thesis Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    Paralleling the rapid advancement in network evolution is the need for advanced network traffic management and surveillance. The increasing number and variety of services being offered by communication networks has fuelled the demand for optimized load management strategies. The problem of Load Control Management in Intelligent Networks has been studied previously, and four Multi-Agent architectures have been proposed. The objective of this thesis is to investigate one of the quality attributes, namely scalability, of the four Multi-Agent architectures. The focus of this research is to resize the network and study the performance of the different architectures in terms of Load Control Management through different scalability attributes. The analysis has been based on experimentation through simulations. The results reveal that different architectures exhibit different performance behaviors for various scalability attributes at different network sizes. It has been observed that there exists a trade-off between different scalability attributes as the network grows. The factors affecting network performance at different network settings have been observed. Based on the results from this study it would be easier to design similar networks for optimal performance by controlling the influencing factors and considering the trade-offs involved.

    Download full text (pdf)
    FULLTEXT01
  • 86.
    Ahmad, Saleem Zubair
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Analyzing Suitability of SysML for System Engineering Applications, 2007, Independent thesis Advanced level (degree of Master (One Year)), Student thesis (Degree project)
    Abstract [en]

    Over the last decade, UML has had to face several tricky challenges. For instance, as a single unified, general-purpose modeling language, it should offer simple and explicit semantics applicable to a wide range of domains. The significant shift of focus from software to systems has exposed the "software-centric" attitude of UML. Hence there is a continuing need for a domain-specific language that can address the problems of systems rather than software only, which is the motivation for SysML. In this thesis, SysML is evaluated to analyze its suitability for systems engineering applications. An evaluation framework is established, through which the appropriateness of SysML is observed across the system development life cycle. The study is conducted using a real-life case example, an automobile product. The results of the research not only provide insight into the SysML architecture but also offer an idea of SysML's appropriateness for multidisciplinary product development.

    Download full text (pdf)
    FULLTEXT01
  • 87.
    Ahmad, Waqar
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Riaz, Asim
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Predicting Friendship Levels in Online Social Networks, 2010, Independent thesis Advanced level (degree of Master (Two Years)), Student thesis (Degree project)
    Abstract [en]

    Context: Online social networks such as Facebook, Twitter, and MySpace have become the preferred interaction, entertainment and socializing facility on the Internet. However, these social network services also bring privacy issues into the limelight more than ever. Several privacy leakage problems are highlighted in the literature, with a variety of suggested countermeasures. Most of these measures add further complexity and management overhead for the user. One ignored aspect of the architecture of online social networks is that they do not offer any mechanism to calculate the strength of the relationship between individuals. This information is quite useful for identifying possible privacy threats. Objectives: In this study, we identify users' privacy concerns and their satisfaction with the privacy control measures provided by online social networks. Furthermore, this study explores data mining techniques to predict the levels/intensity of friendship in online social networks. This study also proposes a technique to utilize predicted friendship levels for privacy preservation in a semi-automatic privacy framework. Methods: An online survey was conducted to analyze Facebook users' concerns as well as their interaction behavior with their good friends. On the basis of the survey results, an experiment was performed to demonstrate the data mining phases in practice. Results: We found that users are concerned about protecting their private data. As a precautionary measure, they refrain from showing their private information on Facebook due to fears of privacy leakage. At the same time, individuals also perform some actions that they themselves consider a privacy vulnerability. This study further identifies that the importance of interaction type varies during communication. The research also identified two non-interaction-based estimation metrics, "mutual friends" and "profile visits".
Finally, this study found that the J48 and Naïve Bayes algorithms classify friendship levels with excellent performance. Conclusions: Users are not satisfied with the privacy measures provided by online social networks. We establish that online social networks should offer a privacy mechanism that does not require a lot of privacy control effort from the users. This study also concludes that factors such as current status and interaction type need to be considered alongside the interaction-count method in order to improve its performance. Furthermore, data mining classification algorithms are well suited to the prediction of friendship levels.
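As a hedged illustration of the classification step the abstract describes (a from-scratch Gaussian Naïve Bayes rather than the WEKA implementations the thesis actually used, with synthetic interaction counts standing in for the real survey features):

```python
import numpy as np

def fit_gnb(X, y):
    """Fit a minimal Gaussian Naive Bayes: per-class means, variances, priors."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def predict_gnb(stats, X):
    """Pick the class with the highest Gaussian log-posterior."""
    scores = []
    for c, (mu, var, prior) in stats.items():
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        scores.append(ll + np.log(prior))
    keys = list(stats.keys())
    idx = np.argmax(np.stack(scores, axis=1), axis=1)
    return np.array([keys[i] for i in idx])

# Synthetic features: higher friendship level -> more interactions on average.
rng = np.random.default_rng(1)
levels = rng.integers(0, 3, size=600)           # 0=acquaintance, 1=friend, 2=good friend
X = rng.normal(loc=levels[:, None] * 5.0, scale=1.5, size=(600, 4))
stats = fit_gnb(X[:400], levels[:400])
acc = float((predict_gnb(stats, X[400:]) == levels[400:]).mean())
```

On well-separated synthetic classes like these, the classifier reaches near-perfect accuracy; a decision tree (WEKA's J48) would play the analogous role for the second algorithm named in the abstract.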

    Download full text (pdf)
    FULLTEXT01
  • 88.
    Ahmadi Mehri, Vida
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Towards Automated Context-aware Vulnerability Risk Management, 2023, Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The information security landscape continually evolves with an increasing number of publicly known vulnerabilities (e.g., 25064 new vulnerabilities in 2022). Vulnerabilities play a prominent role in all types of security-related attacks, including ransomware and data breaches. Vulnerability Risk Management (VRM) is an essential cyber defense mechanism for eliminating or reducing attack surfaces in information technology. VRM is a continuous procedure of identification, classification, evaluation, and remediation of vulnerabilities. The traditional VRM procedure is time-consuming, as classification, evaluation, and remediation require skills and knowledge of specific computer systems, software, networks, and security policies. Activities requiring human input slow down the VRM process, increasing the risk of a vulnerability being exploited.

    The thesis introduces the Automated Context-aware Vulnerability Risk Management (ACVRM) methodology to improve VRM procedures by automating the entire VRM cycle and reducing the procedure time and experts' intervention. ACVRM focuses on the challenging stages (i.e., classification, evaluation, and remediation) of VRM to support security experts in promptly prioritizing and patching the vulnerabilities. 

    The ACVRM concept was designed and implemented in a test environment as a proof of concept. The efficiency of patch prioritization by ACVRM was compared against a commercial vulnerability management tool (i.e., Rudder). ACVRM prioritized the vulnerabilities based on the patch score (i.e., the numeric representation of the vulnerability characteristics and the risk), the historical data, and dependencies. The experiments indicate that ACVRM can rank the vulnerabilities in the organization's context by weighting the criteria used in the patch score calculation. The automated patch deployment was implemented with three use cases to investigate the impact of learning from historical events and dependencies on the success rate of the patch and on human intervention. Our findings show that ACVRM reduced the need for human actions, increased the ratio of successfully patched vulnerabilities, and decreased the cycle time of the VRM process.
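As a purely hypothetical sketch of criteria-weighted prioritization (the criteria names, weights, and values below are invented for illustration, not taken from the thesis), ranking vulnerabilities by a weighted patch score might look like:

```python
# All criteria are assumed to be normalized to [0, 1] before weighting.
def patch_score(vuln, weights):
    """Weighted sum of normalized vulnerability criteria."""
    return sum(weights[k] * vuln[k] for k in weights)

weights = {"severity": 0.5, "exploitability": 0.3, "asset_criticality": 0.2}
vulns = {
    "CVE-A": {"severity": 0.9, "exploitability": 0.8, "asset_criticality": 1.0},
    "CVE-B": {"severity": 0.4, "exploitability": 0.2, "asset_criticality": 0.5},
    "CVE-C": {"severity": 0.7, "exploitability": 0.9, "asset_criticality": 0.3},
}
# Highest score first: these are the patches to deploy earliest.
ranked = sorted(vulns, key=lambda v: patch_score(vulns[v], weights), reverse=True)
```

Adjusting the weights is what lets such a scheme reflect one organization's context versus another's, which is the point the abstract makes about weighting the criteria.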

    Download full text (pdf)
    fulltext
  • 89.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Arlos, Patrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Casalicchio, Emiliano
    Sapienza University of Rome, Italy.
    Automated Patch Management: An Empirical Evaluation Study, 2023. In: Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience, CSR 2023, IEEE, 2023, pp. 321-328. Conference paper (Refereed)
    Abstract [en]

    Vulnerability patch management is one of IT organizations' most complex issues due to the increasing number of publicly known vulnerabilities and explicit patch deadlines for compliance. Patch management requires human involvement in testing, deploying, and verifying the patch and its potential side effects. Hence, there is a need to automate the patch management procedure in order to meet patch deadlines with a limited number of available experts. This study proposed and implemented an automated patch management procedure to address these challenges. The method also includes logic to automatically handle errors that might occur during patch deployment and verification. Moreover, the authors added an automated review step before patch management to adjust the patch prioritization list if multiple cumulative patches or dependencies are detected. The results indicated that our method reduced the need for human intervention, increased the ratio of successfully patched vulnerabilities, and decreased the execution time of vulnerability risk management.

    Download full text (pdf)
    fulltext
  • 90.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Privacy and trust in cloud-based marketplaces for AI and data resources, 2017. In: IFIP Advances in Information and Communication Technology, Springer New York LLC, 2017, Vol. 505, pp. 223-225. Conference paper (Refereed)
    Abstract [en]

    Processing the huge amounts of information from the Internet of Things (IoT) has become challenging. Artificial Intelligence (AI) techniques have been developed to handle this task efficiently. However, they require annotated data sets for training, while manual preprocessing of the data sets is costly. The H2020 project “Bonseyes” has suggested a “Market Place for AI” (MP), where the stakeholders can engage trustfully in business around AI resources and data sets. For the sake of generality, the MP permits trading of resources that have high privacy requirements (e.g. data sets containing patient medical information) as well as ones with low requirements (e.g. fuel consumption of cars). In this abstract we review trust and privacy definitions and provide a first requirements analysis for them with regard to Cloud-based Market Places (CMPs). The comparison of definitions and requirements allows for the identification of the research gap that will be addressed by the main author's PhD project. © IFIP International Federation for Information Processing 2017.

  • 91.
    Ahmed, Abdifatah
    et al.
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Lindhe, Magnus
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Efficient And Maintainable Test Automation, 2002, Independent thesis Advanced level (degree of Master (One Year)), Student thesis (Degree project)
    Abstract [en]

    More and more companies experience problems with maintainability and the time-consuming development of automated testing tools. The MPC department at Ericsson Software Technology AB uses methods and tools that were often developed under time pressure, which results in time-consuming testing and requires more effort and resources than planned. The tools are also of such a nature that they are hard to expand and maintain, and in some cases they have been thrown out between releases. For this reason, we identified two major objectives that MPC wants to achieve: efficient and maintainable test automation. Efficient test automation mainly concerns how to perform tests with less effort or in a shorter time. Maintainable test automation aims to keep tests up to date with the software. In order to decide how to achieve these objectives, we investigated which tests to automate, what should be improved in the testing process, which techniques to use, and finally whether the use of automated testing can reduce the cost of testing. These issues are discussed in this paper.

    Download full text (pdf)
    FULLTEXT01
  • 92.
    Ahmed, Adnan
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för för interaktion och systemdesign.
    Hussain, Syed Shahram
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för för interaktion och systemdesign.
    Meta-Model of Resilient Information System, 2007, Independent thesis Advanced level (degree of Master (One Year)), Student thesis (Degree project)
    Abstract [en]

    The role of information systems has become very important in today's world. It is not only business organizations that use information systems; governments also possess very critical information systems. The need is to make information systems available at all times and under any circumstances. Information systems must have the capability to resist dangers to their services, performance and existence, and to recover to their normal working state with the available resources in catastrophic situations. Information systems with such a capability can be called resilient information systems. This thesis defines resilient information systems, suggests a meta-model for them, and explains how existing technologies can be utilized for the development of resilient information systems.

    Download full text (pdf)
    FULLTEXT01
  • 93.
    Ahmed, Israr
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Nadeem, Shahid
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Minimizing Defects Originating from Elicitation, Analysis and Negotiation (E and A&N) Phase in Bespoke Requirements Engineering, 2009, Independent thesis Advanced level (degree of Master (Two Years)), Student thesis (Degree project)
    Abstract [en]

    Defect prevention (DP) in the early stages of the software development life cycle (SDLC) is much more cost-effective than in later stages. The requirements elicitation and analysis & negotiation (E and A&N) phases in the requirements engineering (RE) process are critical and a major source of requirements defects. A poor E and A&N process may lead to a software requirements specification (SRS) full of defects such as missing, ambiguous, inconsistent, misunderstood, and incomplete requirements. If these defects are identified and fixed only in later stages of the SDLC, they cause major rework at extra cost and effort. Organizations spend about half of their total project budget on avoidable rework, and the majority of defects originate from RE activities. This study is an attempt to prevent requirements-level defects from penetrating into later stages of the SDLC. For this purpose, empirical and literature studies are presented in this thesis. The empirical study was carried out with the help of six companies from Pakistan and Sweden by conducting interviews, and the literature study was performed through literature reviews. This study explores the most common requirements defect types, their causes, the severity levels of defects (i.e. major or minor), the DP techniques (DPTs), methods, and defect identification techniques that have been used in the software development industry, and the problems with these DPTs. This study also describes possible major differences between Swedish and Pakistani software companies in terms of defect types and the rate of defects originating from the E and A&N phases. On the basis of the study results, some solutions are proposed to prevent requirements defects during the RE process. In this way we can minimize defects originating from the E and A&N phases of RE in bespoke requirements engineering (BESRE).

    Download full text (pdf)
    FULLTEXT01
  • 94.
    Ahmed, Mamun
    Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap.
    Adaptive Sub-band GSC Beamforming using a Linear Microphone Array for Noise Reduction/Speech Enhancement, 2012, Independent thesis Advanced level (degree of Master (Two Years)), Student thesis (Degree project)
    Abstract [en]

    This project presents the description, design and implementation of a 4-channel microphone array with an adaptive sub-band generalized sidelobe canceller (GSC) beamformer, used for video conferencing, hands-free telephony, etc., in a noisy environment for speech enhancement as well as noise suppression. The sidelobe canceller is evaluated with both Least Mean Square (LMS) and Normalized Least Mean Square (NLMS) adaptation. A testing structure is presented, involving a linear 4-microphone array connected to collect the data. Tests were done using one target signal source and one noise source. The data collected at each microphone were aligned via fractional time-delay filtering, then divided into sub-bands, and the GSC was applied to each of the sub-bands. The overall Signal-to-Noise Ratio (SNR) improvement is determined from the main signal and noise input and output powers, with signal-only and noise-only inputs to the GSC. The NLMS algorithm significantly improves the speech quality, with noise suppression levels of up to 13 dB, while the LMS algorithm gives up to 10 dB. All of the processing for this thesis is implemented on a computer using MATLAB and validated by considering different SNR measures under various types of blocking matrix, different step sizes, different noise locations, and variable SNR with noise.
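As an illustrative sketch only (not code from the thesis, which used MATLAB), the NLMS weight update at the heart of such an adaptive canceller can be written in a few lines; the tap count, step size, and toy signals below are arbitrary assumptions:

```python
import numpy as np

def nlms(x, d, num_taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt FIR weights w so that w*x tracks d.

    Returns the error signal e = d - y (the canceller output) and the
    final weights w.
    """
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        xn = x[n - num_taps:n][::-1]           # most recent sample first
        y = w @ xn                             # adaptive filter output
        e[n] = d[n] - y                        # residual after cancellation
        w += mu * e[n] * xn / (eps + xn @ xn)  # power-normalized update
    return e, w

# Toy check: d is a scaled, delayed copy of x, so the residual should vanish
# once the weights converge.
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
d = 0.7 * np.roll(x, 2)
d[:2] = 0.0
e, w = nlms(x, d)
residual = float(np.mean(e[-500:] ** 2))
```

Dropping the normalization term `(eps + xn @ xn)` yields the plain LMS variant, whose slower and input-power-dependent convergence is one reason the abstract reports lower suppression for LMS than for NLMS.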

    Download full text (pdf)
    FULLTEXT01
  • 95.
    Abdur Razzak, Mohammad and Ahmed, Rajib
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Knowledge Management in Distributed Agile Projects, 2013, Independent thesis Advanced level (degree of Master (Two Years)), Student thesis (Degree project)
    Abstract [en]

    Knowledge management (KM) is essential for success in Global Software Development (GSD), Distributed Software Development (DSD), and Global Software Engineering (GSE). Software organizations are managing knowledge in innovative ways to increase productivity. One of the major objectives of KM is to improve productivity through effective knowledge sharing and transfer. Therefore, to maintain effective knowledge sharing in distributed agile projects, practitioners need to adopt different types of knowledge sharing techniques and strategies. Distributed projects introduce new challenges to KM, so practices that are used in agile teams become difficult to put into action in distributed development. Although informal communication is the key enabler of knowledge sharing, when an agile project is distributed, informal communication and knowledge sharing are challenged by the low communication bandwidth between distributed team members, as well as by social and cultural distance. In the work presented in this thesis, we give an overview of empirical studies of knowledge management in distributed agile projects. Based on the main theme of this study, we have categorized and reported our findings on major concepts that need empirical investigation. We have classified the main research theme in this thesis within two sub-themes:
    • RT1: Knowledge sharing activities in distributed agile projects.
    • RT2: Spatial knowledge sharing in a distributed agile project.
    The main contributions are:
    • C1: Empirical observations regarding knowledge sharing activities in distributed agile projects.
    • C2: Empirical observations regarding spatial knowledge sharing in a distributed agile project.
    • C3: Process improvement scope and guidelines for the studied project.

    Download full text (pdf)
    FULLTEXT01
  • 96.
    Ahmed, Nisar
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Yousaf, Shahid
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    For Improved Energy Economy – How Can Extended Smart Metering Be Displayed?, 2011, Independent thesis Advanced level (degree of Master (Two Years)), Student thesis (Degree project)
    Abstract [en]

    Context: A District Heating System (DHS) uses a central heating plant to produce and distribute hot water in a community. Such a plant is connected to consumers' premises to provide them with hot water and space heating. Variations in the consumption of heat energy depend on different factors, such as differences in energy prices, living standards, environmental effects, and economic conditions. These factors can be managed intelligently with advanced Information and Communication Technology (ICT) tools such as smart metering, a new and emerging technology normally used for metering of district heating (DH), district cooling, electricity, and gas. Traditional meters measure the overall consumption of energy; in contrast, smart meters have the ability to frequently record and transmit energy consumption statistics to both energy providers and consumers via their communication networks and network management systems. Objectives: The first objective of this study was to provide energy consumption/saving suggestions on the smart metering display for the accepted consumer behavior proposed by the energy providers. Our second objective was an analysis of the financial benefits that the energy providers could expect from better consumer behavior. The third objective was an analysis of the energy consumption behavior of residential consumers and how it can be supported. The fourth objective of the study was to use the extracted suggestions on consumer behavior to propose an Extended Smart Metering Display for improving energy economy. Methods: A background study was conducted to develop a basic understanding of district heat energy (DHE), smart meters and their existing displays, and consumer behavior and its effects on energy consumption. Moreover, interviews were conducted with representatives of a smart heat meter manufacturer, energy providers, and residential consumers.
    The interview findings enabled us to propose an Extended Smart Metering Display that satisfies the recommendations received from all the interviewees and the background study. Further, a workshop was conducted to evaluate the proposed Extended Smart Metering Display, involving representatives of the smart heat meter manufacturer and residential energy consumers. DHE providers also contributed to this workshop through their comments in an online conversation, for which an evaluation request was sent to the member companies of the Swedish District Heating Association. Results: The informants in this research have different levels of experience. Through a systematic procedure we obtained and analyzed findings from all the informants. To meet energy demands during peak hours, the informants emphasized displaying efficient energy consumption behavior on smart heat meters. According to the informants, efficient energy consumption behavior can be presented through energy consumption/saving suggestions on the display of smart meters, related to daily life activities such as taking baths and showers, cleaning, washing, and heating usage. We found that the efficient energy consumption behavior recommended by the energy providers can provide financial improvements both for the energy providers and for the residential consumers. On the basis of these findings, we proposed an Extended Smart Metering Display that presents information in a simple and interactive way and that can also help measure consumers' energy consumption behavior effectively. Conclusions: After answering the research questions, we concluded that extending the existing smart heat meter display can effectively help the energy providers and the residential consumers to utilize resources efficiently.
    It will not only reduce energy bills for the residential consumers, but also help the energy providers save scarce energy and enable them to serve the consumers better during peak hours. After deployment of the proposed Extended Smart Metering Display, the energy providers will be able to support consumer behavior in a reliable way, and the consumers will be able to find and follow the energy consumption/saving guidelines easily.

    Download full text (pdf)
    FULLTEXT01
  • 97.
    Ahmed, Qutub Uddin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Mujib, Saifullah Bin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Context Aware Reminder System: Activity Recognition Using Smartphone Accelerometer and Gyroscope Sensors Supporting Context-Based Reminder Systems, 2014, Independent thesis Advanced level (degree of Master (Two Years)), Student thesis (Degree project)
    Abstract [en]

    Context. A reminder system offers flexibility in daily life activities and helps people to be independent. A reminder system not only helps with daily life activities, but also serves to a great extent people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems, such as disabilities or mild dementia. Traditional reminders, which are based on a set of predefined activities, are not enough to address the necessity in a wider context. To make the reminder more flexible, the user's current activities or contexts need to be considered. To recognize the user's current activity, different types of sensors can be used; such sensors are available in smartphones, which can assist in building a more contextual reminder system. Objectives. To make a reminder context-based, it is important to identify the context, and the user's activities need to be recognized at a particular moment. With this in mind, this research aims to understand the relevant contexts and activities, to identify an effective way to recognize a user's three different activities (drinking, walking and jogging) using smartphone sensors (accelerometer and gyroscope), and to propose a model that uses the properties of the recognized activity. Methods. This research combined a survey and interviews with an exploratory smartphone sensor experiment to recognize user activity. An online survey was conducted with 29 participants, and interviews were held in cooperation with Karlskrona Municipality; four elderly people participated in the interviews. For the experiment, data on three different user activities were collected using smartphone sensors and analyzed to identify the patterns of the different activities. Moreover, a model is proposed to exploit the properties of the activity patterns. The performance of the proposed model was evaluated using the machine learning tool WEKA. Results.
    The survey and interviews helped us understand which activities of daily living should be considered when designing the reminder system, and how and when it should be used. For instance, most of the survey participants already use some sort of reminder system, most of them use a smartphone, and one of the most important tasks they forget is taking their medicine. These findings informed the experiment. From the experiment, different patterns were observed for the three activities: for walking and jogging, the patterns are distinct, while for drinking, the pattern is complex and can sometimes overlap with other activities or become noisy. Conclusions. The survey, interviews and background study provided evidence that a reminder system based on users' activity is essential in daily life. The large number of smartphone users motivated this research to use smartphone sensors to identify user activity, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying simple mathematical calculations to recorded smartphone sensor (accelerometer and gyroscope) data. The approach was evaluated with 99% accuracy on the experimental data. The study concludes by proposing a model that uses the properties of the recognized activities and by developing a prototype of a reminder system. Preliminary tests were performed on the model, but there is a need for further empirical validation and verification.
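A minimal sketch of the kind of "simple mathematical calculations" the abstract mentions (the window size and synthetic traces are assumptions, not the thesis's data): the per-window standard deviation of the accelerometer magnitude already separates a calm activity from a vigorous one.

```python
import numpy as np

def magnitude(acc):
    """Orientation-independent magnitude of 3-axis accelerometer samples."""
    return np.sqrt((acc ** 2).sum(axis=1))

def window_features(sig, win=50):
    """Mean and standard deviation over fixed-size windows."""
    n = len(sig) // win
    chunks = sig[: n * win].reshape(n, win)
    return chunks.mean(axis=1), chunks.std(axis=1)

# Synthetic 3-axis traces: jogging moves the sensor far more than walking.
rng = np.random.default_rng(2)
t = np.arange(1000)
walk = np.stack([0.5 * np.sin(0.2 * t) + rng.normal(0, 0.1, 1000) + 3.3] * 3, axis=1)
jog = np.stack([2.0 * np.sin(0.6 * t) + rng.normal(0, 0.3, 1000) + 3.3] * 3, axis=1)
_, walk_std = window_features(magnitude(walk))
_, jog_std = window_features(magnitude(jog))
```

Thresholding such window statistics (or feeding them to a classifier, as the thesis did with WEKA) is a common baseline for accelerometer-based activity recognition.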

    Download full text (pdf)
    FULLTEXT01
  • 98.
    Ahmed, Sabbir
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för signalbehandling.
    Performance of Multi-Channel Medium Access Control Protocol incorporating Opportunistic Cooperative Diversity over Rayleigh Fading Channel, 2006, Independent thesis Advanced level (degree of Master (One Year)), Student thesis (Degree project)
    Abstract [en]

    This thesis proposes a Medium Access Control (MAC) protocol for wireless networks, termed CD-MMAC, that utilizes multiple channels and incorporates opportunistic cooperative diversity dynamically to improve performance. The IEEE 802.11b standard allows the use of the multiple channels available at the physical layer, but its MAC protocol is designed for a single channel only. The proposed protocol utilizes multiple channels using a single interface and incorporates opportunistic cooperative diversity through a cross-layer MAC. The new protocol leverages the multi-rate capability of IEEE 802.11b and allows wireless nodes far away from the destination node to transmit at a higher rate by using intermediate nodes as relays. The protocol improves network throughput and packet delivery ratio significantly and reduces packet delay. The performance improvement is further evaluated by simulation and analysis.

    Download full text (pdf)
    FULLTEXT01
  • 99.
    Ahmed Sheik, Kareem
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    A Comparative Study on Optimization Algorithms and its efficiency, 2022, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits, Student thesis (Degree project)
    Abstract [en]

    Background: In computer science, optimization can be defined as finding the most cost-effective or best achievable performance under given circumstances, maximizing desired factors and minimizing undesired ones. Many problems in the real world are continuous, and it is not easy to find global solutions. However, developments in computer technology are increasing the speed of computation [1]. The optimization method, an efficient numerical simulator, and a realistic depiction of the physical process we intend to describe and optimize are all interconnected components of the optimization process [2].

    Objectives: A literature review of existing optimization algorithms is performed. Ten different benchmark functions are considered and applied to the chosen algorithms, namely the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure the efficiency of these approaches based on metrics such as CPU time, optimality, accuracy, and mean best standard deviation.

    Methods: In this research, a mixed-method approach is used. A literature review of the existing optimization algorithms is performed. In addition, an experiment is conducted using the ten benchmark functions with the chosen optimization algorithms, Particle Swarm Optimization (PSO), ACO, GA, and PIBO, to measure their efficiency based on four factors: CPU time, optimality, accuracy, and mean best standard deviation. This tells us which optimization algorithms perform better.

    Results: The experimental findings are presented in this section. Using the benchmark functions on the suggested method and the other methods, the metrics CPU time, optimality, accuracy, and mean best standard deviation are measured and tabulated, and graphs are produced from the data obtained.

    Analysis and Discussion: The research questions are addressed based on the results of the conducted experiment.

    Conclusion: We conclude the research by analyzing the performance of the existing optimization methods. PIBO performs much better on the metrics of optimality, best mean, standard deviation, and accuracy, but has a significant drawback in CPU time: its running time is much higher than that of the PSO algorithm and close to that of GA, although it still performs much better than the ACO algorithm.
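To make the compared setup concrete, here is a minimal particle swarm optimizer run on the sphere function, one of the classic benchmark functions; the parameters are common textbook defaults, not the settings used in the thesis experiment:

```python
import numpy as np

def sphere(x):
    """Classic benchmark: global minimum 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

def pso(f, dim=5, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (a sketch, not a tuned implementation)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest, pbest_f = pos.copy(), f(pos)          # personal bests
    g = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        fx = f(pos)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = pos[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))

best_x, best_f = pso(sphere)
```

A comparison study like the one above would run each algorithm on such functions under a fixed budget while recording CPU time and the best value found, which is exactly what the tabulated metrics summarize.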

    Download full text (pdf)
    A Comparative Study on Optimization Algorithms and its efficiency
  • 100.
    Ahmed, Soban
    et al.
    Natl Univ Comp & Emerging Sci, PAK.
    Bhatti, Muhammad Tahir
    Natl Univ Comp & Emerging Sci, PAK.
    Khan, Muhammad Gufran
    Natl Univ Comp & Emerging Sci, PAK.
    Lövström, Benny
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Shahid, Muhammad
    Natl Univ Comp & Emerging Sci, PAK.
    Development and Optimization of Deep Learning Models for Weapon Detection in Surveillance Videos (2022). In: Applied Sciences, E-ISSN 2076-3417, Vol. 12, no. 12, article id 5772. Article in journal (Refereed)
    Abstract [en]

    Featured Application: This work applies computer vision and deep learning to develop a real-time weapon-detector system and tests it on different computing devices for large-scale deployment.

    Weapon detection in CCTV camera surveillance videos is a challenging task, and its importance is growing because of the easy availability of weapons on the market. This becomes a serious problem when weapons fall into the wrong hands and are misused. Advances in computer vision and object detection enable us to detect weapons in live videos without human intervention so that, in turn, intelligent decisions can be made to protect people from dangerous situations. In this article, we develop and present an improved real-time weapon detection system that shows a higher mean average precision (mAP) score and better inference-time performance than previously proposed approaches in the literature. Using a custom weapons dataset, we implemented a state-of-the-art Scaled-YOLOv4 model that achieved a 92.1 mAP score and 85.7 frames per second (FPS) on a high-performance GPU (RTX 2080TI). Furthermore, to achieve the benefits of lower latency, higher throughput, and improved privacy, we optimized our model with the TensorRT network optimizer for implementation on a popular edge-computing device (Jetson Nano GPU). We also performed a comparative analysis of the previous weapon detector and our presented model on different CPU and GPU machines, making the selection of model and computing device easier for users deploying in a real-time scenario. The analysis shows that our models achieve improved mAP scores on high-performance GPUs (such as the RTX 2080TI) as well as on low-cost edge-computing GPUs (such as the Jetson Nano) for weapon detection in live CCTV camera surveillance videos.
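    The inference-time comparison in the abstract comes down to timing a detector over a batch of frames and reporting frames per second. A minimal, model-agnostic sketch follows; the actual study uses a Scaled-YOLOv4 model optimized with TensorRT, which is not reproduced here, so `dummy_detect` and the empty frames are purely illustrative assumptions.

    ```python
    import time

    def measure_fps(detect, frames, warmup=3):
        """Report frames per second for a detector callable over a list of frames."""
        for frame in frames[:warmup]:
            detect(frame)                  # warm-up runs, excluded from timing
        t0 = time.perf_counter()
        for frame in frames:
            detect(frame)
        elapsed = time.perf_counter() - t0
        return len(frames) / elapsed

    # Illustrative stand-in for a real detector forward pass.
    def dummy_detect(frame):
        time.sleep(0.001)                  # pretend inference takes ~1 ms
        return []                          # no detections on the dummy frame

    fps = measure_fps(dummy_detect, frames=[None] * 50)
    ```

    Running the same loop with the detector deployed on different machines (e.g., a desktop GPU versus an edge device) gives directly comparable FPS figures, which is the shape of the comparison the article reports.
    
    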

    Download full text (pdf)
    fulltext