Search results 1 - 50 of 3057

  • 1.
    Abbas, Faheem
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Intelligent Container Stacking System at Seaport Container Terminal (2016). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context: The workload at seaport container terminals is increasing gradually, and terminal performance must improve to meet the demand. The key section of a container terminal is the container stacking yard, which is an integral part of both the seaside and the landside, so its performance affects both. The main problem in this area is unproductive moves of containers. A well-planned stacking area is therefore needed to increase terminal performance and maximize the utilization of existing resources.

    Objectives: In this work, we analyze the existing container stacking system at the Helsingborg seaport container terminal, Sweden, investigate previously proposed solutions to the problem, and identify the optimization technique that yields the best possible solution. We then propose a solution, test it, and analyze the simulation-based results against the desired outcome.

    Methods: To identify the problem, methods, and proposed solutions in the domain of container stacking yard management, a literature review was conducted using several e-resources/databases. A genetic algorithm (GA) with the best parametric values is used to obtain the best optimized solution. A discrete event simulation model for container stacking in the yard was built and integrated with the genetic algorithm. A mathematical model is proposed to show how cost minimization depends on the number of container moves.

    Results: The GA achieved a high fitness value across generations for storing 150 containers at the best locations in a block with 3 tier levels while minimizing unproductive moves in the yard. A comparison between the genetic algorithm and tabu search was made to verify whether the GA performed better than the other algorithm. A simulation model with the GA was used to obtain simulation-based results and to show container handling using resources such as AGVs, yard cranes, and delivery trucks, together with the container stacking and retrieval system in the yard. The mathematical model shows that the container stacking cost is directly proportional to the number of moves.

    Conclusions: We identified the key factor (unproductive moves) that underlies the other key factors (time and cost) and affects the performance of the stacking yard and of the whole seaport terminal. We focused on this drawback of the stacking system and proposed a solution that makes the system more efficient, saving both time and cost. A genetic algorithm is a well-suited approach for solving the unproductive-moves problem in a container stacking system.

    Full text (pdf)
    fulltext
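
A minimal sketch (not the thesis's implementation) of the GA approach entry 1 describes: containers are assigned to stacks of at most 3 tiers, and the fitness counts unproductive moves, i.e. containers stacked on top of earlier-departing ones. The yard layout, departure times, and GA parameters are illustrative assumptions.

```python
# Toy GA for container stacking: minimize unproductive moves (reshuffles).
# Yard layout (50 stacks x 3 tiers) and departure times are synthetic.
import random

N_CONTAINERS, N_STACKS, TIERS = 150, 50, 3
departure = [random.randint(1, 100) for _ in range(N_CONTAINERS)]

def decode(order):
    """Fill stacks bottom-up in chromosome order."""
    stacks = [[] for _ in range(N_STACKS)]
    for c in order:
        next(s for s in stacks if len(s) < TIERS).append(c)
    return stacks

def unproductive_moves(order):
    moves = 0
    for stack in decode(order):
        for low in range(len(stack)):
            for high in range(low + 1, len(stack)):
                # a later-departing container sits on an earlier-departing one
                if departure[stack[high]] > departure[stack[low]]:
                    moves += 1
    return moves

def crossover(a, b):
    """Order crossover (OX) for permutation chromosomes."""
    i, j = sorted(random.sample(range(len(a)), 2))
    hole = set(a[i:j])
    tail = [g for g in b if g not in hole]
    return tail[:i] + a[i:j] + tail[i:]

def mutate(order, rate=0.05):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(N_CONTAINERS), N_CONTAINERS) for _ in range(60)]
for gen in range(200):
    pop.sort(key=unproductive_moves)   # fewer unproductive moves is better
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(50)]
print("best unproductive moves:", unproductive_moves(min(pop, key=unproductive_moves)))
```
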
  • 2.
    Abbas, Gulfam
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Asif, Naveed
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Performance Tradeoffs in Software Transactional Memory (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Transactional memory (TM), a new programming paradigm, is one of the latest approaches to writing programs for next-generation multicore and multiprocessor systems. TM is an alternative to lock-based programming. It is a promising solution to the large and growing problem programmers face in developing programs for chip multi-processor (CMP) architectures, as it simplifies synchronization to shared data structures in a way that is scalable and composable. Software transactional memory (STM), a pure-software realization of TM, can be defined as a non-blocking synchronization mechanism in which sequential objects are automatically converted into concurrent objects. In this thesis, we present a performance comparison of four different STM implementations: RSTM by V. J. Marathe et al., TL2 by D. Dice et al., TinySTM by P. Felber et al., and SwissTM by A. Dragojevic et al. The comparison provides a deeper understanding of the tradeoffs involved and helps in assessing which design choices and configuration parameters may lead to better and more efficient STMs. In particular, the suitability of each STM is analyzed against the others. A literature study was carried out to select STM implementations for experimentation, and an experiment was performed to measure the performance tradeoffs between these STM implementations. The empirical evaluations done as part of this thesis conclude that SwissTM has significantly higher throughput than the other state-of-the-art STM implementations, namely RSTM, TL2, and TinySTM, as it consistently outperforms them on the execution time and aborts-per-commit measures on the STAMP benchmarks. The transaction retry rate measurements show that TL2 performs better than RSTM, TinySTM, and SwissTM.

    Full text (pdf)
    FULLTEXT01
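
A minimal sketch of how the two headline metrics in entry 2's comparison (throughput and aborts per commit), plus the transaction retry rate, can be computed from raw benchmark runs. The `Run` record, field names, and numbers are invented for illustration; they are not measurements from the thesis.

```python
# Derive STM comparison metrics from per-run counters (synthetic data).
from dataclasses import dataclass

@dataclass
class Run:
    stm: str        # e.g. "SwissTM" or "TL2"
    commits: int    # committed transactions
    aborts: int     # aborted (retried) transactions
    seconds: float  # wall-clock execution time

def throughput(r): return r.commits / r.seconds              # committed tx/s
def aborts_per_commit(r): return r.aborts / r.commits
def retry_rate(r): return r.aborts / (r.aborts + r.commits)  # share of attempts retried

runs = [Run("SwissTM", commits=1_000_000, aborts=150_000, seconds=12.4),
        Run("TL2",     commits=1_000_000, aborts=90_000,  seconds=17.8)]
for r in runs:
    print(f"{r.stm:8s} {throughput(r):10.0f} tx/s  "
          f"{aborts_per_commit(r):.3f} aborts/commit  {retry_rate(r):.3f} retry rate")
```
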
  • 3.
    Abbireddy, Sharath
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Model for Capacity Planning in Cassandra: Case Study on Ericsson’s Voucher System (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Cassandra is a NoSQL (Not only Structured Query Language) database which serves large amounts of data with high availability. Cassandra data storage dimensioning, also known as Cassandra capacity planning, refers to predicting the amount of disk storage required when a particular product is deployed using Cassandra. This is an important phase in any product development lifecycle involving the Cassandra data storage system. Capacity planning is based on many factors, which are classified as Cassandra-specific and product-specific. This study identifies the different Cassandra-specific and product-specific factors affecting the disk space in the Cassandra data storage system. Based on these factors, a model is built to predict the disk storage for Ericsson's voucher system. A case study is conducted on Ericsson's voucher system and its Cassandra cluster. Interviews were conducted with different Cassandra users within Ericsson R&D to gather their opinions on capacity planning approaches and the factors affecting disk space for Cassandra. Responses from the interviews were transcribed and analyzed using grounded theory. A total of 9 Cassandra-specific factors and 3 product-specific factors are identified and documented. Using these 12 factors, a model was built and used to predict the disk space required for the voucher system's Cassandra. The factors affecting disk space for deploying Cassandra are now exhaustively identified, which makes the capacity planning process more efficient. Using these factors, the voucher system's disk space for deployment is predicted successfully.

    Full text (pdf)
    fulltext
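
A minimal, hypothetical sketch of the kind of capacity-planning model entry 3 describes: predicted disk usage as a product of product-specific inputs (row count, row size) and Cassandra-specific factors (replication, storage overhead, compression, compaction headroom). The factor names and default values are illustrative assumptions, not the thesis's actual 12-factor model.

```python
# Hypothetical disk-space predictor in the spirit of Cassandra capacity planning.
def predicted_disk_bytes(rows, bytes_per_row,
                         replication_factor=3,     # copies kept in the cluster
                         row_overhead=1.2,         # assumed per-row storage overhead
                         compression_ratio=0.6,    # assumed on-disk/raw size ratio
                         compaction_headroom=1.5): # temporary space during compaction
    raw = rows * bytes_per_row * row_overhead
    on_disk = raw * compression_ratio * replication_factor
    return on_disk * compaction_headroom

# Example: 100 million voucher rows at ~300 bytes each.
total = predicted_disk_bytes(rows=100_000_000, bytes_per_row=300)
print(f"estimated cluster disk need: {total / 1e9:.1f} GB")
```
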
  • 4.
    Abdelrasoul, Nader
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Optimization Techniques For an Artificial Potential Fields Racing Car Controller (2013). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Building autonomous racing car controllers is a growing field of computer science which has been receiving great attention lately. An approach named Artificial Potential Fields (APF) is widely used as a path-finding and obstacle-avoidance approach in robotics and vehicle motion control systems. The use of APF results in a collision-free path; it can also be used to achieve other goals such as overtaking and maneuverability. Objectives. The aim of this thesis is to build an autonomous racing car controller that achieves good performance in terms of speed, time, and damage level. To fulfill this aim we need optimality in the controller's choices, because racing requires the highest possible performance, and the controller must be built on algorithms that do not incur high computational overhead. Methods. We used Particle Swarm Optimization (PSO) in combination with APF to achieve optimal car control. The Open Racing Car Simulator (TORCS) was used as a testbed for the proposed controller, and we conducted two experiments with different configurations to test the performance of our APF-PSO controller. Results. The obtained results showed that the APF-PSO controller performed well compared to top-performing controllers, and that the use of PSO enhanced performance compared to using APF only. High performance was demonstrated both in solo driving and in racing competitions, with the exception of an increased level of damage; however, the damage level was not very high and did not result in a controller shutdown. Conclusions. Based on the obtained results we conclude that the use of PSO with APF yields high performance at low computational cost.

    Full text (pdf)
    FULLTEXT01
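
A minimal sketch of the Artificial Potential Fields idea underlying entry 4's controller: the goal exerts an attractive force and nearby obstacles exert repulsive forces (the classic formulation), and the car steers along the resulting vector. The gains and influence radius are exactly the kind of parameters the thesis tunes with PSO; the values here are arbitrary placeholders.

```python
# Classic APF force computation (attractive goal + repulsive obstacles).
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, influence=20.0):
    fx = k_att * (goal[0] - pos[0])       # attractive component
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            # repulsive term grows sharply as the obstacle gets close
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

print(apf_force(pos=(0, 0), goal=(100, 0), obstacles=[(10, 2), (30, -3)]))
```
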
  • 5.
    Abghari, Shahrooz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Data Mining Approaches for Outlier Detection Analysis (2020). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Outlier detection is studied and applied in many domains. Outliers arise due to different reasons such as fraudulent activities, structural defects, health problems, and mechanical issues. The detection of outliers is a challenging task that can reveal system faults, fraud, and save people's lives. Outlier detection techniques are often domain-specific. The main challenge in outlier detection relates to modelling the normal behaviour in order to identify abnormalities. The choice of model is important, i.e., an unsuitable data model can lead to poor results. This requires a good understanding and interpretation of the data, the constraints, and the requirements of the domain problem. Outlier detection is largely an unsupervised problem due to the unavailability of labeled data and the fact that labeling data is expensive.

    In this thesis, we study and apply a combination of both machine learning and data mining techniques to build data-driven and domain-oriented outlier detection models. We focus on three real-world application domains: maritime surveillance, district heating, and online media and sequence datasets. We show the importance of data preprocessing as well as feature selection in building suitable methods for data modelling. We take advantage of both supervised and unsupervised techniques to create hybrid methods. 

    More specifically, we propose a rule-based anomaly detection system using open data for the maritime surveillance domain. We exploit sequential pattern mining for identifying contextual and collective outliers in online media data. We propose a minimum spanning tree clustering technique for the detection of groups of outliers in online media and sequence data. We develop a few higher order mining approaches for identifying manual changes and deviating behaviours in heating systems at the building level. The proposed approaches are shown to be capable of explaining the underlying properties of the detected outliers. This can help domain experts narrow down the scope of analysis and understand the reasons for such anomalous behaviours. We also investigate the reproducibility of the proposed models in similar application domains.

    Full text (pdf)
    fulltext
  • 6.
    Abghari, Shahrooz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Data Modeling for Outlier Detection (2018). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis explores data modeling for outlier detection techniques in three different application domains: maritime surveillance, district heating, and online media and sequence datasets. The proposed models are evaluated and validated under different experimental scenarios, taking into account the specific characteristics and setups of the different domains.

    Outlier detection has been studied and applied in many domains. Outliers arise due to different reasons such as fraudulent activities, structural defects, health problems, and mechanical issues. The detection of outliers is a challenging task that can reveal system faults, fraud, and save people's lives. Outlier detection techniques are often domain-specific. The main challenge in outlier detection relates to modeling the normal behavior in order to identify abnormalities. The choice of model is important, i.e., an incorrect choice of data model can lead to poor results. This requires a good understanding and interpretation of the data, the constraints, and the requirements of the problem domain. Outlier detection is largely an unsupervised problem due to the unavailability of labeled data and the fact that labeling data is expensive.

    We have studied and applied a combination of both machine learning and data mining techniques to build data-driven and domain-oriented outlier detection models. We have shown the importance of data preprocessing as well as feature selection in building suitable methods for data modeling. We have taken advantage of both supervised and unsupervised techniques to create hybrid methods. For example, we have proposed a rule-based outlier detection system based on open data for the maritime surveillance domain. Furthermore, we have combined cluster analysis and regression to identify manual changes in heating systems at the building level. Sequential pattern mining for identifying contextual and collective outliers in online media data has also been exploited. In addition, we have proposed a minimum spanning tree clustering technique for the detection of groups of outliers in online media and sequence data. The proposed models have been shown to be capable of explaining the underlying properties of the detected outliers. This can help domain experts narrow down the scope of analysis and understand the reasons for such anomalous behaviors. We have also investigated the reproducibility of the proposed models in similar application domains.

    Full text (pdf)
    fulltext
  • 7.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Multi-view Clustering Analyses for District Heating Substations (2020). In: DATA 2020 - Proceedings of the 9th International Conference on Data Science, Technology and Applications / [ed] Hammoudi S., Quix C., Bernardino J., SciTePress, 2020, pp. 158-168. Conference paper (Refereed).
    Abstract [en]

    In this study, we propose a multi-view clustering approach for mining and analysing multi-view network datasets. The proposed approach is applied and evaluated on a real-world scenario for monitoring and analysing district heating (DH) network conditions and identifying substations with sub-optimal behaviour. Initially, the geographical locations of the substations are used to build an approximate graph representation of the DH network. Two different analyses can further be applied in this context: step-wise and parallel-wise multi-view clustering. The step-wise analysis sequentially considers and analyses the substations with respect to a few different views. At each step, a new clustering solution is built on top of the one generated by the previously considered view, which organizes the substations in a hierarchical structure that can be used for multi-view comparisons. The parallel-wise analysis, on the other hand, provides the opportunity to analyse substations with regard to two different views in parallel. Such an analysis aims to represent and identify the relationships between substations by organizing them in a bipartite graph and analysing the substations' distribution with respect to each view. The proposed data analysis and visualization approach arms domain experts with means for analysing DH network performance. In addition, it facilitates the identification of substations with deviating operational behaviour through comparative analysis with their closely located neighbours.

    Full text (pdf)
    Multi-view Clustering Analyses for District Heating Substations
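
A minimal sketch of the step-wise analysis entry 7 describes: substations are clustered on one view, then re-clustered inside each cluster on the next view, which yields the hierarchical label paths used for multi-view comparison. The two views, sizes, and cluster counts are synthetic placeholders; requires scikit-learn.

```python
# Step-wise multi-view clustering: each view refines the previous partition.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 200
views = [rng.normal(size=(n, 4)),   # e.g. weather-related features
         rng.normal(size=(n, 6))]   # e.g. operational features

def stepwise(views, ks):
    """Return one cluster label per view for every object."""
    paths = [[] for _ in range(n)]
    groups = [np.arange(n)]                    # start with all substations
    for view, k in zip(views, ks):
        next_groups = []
        for idx in groups:
            k_eff = min(k, len(idx))           # tiny groups get fewer clusters
            labels = KMeans(n_clusters=k_eff, n_init=10).fit_predict(view[idx])
            for c in range(k_eff):
                members = idx[labels == c]
                for m in members:
                    paths[m].append(c)
                next_groups.append(members)
        groups = [g for g in next_groups if len(g)]
    return paths

print("substation 0 cluster path across views:", stepwise(views, ks=[3, 2])[0])
```
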
  • 8.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Lavesson, Niklas
    Jönköping University, SWE.
    Higher order mining for monitoring district heating substations (2019). In: Proceedings - 2019 IEEE International Conference on Data Science and Advanced Analytics, DSAA 2019, Institute of Electrical and Electronics Engineers Inc., 2019, pp. 382-391. Conference paper (Refereed).
    Abstract [en]

    We propose a higher order mining (HOM) approach for modelling, monitoring and analyzing district heating (DH) substations' operational behaviour and performance. HOM is concerned with mining over patterns rather than primary or raw data. The proposed approach uses a combination of different data analysis techniques such as sequential pattern mining, clustering analysis, consensus clustering and minimum spanning trees (MST). Initially, a substation's operational behaviour is modeled by extracting weekly patterns and performing clustering analysis. The substation's performance is monitored by assessing its modeled behaviour for every two consecutive weeks. If a significant difference is observed, further analysis is performed by integrating the built models into a consensus clustering and applying an MST to identify deviating behaviours. The results of the study show that our method is robust for detecting deviating and sub-optimal behaviours of DH substations. In addition, the proposed method can assist domain experts in the interpretation and understanding of the substations' behaviour and performance by providing different data analysis and visualization techniques. © 2019 IEEE.

    Full text (pdf)
    Higher Order Mining for Monitoring District Heating Substations
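
A minimal sketch of the week-over-week monitoring step in entry 8: each week's operational data is summarized by a clustering model (here reduced to k-means centroids), and consecutive weeks are compared; a large model distance would trigger the deeper consensus-clustering and MST analysis in the paper. Data, drift, and threshold are illustrative assumptions.

```python
# Compare clustering models of two consecutive weeks (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

def weekly_model(X, k=3):
    return KMeans(n_clusters=k, n_init=10).fit(X).cluster_centers_

def model_distance(c1, c2):
    # average distance from each week-1 centroid to its nearest week-2 centroid
    return np.mean([np.linalg.norm(c - c2, axis=1).min() for c in c1])

rng = np.random.default_rng(1)
week1 = rng.normal(0.0, 1.0, size=(168, 2))   # hourly feature vectors
week2 = rng.normal(0.4, 1.0, size=(168, 2))   # slightly shifted behaviour

d = model_distance(weekly_model(week1), weekly_model(week2))
print(f"model drift: {d:.2f}", "-> investigate further" if d > 0.5 else "-> ok")
```
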
  • 9.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Gustafsson, Jörgen
    Ericsson AB.
    Shaikh, Junaid
    Ericsson AB.
    Outlier Detection for Video Session Data Using Sequential Pattern Mining (2018). In: ACM SIGKDD Workshop On Outlier Detection De-constructed, 2018. Conference paper (Refereed).
    Abstract [en]

    The growth of Internet video and over-the-top transmission techniques has enabled online video service providers to deliver high-quality video content to viewers. To maintain and improve the quality of experience, video providers need to detect unexpected issues that can highly affect the viewers' experience. This requires analyzing massive amounts of video session data in order to find unexpected sequences of events. In this paper we combine sequential pattern mining and clustering to discover such event sequences. The proposed approach applies sequential pattern mining to find frequent patterns by considering contextual and collective outliers. In order to distinguish between the normal and abnormal behavior of the system, we initially identify the most frequent patterns. Then a clustering algorithm is applied on the most frequent patterns. The generated clustering model together with the Silhouette Index are used for further analysis of less frequent patterns and detection of potential outliers. Our results show that the proposed approach can detect outliers at the system level.

    Full text (pdf)
    fulltext
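
A minimal sketch of entry 9's pipeline, with plain n-gram counting standing in for full sequential pattern mining: frequent event subsequences define normal behaviour, a clustering model is built over them, and less frequent patterns far from every cluster are flagged as potential outliers. Events, counts, and thresholds are synthetic placeholders; a Silhouette-based analysis, as in the paper, would replace the fixed distance cutoff used here.

```python
# Frequent-pattern clustering with rare patterns checked against the model.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

sessions = [["play", "buffer", "play", "stop"],
            ["play", "stop"],
            ["play", "buffer", "buffer", "error", "stop"]] * 50

def ngrams(seq, n=2):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

counts = Counter(g for s in sessions for g in ngrams(s))
vocab = sorted({e for s in sessions for e in s})

def embed(pattern):
    v = np.zeros(len(vocab))          # crude event-frequency embedding
    for e in pattern:
        v[vocab.index(e)] += 1
    return v

frequent = [p for p, c in counts.items() if c >= 100]   # "normal" patterns
rare = [p for p, c in counts.items() if c < 100]
km = KMeans(n_clusters=2, n_init=10).fit([embed(p) for p in frequent])

for p in rare:
    d = np.linalg.norm(km.cluster_centers_ - embed(p), axis=1).min()
    if d > 1.0:                       # far from all clusters of normal patterns
        print("potential outlier pattern:", p, f"(distance {d:.2f})")
```
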
  • 10.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ickin, Selim
    Ericsson, SWE.
    Gustafsson, Jörgen
    Ericsson, SWE.
    A Minimum Spanning Tree Clustering Approach for Outlier Detection in Event Sequences (2018). In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA) / [ed] Wani M.A., Sayed-Mouchaweh M., Lughofer E., Gama J., Kantardzic M., IEEE, 2018, pp. 1123-1130, article id 8614207. Conference paper (Refereed).
    Abstract [en]

    Outlier detection has been studied in many domains. Outliers arise due to different reasons such as mechanical issues, fraudulent behavior, and human error. In this paper, we propose an unsupervised approach for outlier detection in a sequence dataset. The proposed approach combines sequential pattern mining, cluster analysis, and a minimum spanning tree algorithm in order to identify clusters of outliers. Initially, the sequential pattern mining is used to extract frequent sequential patterns. Next, the extracted patterns are clustered into groups of similar patterns. Finally, the minimum spanning tree algorithm is used to find groups of outliers. The proposed approach has been evaluated on two different real datasets, i.e., smart meter data and video session data. The obtained results have shown that our approach can be applied to narrow down the space of events to a set of potential outliers and facilitate domain experts in further analysis and identification of system level issues.
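
A minimal sketch of the MST step entry 10 describes, using SciPy: build a minimum spanning tree over pairwise pattern distances, remove edges above a threshold, and treat small disconnected components as groups of outliers. The 2-D points stand in for embedded sequential patterns, and the cutoff and size limit are illustrative assumptions.

```python
# MST clustering: cut long edges, report small components as outlier groups.
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
points = np.vstack([rng.normal(0, 0.3, (40, 2)),    # dense "normal" group
                    rng.normal(5, 0.3, (3, 2))])    # small far-away group

mst = minimum_spanning_tree(squareform(pdist(points))).toarray()
mst[mst > 1.5] = 0                                  # cut edges longer than cutoff

n_comp, labels = connected_components(mst, directed=False)
sizes = np.bincount(labels)
for comp in np.where(sizes <= 5)[0]:                # small components = outliers
    print("outlier group (point indices):", np.where(labels == comp)[0])
```
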

  • 11.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Kazemi, Samira
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Open Data for Anomaly Detection in Maritime Surveillance (2012). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Maritime Surveillance (MS) has received increased attention from a civilian perspective in recent years. Anomaly detection (AD) is one of the many techniques available for improving safety and security in the MS domain. Maritime authorities utilize various confidential data sources for monitoring maritime activities; however, a paradigm shift on the Internet has created new sources of data for MS. These newly identified data sources, which provide publicly accessible data, are the open data sources. Taking advantage of the open data sources in addition to the traditional sources of data in the AD process will increase the accuracy of MS systems. Objectives: The goal is to investigate the potential of open data as a complementary resource for AD in the MS domain. To achieve this goal, the first step is to identify the applicable open data sources for AD. Then, a framework for AD based on the integration of open and closed data sources is proposed. Finally, according to the proposed framework, an AD system with the ability to use open data sources is developed, and the accuracy of the system and the validity of its results are evaluated. Methods: In order to measure the system accuracy, an experiment is performed by means of a two-stage random sampling on the vessel traffic data, and the number of true/false positive and negative alarms in the system is verified. To evaluate the validity of the system results, the system is used for a period of time by subject matter experts from the Swedish Coastguard. The experts check the detected anomalies against the available data at the Coastguard in order to obtain the number of true and false alarms. Results: The experimental outcomes indicate that the accuracy of the system is 99%. In addition, the Coastguard validation results show that among the evaluated anomalies, 64.47% are true alarms, 26.32% are false alarms, and 9.21% belong to vessels that remained unchecked due to the lack of corresponding data in the Coastguard data sources. Conclusions: This thesis concludes that using open data as a complementary resource for detecting anomalous behavior in the MS domain is not only feasible but will also improve the efficiency of surveillance systems by increasing their accuracy and covering otherwise unseen aspects of maritime activities.

    Full text (pdf)
    FULLTEXT01
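
A small worked example of the evaluation arithmetic reported in entry 11. The confusion-matrix counts are invented so that they reproduce the reported percentages (99% accuracy; 64.47% / 26.32% / 9.21% in the expert validation); they are not the thesis's raw data.

```python
# Reconstructing the reported evaluation figures from hypothetical counts.
tp, fp, tn, fn = 49, 26, 9852, 73                  # hypothetical confusion counts
accuracy = (tp + tn) / (tp + fp + tn + fn)
print(f"system accuracy: {accuracy:.2%}")          # ~99%, as reported

validated = {"true alarms": 49, "false alarms": 20, "unchecked vessels": 7}
total = sum(validated.values())                    # 76 evaluated anomalies
for kind, count in validated.items():
    print(f"{kind}: {count / total:.2%}")          # 64.47% / 26.32% / 9.21%
```
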
  • 12.
    Abrahamsson, Charlotte
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Wessman, Mattias
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    WLAN Security: IEEE 802.11b or Bluetooth - which standard provides best security methods for companies? (2004). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    Which security holes and security methods do IEEE 802.11b and Bluetooth offer? Which standard provides the best security methods for companies? These are the two questions that this thesis addresses. The purpose is to give companies more information about the security aspects that come with using WLANs. An introduction to the subject of WLANs is presented to give an overview before the description of the two WLAN standards, IEEE 802.11b and Bluetooth. The thesis gives an overview of how IEEE 802.11b and Bluetooth work and presents an in-depth description of the security issues of the two standards; the security methods available for companies, the security flaws, and what can be done to create a secure WLAN are all important aspects of this thesis. To give guidance on which WLAN standard to choose, the two standards are compared with the security issues in mind, from a company's point of view. We present our conclusion, which entails a recommendation that companies use Bluetooth over IEEE 802.11b, since it offers better security methods.

    Full text (pdf)
    FULLTEXT01
  • 13.
    Abualhana, Munther
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Tariq, Ubaid
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Improving QoE over IPTV using FEC and Retransmission (2009). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    IPTV (Internet Protocol Television), a new and modern concept in emerging technologies focused on providing cutting-edge high-resolution television, broadcast, and other services, is now easily available, with a high-speed Internet connection as the only requirement. Every time a new technology is brought to market, it faces tremendous problems, whether from a technological point of view (enhancing performance) or when it comes to satisfying customers. This cutting-edge technology has given researchers the opportunity to experiment with different tools to provide better quality while building on existing tools. Our target in this dissertation is to present a few interesting facets of IPTV and to introduce the concept of an imaginary cache that collects the packets travelling from the streaming server to the end user. This cache would be fixed in the access node; on the basis of certain assumptions from prior research, we can then determine how quickly retransmission can take place when the end user responds using the RTCP protocol and asks for retransmission of corrupted/lost packets. In the last section, we present our scenario with the streaming server on one side and the client (end user) on the other, and make assessments based on throughput, response time, and traffic.

    Full text (pdf)
    FULLTEXT01
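
A minimal sketch of the "imaginary cache" idea in entry 13: the access node keeps the most recent packets of a stream, and when the client reports a lost or corrupted sequence number (e.g. via RTCP feedback), the cache retransmits locally instead of going back to the streaming server. The class, capacity, and payloads are illustrative assumptions.

```python
# Access-node retransmission cache keyed by packet sequence number.
from collections import OrderedDict

class RetransmissionCache:
    def __init__(self, capacity=512):
        self.capacity = capacity
        self.packets = OrderedDict()          # seq number -> payload

    def store(self, seq, payload):
        self.packets[seq] = payload
        if len(self.packets) > self.capacity:
            self.packets.popitem(last=False)  # evict the oldest packet

    def handle_nack(self, seq):
        # fast local retransmission, or None to fall back to the server
        return self.packets.get(seq)

cache = RetransmissionCache()
for seq in range(1000):
    cache.store(seq, b"video-chunk-%d" % seq)
print(cache.handle_nack(990) is not None)     # True: still cached locally
print(cache.handle_nack(100) is not None)     # False: evicted, ask the server
```
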
  • 14.
    Adamala, Szymon
    et al.
    Blekinge Tekniska Högskola, Sektionen för management.
    Cidrin, Linus
    Blekinge Tekniska Högskola, Sektionen för management.
    Key Success Factors in Business Intelligence (2011). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Business Intelligence can bring critical capabilities to an organization, but the implementation of such capabilities is often plagued with problems and issues. Why is it that certain projects fail while others succeed? The theoretical problem and the aim of this thesis is to identify the factors that are present in successful Business Intelligence projects and to organize them into a framework of critical success factors. A survey was conducted during the spring of 2011 to collect primary data on Business Intelligence projects. It was directed at a number of professionals operating in the Business Intelligence field in large enterprises, primarily located in Poland and primarily vendors; however, given the similarity of Business Intelligence initiatives across countries and the increasing globalization of large enterprises, the conclusions of this thesis may well be relevant and applicable to projects conducted in other countries. The findings confirm that Business Intelligence projects wrestle with both technological and non-technological problems, but the non-technological problems are found to be harder to solve as well as more time-consuming than their technological counterparts. The thesis also shows that critical success factors for Business Intelligence projects differ from success factors for IS projects in general, and that Business Intelligence projects have critical success factors that are unique to the subject matter. Major differences can be found predominantly among the non-technological factors, such as the presence of a specific business need to be addressed by the project and a clear vision to guide the project. Results show that successful projects have specific factors present more frequently than non-successful ones. Factors with large differences include the type of project funding, the business value provided by each iteration of the project, and the alignment of the project to a strategic vision for Business Intelligence. Furthermore, the thesis provides a framework of critical success factors that, according to the results of the study, explains 61% of the variability of project success. Given these findings, managers responsible for introducing Business Intelligence capabilities should focus on a number of non-technological factors to increase the likelihood of project success. Areas which should be given special attention are: making sure that the Business Intelligence solution is built with end users in mind, that the Business Intelligence solution is closely tied to the company's strategic vision, and that the project is properly scoped and prioritized to concentrate on the best opportunities first.
    Keywords: Critical Success Factors, Business Intelligence, Enterprise Data Warehouse Projects, Success Factors Framework, Risk Management

    Full text (pdf)
    FULLTEXT01
  • 15.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radio Electronics, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Reinforcement Learning for Anti-Ransomware Testing (2020). In: 2020 IEEE East-West Design and Test Symposium, EWDTS 2020 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2020, article id 9225141. Conference paper (Refereed).
    Abstract [en]

    In this paper, we verify the possibility of creating a ransomware simulation that uses an arbitrary combination of known tactics and techniques to bypass an anti-malware defense. To verify this hypothesis, we conducted an experiment in which an agent was trained with the help of reinforcement learning to run the ransomware simulator in a way that can bypass an anti-ransomware solution and encrypt the target files. The novelty of the proposed method lies in applying reinforcement learning to anti-ransomware testing, which may help to identify weaknesses in the anti-ransomware defense and fix them before a real attack happens. © 2020 IEEE.

    Full text (pdf)
    fulltext
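
A minimal, defanged sketch of the training-loop idea in entry 15, reduced to a one-step (bandit-style) Q-learning update: the agent picks combinations of simulated evasion techniques, a stubbed environment reports whether the simulator "bypassed" the defense, and the agent learns which combination maximizes that reward. Technique names and the environment response are placeholders; nothing here touches real files or defenses.

```python
# Tabular bandit-style learner over combinations of simulated techniques.
import itertools, random

TECHNIQUES = ["rename_ext", "partial_encrypt", "delay_start", "child_proc"]
ACTIONS = [frozenset(c) for r in range(1, 3)
           for c in itertools.combinations(TECHNIQUES, r)]

def environment(action):
    """Stubbed defense: pretend exactly one combination slips through."""
    return 1.0 if action == frozenset({"partial_encrypt", "delay_start"}) else 0.0

q = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
for episode in range(2000):
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (environment(a) - q[a])       # one-step value update

best = max(q, key=q.get)
print("learned combination:", sorted(best), "value:", round(q[best], 2))
```
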
  • 16.
    Adamov, Alexander
    et al.
    NioGuard Security Lab, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Surmacz, Tomasz
    Wrocław University of Science and Technology, POL.
    An analysis of LockerGoga ransomware (2019). In: 2019 IEEE East-West Design and Test Symposium, EWDTS 2019, Institute of Electrical and Electronics Engineers Inc., 2019. Conference paper (Refereed).
    Abstract [en]

    This paper contains an analysis of the LockerGoga ransomware that was used in a range of targeted cyberattacks in the first half of 2019 against Norsk Hydro, a world top-5 aluminum manufacturer, as well as the US chemical enterprises Hexion and Momentive; those companies are only the tip of the iceberg of those that reported the attack to the public. The ransomware was executed by attackers from inside a corporate network to encrypt the data on enterprise servers, thus taking down the information control systems. The intruders asked for a ransom to release a master key and a decryption tool that could be used to decrypt the affected files. The purpose of the analysis is to find out the tactics and techniques used by the LockerGoga ransomware during the cryptolocker attack, as well as its encryption model, to answer the question of whether the encrypted files can be decrypted with or without paying a ransom. The scientific novelty of the paper lies in an analysis methodology that is based on various reverse engineering techniques, such as multi-process debugging and using the open source code of a cryptographic library, to find out a ransomware's encryption model. © 2019 IEEE.
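
For context on the question entry 16 asks (can files be decrypted without paying?), the sketch below shows the generic hybrid "cryptolocker" encryption model: each file gets a fresh symmetric key, which is wrapped with the attacker's RSA public key, so recovery requires the attacker's private master key. This is the common scheme, not necessarily LockerGoga's exact implementation. Requires the third-party `cryptography` package and operates only on an in-memory buffer.

```python
# Generic hybrid file-encryption model (per-file AES key wrapped with RSA).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

attacker_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
attacker_pub = attacker_priv.public_key()   # only this part ships with the malware

def encrypt_blob(data: bytes):
    file_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, data, None)
    wrapped_key = attacker_pub.encrypt(file_key, OAEP)   # only attacker can unwrap
    return nonce, ciphertext, wrapped_key

nonce, ct, wk = encrypt_blob(b"quarterly-report")
# Recovery is only possible with the RSA private key (the "master key"):
file_key = attacker_priv.decrypt(wk, OAEP)
print(AESGCM(file_key).decrypt(nonce, ct, None))
```
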

  • 17. Adams, Liz
    et al.
    Börstler, Jürgen
    What It's Like to Participate in an ITiCSE Working Group (2011). In: ACM SIGCSE Bulletin, Vol. 43, no. 1. Journal article (Other academic).
  • 18.
    Adebomi, Oyekanlu Emmanuel
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Mwela, John Samson
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Impact of Packet Losses on the Quality of Video Streaming (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    In this thesis, the impact of packet losses on the quality of received videos sent across a network that exhibits normal network perturbations, such as jitter, delays, and packet drops, has been examined. The dynamic behavior of a normal network was simulated using Linux and the Network Emulator (NetEm). People's perceptions of the quality of the received video were used to rate the quality of several videos of differing speeds. In accordance with the ITU's guideline of using Mean Opinion Scores (MOS), the effects of packet drops were analyzed. Excel and Matlab were used as tools in analyzing people's opinions, which indicate the impact that different loss rates have on the transmitted videos. The statistical methods used for the evaluation of the data are the mean and the variance. We conclude that people's opinions converge when losses become extremely high on videos with highly variable scene changes.

    Full text (pdf)
    FULLTEXT01
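
A small worked example of the evaluation described in entry 18: Mean Opinion Scores (1-5) collected per packet-loss rate and summarized by mean and variance, the two statistics the thesis uses. The ratings below are invented.

```python
# MOS summary statistics per packet-loss rate (synthetic ratings).
import statistics

mos_by_loss_rate = {          # loss rate -> viewer ratings on a 1..5 scale
    0.00: [5, 5, 4, 5, 4],
    0.01: [4, 4, 3, 4, 4],
    0.05: [3, 2, 3, 2, 3],
    0.10: [2, 1, 2, 1, 1],
}
for rate, scores in mos_by_loss_rate.items():
    print(f"loss {rate:4.0%}: MOS {statistics.mean(scores):.2f} "
          f"(variance {statistics.variance(scores):.2f})")
```
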
  • 19.
    Adeopatoye, Remilekun
    et al.
    Federal University of Technology, Nigeria.
    Ikuesan, Richard Adeyemi
    Zayed University, United Arab Emirates.
    Sookhak, Mehdi
    Texas A&M University, United States.
    Hungwe, Taurai
    Sefako Makgatho University of Health Sciences, South Africa.
    Kebande, Victor R.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Towards an Open-Source Based E-Mail Forensic Tool that uses Headers in Digital Investigation (2023). In: ACM International Conference Proceeding Series, ACM Digital Library, 2023. Conference paper (Refereed).
    Abstract [en]

    Email-related incidents and crimes are on the rise, owing to the fact that communication by electronic mail (e-mail) has become an important part of our daily lives. The technicalities behind e-mail play an important role when looking for digital evidence that can be used to form a hypothesis for litigation. During this process, it is necessary to have a tool that can help to isolate an email incident as a potential crime scene in the wake of a suspected attack. The problem that this paper addresses is centered on realizing an open-source email forensic tool that uses the header-analysis approach. One advantage of this approach is that it helps investigators to collect digital evidence from e-mail systems, organize the collected data, analyze and discover discrepancies in the header fields of an e-mail, and generate an evidence report. The main contribution of this paper is a freshly computed hash that is attached to every generated report, to ensure the verifiability, reliability, and integrity of the reports and to prove that they have not been modified in any way. This ensures that the sanctity and forensic soundness of the collected evidence are maintained. © 2023 ACM.
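
A minimal sketch of the two mechanisms entry 19 combines: extracting header fields relevant to an investigation (e.g. a Reply-To that contradicts From) and attaching a freshly computed hash to the generated report so its integrity can later be verified. The message and finding rule are illustrative assumptions.

```python
# Header extraction plus an integrity hash over the generated report.
import hashlib
from email import message_from_string

raw = """\
Received: from mail.example.org (203.0.113.7)
From: Alice <alice@example.org>
Reply-To: attacker@example.net
Subject: Invoice
Message-ID: <abc123@example.org>

Please see the attachment.
"""

msg = message_from_string(raw)
fields = ["From", "Reply-To", "Received", "Message-ID", "Subject"]
report_lines = [f"{f}: {msg.get(f)}" for f in fields]
if msg.get("Reply-To") and msg.get("Reply-To") != msg.get("From"):
    report_lines.append("FINDING: Reply-To differs from From (possible spoofing)")

report = "\n".join(report_lines)
digest = hashlib.sha256(report.encode()).hexdigest()
print(report)
print("report-sha256:", digest)   # stored with the report for later verification
```
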

  • 20.
    Adeyinka, Oluwaseyi
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för för interaktion och systemdesign.
    Service Oriented Architecture & Web Services: Guidelines for Migrating from Legacy Systems and Financial Consideration (2008). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    The purpose of this study is to present guidelines that can be followed when introducing service-oriented architecture through the use of Web services. These guidelines will be especially useful for organizations migrating from existing legacy systems, where the need also arises to consider the financial implications of such an investment and whether it is worthwhile. The proposed implementation guide aims at increasing the chances of IT departments in organizations to ensure a successful integration of SOA into their systems and to secure strong financial commitment from executive management. Service-oriented architecture is a new concept, a new way of looking at a system, which has emerged in the IT world and can be implemented by several methods, of which Web services is one platform. Since it is a developing technology, organizations need to be cautious about how they implement it to obtain maximum benefit. Though a well-designed service-oriented environment can simplify and streamline many aspects of information technology and business, achieving this state is not an easy task. Traditionally, management finds it very difficult to justify the considerable cost of modernization, let alone shoulder the risk, without achieving some benefits in terms of business value. The study identifies common best practices for implementing SOA and using Web services, and steps for successfully migrating from legacy systems to componentized or service-enabled systems. The study also identifies how to present the financial return on investment and business benefits to management in order to secure the necessary funds. This master thesis is based on an academic literature study, professional research journals and publications, and interviews with business organizations currently working on service-oriented architecture. I present guidelines that can assist migration from legacy systems to service-oriented architecture, based on the analysis and comparison of the information sources mentioned above.

    Full text (pdf)
    FULLTEXT01
  • 21.
    Adigun, Jubril Gbolahan
    et al.
    University of Innsbruck, AUT.
    Camilli, Matteo
    Free University of Bozen–Bolzano, ITA.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Giusti, Andrea
    Fraunhofer Italia Research, ITA.
    Matt, Dominik T.
    Free University of Bozen–Bolzano, ITA.
    Perini, Anna
    University of Trento, ITA.
    Russo, Barbara
    Free University of Bozen–Bolzano, ITA.
    Susi, Angelo
    Fondazione Bruno Kessler, ITA.
    Collaborative Artificial Intelligence Needs Stronger Assurances Driven by Risks (2022). In: Computer, ISSN 0018-9162, E-ISSN 1558-0814, Vol. 55, no. 3, pp. 52-63. Journal article (Refereed).
    Abstract [en]

    Collaborative artificial intelligence systems (CAISs) aim to work with humans in a shared space to achieve a common goal, but this can pose hazards that could harm human beings. We identify emerging problems in this context and report our vision of and progress toward a risk-driven assurance process for CAISs.

  • 22.
    Adolfsson, Victor
    Blekinge Tekniska Högskola, Institutionen för ekonomi och samhällsvetenskap.
    Säkerhetskapital: En del av det Intellektuella Kapitalet [Security capital: a part of the intellectual capital] (2002). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [sv]

    There is a lack of methods for measuring information security within companies, and company assets have shifted from a focus on machines and raw materials to knowledge (intellectual capital). The report explores whether there are parts of a company's intellectual capital that protect the company's assets and processes. This capital is called security capital. How could a company's information security be made visible through its intellectual capital, and how can concepts from information security and company valuation be related? The purpose of the thesis is to increase the understanding of how information security relates to intellectual capital. The report is based on literature studies of intellectual capital and information security. Data was collected partly from the annual reports of listed companies and partly from press releases and stock exchange information. This information was then analyzed both quantitatively and qualitatively, and the concept of security capital emerged. Theories of company valuation, intellectual capital, risk management, and information security are presented and form the frame of reference in which the concept of security capital is put into context. The concept of security capital is presented in the form of models and situations in which different perspectives on security capital are analyzed and evaluated. The conclusions are mainly in the form of models and descriptions of how security capital can be viewed in relation to intellectual capital and other concepts. The area is complex, but parts of the results (which are at a high level of abstraction) can be used to value other types of intangible assets.

    Full text (pdf)
    FULLTEXT01
  • 23.
    Adolfsson, Victor
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    The State of the Art in Distributed Mobile Robotics (2001). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Distributed Mobile Robotics (DMR) is a multidisciplinary research area with many open research questions. This is a survey of the state of the art in Distributed Mobile Robotics research. DMR is sometimes referred to as cooperative robotics or multi-robotic systems. DMR is about how multiple robots can cooperate to achieve goals and complete tasks better than single robot systems. It covers architectures, communication, learning, exploration and many other areas presented in this master thesis.

    Full text (pdf)
    FULLTEXT01
  • 24.
    Adurti, Devi Abhiseshu
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Battu, Mohit
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Optimization of Heterogeneous Parallel Computing Systems using Machine Learning (2021). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background: Heterogeneous parallel computing systems utilize a combination of different resources, CPUs and GPUs, to achieve high performance, reduced latency, and lower energy consumption. Programming applications that target various processing units requires employing different tools and programming models/languages. Furthermore, selecting the most optimal implementation, which may either target different processing units (i.e., CPU or GPU) or implement different algorithms, is not trivial for a given context. In this thesis, we investigate the use of machine learning to address the problem of selecting among implementation variants for an application running on a heterogeneous system.

    Objectives: This study is focused on providing an approach for optimization of heterogeneous parallel computing systems at runtime by building the most efficient machine learning model to predict the optimal implementation variant of an application.

    Methods: Six machine learning models, KNN, XGBoost, DTC, Random Forest Classifier, LightGBM, and SVM, are trained and tested using stratified k-fold cross-validation on a dataset generated from a matrix multiplication application, for square matrix input dimensions ranging from 16x16 to 10992x10992.

    Results: The findings for each machine learning algorithm are presented through its accuracy, confusion matrix, and a classification report covering precision, recall, and F1 score, and a comparison between the machine learning models in terms of accuracy, training time, and prediction time is provided to determine the best model.

    Conclusions: The XGBoost, DTC, and SVM algorithms achieved 100% accuracy. In comparison to the other machine learning models, the DTC is found to be the most suitable due to its low training and prediction times when predicting the optimal implementation variant of the heterogeneous system application. Hence the DTC is the algorithm best suited to the optimization of heterogeneous parallel computing.

    Full text (pdf)
    fulltext
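
A minimal sketch of the evaluation entry 24 describes: several classifiers compared with stratified k-fold cross-validation on a dataset whose label is the optimal implementation variant for a given input size. The synthetic dataset and decision rule stand in for the thesis's matrix-multiplication measurements; requires scikit-learn.

```python
# Stratified k-fold comparison of variant-selection classifiers.
import time
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.integers(16, 10992, size=(500, 1)).astype(float)  # matrix dimension
y = (X[:, 0] > 1024).astype(int)   # pretend the GPU variant wins above 1024

models = {"DTC": DecisionTreeClassifier(),
          "KNN": KNeighborsClassifier(),
          "SVM": SVC()}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    t0 = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: accuracy {scores.mean():.3f} "
          f"(evaluation took {time.perf_counter() - t0:.2f}s)")
```
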
  • 25.
    Aftarczuk, Kamila
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems (2007). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    The goal of this master's thesis is to identify and evaluate data mining algorithms which are commonly implemented in modern Medical Decision Support Systems (MDSS). These are used in various healthcare units all over the world. Such institutions store large amounts of medical data, which may contain relevant medical information hidden in various patterns buried among the records. In this research, several popular MDSSs are analyzed in order to determine the most common data mining algorithms utilized by them. Three algorithms have been identified: Naïve Bayes, Multilayer Perceptron, and C4.5. Prior to the analyses, the algorithms are calibrated: several configurations are tested in order to determine the best settings for the algorithms. Afterwards, a final comparison orders the algorithms with respect to their performance, based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology disease, and diabetes. The analyses have shown that it is very difficult to name a single data mining algorithm as the most suitable for medical data, as the results obtained for the algorithms were very similar. However, the final evaluation of the outcomes allowed the Naïve Bayes to be singled out as the best classifier for the given domain, followed by the Multilayer Perceptron and the C4.5.

    Full text (pdf)
    FULLTEXT01
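
A minimal sketch of the comparison entry 25 describes, with scikit-learn stand-ins for the WEKA classifiers (GaussianNB for Naïve Bayes, MLPClassifier for the Multilayer Perceptron, and an entropy-based decision tree as a rough C4.5 analogue) on a bundled medical dataset rather than the five UCI sets used in the thesis.

```python
# Cross-validated comparison of the three classifier families named above.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
classifiers = {
    "Naive Bayes": GaussianNB(),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)    # 10-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```
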
  • 26. Afzal, Wasif
    Search-based approaches to software fault prediction and software testing (2009). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Therefore, efficient and cost-effective software verification and validation activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, when to stop testing, testing schedules, and testing resource allocation need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making important decisions as outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension of the application of search-based techniques within software verification and validation, this thesis further investigates the extent to which search-based techniques are applied for testing non-functional system properties. Based on the research findings in this thesis, it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally seek literature evidence on where other search-based techniques are applied for testing non-functional system properties, hence contributing to the growing application of search-based techniques in diverse activities within software verification and validation.

    Full text (pdf)
    FULLTEXT01
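
A minimal sketch of the symbolic-regression idea evaluated in entry 26: genetic programming evolves a function fitting a fault-count series with no pre-assumed model structure. Uses the third-party gplearn library; the weekly cumulative fault counts below are invented for illustration.

```python
# GP symbolic regression over a (synthetic) cumulative fault-count series.
import numpy as np
from gplearn.genetic import SymbolicRegressor

weeks = np.arange(1, 21).reshape(-1, 1)
faults = np.array([5, 9, 14, 17, 22, 24, 29, 30, 34, 35,
                   38, 39, 41, 42, 44, 44, 46, 47, 47, 48])

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=("add", "sub", "mul", "div", "log"),
                       random_state=0)
gp.fit(weeks, faults)
print("evolved model:", gp._program)
print("predicted faults, week 21:", gp.predict([[21]])[0])
```
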
  • 27. Afzal, Wasif
    Search-Based Prediction of Software Quality: Evaluations and Comparisons (2011). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs when developing software. Therefore, efficient and effective software V&V activities are both a priority and a necessity considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules, and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting the reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, as with many real-world problems, software V&V tasks can be formulated as optimization problems where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and conducting comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, across a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable techniques to use in supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision support for effective management of software V&V activities.

    Full text (pdf)
    FULLTEXT01
  • 28.
    Afzal, Wasif
    Blekinge Tekniska Högskola.
    Using faults-slip-through metric as a predictor of fault-proneness (2010). In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC, IEEE, 2010. Conference paper (Refereed).
    Abstract [en]

    The majority of software faults are present in a small number of modules; therefore, accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches, including a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks), and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects from the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, GP showed impressive results in comparison with the other techniques for predicting fault-prone modules at both the integration and system test levels, and its accuracy is statistically significant in comparison with the majority of the techniques. The use of the faults-slip-through metric in general provided good prediction results at the two test levels, so the metric has the potential to be a generally useful predictor of fault-proneness at the integration and system test levels.

    Fulltekst (pdf)
    fulltext
  • 29. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A systematic review of search-based testing for non-functional system properties2009Inngår i: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, nr 6, s. 957-976Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising because exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work in non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process and published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on the different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.
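
    To make one recurring pattern from the review concrete: the following is a toy sketch, under invented assumptions (a stand-in program whose runtime depends on its input, and a simple hill climber standing in for, e.g., simulated annealing), of execution-time testing where measured runtime is the fitness function steering the search toward worst-case inputs.

        # Worst-case execution time search: fitness = measured runtime, so the
        # metaheuristic climbs toward inputs that make the program slowest.
        import random
        import time

        def software_under_test(x: int) -> None:
            # Toy stand-in: runtime grows with the number of set bits in x.
            for _ in range(bin(x).count("1") * 1000):
                pass

        def fitness(x: int) -> float:
            start = time.perf_counter()
            software_under_test(x)
            return time.perf_counter() - start  # longer runtime = fitter

        random.seed(3)
        best_x = random.randrange(2 ** 16)
        best_f = fitness(best_x)
        for _ in range(200):
            neighbor = best_x ^ (1 << random.randrange(16))  # flip one input bit
            f = fitness(neighbor)
            if f >= best_f:
                best_x, best_f = neighbor, f
        print(f"slowest input found: {best_x:#018b} ({best_f * 1e6:.0f} us)")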

  • 30. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Search-based prediction of fault count data2009Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, like matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantage that the evolution does not depend on a particular structure of the model and is also independent of any assumptions, which are common in traditional time-domain parametric software reliability growth models. This research applies genetic programming to fault count prediction in a series of experiments and compares the results with traditional approaches to assess efficiency gains.
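
    A toy illustration of the idea rather than the thesis implementation: random search over small expression trees (standing in for GP's crossover and mutation operators) to fit a made-up cumulative fault count series.

        # Symbolic-regression flavour: search over expression trees built from
        # x, constants, + and *, scored by mean squared error on the data.
        import random

        random.seed(7)
        weeks = list(range(1, 11))
        faults = [3, 7, 12, 15, 19, 21, 24, 25, 27, 28]  # invented fault counts

        OPS = [("add", lambda a, b: a + b), ("mul", lambda a, b: a * b)]

        def random_expr(depth=3):
            """Build a random expression tree over x and constants."""
            if depth == 0 or random.random() < 0.3:
                return ("x",) if random.random() < 0.5 else ("const", random.uniform(0, 5))
            name, fn = random.choice(OPS)
            return (name, fn, random_expr(depth - 1), random_expr(depth - 1))

        def evaluate(expr, x):
            if expr[0] == "x":
                return x
            if expr[0] == "const":
                return expr[1]
            return expr[1](evaluate(expr[2], x), evaluate(expr[3], x))

        def mse(expr):
            return sum((evaluate(expr, w) - f) ** 2
                       for w, f in zip(weeks, faults)) / len(weeks)

        best = min((random_expr() for _ in range(20000)), key=mse)
        print("best expression's MSE:", round(mse(best), 2))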

    Fulltekst (pdf)
    FULLTEXT01
  • 31.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola.
    Torkar, Richard
    Blekinge Tekniska Högskola.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Genetic programming for cross-release fault count predictions in large and complex software projects2010Inngår i: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica, IGI Global, Hershey, USA , 2010Kapittel i bok, del av antologi (Fagfellevurdert)
    Abstract [en]

    Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs due to faults. Predicting faults might be even more important with the emergence of short-timed and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated that there is a need i) to improve the validity of results by having comparisons among a number of data sets from a variety of software, ii) to use appropriate model evaluation measures and iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet received sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together have several years of development and are from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets. It also has comparatively less model bias. Qualitatively, ease of configuration and complexity are weaker points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria while remaining average on most of the qualitative measures.
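
    A hedged sketch, on fabricated data, of what cross-release evaluation means in practice: fit a model to release N's weekly fault counts, then measure its absolute relative error on release N+1 instead of evaluating within a single release.

        # Cross-release evaluation: train on one release, test on the next.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        weeks = np.arange(1, 21).reshape(-1, 1)
        release_n = 30 * (1 - np.exp(-0.15 * weeks.ravel())) + rng.normal(0, 1, 20)
        release_n1 = 35 * (1 - np.exp(-0.12 * weeks.ravel())) + rng.normal(0, 1, 20)

        model = LinearRegression().fit(weeks, release_n)  # fit on release N only
        pred = model.predict(weeks)                       # predict release N+1
        are = np.abs(pred - release_n1) / release_n1      # absolute relative error
        print("mean absolute relative error:", round(float(are.mean()), 3))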

  • 32.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola.
    Torkar, Richard
    Blekinge Tekniska Högskola.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wikstrand, Greger
    KnowIT YAHM Sweden AB, SWE.
    Search-based prediction of fault-slip-through in large software projects2010Inngår i: Proceedings - 2nd International Symposium on Search Based Software Engineering, SSBSE 2010, IEEE , 2010, s. 79-88Konferansepaper (Fagfellevurdert)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in the software testing process. Therefore, determining which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through the unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.
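
    For intuition about the metric itself (all numbers invented): faults-slip-through counts faults found in a later phase than the phase where they should have been found, and the slipped share per phase is one way to quantify improvement potential.

        # Rows: phase where the fault *should* have been found; columns: phase
        # where it *was* found. Off-diagonal mass is slippage.
        fst = {
            "unit":        {"unit": 40, "function": 10, "integration": 6, "system": 4},
            "function":    {"function": 25, "integration": 8, "system": 3},
            "integration": {"integration": 15, "system": 5},
            "system":      {"system": 10},
        }

        for should, found_in in fst.items():
            total = sum(found_in.values())
            slipped = total - found_in.get(should, 0)
            print(f"{should}: {slipped}/{total} faults slipped "
                  f"({100 * slipped / total:.0f}% improvement potential)")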

  • 33.
    Agardh, Johannes
    et al.
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Johansson, Martin
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Pettersson, Mårten
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Designing Future Interaction with Today's Technology1999Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    Information Technology plays an increasing part in our lives. In this thesis we discuss how technology can relate to humans and human activity. We take our starting point in concepts like Calm Technology and Tacit Interaction and examine how these visions and concepts can be used in the process of designing an artifact for a real work practice. We have done workplace studies of truck drivers and traffic leaders regarding how they find their way to the right addresses, and we design a truck navigation system that aims to suit the truck drivers' work practice.

  • 34.
    Agushi, Camrie
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Innovation inom Digital Rights Management2005Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    The thesis deals with the topic of Digital Rights Management (DRM), more specifically the innovation trends within DRM. It focuses on three driving forces of DRM: firstly, DRM technologies; secondly, DRM standards; and thirdly, DRM interoperability. These driving forces are discussed and analyzed in order to explore innovation trends within DRM. In the end, a multi-faceted overview of today's DRM context is formed. One conclusion is that the aspect of Intellectual Property Rights is considered to be an important indicator of the direction in which DRM innovation is heading.

    Fulltekst (pdf)
    FULLTEXT01
  • 35.
    Ahl, Viggo
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    An experimental comparison of five prioritization methods: Investigating ease of use, accuracy and scalability2005Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    Requirements prioritization is an important part of developing the right product at the right time. There are different ideas about which method is the best to use when prioritizing requirements. This thesis takes a closer look at five different methods and then puts them into a controlled experiment, in order to find out which of the methods would be the best to use. The experiment was designed to find out which method yields the most accurate result, each method's ability to scale up to many more requirements, the time it took to prioritize with the method, and finally how easy the method was to use. These four criteria combined indicate which method is the most suitable, i.e. the best method, to use for prioritizing requirements. The chosen methods are the well-known analytic hierarchy process, the computer algorithm binary search tree, and, from the ideas of extreme programming, the planning game. The fourth method is an old but well-used method, the 100 points method. The last method is a new method, which combines the planning game with the analytic hierarchy process. Analysis of the data from the experiment indicates that the planning game combined with the analytic hierarchy process could be a good candidate. However, the results from the experiment clearly indicate that the binary search tree yields accurate results, is able to scale up and was the easiest method to use. For these three reasons the binary search tree is clearly the better method to use for prioritizing requirements.
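
    A minimal sketch of the winning method: each insertion into a binary search tree asks a pairwise "is the new requirement more important than this one?" question, and an in-order traversal then reads out the full priority order. Here the human judgement is simulated with numeric weights; in the experiment it would be an evaluator's answer.

        # Binary-search-tree prioritization of requirements.
        class Node:
            def __init__(self, req):
                self.req, self.left, self.right = req, None, None

        def insert(root, req, more_important):
            if root is None:
                return Node(req)
            if more_important(req, root.req):  # one pairwise comparison
                root.left = insert(root.left, req, more_important)
            else:
                root.right = insert(root.right, req, more_important)
            return root

        def in_order(root, out):
            if root:
                in_order(root.left, out)
                out.append(root.req)
                in_order(root.right, out)
            return out

        weights = {"login": 5, "export": 2, "search": 4, "themes": 1, "undo": 3}
        root = None
        for r in weights:
            root = insert(root, r, lambda a, b: weights[a] > weights[b])
        print(in_order(root, []))  # highest to lowest priority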

    Fulltekst (pdf)
    FULLTEXT01
  • 36. Ahlgren, Filip
    Comparing state-of-the-art machine learning malware detection methods on Windows2021Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Background. Malware has been a major issue for years and old signature scanning methods for detecting malware are outdated and can be bypassed by most advanced malware. With the help of machine learning, patterns of malware behavior and structure can be learned to detect the more advanced threats that are active today.

    Objectives. In this thesis, research is conducted to find state-of-the-art machine learning methods for detecting malware. A dataset collection method is identified from the literature and used in an experiment. Three selected methods are re-implemented for an experiment to compare which has the best performance. All three algorithms are trained and tested on the same dataset.

    Methods. A literature review with the snowballing technique was proposed to find the state-of-the-art detection methods. The malware was collected through the malware database VirusShare and the total number of samples was 14924. The algorithms were re-implemented, trained, tested, and compared by accuracy, true positive, true negative, false positive, and false negative.

    Results. The results showed that the best performing approaches in the literature are image-based detection, N-Grams combined with metadata, and Function Call Graphs. However, a new method called Running Window Entropy was also identified, which has received little research attention yet can still achieve decent accuracy. The selected methods for comparison were image detection, N-Gram, and Running Window Entropy, where the results show they had an accuracy of 94.64%, 96.45%, and 93.71% respectively.

    Conclusions. On this dataset, it showed that the N-Gram had the best performance of all three methods. The other two methods showed that, depending on the use case, either can be applicable. 
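
    A hedged sketch of the Running Window Entropy idea mentioned in the results: Shannon entropy of byte values over a sliding window across a binary yields an entropy profile that can feed a classifier. The window and step sizes below are assumptions for illustration.

        # Entropy profile of a byte string: 0 bits for constant data, up to
        # 8 bits for uniformly random bytes (typical of packed/encrypted code).
        import math
        from collections import Counter

        def window_entropy(data: bytes, size: int = 256, step: int = 64):
            profile = []
            for start in range(0, max(1, len(data) - size + 1), step):
                window = data[start:start + size]
                n = len(window)
                profile.append(-sum((c / n) * math.log2(c / n)
                                    for c in Counter(window).values()))
            return profile

        sample = bytes(range(256)) * 4 + b"\x00" * 512  # toy "binary"
        print([round(h, 2) for h in window_entropy(sample)])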

    Fulltekst (pdf)
    Comparing state-of-the-art machine learning malware detection methods on Windows
  • 37.
    Ahlgren, Filip
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Local And Network Ransomware Detection Comparison2019Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Background. Ransomware is a malicious application encrypting important files on a victim's computer. The ransomware will ask the victim for a ransom to be paid through cryptocurrency. After the system is encrypted there is virtually no way to decrypt the files other than using the encryption key that is bought from the attacker.

    Objectives. In this practical experiment, we will examine how machine learning can be used to detect ransomware on a local and network level. The results will be compared to see which one has a better performance.

    Methods. Data is collected through malware and goodware databases and then analyzed in a virtual environment to extract system information and network logs. Different machine learning classifiers will be built from the extracted features in order to detect the ransomware. The classifiers will go through a performance evaluation and be compared with each other to find which one has the best performance.

    Results. According to the tests, local detection was both more accurate and stable than network detection. The local classifiers had an average accuracy of 96% while the best network classifier had an average accuracy of 89.6%.

    Conclusions. In this case the results show that local detection has better performance than network detection. However, this can be because the network features were not specific enough for a network classifier. The network performance could have been better if the ransomware samples consisted of fewer families so better features could have been selected.

    Fulltekst (pdf)
    BTH2019Ahlgren
  • 38.
    Ahlgren, Johan
    et al.
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Karlsson, Robert
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    En studie av inbyggda brandväggar: Microsoft XP och Red Hat Linux2003Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [en]

    This bachelor thesis investigates how well the built-in firewalls of two operating systems work in symbiosis with a user's most common use of Internet services, and how similar they are in their protection against threats. The two operating systems we examined were Microsoft Windows XP and Red Hat Linux 8.0. The hypothesis we worked from reads as follows: the two built-in firewalls are largely similar in their protection against threats on the Internet and satisfy users' service needs. The methods we used to answer our research question were divided into a functionality test and a security test. In the functionality test, the most common Internet services were tried with the built-in firewall enabled, to see whether any complications arose. In the security test, the two built-in firewalls underwent scanning and vulnerability checks using several tools. From the results we can conclude that the built-in firewalls handle the most common Internet services, but that they differ in their exposure towards the Internet. Windows XP is completely invisible to the outside, while Red Hat's built-in firewall reveals a wealth of information about the host computer that could be used for malicious purposes. Our conclusion is that we ultimately falsified our hypothesis, since the two built-in firewalls were not equal in their protection against external threats on the Internet.

    Fulltekst (pdf)
    FULLTEXT01
  • 39.
    Ahlstrand, Jim
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap. Telenor Sverige AB, Sweden..
    Boldt, Martin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Borg, Anton
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Preliminary Results on the use of Artificial Intelligence for Managing Customer Life Cycles2023Inngår i: 35th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2023 / [ed] Håkan Grahn, Anton Borg and Martin Boldt, Linköping University Electronic Press, 2023, s. 68-76Konferansepaper (Fagfellevurdert)
    Abstract [en]

    During the last decade we have witnessed how artificial intelligence (AI) has changed businesses all over the world. The customer life cycle framework is widely used in businesses and AI plays a role in each stage. However, implementing and generating value from AI in the customer life cycle is not always simple. When evaluating AI against business impact and value, it is critical to consider both the model performance and the policy outcome. Proper analysis of AI-derived policies must not be overlooked in order to ensure ethical and trustworthy AI. This paper presents a comprehensive analysis of the literature on AI in customer life cycles (CLV) from an industry perspective. The study included 31 of 224 analyzed peer-reviewed articles from a Scopus search result. The results show a significant research gap regarding outcome evaluations of AI implementations in practice. This paper proposes that policy evaluation is an important tool in the AI pipeline and emphasizes the significance of validating both policy outputs and outcomes to ensure reliable and trustworthy AI.

    Fulltekst (pdf)
    fulltext
  • 40.
    Ahlström, Catharina
    et al.
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Fridensköld, Kristina
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    How to support and enhance communication: in a student software development project2002Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [en]

    This report, in which we have put an emphasis on the word communication, is based on a student software development project conducted during spring 2002. We describe how the use of design tools plays a key role in supporting communication in group activities and to what extent communication can be supported and enhanced by tools such as mock-ups and metaphors in a group project. We also describe a design progress from initial sketches to a final mock-up of a GUI for a postcard demo application.

    Fulltekst (pdf)
    FULLTEXT01
    Fulltekst (pdf)
    FULLTEXT02
    Fulltekst (pdf)
    FULLTEXT03
    Fulltekst (pdf)
    FULLTEXT04
  • 41.
    Ahlström, Eric
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Holmqvist, Lucas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Goswami, Prashant
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Comparing Traditional Key Frame and Hybrid Animation2017Inngår i: SCA '17 Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, ACM Digital Library, 2017, artikkel-id nr. a20Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this research the authors explore a hybrid approach which uses the basic concept of key frame animation together with procedural animation to reduce the number of key frames needed for an animation clip. The two approaches are compared by conducting an experiment where the participating subjects were asked to rate them based on their visual appeal.
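
    A toy sketch of the hybrid concept (all values invented): sparse key frames are linearly interpolated and a procedural term adds the secondary motion in between, so fewer key frames are needed.

        # Hybrid animation: key frame interpolation plus a procedural term.
        import math

        keys = {0.0: 0.0, 1.0: 2.0, 2.0: 1.0}  # time -> joint angle key frames

        def lerp_key(t):
            times = sorted(keys)
            for t0, t1 in zip(times, times[1:]):
                if t0 <= t <= t1:
                    u = (t - t0) / (t1 - t0)
                    return (1 - u) * keys[t0] + u * keys[t1]
            return keys[times[-1]]

        def hybrid(t, amp=0.1, freq=6.0):
            # Procedural sine adds secondary motion between sparse key frames.
            return lerp_key(t) + amp * math.sin(freq * t)

        for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
            print(t, round(hybrid(t), 3))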

    Fulltekst (pdf)
    fulltext
  • 42.
    Ahmad, Al Ghaith
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Abd ULRAHMAN, Ibrahim
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model2023Independent thesis Basic level (degree of Bachelor), 12 poäng / 18 hpOppgave
    Abstract [en]

    Background: As the demand for cybersecurity professionals continues to rise, it is crucial to identify the key skills necessary to thrive in this field. This research project sheds light on the cybersecurity skills landscape by analyzing the recommendations provided by the European Cybersecurity Skills Framework (ECSF), examining the most required skills in the Swedish job market, and investigating the common skills identified through the findings. The project utilizes the large language model, ChatGPT, to classify common cybersecurity skills and evaluate its accuracy compared to human classification.

    Objective: The primary objective of this research is to examine the alignment between the European Cybersecurity Skills Framework (ECSF) and the specific skill demands of the Swedish cybersecurity job market. This study aims to identify common skills and evaluate the effectiveness of a Language Model (ChatGPT) in categorizing jobs based on ECSF profiles. Additionally, it seeks to provide valuable insights for educational institutions and policymakers aiming to enhance workforce development in the cybersecurity sector.

    Methods: The research begins with a review of the European Cybersecurity Skills Framework (ECSF) to understand its recommendations and methodology for defining cybersecurity skills, as well as delineating the cybersecurity profiles along with their corresponding key cybersecurity skills as outlined by the ECSF. Subsequently, a Python-based web crawler was implemented to gather data on cybersecurity job announcements from the Swedish Employment Agency's website. This data is analyzed to identify the most frequently required cybersecurity skills sought by employers in Sweden. The Language Model (ChatGPT) is utilized to classify these positions according to ECSF profiles. Concurrently, two human agents manually categorize jobs to serve as a benchmark for evaluating the accuracy of the Language Model. This allows for a comprehensive assessment of its performance.

    Results: The study thoroughly reviews and cites the recommended skills outlined by the ECSF, offering a comprehensive European perspective on key cybersecurity skills (Tables 4 and 5). Additionally, it identifies the most in-demand skills in the Swedish job market, as illustrated in Figure 6. The research reveals the matching between ECSF-prescribed skills in different profiles and those sought after in the Swedish cybersecurity market. The skills of the profiles 'Cybersecurity Implementer' and 'Cybersecurity Architect' emerge as particularly critical, representing over 58% of the market demand. This research further highlights shared skills across various profiles (Table 7).

    Conclusion: This study highlights the matching between the European Cybersecurity Skills Framework (ECSF) recommendations and the evolving demands of the Swedish cybersecurity job market. Through a review of ECSF-prescribed skills and a thorough examination of the Swedish job landscape, this research identifies crucial areas of alignment. Significantly, the skills associated with 'Cybersecurity Implementer' and 'Cybersecurity Architect' profiles emerge as central, collectively constituting over 58% of market demand. This emphasizes the urgent need for educational programs to adapt and harmonize with industry requisites. Moreover, the study advances our understanding of the Language Model's effectiveness in job categorization. The findings hold significant implications for workforce development strategies and educational policies within the cybersecurity domain, underscoring the pivotal role of informed skills development in meeting the evolving needs of the cybersecurity workforce.
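
    One way to picture the benchmarking step (labels below are hypothetical): score the language model's profile assignments against the human classification with plain accuracy and with Cohen's kappa, which corrects for chance agreement.

        # Agreement between human and model job-profile classifications.
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        human = ["Implementer", "Architect", "Implementer",
                 "Auditor", "Architect", "Implementer"]
        model = ["Implementer", "Architect", "Auditor",
                 "Auditor", "Architect", "Architect"]

        print("accuracy:", round(accuracy_score(human, model), 2))
        print("kappa:   ", round(cohen_kappa_score(human, model), 2))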

    Fulltekst (pdf)
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model
  • 43.
    Ahmad, Azeem
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Kolla, Sushma Joseph
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Effective Distribution of Roles and Responsibilities in Global Software Development Teams2012Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Context. Industry is moving from a co-located form of development to distributed development in order to achieve different benefits such as cost reduction, access to skillful labor and around-the-clock working. This transfer requires industry to face different challenges such as communication, coordination and monitoring problems. The risk of project failure can increase if industry does not address these problems. This thesis is about providing solutions to these problems in terms of effective roles and responsibilities that may have a positive impact on GSD teams. Objectives. In this study we have developed a framework for suggesting roles and responsibilities for GSD teams. This framework consists of problems and causal dependencies between them which are related to a team's ineffectiveness; suggestions in terms of roles and responsibilities are then presented in order to have an effective team in GSD. This framework has further been validated in industry through a survey that determines which are the effective roles and responsibilities in GSD. Methods. We have two research methods in this study: 1) systematic literature review and 2) survey. The complete protocol for planning, conducting and reporting the review as well as the survey is described in their respective sections in this thesis. The systematic review is used to develop the framework whereas the survey is used for framework validation. We have done static validation of the framework. Results. Through the SLR, we have identified 30 problems and 33 chains of problems. We have identified 4 different roles and 40 different responsibilities to address these chains of problems. During the validation of the framework, we have validated the links between suggested roles and responsibilities and chains of problems. In addition to this, through the survey, we have identified 20 suggestions that represent a strong positive impact on chains of problems in GSD in relation to a team's effectiveness. Conclusions. We conclude that the implementation of effective roles and responsibilities in GSD teams to avoid different problems requires considerable attention from researchers and practitioners, which can guarantee a team's effectiveness. Implementation of proper roles and responsibilities has been mentioned as one of the successful strategies for increasing a team's effectiveness in the literature, but which particular roles and responsibilities should be implemented still needs to be addressed. We also conclude that there must be basic responsibilities associated with any particular role. Moreover, we conclude that there is a need for further development and empirical validation of different frameworks for suggesting roles and responsibilities in full-scale industry trials.

    Fulltekst (pdf)
    FULLTEXT01
  • 44.
    AHMAD, MUHAMMAD ZEESHAN
    Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap.
    Comparative Analysis of Iptables and Shorewall2012Oppgave
    Abstract [en]

    The use of the Internet has increased over the past years. Many users may not have good intentions; some people use the Internet to gain access to unauthorized information. Although absolute security of information is not possible for any network connected to the Internet, firewalls make an important contribution to network security. A firewall is a barrier placed between the network and the outside world to prevent unwanted and potentially damaging intrusions into the network. This thesis compares the performance of Linux packet filtering firewalls, i.e., Iptables and Shorewall. Firewall performance testing helps in selecting the right firewall as needed. In addition, it highlights the strengths and weaknesses of each firewall. Both firewalls were tested using identical parameters. During the experiments, the recommended benchmarking methodology for firewall performance testing was taken into account, as described in RFC 3511. The comparison process includes experiments performed using different tools. To validate the effectiveness of the firewalls, several performance metrics such as throughput, latency, connection establishment and teardown rate, HTTP transfer rate and system resource consumption are used. The experimental results indicate that the performance of the Iptables firewall is lower than that of Shorewall in all the aspects taken into account. All the selected metrics show that large numbers of filtering rules have a negative impact on the performance of both firewalls. However, UDP throughput is not affected by the number of filtering rules. The experimental results also indicate that traffic sent with different packet sizes does not affect the performance of the firewalls.
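
    As a rough illustration of one of the metrics (not the thesis harness, which follows the RFC 3511 benchmarking methodology): the TCP connection establishment rate through a firewall can be estimated by opening sockets against a test server and timing them. The server address below is a placeholder.

        # Estimate TCP connection establishment rate through a firewall.
        import socket
        import time

        HOST, PORT, N = "192.0.2.10", 8080, 100  # placeholder test server

        ok = 0
        start = time.perf_counter()
        for _ in range(N):
            try:
                with socket.create_connection((HOST, PORT), timeout=1):
                    ok += 1
            except OSError:
                pass
        elapsed = time.perf_counter() - start
        print(f"{ok}/{N} connections in {elapsed:.2f}s -> {ok / elapsed:.1f} conn/s")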

    Fulltekst (pdf)
    FULLTEXT01
  • 45.
    Ahmad, Nadeem
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Habib, M. Kashif
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Analysis of Network Security Threats and Vulnerabilities by Development & Implementation of a Security Network Monitoring Solution2010Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Communication of confidential data over the Internet is becoming more frequent every day. Individuals and organizations are sending their confidential data electronically. It is also common that hackers target these networks. Protecting data, software and hardware from viruses is, now more than ever, a need and not just a concern. What do you need to know about networks these days? How is security implemented to secure a network? How is security managed? In this paper we try to address these questions and give an idea of where we now stand with network security.

    Fulltekst (pdf)
    FULLTEXT01
  • 46.
    Ahmad, Raheel
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    On the Scalability of Four Multi-Agent Architectures for Load Control Management in Intelligent Networks2003Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    Paralleling the rapid advancement of network evolution is the need for advanced network traffic management and surveillance. The increasing number and variety of services being offered by communication networks has fuelled the demand for optimized load management strategies. The problem of Load Control Management in Intelligent Networks has been studied previously and four Multi-Agent architectures have been proposed. The objective of this thesis is to investigate one quality attribute, namely the scalability, of the four Multi-Agent architectures. The focus of this research is to resize the network and study the performance of the different architectures in terms of Load Control Management through different scalability attributes. The analysis is based on experimentation through simulations. The results reveal that different architectures exhibit different performance behaviors for various scalability attributes at different network sizes. It has been observed that there exists a trade-off between different scalability attributes as the network grows. The factors affecting network performance at different network settings have been observed. Based on the results from this study it would be easier to design similar networks for optimal performance by controlling the influencing factors and considering the trade-offs involved.

    Fulltekst (pdf)
    FULLTEXT01
  • 47.
    Ahmad, Waqar
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Riaz, Asim
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Predicting Friendship Levels in Online Social Networks2010Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Context: Online social networks such as Facebook, Twitter, and MySpace have become the preferred interaction, entertainment and socializing facility on the Internet. However, these social network services also bring privacy issues into more limelight than ever. Several privacy leakage problems are highlighted in the literature with a variety of suggested countermeasures. Most of these measures further add complexity and management overhead for the user. One ignored aspect of the architecture of online social networks is that they do not offer any mechanism to calculate the strength of the relationship between individuals. This information is quite useful for identifying possible privacy threats. Objectives: In this study, we identify users' privacy concerns and their satisfaction regarding privacy control measures provided by online social networks. Furthermore, this study explores data mining techniques to predict the levels/intensity of friendship in online social networks. This study also proposes a technique to utilize predicted friendship levels for privacy preservation in a semi-automatic privacy framework. Methods: An online survey is conducted to analyze Facebook users' concerns as well as their interaction behavior with their good friends. On the basis of the survey results, an experiment is performed to give a practical demonstration of the data mining phases. Results: We found that users are concerned about saving their private data. As a precautionary measure, they refrain from showing their private information on Facebook due to privacy leakage fears. Additionally, individuals also perform some actions which they themselves perceive as a privacy vulnerability. This study further identifies that the importance of interaction type varies during communication. This research also identified two non-interaction-based estimation metrics: "mutual friends" and "profile visits". Finally, this study also found excellent performance of the J48 and Naïve Bayes algorithms in classifying friendship levels. Conclusions: The users are not satisfied with the privacy measures provided by the online social networks. We establish that online social networks should offer a privacy mechanism which does not require a lot of privacy control effort from the users. This study also concludes that factors such as current status and interaction type need to be considered together with the interaction count method in order to improve its performance. Furthermore, data mining classification algorithms are tailor-made for the prediction of friendship levels.
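
    A tiny sketch (interaction counts fabricated) of the classification step: predicting a friendship level from metrics such as wall posts, mutual friends and profile visits, with Naïve Bayes as one of the study's well-performing algorithms.

        # Friendship-level classification from interaction-based features.
        from sklearn.naive_bayes import GaussianNB

        # columns: wall posts, mutual friends, profile visits
        X = [[25, 40, 30], [2, 5, 1], [10, 22, 8],
             [0, 3, 0], [18, 35, 20], [1, 8, 2]]
        y = ["close", "acquaintance", "friend",
             "acquaintance", "close", "acquaintance"]

        clf = GaussianNB().fit(X, y)
        print(clf.predict([[12, 30, 15]]))  # e.g. -> ['friend'] or ['close']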

    Fulltekst (pdf)
    FULLTEXT01
  • 48.
    Ahmadi Mehri, Vida
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Towards Automated Context-aware Vulnerability Risk Management2023Doktoravhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    The information security landscape continually evolves with an increasing number of publicly known vulnerabilities (e.g., 25064 new vulnerabilities in 2022). Vulnerabilities play a prominent role in all types of security related attacks, including ransomware and data breaches. Vulnerability Risk Management (VRM) is an essential cyber defense mechanism to eliminate or reduce attack surfaces in information technology. VRM is a continuous procedure of identification, classification, evaluation, and remediation of vulnerabilities. The traditional VRM procedure is time-consuming as classification, evaluation, and remediation require skills and knowledge of specific computer systems, software, network, and security policies. Activities requiring human input slow down the VRM process, increasing the risk of a vulnerability being exploited.

    The thesis introduces the Automated Context-aware Vulnerability Risk Management (ACVRM) methodology to improve VRM procedures by automating the entire VRM cycle and reducing the procedure time and experts' intervention. ACVRM focuses on the challenging stages (i.e., classification, evaluation, and remediation) of VRM to support security experts in promptly prioritizing and patching the vulnerabilities. 

    The ACVRM concept was designed and implemented in a test environment as a proof of concept. The efficiency of patch prioritization by ACVRM was compared against a commercial vulnerability management tool (i.e., Rudder). ACVRM prioritized the vulnerabilities based on the patch score (i.e., the numeric representation of the vulnerability characteristics and the risk), the historical data, and dependencies. The experiments indicate that ACVRM can rank the vulnerabilities in the organization's context by weighting the criteria used in the patch score calculation. Automated patch deployment was implemented with three use cases to investigate the impact of learning from historical events and dependencies on the success rate of the patch and human intervention. Our findings show that ACVRM reduced the need for human actions, increased the ratio of successfully patched vulnerabilities, and decreased the cycle time of the VRM process.
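
    An illustrative sketch, with invented criteria and weights, of the kind of patch score used to rank vulnerabilities in an organization's context: severity is combined with context-specific signals, and the weighted sum orders the patch queue.

        # Rank vulnerabilities by a weighted, context-aware patch score.
        vulns = [
            {"id": "CVE-A", "cvss": 9.8, "exploited": 1, "asset_value": 0.9},
            {"id": "CVE-B", "cvss": 7.5, "exploited": 0, "asset_value": 1.0},
            {"id": "CVE-C", "cvss": 5.3, "exploited": 1, "asset_value": 0.4},
        ]
        weights = {"cvss": 0.5, "exploited": 0.3, "asset_value": 0.2}

        def patch_score(v):
            return (weights["cvss"] * v["cvss"] / 10         # normalized severity
                    + weights["exploited"] * v["exploited"]  # known exploitation
                    + weights["asset_value"] * v["asset_value"])

        for v in sorted(vulns, key=patch_score, reverse=True):
            print(v["id"], round(patch_score(v), 3))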

    Fulltekst (pdf)
    fulltext
  • 49.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Arlos, Patrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
    Casalicchio, Emiliano
    Sapienza University of Rome, Italy.
    Automated Patch Management: An Empirical Evaluation Study2023Inngår i: Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience, CSR 2023, IEEE, 2023, s. 321-328Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Vulnerability patch management is one of IT organizations' most complex issues due to the increasing number of publicly known vulnerabilities and explicit patch deadlines for compliance. Patch management requires human involvement in testing, deploying, and verifying the patch and its potential side effects. Hence, there is a need to automate the patch management procedure to meet patch deadlines with a limited number of available experts. This study proposed and implemented an automated patch management procedure to address the mentioned challenges. The method also includes logic to automatically handle errors that might occur in patch deployment and verification. Moreover, the authors added an automated review step before patch management to adjust the patch prioritization list if multiple cumulative patches or dependencies are detected. The results indicated that our method reduced the need for human intervention, increased the ratio of successfully patched vulnerabilities, and decreased the execution time of vulnerability risk management.

    Fulltekst (pdf)
    fulltext
  • 50.
    Ahmed, Adnan
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för för interaktion och systemdesign.
    Hussain, Syed Shahram
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för för interaktion och systemdesign.
    Meta-Model of Resilient information System2007Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    The role of information systems has become very important in today's world. It is not only business organizations who use information systems; governments also possess very critical information systems. The need is to make information systems available at all times under any situation. Information systems must have the capability to resist dangers to their services, performance and existence, and to recover to their normal working state with the available resources in catastrophic situations. Information systems with such a capability can be called resilient information systems. This thesis is written to define resilient information systems, suggest a meta-model for them, and explain how existing technologies can be utilized for the development of a resilient information system.

    Fulltekst (pdf)
    FULLTEXT01