  • 1.
    Abdsharifi, Mohammad Hossein
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Dhar, Ripan Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Service Management for P2P Energy Sharing Using Blockchain – Functional Architecture (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Blockchain has become one of the most revolutionary technologies of the 21st century. In recent years, the concerns of world energy have extended beyond sustainability to include security and reliability. Since information and energy security are central concerns for present and future services, this thesis focuses on the challenge of trading energy securely through distributed marketplaces. The core technology used in this thesis is the distributed ledger, specifically blockchain. Because this technology has recently gained much attention for functionalities such as transparency, immutability, irreversibility, and security, we propose a solution for implementing a secure peer-to-peer (P2P) energy trading network on a suitable blockchain platform. Furthermore, blockchain enables traceability of the origin of data, which is called data provenance.

    In this work, we applied secure blockchain technology to a peer-to-peer energy sharing and trading system in which prosumers and consumers can trade energy through a secure channel or network. Furthermore, service management functionalities such as security, reliability, flexibility, and scalability are achieved through the implementation.

    This thesis surveys current proposals for P2P energy trading using blockchain and how to select a suitable blockchain technique for implementing such a P2P energy trading network. In addition, we provide an implementation of such a secure network with blockchain and appropriate management functions. The choices of system model, blockchain technology, and consensus algorithm are based on a literature review and were carried through to an experimental implementation, in which the feasibility of the system model was validated by the output results.
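
    As a rough illustration of the ledger idea the thesis builds on, the following minimal Python sketch hash-chains energy-trade records so that any tampering is detectable. It is a toy stand-in, not the thesis implementation, and all field names are invented:

        import hashlib, json, time

        def block_hash(block):
            # Hash the block's canonical JSON form (sorted keys for determinism).
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def append_trade(chain, seller, buyer, kwh, price):
            # Each trade block commits to the previous block's hash, which is
            # what makes the ledger tamper-evident and traceable (provenance).
            block = {
                "prev": block_hash(chain[-1]) if chain else "0" * 64,
                "ts": time.time(),
                "trade": {"seller": seller, "buyer": buyer, "kwh": kwh, "price": price},
            }
            chain.append(block)
            return block

        def verify(chain):
            # Recompute the hash links; any modified block breaks the chain.
            return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

        chain = []
        append_trade(chain, "prosumer_A", "consumer_B", kwh=3.2, price=0.9)
        append_trade(chain, "prosumer_A", "consumer_C", kwh=1.5, price=0.4)
        print(verify(chain))  # True until any block is altered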

    Download full text (pdf)
    Service Management for P2P Energy Sharing Using Blockchain – Functional Architecture
  • 2.
    Abghari, Shahrooz
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Data Mining Approaches for Outlier Detection Analysis (2020). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Outlier detection is studied and applied in many domains. Outliers arise for different reasons, such as fraudulent activities, structural defects, health problems, and mechanical issues. The detection of outliers is a challenging task that can reveal system faults and fraud, and save people's lives. Outlier detection techniques are often domain-specific. The main challenge in outlier detection is modelling the normal behaviour in order to identify abnormalities. The choice of model is important; an unsuitable data model can lead to poor results. This requires a good understanding and interpretation of the data, the constraints, and the requirements of the domain problem. Outlier detection is largely an unsupervised problem because labeled data is often unavailable or expensive to obtain.

    In this thesis, we study and apply a combination of both machine learning and data mining techniques to build data-driven and domain-oriented outlier detection models. We focus on three real-world application domains: maritime surveillance, district heating, and online media and sequence datasets. We show the importance of data preprocessing as well as feature selection in building suitable methods for data modelling. We take advantage of both supervised and unsupervised techniques to create hybrid methods. 

    More specifically, we propose a rule-based anomaly detection system using open data for the maritime surveillance domain. We exploit sequential pattern mining for identifying contextual and collective outliers in online media data. We propose a minimum spanning tree clustering technique for detection of groups of outliers in online media and sequence data. We develop a few higher order mining approaches for identifying manual changes and deviating behaviours in the heating systems at the building level. The proposed approaches are shown to be capable of explaining the underlying properties of the detected outliers. This can help domain experts narrow down the scope of analysis and understand the reasons for such anomalous behaviours. We also investigate the reproducibility of the proposed models in similar application domains.

    Download full text (pdf)
    fulltext
  • 3.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Higher Order Mining Approach for the Analysis of Real-World Datasets (2020). In: Energies, E-ISSN 1996-1073, Vol. 13, no 21, article id 5781. Article in journal (Refereed).
    Abstract [en]

    In this study, we propose a higher order mining approach that can be used for the analysis of real-world datasets. The approach can be used to monitor and identify the deviating operational behaviour of the studied phenomenon in the absence of prior knowledge about the data. The proposed approach consists of several different data analysis techniques, such as sequential pattern mining, clustering analysis, consensus clustering and the minimum spanning tree (MST). Initially, a clustering analysis is performed on the extracted patterns to model the behavioural modes of the studied phenomenon for a given time interval. The generated clustering models, which correspond to every two consecutive time intervals, can further be assessed to determine changes in the monitored behaviour. In cases in which significant differences are observed, further analysis is performed by integrating the generated models into a consensus clustering and applying an MST to identify deviating behaviours. The validity and potential of the proposed approach are demonstrated on a real-world dataset originating from a network of district heating (DH) substations. The obtained results show that our approach is capable of detecting deviating and sub-optimal behaviours of DH substations.
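
    The deviation-detection step can be pictured with a minimal sketch: cluster the extracted patterns, build an MST over the cluster centres, and flag unusually long edges. This is a simplified stand-in for the paper's pipeline, with synthetic data and an illustrative threshold:

        import numpy as np
        from sklearn.cluster import KMeans
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree

        rng = np.random.default_rng(0)
        # Stand-in for weekly behaviour patterns of substations (rows = patterns),
        # with a small group of deviating patterns far from the rest.
        patterns = np.vstack([rng.normal(0, 1, (50, 24)), rng.normal(6, 1, (3, 24))])

        # Model the behavioural modes for one time interval.
        centres = KMeans(n_clusters=5, n_init=10, random_state=0).fit(patterns).cluster_centers_

        # Build an MST over the cluster centres; unusually long edges point to
        # modes that lie far from the rest, i.e. candidate deviating behaviour.
        mst = minimum_spanning_tree(squareform(pdist(centres))).toarray()
        weights = mst[mst > 0]
        cut = weights.mean() + weights.std()  # illustrative threshold only
        rows, cols = np.nonzero(mst > cut)
        print("deviating links between modes:", list(zip(rows, cols)))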

    Download full text (pdf)
    fulltext
  • 4.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Multi-view Clustering Analyses for District Heating Substations (2020). In: DATA 2020 - Proceedings of the 9th International Conference on Data Science, Technology and Applications / [ed] Hammoudi S., Quix C., Bernardino J., SciTePress, 2020, p. 158-168. Conference paper (Refereed).
    Abstract [en]

    In this study, we propose a multi-view clustering approach for mining and analysing multi-view network datasets. The proposed approach is applied and evaluated on a real-world scenario for monitoring and analysing district heating (DH) network conditions and identifying substations with sub-optimal behaviour. Initially, geographical locations of the substations are used to build an approximate graph representation of the DH network. Two different analyses can further be applied in this context: step-wise and parallel-wise multi-view clustering. The step-wise analysis sequentially considers and analyses substations with respect to a few different views. At each step, a new clustering solution is built on top of the one generated by the previously considered view, which organizes the substations in a hierarchical structure that can be used for multi-view comparisons. The parallel-wise analysis, on the other hand, provides the opportunity to analyse substations with regard to two different views in parallel. Such an analysis aims to represent and identify the relationships between substations by organizing them in a bipartite graph and analysing the substations’ distribution with respect to each view. The proposed data analysis and visualization approach provides domain experts with means for analysing DH network performance. In addition, it will facilitate the identification of substations with deviating operational behaviour based on comparative analysis with their closely located neighbours.

    Download full text (pdf)
    Multi-view Clustering Analyses for District Heating Substations
  • 5.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    District Heating Substation Behaviour Modelling for Annotating the Performance (2020). In: Communications in Computer and Information Science / [ed] Cellier, P., Driessens, K., Springer, 2020, Vol. 1168, p. 3-11. Conference paper (Refereed).
    Abstract [en]

    In this ongoing study, we propose a higher order data mining approach for modelling district heating (DH) substations’ behaviour and linking operational behaviour representative profiles with different performance indicators. We initially create a substation’s operational behaviour model by extracting weekly patterns and clustering them into groups of similar patterns. The built models are further analyzed and integrated into an overall substation model by applying consensus clustering. The different operational behaviour profiles represented by the exemplars of the consensus clustering model are then linked to performance indicators. The labelled behaviour profiles are deployed over the whole heating season to derive diverse insights about the substation’s performance. The results show that the proposed method can be used for modelling, analyzing and understanding deviating and sub-optimal DH substation behaviours. © 2020, Springer Nature Switzerland AG.

  • 6.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lavesson, Niklas
    Jönköping University, SWE.
    Higher order mining for monitoring district heating substations (2019). In: Proceedings - 2019 IEEE International Conference on Data Science and Advanced Analytics, DSAA 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 382-391. Conference paper (Refereed).
    Abstract [en]

    We propose a higher order mining (HOM) approach for modelling, monitoring and analyzing district heating (DH) substations' operational behaviour and performance. HOM is concerned with mining over patterns rather than primary or raw data. The proposed approach uses a combination of different data analysis techniques such as sequential pattern mining, clustering analysis, consensus clustering and minimum spanning tree (MST). Initially, a substation's operational behaviour is modeled by extracting weekly patterns and performing clustering analysis. The substation's performance is monitored by assessing its modeled behaviour for every two consecutive weeks. In case some significant difference is observed, further analysis is performed by integrating the built models into a consensus clustering and applying an MST for identifying deviating behaviours. The results of the study show that our method is robust for detecting deviating and sub-optimal behaviours of DH substations. In addition, the proposed method can facilitate domain experts in the interpretation and understanding of the substations' behaviour and performance by providing different data analysis and visualization techniques. © 2019 IEEE.

    Download full text (pdf)
    Higher Order Mining for Monitoring District Heating Substations
  • 7.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exner, Peter
    Sony R&D Center Lund Laboratory, SWE.
    An Inductive System Monitoring Approach for GNSS Activation (2022). In: IFIP Advances in Information and Communication Technology / [ed] Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P., Springer Science+Business Media B.V., 2022, Vol. 647, p. 437-449. Conference paper (Refereed).
    Abstract [en]

    In this paper, we propose a Global Navigation Satellite System (GNSS) component activation model for mobile tracking devices that automatically detects indoor/outdoor environments using the radio signals received from Long-Term Evolution (LTE) base stations. We use an Inductive System Monitoring (ISM) technique to model environmental scenarios captured by a smart tracker via extracting clusters of corresponding value ranges from LTE base stations’ signal strength. The ISM-based model is built by using the tracker’s historical data labeled with GPS coordinates. The built model is further refined by applying it to additional data without GPS location collected by the same device. This procedure allows us to identify the clusters that describe semi-outdoor scenarios. In that way, the model discriminates between two outdoor environmental categories: open outdoor and semi-outdoor. The proposed ISM-based GNSS activation approach is studied and evaluated on a real-world dataset containing radio signal measurements collected by five smart trackers and their geographical location in various environmental scenarios.
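
    A rough stand-in for the cluster-then-label step, not the actual ISM algorithm and with synthetic signal data: cluster the signal-strength vectors, then label each cluster by how often its samples carried a GPS fix:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        # Stand-ins: RSSI vectors from five LTE cells, plus whether a GPS fix existed.
        outdoor = rng.normal(-70, 4, (200, 5))
        indoor = rng.normal(-100, 4, (200, 5))
        rssi = np.vstack([outdoor, indoor])
        has_gps = np.array([True] * 200 + [False] * 200)

        # Extract clusters of signal-strength value ranges (ISM-style model).
        km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(rssi)

        # Label each cluster by the share of its samples with a GPS fix;
        # clusters with an intermediate share would correspond to "semi-outdoor".
        for c in range(4):
            share = has_gps[km.labels_ == c].mean()
            kind = "outdoor" if share > 0.8 else "indoor" if share < 0.2 else "semi-outdoor"
            print(f"cluster {c}: gps share {share:.2f} -> {kind}")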

  • 8.
    Adabala, Yashwanth Venkata Sai Kumar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Devanaboina, Lakshmi Venkata Raghava Sudheer
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Prevention Technique for DDoS Attacks in SDN using Ryu Controller Application (2024). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Software Defined Networking (SDN) modernizes network control, offering streamlined management. However, its centralized structure makes it more vulnerable to Distributed Denial of Service (DDoS) attacks, posing serious threats to network stability. This thesis explores the development of a DDoS attack prevention technique in SDN environments using the Ryu controller application. The research aims to address the vulnerabilities in SDN, particularly focusing on flooding and Internet Protocol (IP) spoofing attacks, which are a significant threat to network security. The study employs an experimental approach, utilizing tools like Mininet-VM (Virtual Machine), Oracle VM VirtualBox, and hping3 to simulate a virtual SDN environment and conduct DDoS attack scenarios. Key methodologies include packet sniffing and rule-based detection by integrating Snort IDS (Intrusion Detection System), which is critical for identifying and mitigating such attacks. The experiments demonstrate the effectiveness of the proposed prevention technique, highlighting the importance of proper configuration and integration of network security tools in SDN. This work contributes to enhancing the resilience of SDN architectures against DDoS attacks, offering insights into future developments in network security.
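
    For flavour, a minimal Ryu app that counts packet-in events per source IP and flags a likely flood is sketched below; the threshold is illustrative only, and the thesis's actual setup (Snort rules, Mininet topology) is not reproduced here:

        from collections import defaultdict

        from ryu.base import app_manager
        from ryu.controller import ofp_event
        from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
        from ryu.lib.packet import ipv4, packet

        class FloodGuard(app_manager.RyuApp):
            """Counts packet-in events per source IP and flags likely floods."""

            THRESHOLD = 100  # packets per source; illustrative value only

            def __init__(self, *args, **kwargs):
                super().__init__(*args, **kwargs)
                self.counts = defaultdict(int)

            @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
            def packet_in_handler(self, ev):
                ip = packet.Packet(ev.msg.data).get_protocol(ipv4.ipv4)
                if ip is None:
                    return
                self.counts[ip.src] += 1
                if self.counts[ip.src] > self.THRESHOLD:
                    # A real mitigation would install a drop flow entry here,
                    # or hand the decision to an IDS such as Snort.
                    self.logger.warning("possible flood from %s", ip.src)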

    Download full text (pdf)
    A_Prevention_Technique_for_DDoS_Attacks_in_SDN_using_Ryu_Controller_Application
  • 9.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radio Electronics, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Reinforcement Learning for Anti-Ransomware Testing (2020). In: 2020 IEEE East-West Design and Test Symposium, EWDTS 2020 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2020, article id 9225141. Conference paper (Refereed).
    Abstract [en]

    In this paper, we verify the possibility of creating a ransomware simulation that uses an arbitrary combination of known tactics and techniques to bypass an anti-malware defense. To verify this hypothesis, we conducted an experiment in which an agent was trained with the help of reinforcement learning to run the ransomware simulator in a way that can bypass an anti-ransomware solution and encrypt the target files. The novelty of the proposed method lies in applying reinforcement learning to anti-ransomware testing, which may help to identify weaknesses in the anti-ransomware defense and fix them before a real attack happens. © 2020 IEEE.
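
    The training loop can be sketched as plain tabular Q-learning; the simulator interface (env.reset/env.step) and the action names below are hypothetical stand-ins for the paper's setup:

        import random
        from collections import defaultdict

        # Hypothetical simulator interface: env.reset() returns a state and
        # env.step(action) returns (next_state, reward, done); the reward is
        # positive when the simulated ransomware evades detection.
        ACTIONS = ["rename_ext", "partial_encrypt", "delay", "spawn_child"]

        def train(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
            """Tabular Q-learning over the simulator's action space."""
            Q = defaultdict(float)
            for _ in range(episodes):
                state, done = env.reset(), False
                while not done:
                    # Epsilon-greedy exploration over the known tactics.
                    if random.random() < eps:
                        action = random.choice(ACTIONS)
                    else:
                        action = max(ACTIONS, key=lambda a: Q[(state, a)])
                    nxt, reward, done = env.step(action)
                    best_next = max(Q[(nxt, a)] for a in ACTIONS)
                    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                    state = nxt
            return Q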

    Download full text (pdf)
    fulltext
  • 10.
    Adamov, Alexander
    et al.
    NioGuard Security Lab, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Surmacz, Tomasz
    Wrocław University of Science and Technology, POL.
    An Analysis of LockerGoga Ransomware (2019). In: 2019 IEEE East-West Design and Test Symposium, EWDTS 2019, Institute of Electrical and Electronics Engineers Inc., 2019. Conference paper (Refereed).
    Abstract [en]

    This paper contains an analysis of the LockerGoga ransomware that was used in a range of targeted cyberattacks in the first half of 2019 against Norsk Hydro, a world top-five aluminum manufacturer, as well as the US chemical enterprises Hexion and Momentive; those companies are only the tip of the iceberg among those that reported the attack to the public. The ransomware was executed by attackers from inside a corporate network to encrypt the data on enterprise servers and thus take down the information control systems. The intruders asked for a ransom to release a master key and a decryption tool that could be used to decrypt the affected files. The purpose of the analysis is to find out the tactics and techniques used by the LockerGoga ransomware during the cryptolocker attack, as well as its encryption model, to answer the question of whether the encrypted files can be decrypted with or without paying a ransom. The scientific novelty of the paper lies in an analysis methodology based on various reverse engineering techniques, such as multi-process debugging and using the open source code of a cryptographic library to find out a ransomware's encryption model. © 2019 IEEE.

  • 11.
    Adeopatoye, Remilekun
    et al.
    Federal University of Technology, Nigeria.
    Ikuesan, Richard Adeyemi
    Zayed University, United Arab Emirates.
    Sookhak, Mehdi
    Texas A&M University, United States.
    Hungwe, Taurai
    Sefako Makgatho University of Health Sciences, South Africa.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards an Open-Source Based E-Mail Forensic Tool that uses Headers in Digital Investigation (2023). In: ACM International Conference Proceeding Series, ACM Digital Library, 2023. Conference paper (Refereed).
    Abstract [en]

    Email-related incidents and crimes are on the rise, owing to the fact that communication by electronic mail (e-mail) has become an important part of our daily lives. The technicality behind e-mail plays an important role when looking for digital evidence that can be used to form a hypothesis for litigation. During this process, it is needful to have a tool that can help isolate email incidents as a potential crime scene in the wake of suspected attacks. The problem this paper addresses is centered on realizing an open-source e-mail forensic tool that uses the header analysis approach. One advantage of this approach is that it helps investigators to collect digital evidence from e-mail systems, organize the collected data, analyze and discover any discrepancies in the header fields of an e-mail, and generate an evidence report. The main contribution of this paper focuses on generating a freshly computed hash that is attached to every generated report, to ensure the verifiability, reliability, and integrity of the reports and to prove that they have not been modified in any way. Finally, this ensures that the sanctity and forensic soundness of the collected evidence are maintained. © 2023 ACM.
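
    A minimal sketch of the header-report-plus-hash idea, using only the Python standard library; the input file name and the selected header fields are illustrative:

        import email
        import hashlib
        import json
        from email import policy

        def header_report(raw_bytes):
            """Extract headers an examiner compares for discrepancies and
            return the report plus a freshly computed SHA-256 digest."""
            msg = email.message_from_bytes(raw_bytes, policy=policy.default)
            fields = {name: msg.get_all(name) for name in
                      ("From", "Return-Path", "Reply-To", "Message-ID", "Received")}
            report = json.dumps(fields, indent=2, default=str)
            # Attaching the digest makes later modification of the report
            # detectable, preserving forensic soundness.
            return report, hashlib.sha256(report.encode()).hexdigest()

        with open("suspect.eml", "rb") as f:  # hypothetical input message
            report, digest = header_report(f.read())
        print(digest)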

  • 12.
    Adurti, Devi Abhiseshu
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Battu, Mohit
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Optimization of Heterogeneous Parallel Computing Systems using Machine Learning (2021). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background: Heterogeneous parallel computing systems utilize a combination of different resources, CPUs and GPUs, to achieve high performance, reduced latency, and lower energy consumption. Programming applications that target various processing units requires employing different tools and programming models/languages. Furthermore, selecting the most optimal implementation, which may either target different processing units (i.e. CPU or GPU) or implement different algorithms, is not trivial for a given context. In this thesis, we investigate the use of machine learning to address the problem of selecting among implementation variants for an application running on a heterogeneous system.

    Objectives: This study is focused on providing an approach for optimization of heterogeneous parallel computing systems at runtime by building the most efficient machine learning model to predict the optimal implementation variant of an application.

    Methods: Six machine learning models, KNN, XGBoost, DTC (Decision Tree Classifier), Random Forest Classifier, LightGBM, and SVM, are trained and tested using stratified k-fold cross-validation on a dataset generated from a matrix multiplication application, for square matrix input dimensions ranging from 16x16 to 10992x10992.

    Results: The findings for each machine learning algorithm are presented through accuracy, the confusion matrix, and a classification report covering precision, recall, and F1-score; a comparison between the machine learning models in terms of accuracy, training time, and prediction time is provided to determine the best model.

    Conclusions: The XGBoost, DTC, and SVM algorithms achieved 100% accuracy. In comparison to the other machine learning models, the DTC is found to be the most suitable due to the low time it requires for training and prediction when predicting the optimal implementation variant of the heterogeneous system application. Hence the DTC is the best-suited algorithm for the optimization of heterogeneous parallel computing.
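
    A minimal sketch of the comparison described above, using scikit-learn with a synthetic stand-in dataset; the real features and labels come from the matrix multiplication experiments:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Stand-in dataset: features describe the input (e.g. matrix size),
        # the label is the fastest implementation variant for that input.
        X, y = make_classification(n_samples=600, n_features=8, n_classes=3,
                                   n_informative=5, random_state=0)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        for name, model in [("DTC", DecisionTreeClassifier(random_state=0)),
                            ("RF", RandomForestClassifier(random_state=0)),
                            ("SVM", SVC())]:
            scores = cross_val_score(model, X, y, cv=cv)
            print(f"{name}: mean accuracy {scores.mean():.3f}")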

    Download full text (pdf)
    fulltext
  • 13.
    Ahlgren, Filip
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Local And Network Ransomware Detection Comparison (2019). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background. Ransomware is a malicious application that encrypts important files on a victim's computer. The ransomware asks the victim for a ransom to be paid through cryptocurrency. After the system is encrypted, there is virtually no way to decrypt the files other than using the encryption key bought from the attacker.

    Objectives. In this practical experiment, we will examine how machine learning can be used to detect ransomware on a local and network level. The results will be compared to see which one has a better performance.

    Methods. Data is collected through malware and goodware databases and then analyzed in a virtual environment to extract system information and network logs. Different machine learning classifiers will be built from the extracted features in order to detect the ransomware. The classifiers will go through a performance evaluation and be compared with each other to find which one has the best performance.

    Results. According to the tests, local detection was both more accurate and stable than network detection. The local classifiers had an average accuracy of 96% while the best network classifier had an average accuracy of 89.6%.

    Conclusions. In this case the results show that local detection has better performance than network detection. However, this can be because the network features were not specific enough for a network classifier. The network performance could have been better if the ransomware samples had consisted of fewer families, so that better features could have been selected.

    Download full text (pdf)
    BTH2019Ahlgren
  • 14.
    Ahlstrand, Jim
    et al.
    Telenor Sverige AB, Karlskrona, Sweden.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Predicting B2B Customer Churn using a Time Series Approach (2024). In: 2024 5th International Conference on Intelligent Data Science Technologies and Applications, IDSTA 2024 / [ed] Alsmirat M., Jararweh Y., Aloqaily M., Salameh H.B., Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 44-51. Conference paper (Refereed).
    Abstract [en]

    Preventing customer churn, i.e., termination of business commitments, is essential for companies operating in saturated markets, especially for subscription-based models such as telecommunication. Knowing when customers decide to terminate services is instrumental to effective churn prevention. In this study, we investigate how churn prediction performs in practice when training models on different time intervals of historic data (1-4 weeks back) and predicting churn at different numbers of weeks ahead (1-4 weeks). We use a real-world, time-series dataset of mobile subscription usage to examine churn prediction for business-to-business (B2B) customers. We utilize the time-series data at a higher temporal resolution than prior studies and investigate different forecasting horizons. Leveraging popular machine learning algorithms such as Random Forests, Gradient Boosting, Neural Networks, and Gated Recurrent Units, we show that the best model achieves an average F1-score of 79.3% for one-week-ahead predictions. However, the average F1-score decreases to 63.3% and 61.8% for two and four weeks ahead, respectively. A model interpretation framework (SHAP) evaluates the feature impact on the models' internal decision logic. We also discuss the challenges in applying churn prediction for the B2B segment.
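
    The lookback/horizon framing can be sketched as follows; the data, the churn rule, and the thresholds are synthetic stand-ins for the study's real subscription-usage series:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        # Toy weekly usage series per customer; the real data is far richer.
        df = pd.DataFrame({"customer": np.repeat(range(100), 12),
                           "week": np.tile(range(12), 100),
                           "usage": rng.gamma(2.0, 10.0, 1200)})

        def make_samples(df, lookback=4, horizon=2):
            """One row per (customer, week): the last `lookback` weeks of usage
            as features; a low-usage signal `horizon` weeks ahead as the label."""
            rows = []
            for _, g in df.sort_values("week").groupby("customer"):
                u = g["usage"].to_numpy()
                for t in range(lookback, len(u) - horizon):
                    rows.append(list(u[t - lookback:t]) + [u[t + horizon] < 5.0])
            cols = [f"usage_t-{lookback - i}" for i in range(lookback)] + ["churn"]
            return pd.DataFrame(rows, columns=cols)

        print(make_samples(df).head())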

    Download full text (pdf)
    fulltext
  • 15.
    Ahlstrand, Jim
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Telenor Sverige AB, Sweden..
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Preliminary Results on the use of Artificial Intelligence for Managing Customer Life Cycles (2023). In: 35th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2023 / [ed] Håkan Grahn, Anton Borg and Martin Boldt, Linköping University Electronic Press, 2023, p. 68-76. Conference paper (Refereed).
    Abstract [en]

    During the last decade we have witnessed how artificial intelligence (AI) has changed businesses all over the world. The customer life cycle framework is widely used in businesses, and AI plays a role in each stage. However, implementing and generating value from AI in the customer life cycle is not always simple. When evaluating AI against business impact and value, it is critical to consider both the model performance and the policy outcome. Proper analysis of AI-derived policies must not be overlooked in order to ensure ethical and trustworthy AI. This paper presents a comprehensive analysis of the literature on AI in customer life cycles (CLV) from an industry perspective. The study included 31 of 224 analyzed peer-reviewed articles from a Scopus search result. The results show a significant research gap regarding outcome evaluations of AI implementations in practice. This paper proposes that policy evaluation is an important tool in the AI pipeline and emphasizes the significance of validating both policy outputs and outcomes to ensure reliable and trustworthy AI.

    Download full text (pdf)
    fulltext
  • 16.
    Ahlström, Frida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Karlsson, Janni
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Utvecklarens förutsättningar för säkerställande av tillgänglig webb [The developer's prerequisites for ensuring an accessible web] (2022). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Since 2019, all public websites in Sweden are legally bound to meet a certain degree of digital accessibility. An additional EU directive is being transposed into national law at the time of publication of this thesis, which will impose corresponding requirements on parts of the private sector, such as banking services and e-commerce. This will likely cause increased demand, which suppliers of web development and, in turn, their developers must be able to meet.

    The aims of this study are to create an increased awareness of digital accessibility as well as to clarify, from the developer’s perspective, how this degree of accessibility is achieved and what could make application of digital accessibility more efficient. 

    In order to achieve this, eight qualitative interviews were conducted, transcribed and thematized in the results section. An inductive thematic analysis has been carried out related to the research questions. It compares the results of previous studies with the outcomes from this study, and shows clear similarities but also differences and new discoveries. 

    The study shows that developers have access to evaluation tools and guidelines that provide good support in their work, but that the responsibility often lies with individual developers rather than with the business as a whole. This is one of the main challenges, together with the fact that inaccessible development is still being carried out in parallel, and that time pressure leads to deprioritization of accessibility. However, the respondents agree that it does not take any more time to develop accessible rather than inaccessible websites, provided that this is taken into account from the outset. Success factors for digital accessibility are to sell the idea to the customer, to work in a structured way with knowledge sharing, and to document solutions in order to save time. In addition to this, it appears that the implementation of accessibility would benefit from the ownership being raised to a higher decision level, from the competence being broadened in the supplier's organization, and from developers gaining access to specialist competence and user tests to support their work. A basic knowledge of accessibility could be included in web development training to a greater extent, and an extension of the legal requirements could also create additional incentives for the customer.

    Download full text (pdf)
    fulltext
  • 17.
    Ahmad, Al Ghaith
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Abd Ulrahman, Ibrahim
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model (2023). Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis.
    Abstract [en]

    Background: As the demand for cybersecurity professionals continues to rise, it is crucial to identify the key skills necessary to thrive in this field. This research project sheds light on the cybersecurity skills landscape by analyzing the recommendations provided by the European Cybersecurity Skills Framework (ECSF), examining the most required skills in the Swedish job market, and investigating the common skills identified through the findings. The project utilizes the large language model, ChatGPT, to classify common cybersecurity skills and evaluate its accuracy compared to human classification.

    Objective: The primary objective of this research is to examine the alignment between the European Cybersecurity Skills Framework (ECSF) and the specific skill demands of the Swedish cybersecurity job market. This study aims to identify common skills and evaluate the effectiveness of a Language Model (ChatGPT) in categorizing jobs based on ECSF profiles. Additionally, it seeks to provide valuable insights for educational institutions and policymakers aiming to enhance workforce development in the cybersecurity sector.

    Methods: The research begins with a review of the European Cybersecurity Skills Framework (ECSF) to understand its recommendations and methodology for defining cybersecurity skills, as well as delineating the cybersecurity profiles along with their corresponding key cybersecurity skills as outlined by the ECSF. Subsequently, a Python-based web crawler is implemented to gather data on cybersecurity job announcements from the Swedish Employment Agency's website. This data is analyzed to identify the most frequently required cybersecurity skills sought by employers in Sweden. The Language Model (ChatGPT) is utilized to classify these positions according to ECSF profiles. Concurrently, two human agents manually categorize jobs to serve as a benchmark for evaluating the accuracy of the Language Model. This allows for a comprehensive assessment of its performance.

    Results: The study thoroughly reviews and cites the recommended skills outlined by the ECSF, offering a comprehensive European perspective on key cybersecurity skills (Tables 4 and 5). Additionally, it identifies the most in-demand skills in the Swedish job market, as illustrated in Figure 6. The research reveals the alignment between ECSF-prescribed skills in different profiles and those sought after in the Swedish cybersecurity market. The skills of the profiles 'Cybersecurity Implementer' and 'Cybersecurity Architect' emerge as particularly critical, representing over 58% of the market demand. The research further highlights shared skills across various profiles (Table 7).

    Conclusion: This study highlights the alignment between the European Cybersecurity Skills Framework (ECSF) recommendations and the evolving demands of the Swedish cybersecurity job market. Through a review of ECSF-prescribed skills and a thorough examination of the Swedish job landscape, this research identifies crucial areas of alignment. Significantly, the skills associated with the 'Cybersecurity Implementer' and 'Cybersecurity Architect' profiles emerge as central, collectively constituting over 58% of market demand. This emphasizes the urgent need for educational programs to adapt and harmonize with industry requisites. Moreover, the study advances our understanding of the Language Model's effectiveness in job categorization. The findings hold significant implications for workforce development strategies and educational policies within the cybersecurity domain, underscoring the pivotal role of informed skills development in meeting the evolving needs of the cybersecurity workforce.
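
    The evaluation of model-versus-human agreement can be sketched in a few lines; the labels below are hypothetical, and the thesis may report different metrics:

        from sklearn.metrics import accuracy_score, cohen_kappa_score

        # Hypothetical ECSF profile labels assigned to five job ads by the
        # language model and by the human annotators used as the benchmark.
        llm   = ["Implementer", "Architect", "Implementer", "Analyst", "Architect"]
        human = ["Implementer", "Architect", "Analyst", "Analyst", "Architect"]

        print("accuracy:", accuracy_score(human, llm))
        # Cohen's kappa corrects raw agreement for agreement by chance.
        print("kappa:", cohen_kappa_score(human, llm))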

    Download full text (pdf)
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model
  • 18.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Automated Context-aware Vulnerability Risk Management (2023). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    The information security landscape continually evolves with an increasing number of publicly known vulnerabilities (e.g., 25064 new vulnerabilities in 2022). Vulnerabilities play a prominent role in all types of security-related attacks, including ransomware and data breaches. Vulnerability Risk Management (VRM) is an essential cyber defense mechanism to eliminate or reduce attack surfaces in information technology. VRM is a continuous procedure of identification, classification, evaluation, and remediation of vulnerabilities. The traditional VRM procedure is time-consuming, as classification, evaluation, and remediation require skills and knowledge of specific computer systems, software, networks, and security policies. Activities requiring human input slow down the VRM process, increasing the risk of exploiting a vulnerability.

    The thesis introduces the Automated Context-aware Vulnerability Risk Management (ACVRM) methodology to improve VRM procedures by automating the entire VRM cycle and reducing the procedure time and experts' intervention. ACVRM focuses on the challenging stages (i.e., classification, evaluation, and remediation) of VRM to support security experts in promptly prioritizing and patching the vulnerabilities. 

    The ACVRM concept is designed and implemented in a test environment as a proof of concept. The efficiency of patch prioritization by ACVRM was compared against a commercial vulnerability management tool (i.e., Rudder). ACVRM prioritized the vulnerabilities based on the patch score (i.e., the numeric representation of the vulnerability characteristics and the risk), the historical data, and dependencies. The experiments indicate that ACVRM can rank the vulnerabilities in the organization's context by weighting the criteria used in the patch score calculation. The automated patch deployment is implemented with three use cases to investigate the impact of learning from historical events and dependencies on the success rate of the patch and on human intervention. Our findings show that ACVRM reduced the need for human actions, increased the ratio of successfully patched vulnerabilities, and decreased the cycle time of the VRM process.

    Download full text (pdf)
    fulltext
  • 19.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Secure Collaborative AI Service Chains (2019). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    At present, Artificial Intelligence (AI) systems have been adopted in many different domains, such as healthcare, robotics, automotive, telecommunication systems, security, and finance, to integrate intelligence into their services and applications. Intelligent personal assistants such as Siri and Alexa are examples of AI systems making an impact on our daily lives. Since many AI systems are data-driven, they require large volumes of data for training and validation, advanced algorithms, computing power, and storage in their development process. Collaboration in the AI development process (the AI engineering process) reduces the cost and time of bringing AI applications to market. However, collaboration introduces concerns about privacy and piracy of intellectual property, which can be caused by the actors who collaborate in the engineering process.

    This work investigates the non-functional requirements, such as privacy and security, for enabling collaboration in AI service chains. It proposes an architectural design approach for collaborative AI engineering and explores the concept of the pipeline (service chain) for chaining AI functions. In order to enable controlled collaboration between AI artefacts in a pipeline, this work makes use of virtualisation technology to define and implement Virtual Premises (VPs), which act as protection wrappers for AI pipelines. A VP is a virtual policy enforcement point for a pipeline and requires access permission and authenticity for each element in a pipeline before the pipeline can be used.

    Furthermore, the proposed architecture is evaluated in a use-case approach that enables quick detection of design flaws during the initial stage of implementation. To evaluate the security level and compliance with security requirements, threat modeling was used to identify potential threats and vulnerabilities of the system and to analyse their possible effects. The output of threat modeling was used to define countermeasures to threats related to unauthorised access and execution of AI artefacts.

    Download full text (pdf)
    fulltext
  • 20.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Automated Context-Aware Vulnerability Risk Management for Patch Prioritization (2022). In: Electronics, E-ISSN 2079-9292, Vol. 11, no 21, article id 3580. Article in journal (Refereed).
    Abstract [en]

    The information-security landscape continuously evolves by discovering new vulnerabilities daily and sophisticated exploit tools. Vulnerability risk management (VRM) is the most crucial cyber defense to eliminate attack surfaces in IT environments. VRM is a cyclical practice of identifying, classifying, evaluating, and remediating vulnerabilities. The evaluation stage of VRM is neither automated nor cost-effective, as it demands great manual administrative efforts to prioritize the patch. Therefore, there is an urgent need to improve the VRM procedure by automating the entire VRM cycle in the context of a given organization. The authors propose automated context-aware VRM (ACVRM), to address the above challenges. This study defines the criteria to consider in the evaluation stage of ACVRM to prioritize the patching. Moreover, patch prioritization is customized in an organization’s context by allowing the organization to select the vulnerability management mode and weigh the selected criteria. Specifically, this study considers four vulnerability evaluation cases: (i) evaluation criteria are weighted homogeneously; (ii) attack complexity and availability are not considered important criteria; (iii) the security score is the only important criteria considered; and (iv) criteria are weighted based on the organization’s risk appetite. The result verifies the proposed solution’s efficiency compared with the Rudder vulnerability management tool (CVE-plugin). While Rudder produces a ranking independent from the scenario, ACVRM can sort vulnerabilities according to the organization’s criteria and context. Moreover, while Rudder randomly sorts vulnerabilities with the same patch score, ACVRM sorts them according to their age, giving a higher security score to older publicly known vulnerabilities. © 2022 by the authors.
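
    The weighted, organization-specific patch score can be sketched as a simple weighted sum; the criteria names, weights, and scores below are illustrative, not the paper's exact scheme:

        def patch_score(vuln, weights):
            """Weighted sum of evaluation criteria, each already scaled to [0, 1].
            The weights encode the organization's risk appetite."""
            return sum(weights[c] * vuln[c] for c in weights)

        weights = {"severity": 0.4, "attack_complexity": 0.2,
                   "availability": 0.2, "age": 0.2}
        vulns = {
            "CVE-A": {"severity": 0.9, "attack_complexity": 0.3, "availability": 1.0, "age": 0.8},
            "CVE-B": {"severity": 0.7, "attack_complexity": 0.9, "availability": 0.2, "age": 0.1},
        }
        # Patch in descending score order; older vulnerabilities would break ties.
        for cve in sorted(vulns, key=lambda v: patch_score(vulns[v], weights), reverse=True):
            print(cve, round(patch_score(vulns[cve], weights), 2))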

    Download full text (pdf)
    fulltext
  • 21.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Sapienza University of Rome, Italy.
    Automated Patch Management: An Empirical Evaluation Study (2023). In: Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience, CSR 2023, IEEE, 2023, p. 321-328. Conference paper (Refereed).
    Abstract [en]

    Vulnerability patch management is one of IT organizations' most complex issues due to the increasing number of publicly known vulnerabilities and explicit patch deadlines for compliance. Patch management requires human involvement in testing, deploying, and verifying the patch and its potential side effects. Hence, there is a need to automate the patch management procedure to meet patch deadlines with a limited number of available experts. This study proposes and implements an automated patch management procedure to address the mentioned challenges. The method also includes logic to automatically handle errors that might occur during patch deployment and verification. Moreover, the authors added an automated review step before patch management to adjust the patch prioritization list if multiple cumulative patches or dependencies are detected. The results indicate that our method reduced the need for human intervention, increased the ratio of successfully patched vulnerabilities, and decreased the execution time of vulnerability risk management.
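
    The deploy-verify-rollback logic can be sketched as a small control loop; the deploy/verify/rollback callables below are hypothetical stand-ins for the configuration-management layer used in the study:

        def run_patch_cycle(patches, deploy, verify, rollback, retries=1):
            """Deploy each patch in priority order, verify it, retry once on a
            transient error, and roll back if verification still fails."""
            outcome = {}
            for p in patches:
                ok = False
                for _ in range(retries + 1):
                    try:
                        deploy(p)
                        ok = verify(p)
                        if ok:
                            break
                    except RuntimeError:
                        continue  # transient deployment error: retry
                if not ok:
                    rollback(p)
                outcome[p] = "patched" if ok else "rolled back"
            return outcome

        # Toy usage with stub callables standing in for real tooling.
        print(run_patch_cycle(["CVE-A", "CVE-B"],
                              deploy=lambda p: None,
                              verify=lambda p: p != "CVE-B",
                              rollback=lambda p: None))
        # {'CVE-A': 'patched', 'CVE-B': 'rolled back'}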

    Download full text (pdf)
    fulltext
  • 22.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Normalization Framework for Vulnerability Risk Management in Cloud (2021). In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021, IEEE, 2021, p. 99-106. Conference paper (Refereed).
    Abstract [en]

    Vulnerability Risk Management (VRM) is a critical element in cloud security that directly impacts cloud providers’ security assurance levels. Today, VRM is a challenging process because of the dramatic increase of known vulnerabilities (+26% in the last five years), and because it is even more dependent on the organization’s context. Moreover, the vulnerability’s severity score depends on the Vulnerability Database (VD) selected as a reference in VRM. All these factors introduce a new challenge for security specialists in evaluating and patching the vulnerabilities. This study provides a framework to improve the classification and evaluation phases in vulnerability risk management while using multiple vulnerability databases as a reference. Our solution normalizes the severity score of each vulnerability based on the selected security assurance level. The results of our study highlighted the role of the vulnerability databases in patch prioritization, showing the advantage of using multiple VDs.
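
    The core normalization idea can be sketched as min-max rescaling of each database's score onto a common scale, followed by a weighted fusion; the databases, ranges, and weights below are illustrative:

        def normalize(score, lo, hi, target=10.0):
            """Min-max rescale one database's severity score onto a common scale."""
            return (score - lo) / (hi - lo) * target

        # Illustrative scores for one vulnerability from three vulnerability
        # databases that use different native ranges: (score, range_lo, range_hi).
        per_db = {"nvd": (7.8, 0.0, 10.0), "db2": (3.9, 0.0, 5.0), "db3": (78, 0, 100)}
        common = [normalize(s, lo, hi) for s, lo, hi in per_db.values()]

        # Fuse with weights reflecting the selected security assurance level.
        w = [0.5, 0.3, 0.2]
        print(sum(wi * si for wi, si in zip(w, common)))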

    Download full text (pdf)
    fulltext
  • 23.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. City Network International AB, Sweden.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Normalization of Severity Rating for Automated Context-aware Vulnerability Risk Management (2020). In: Proceedings - 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion, ACSOS-C 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 200-205, article id 9196350. Conference paper (Refereed).
    Abstract [en]

    In the last three years, the unprecedented increase in discovered vulnerabilities ranked with critical and high severity raises new challenges in Vulnerability Risk Management (VRM). Indeed, identifying, analyzing and remediating this high rate of vulnerabilities is labour-intensive, especially for enterprises dealing with complex computing infrastructures, such as Infrastructure-as-a-Service providers. Hence there is a demand for new criteria to prioritize vulnerability remediation and for new automated/autonomic approaches to VRM.

    In this paper, we address the above challenge by proposing an Automated Context-aware Vulnerability Risk Management (AC-VRM) methodology that aims to reduce the labour-intensive tasks of security experts and to prioritize vulnerability remediation on the basis of the organization's context rather than risk severity only. The proposed solution considers multiple vulnerability databases to obtain broad coverage of known vulnerabilities and to determine the vulnerability rank. After describing the new VRM methodology, we focus on the problem of obtaining a single vulnerability score by normalization and fusion of the ranks obtained from multiple vulnerability databases. Our solution is a parametric normalization that accounts for organization needs/specifications.

    Download full text (pdf)
    fulltext
  • 24.
    Ahmed Sheik, Kareem
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Comparative Study on Optimization Algorithms and its efficiency (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 HE credits. Student thesis.
    Abstract [en]

    Background: In computer science, optimization can be defined as finding the most cost-effective or best achievable performance under given circumstances, maximizing desired factors and minimizing undesirable ones. Many problems in the real world are continuous, and it is not easy to find global solutions. However, developments in computer technology increase the speed of computations [1]. The optimization method, an efficient numerical simulator, and a realistic depiction of the physical operations that we intend to describe and optimize are all interconnected components of the optimization process for any optimization issue [2].

    Objectives: A literature review of existing optimization algorithms is performed. Ten different benchmark functions are considered and are run on the chosen existing algorithms, GA (Genetic Algorithm), ACO (Ant Colony Optimization), and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure the efficiency of these approaches based on factors or metrics like CPU Time, Optimality, Accuracy, and Mean Best Standard Deviation.

    Methods: In this research work, a mixed-method approach is used. A literature review is performed on the existing optimization algorithms. An experiment is then conducted using ten different benchmark functions with the existing optimization algorithms, the PSO (Particle Swarm Optimization) algorithm, the ACO algorithm, GA, and PIBO, to measure their efficiency based on four different factors: CPU Time, Optimality, Accuracy, and Mean Best Standard Deviation. This tells us which optimization algorithms perform better.

    Results: The experiment's findings are presented in this section. Using the standard functions with the suggested method and the other methods, the various metrics like CPU Time, Optimality, Accuracy, and Mean Best Standard Deviation are measured, and the results are tabulated. Graphs are made using the data obtained.

    Analysis and Discussion: The research questions are addressed based on the experiment's results that have been conducted.

    Conclusion: We conclude the research by analyzing the existing optimization methods and the algorithms' performance. The PIBO performs much better, as shown by the results for the optimality metrics, best mean, standard deviation, and accuracy; its significant drawback is CPU Time, which is much higher than that of the PSO algorithm, close to that of GA, and still much better than that of the ACO algorithm.
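
    The experimental setup can be sketched as benchmarking an optimizer on a standard test function while recording two of the study's metrics; random search stands in for the GA/ACO/PSO/PIBO algorithms the thesis actually compares:

        import time
        import numpy as np

        def rastrigin(x):
            # A standard multimodal benchmark with global minimum 0 at the origin.
            return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

        def random_search(f, dim=10, iters=20000, seed=0):
            # Baseline optimizer; GA/ACO/PSO/PIBO would be swapped in here.
            rng = np.random.default_rng(seed)
            return min(f(rng.uniform(-5.12, 5.12, dim)) for _ in range(iters))

        t0 = time.perf_counter()
        best = random_search(rastrigin)
        cpu = time.perf_counter() - t0
        print(f"best value {best:.3f}, CPU time {cpu:.2f}s")  # two of the metrics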

    Download full text (pdf)
    A Comparative Study on Optimization Algorithms and its efficiency
  • 25.
    Ahmed, Syed Saif
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arepalli, Harshini Devi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Despite Covid-19’s drawbacks, it has recently contributed to highlighting the significance of cloud computing. The great majority of enterprises and organisations have shifted to a hybrid mode that enables users or workers to access their work environment from any location. This made it possible for businesses to save on-premises costs by moving their operations to the cloud. It has become essential to allocate resources effectively, especially through predictive auto-scaling. Although many algorithms have been studied regarding predictive auto-scaling, further analysis and validation need to be done. The objectives of this thesis are to implement machine-learning algorithms for predicting auto-scaling and to compare their performance on common grounds. The secondary objective is to find data connections amongst features within the dataset and evaluate their correlation coefficients. The methodology adopted for this thesis is experimentation. Experimentation was selected so that the auto-scaling algorithms can be tested in practical situations and the results compared to identify the best algorithm using the selected metrics. This experiment can assist in determining whether the algorithms operate as predicted. Metrics such as Accuracy, F1-Score, Precision, Recall, Training Time and Root Mean Square Error (RMSE) are calculated for the chosen algorithms Random Forest (RF), Logistic Regression, Support Vector Machine and Naive Bayes Classifier. The correlation coefficients of the features in the data are also measured, which helped in increasing the accuracy of the machine learning model. In conclusion, the features related to our target variable (CPU usage, p95_scaling) often had high correlation coefficients compared to other features. The relationships between these variables could potentially be influenced by other variables that are unrelated to the target variable. Also, from the experimentation, it can be seen that the optimal algorithm for determining how cloud resources should be scaled is the Random Forest Classifier.
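
    A minimal sketch of the feature-correlation check and classifier training described above, with a tiny hypothetical monitoring dataset; the real dataset and its columns differ:

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical monitoring columns; p95_scaling is the scaling decision label.
        df = pd.DataFrame({"cpu_usage":   [0.2, 0.9, 0.8, 0.3, 0.95, 0.1],
                           "memory":      [0.3, 0.7, 0.9, 0.2, 0.80, 0.2],
                           "requests":    [120, 900, 850, 150, 990, 80],
                           "p95_scaling": [0,   1,   1,   0,   1,   0]})

        # Feature correlation against the target guides feature selection.
        print(df.corr()["p95_scaling"].sort_values(ascending=False))

        X, y = df.drop(columns="p95_scaling"), df["p95_scaling"]
        clf = RandomForestClassifier(random_state=0).fit(X, y)
        print("training accuracy:", clf.score(X, y))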

    Download full text (pdf)
    Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation
  • 26.
    Aiyatham Prabakar, Rishi Kiran
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    WebAssembly Performance Analysis: A Comparative Study of C++ and Rust Implementations2024Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: In the contemporary digital landscape, web browsers have evolved from mere text renderers to sophisticated platforms supporting interactive 3D visualizations, multimedia, and gaming applications. This evolution has been catalysed by the advent of WebAssembly, a binary instruction format designed to enhance browser performance by executing high-level language code with near-native efficiency. This thesis investigates the performance implications of WebAssembly modules generated from programs written in C++ and Rust programming languages.

    Objectives: The primary aim is to assess the performance of WebAssembly (Wasm) modules generated from the Rust and C++ programming languages. This involves conducting a mini literature review on WebAssembly compilation and on C++ and Rust semantics and compilation processes. Furthermore, the study aims to evaluate the performance of C++ and Rust Wasm modules on tasks such as sorting and matrix multiplication. Performance metrics, including execution time and file size of the obtained Wasm modules, are analysed using Chrome’s DevTools. Ultimately, the research endeavours to provide insights into Wasm performance.

    Method: In this study, the research method relies on a quantitative experimental approach: programs such as quicksort and matrix multiplication are written in both C++ and Rust and compiled into Wasm modules, using Emscripten for the C++ programs and rustc for the Rust programs. Since a Wasm module is bytecode, it is converted into the WebAssembly Text Format (WAT) to obtain readable machine instructions. Performance metrics such as execution speed, file size, and the number of assembly instructions (add, load, loop, etc.) are evaluated and calculated using Chrome’s DevTools.
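
    A minimal sketch of the compile-and-measure step under the stated toolchain, assuming the Emscripten (em++) and Rust (rustc) compilers are installed; the source file names and optimization flags are illustrative, not the thesis' exact settings:

        import os
        import subprocess

        # C++ -> Wasm via Emscripten; em++ emits a .wasm file next to the .js loader
        subprocess.run(["em++", "-O2", "quicksort.cpp", "-o", "quicksort_cpp.js"], check=True)

        # Rust -> Wasm via rustc, targeting the bare wasm32 target (assumed flags)
        subprocess.run(["rustc", "--target", "wasm32-unknown-unknown", "-O",
                        "quicksort.rs", "-o", "quicksort_rs.wasm"], check=True)

        # File-size metric used in the comparison
        for path in ["quicksort_cpp.wasm", "quicksort_rs.wasm"]:
            if os.path.exists(path):
                print(path, os.path.getsize(path), "bytes")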

    Results: The comparative analysis between the C++ and Rust Wasm modules shows that the Rust Wasm modules are faster and more efficient, in terms of execution time, file size, and related metrics, than the C++ Wasm modules. The findings aim to assist developers in making informed decisions regarding programming language selection for web development, thereby enhancing the efficiency of web applications.

    Conclusion: The study has determined that the performance characteristics of WebAssembly modules originating from both C++ and Rust programs vary. The results underscore the superior speed and efficiency of Rust-generated Wasm modules when contrasted with those produced from C++. These insights establish a robust basis for future research and optimization initiatives within the field of web development.

    Download full text (pdf)
    fulltext
  • 27.
    Ajjapu, Siva Babu
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lokireddy, Sasank Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Indoor VLC behaviour with RGB spectral power distribution using simulation.2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In recent years, visible light communication (VLC) has been one of the fastest-growing technologies in this competitive world, breaking through as a wireless transmission candidate for future mobile communications. VLC can replace radio frequency (RF) communication, as it offers several important features such as large bandwidth, low cost, and an unlicensed spectrum. In telecommunications, there is a need for high bandwidth and secure transmission of data through a network. Communication can be wired or wireless: wired media include coaxial cable, twisted pair, and fiber optics, while wireless options include RF, light fidelity (Li-Fi), and optical wireless communication (OWC). In our daily lives, we transfer data from one place to another through a network connection, often with multiple devices connected to the same network. The network bandwidth provided by VLC is higher than that of RF communications, and when multiple devices are connected, RF suffers from high latency whereas VLC keeps latency low. In this research, light-emitting diode (LED) bulbs act as the transmitter (Tx), and an avalanche photodiode (APD) acts as the receiver (Rx).

    This research mainly focuses on creating a MATLAB simulation environment for a two-room VLC system with given spectral power distributions. We simulated two rooms with identical dimensions, with the LEDs placed in opposite positions in each room: in one room, the LED is placed at the middle top of the ceiling and a photodiode (PD) is placed on top of the table under the light, while in the other room the light is placed on top of the table and the PD at the middle top of the ceiling. Moreover, these two rooms are connected to the same network. The input parameters are taken from previous studies, but the transmitting power is calculated from the Red-Green-Blue (RGB, or white) light spectral distribution using the OOK modulation technique. We obtained the responsivity of the APD at a single point and bit error rates (BER) of the APD at multiple points inside both rooms.
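
    For readers unfamiliar with the underlying link model, the following sketch evaluates the standard Lambertian line-of-sight channel gain and the OOK bit error rate BER = Q(sqrt(SNR)); all numeric parameter values are illustrative assumptions rather than the thesis' simulation settings:

        import math

        def los_gain(half_angle_deg, area, d, phi, psi, Ts=1.0, g=1.0):
            """DC channel gain H(0) of a line-of-sight Lambertian LED link."""
            m = -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))
            return ((m + 1) * area / (2 * math.pi * d ** 2)
                    * math.cos(phi) ** m * Ts * g * math.cos(psi))

        def ber_ook(snr):
            """OOK bit error rate, BER = Q(sqrt(SNR))."""
            return 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))

        Pt = 10.0       # transmitted optical power [W] (assumed)
        R = 0.5         # APD responsivity [A/W] (assumed)
        noise = 1e-13   # total noise variance (assumed)
        H = los_gain(60, 1e-4, d=2.5, phi=0.0, psi=0.0)
        snr = (R * H * Pt) ** 2 / noise
        print(f"H = {H:.3e}, SNR = {snr:.1f}, BER = {ber_ook(snr):.3e}")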

    Download full text (pdf)
    fulltext
  • 28.
    Aklilu, Yohannes T.
    et al.
    University of Skövde, SWE.
    Ding, Jianguo
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Survey on blockchain for smart grid management, control, and operation2022In: Energies, E-ISSN 1996-1073, Vol. 15, no 1, article id 193Article in journal (Refereed)
    Abstract [en]

    Power generation, distribution, transmission, and consumption face ongoing challenges such as smart grid management, control, and operation, resulting from high energy demand, the diversity of energy sources, and environmental or regulatory issues. This paper provides a comprehensive overview of blockchain-based solutions for smart grid management, control, and operations. We systematically summarize existing work on the use and implementation of blockchain technology in various smart grid domains. The paper compares related reviews and highlights the challenges in the management, control, and operation of a blockchain-based smart grid, as well as future research directions in five categories: collaboration among stakeholders; data analysis and data management; control of grid imbalances; decentralization of grid management and operations; and security and privacy. All these aspects have not been covered in previous reviews. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

    Download full text (pdf)
    fulltext
  • 29.
    AKULA, SAI PANKAJ
    Blekinge Institute of Technology, Education Development Unit. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A critical evaluation on SRK STORE APP by using the Heuristic Principles of Usability2021Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The aim of this thesis is to critically evaluate the SRK STORE APP (a shopping app for Android) by applying the heuristic principles of usability, in order to identify the usability issues or problems of the application. Another vital element of this thesis is to derive the necessary improvement suggestions for the application from the heuristic evaluation. The outcome should demonstrate whether the mobile application is flexible for its users according to the heuristic principles.

    Background: To be aesthetic and attractive, a mobile application should offer an ideal user experience with good usability. We therefore decided to focus on this field, and while looking through different articles, we came across one that discusses design principles and their concepts. The idea for this thesis was derived from the literature survey we conducted on the design principles of heuristic evaluation and their concepts. This thesis aims to provide the necessary suggestions, as well as complementary solutions and recommendations, for the specific mobile application by applying the heuristic evaluation principles relevant to mobile applications.

    Objectives: The main objectives of this project are to examine the design principles, identify the usability issues or problems of the mobile application, compile a list of necessary suggestions for enhancing it, and provide concrete recommendations for the existing application.

    Methods: To compile a list of necessary suggestions and provide concrete recommendations for the mobile application, we applied Jakob Nielsen’s design principles. This method aids in determining the utility of design criteria and in transforming the interactive system by analyzing factors such as usability. Using this method, we provide a concise yet detailed overview of the importance of design principles in an interaction. The key aim of employing usability design principles is to ensure the performance and reliability of the interaction design, to provide meaningful user interaction assistance, and to deliver an acceptable and optimal user experience.

    Results: The results obtained are the usability issues of the mobile application, i.e., the SRK STORE APP, and the heuristic principles that the application does not satisfy. Severity levels are assigned to the violated heuristics, and a list of necessary suggestions for enhancing the mobile application, together with concrete recommendations for the existing application, is presented.

    Conclusions: This study was conducted to evaluate the mobile application. The heuristic evaluation methodology was used to evaluate the system, and Jakob Nielsen’s design principles were used to identify the usability issues of the mobile application. The required suggestions and concrete recommendations/solutions for the existing mobile application are provided.

    Download full text (pdf)
    A critical evaluation on SRK STORE APP by using the Heuristic Principles of Usability
  • 30.
    Akula, Venkata Sai Abhiram
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Bojja, Vaishnavi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cross-Dataset Generalization of Deep Learning Models for Melanoma Prediction: A Comparative Study of AlexNet, GoogLeNet, DenseNet, ResNet and Ensemble Approaches Beyond HAM100002024Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background:

    Melanoma is a severe form of skin cancer that requires early diagnosis for effective treatment. Deep learning models, especially Convolutional Neural Networks (CNNs), have shown promise in medical image classification. This study aims to evaluate the generalization capabilities of individual deep learning models and an ensemble approach for melanoma prediction, trained on the HAM10000 dataset and validated on other test datasets, such as PH2 and DermNet.

    Objectives:

    The primary objective of this thesis is to evaluate the performance of AlexNet, DenseNet, ResNet, GoogLeNet, and an ensemble model for classifying melanoma images by training the models on the HAM10000 dataset and testing on the PH2 and DermNet datasets. The secondary objective is to analyse the generalization ability of the HAM10000 dataset to produce effective models.

    Methods:

    The datasets for the study are collected from Kaggle and official databases and are preprocessed using techniques such as normalization and LabelEncoder. The models are compiled after applying data augmentation and reinforcement. Next, the models are trained with different train_test_split configurations (80%, 90% and 100%) of the HAM10000 dataset, and all the models (12 individual models, 4 per training group, and 3 ensemble models, 1 per training group) are evaluated on a custom dataset (the merged PH2 and DermNet datasets).

    Results:

    The ensemble model (en_V100) achieved the best overall performance, with the highest accuracy of 83.56%, precision of 0.947, recall of 0.943, F1-score of 0.945, and the lowest Hamming loss of 0.164. Amongst the individual models, ResNet demonstrated the best performance thanks to its architecture with residual connections. The AlexNet models performed poorly in every training group, their simple architecture failing to capture the complex patterns in the data. Additionally, the HAM10000 dataset proved to be effective, as all the models produced generalizable results.

    Conclusions:

    The ensemble approach outperformed the individual models in every training group, suggesting that combining multiple architectures results in an overall more reliable model for melanoma prediction. The models trained on the HAM10000 dataset showed good generalization when evaluated on other diverse datasets (PH2 and DermNet), indicating the effectiveness of the dataset.
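
    A minimal sketch of the soft-voting idea behind such an ensemble (an assumed form; the thesis' exact combination rule is not reproduced): average the per-class probabilities of the individual CNNs and take the argmax, shown here with dummy model outputs:

        import numpy as np

        def ensemble_predict(prob_arrays):
            """prob_arrays: list of (n_samples, n_classes) softmax outputs, one per model."""
            avg = np.mean(np.stack(prob_arrays), axis=0)   # average the soft votes
            return np.argmax(avg, axis=1)                  # final class per sample

        # Dummy outputs standing in for four trained CNNs over 5 samples / 7 classes
        rng = np.random.default_rng(0)
        probs = [rng.dirichlet(np.ones(7), size=5) for _ in range(4)]
        print(ensemble_predict(probs))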

    Download full text (pdf)
    fulltext
  • 32.
    Akurathi, Lakshmikanth
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Chilluguri, Surya Teja Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Decode and Forward Relay Assisting Active Jamming in NOMA System2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Non-orthogonal multiple access (NOMA), with its exceptional spectrum efficiency, is regarded as a promising technology for upcoming wireless communications. Physical layer security has also been investigated to improve the security performance of the system. Power-domain NOMA is considered in this paper, where multiple users can share the same spectrum based on distinct power levels. Power allocation is used to assign different power to the users based on their channel conditions. The data signals of different users are superimposed on the transmitter's side, and each receiver uses successive interference cancellation (SIC) to remove the unwanted signals before decoding its own signal. There exists an eavesdropper whose motive is to eavesdrop on the confidential information being shared with the users. The network model developed in this way consists of two links, one considering the relay transmission path from the source via the near user to the far user, and the other the direct transmission path from the source to the destination, both of which experience Nakagami-m fading. To degrade the eavesdropper's channel, a jamming technique is used against the eavesdropper, where the users are assumed to operate in full-duplex mode, with the aim of improving physical layer security. Secrecy performance metrics such as the secrecy outage probability and secrecy capacity are evaluated and analyzed for the considered system. Mathematical analysis and MATLAB simulations are used to assess, analyze and visualize the system's performance in the presence of an eavesdropper when the jamming technique is applied. According to the simulation results, the active jamming approach enhances the secrecy performance of the entire system and leads to a positive improvement in the secrecy rate.
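
    A small Monte Carlo sketch of the secrecy metrics under Nakagami-m fading, where the channel power gain is Gamma(m, Ω/m) distributed and Cs = [log2(1 + SNR_user) − log2(1 + SNR_eve)]^+; the parameter values are illustrative assumptions, not the thesis' system settings:

        import numpy as np

        rng = np.random.default_rng(1)
        N = 1_000_000
        m, omega = 2.0, 1.0                      # Nakagami shape / average channel power
        snr_tx_user, snr_tx_eve = 100.0, 10.0    # average transmit SNRs (assumed)
        Rs = 0.5                                 # target secrecy rate [bit/s/Hz] (assumed)

        # Nakagami-m fading: power gain |h|^2 ~ Gamma(shape=m, scale=omega/m)
        g_user = rng.gamma(shape=m, scale=omega / m, size=N)
        g_eve = rng.gamma(shape=m, scale=omega / m, size=N)

        cs = np.maximum(0.0, np.log2(1 + snr_tx_user * g_user)
                             - np.log2(1 + snr_tx_eve * g_eve))
        print("average secrecy capacity :", cs.mean())
        print("secrecy outage probability:", np.mean(cs < Rs))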

    Download full text (pdf)
    Decode and Forward Relay Assisting Active Jamming in NOMA System
  • 33.
    Alanko Öberg, John
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Svensson, Carl
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Time-based Key for Coverless Audio Steganography: A Proposed Behavioral Method to Increase Capacity2023Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Coverless steganography is a relatively unexplored area of steganography in which the message is not embedded into a cover media. Instead, the message is derived from one or several properties already existing in the carrier media. This renders steganalysis methods used for traditional steganography useless. Early coverless methods were applied to images or texts, but more recently the possibilities in the video and audio domains have been explored. The audio domain still remains relatively unexplored, however, with the earliest work presented in 2022. In this thesis, we narrow the existing research gap by proposing an audio-compatible method which uses the timestamp marking when a carrier media was received to generate a time-based key that can be applied to the hash produced by said carrier. This effectively allows one carrier to represent a range of different hashes depending on the timestamp specifying when it was received, increasing capacity.
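
    The following sketch illustrates the time-based-key idea in its simplest form (the thesis' actual audio feature hash and key derivation are not reproduced): a 16-bit carrier hash is combined with a key derived from the receive timestamp, so the same clip represents different messages at different times:

        import hashlib

        BITS = 16  # per-message capacity discussed in the thesis

        def carrier_hash(audio_bytes: bytes) -> int:
            """Stand-in for the audio feature hash; here simply truncated SHA-256."""
            return int.from_bytes(hashlib.sha256(audio_bytes).digest()[:2], "big")

        def time_key(timestamp: int) -> int:
            """Derive a 16-bit key from the receive timestamp (assumed derivation)."""
            return int.from_bytes(hashlib.sha256(str(timestamp).encode()).digest()[:2], "big")

        def represented_message(audio_bytes: bytes, timestamp: int) -> int:
            return carrier_hash(audio_bytes) ^ time_key(timestamp)

        clip = b"...pcm samples of a one-minute audio clip..."
        for t in (1700000000, 1700000060):       # same clip, two receive times
            print(t, format(represented_message(clip, t), f"0{BITS}b"))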

    Objectives. The objectives of the thesis are to explore what features of audio are suitable for steganographic use, to establish a method for finding audio clips which can represent a specific message to be sent and to improve on the current state-of-the-art method, taking capacity, robustness and cost into consideration.

    Methods. A literature review was first conducted to gain insight on techniques used in previous works. This served both to illuminate features of audio that could be used to good effect in a coverless approach, and to identify coverless approaches which could work but had not been tested yet. Experiments were then performed on two datasets to show the effective capacity increase of the proposed method when used in tandem with the existing state-of-the-art method for coverless audio steganography. Additional robustness tests for said state-of-the-art method were also performed.

    Results. The results show that the proposed method could increase the per-message capacity from eight bits to 16 bits, while still retaining 100% effective capacity using only 200 key permutations, given a database consisting of 50 one-minute-long audio clips. They further show that the time cost added by the proposed method is in total less than 0.1 seconds for 2048 key permutations. The robustness experiments show that the hashing algorithms used in the state-of-the-art method have high robustness against additive white Gaussian noise, low-pass filters, and resampling attacks but are weaker against compression and band-pass filters.

    Conclusions. We address the scientific gap and complete our objectives by proposing a method which can increase capacity of existing coverless steganography methods. We demonstrate the capacity increase our method brings by using it in tandem with the state-of-the-art method for the coverless audio domain. We argue that our method is not limited to the audio domain, or to the coverless method with which we performed our experiments. Finally, we discuss several directions for future works. 

    Download full text (pdf)
    fulltext
  • 34.
    Alawadi, Sadi
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ait-Mlouk, Addi
    University of Skövde.
    Toor, Salman
    Uppsala University.
    Hellander, Andreas
    Uppsala University.
    Toward efficient resource utilization at edge nodes in federated learning2024In: Progress in Artificial Intelligence, ISSN 2192-6352, E-ISSN 2192-6360, Vol. 13, no 2, p. 101-117Article in journal (Refereed)
    Abstract [en]

    Federated learning (FL) enables edge nodes to collaboratively contribute to constructing a global model without sharing their data. This is accomplished by devices computing local, private model updates that are then aggregated by a server. However, computational resource constraints and network communication can become a severe bottleneck for the larger model sizes typical of deep learning (DL) applications. Edge nodes tend to have limited hardware resources (RAM, CPU), and the network bandwidth and reliability at the edge are a concern for scaling federated fleet applications. In this paper, we propose and evaluate a FL strategy inspired by transfer learning in order to reduce resource utilization on devices, as well as the load on the server and network, in each global training round. For each local model update, we randomly select layers to train, freezing the remaining part of the model. In doing so, we can reduce both server load and communication costs per round by excluding all untrained layer weights from being transferred to the server. The goal of this study is to empirically explore the potential trade-off between resource utilization on devices and global model convergence under the proposed strategy. We implement the approach using the FL framework FEDn. A number of experiments were carried out over different datasets (CIFAR-10, CASA, and IMDB), performing different tasks using different DL model architectures. Our results show that training the model partially can accelerate the training process, efficiently utilize on-device resources, and reduce data transmission by around 75% and 53% when we train 25% and 50% of the model layers, respectively, without harming the resulting global model accuracy. Furthermore, our results demonstrate a negative correlation between the number of participating clients in the training process and the number of layers that need to be trained on each client’s side: as the number of clients increases, the required number of layers decreases. This observation highlights the potential of the approach, particularly in cross-device use cases. © The Author(s) 2024.
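
    A minimal PyTorch sketch of the strategy described above, under assumed model and fraction settings: randomly choose which layers to train in a round, freeze the rest, and upload only the trained layers' weights:

        import random
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                              nn.Linear(64, 64), nn.ReLU(),
                              nn.Linear(64, 10))

        def freeze_round(model, train_fraction=0.25, seed=0):
            """Randomly pick layers to train this round; freeze all the others."""
            layers = [m for m in model if sum(1 for _ in m.parameters()) > 0]
            k = max(1, int(train_fraction * len(layers)))
            trainable = set(random.Random(seed).sample(range(len(layers)), k))
            for i, layer in enumerate(layers):
                for p in layer.parameters():
                    p.requires_grad = i in trainable

        freeze_round(model, train_fraction=0.25, seed=42)

        # ... local training on the client's data would run here ...

        # Upload only the layers that were actually trained this round.
        update = {name: p.detach().clone()
                  for name, p in model.named_parameters() if p.requires_grad}
        print("uploaded tensors:", list(update))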

    Download full text (pdf)
    fulltext
  • 35.
    Alawadi, Sadi
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Alkharabsheh, Khalid
    Al-Balqa Applied University, Jordan.
    Alkhabbas, Fahed
    Malmö University, Internet of Things and People Research Center.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Awaysheh, Feras M.
    Institute of Computer Science, Estonia.
    Palomba, Fabio
    University of Salerno, Italy.
    Awad, Mohammed
    Arab American University, Palestine.
    FedCSD: A Federated Learning Based Approach for Code-Smell Detection2024In: IEEE Access, E-ISSN 2169-3536, Vol. 12, p. 44888-44904Article in journal (Refereed)
    Abstract [en]

    Software quality is critical, as low quality, or 'code smell,' increases technical debt and maintenance costs. There is a timely need for a collaborative model that detects and manages code smells by learning from diverse and distributed data sources while respecting privacy and providing a scalable solution for continuously integrating new patterns and practices in code quality management. However, the current literature is still missing such capabilities. This paper addresses the previous challenges by proposing a Federated Learning Code Smell Detection (FedCSD) approach, specifically targeting 'God Class,' to enable organizations to collaboratively train distributed ML models while safeguarding data privacy. We conduct experiments using manually validated datasets to detect and analyze code smell scenarios to validate our approach. Experiment 1, a centralized training experiment, revealed varying accuracies across datasets, with dataset two achieving the lowest accuracy (92.30%) and datasets one and three achieving the highest (98.90% and 99.5%, respectively). Experiment 2, focusing on cross-evaluation, showed a significant drop in accuracy (lowest: 63.80%) when fewer smells were present in the training dataset, reflecting technical debt. Experiment 3 involved splitting the dataset across 10 companies, resulting in a global model accuracy of 98.34%, comparable to the centralized model's highest accuracy. The application of federated ML techniques demonstrates promising performance improvements in code-smell detection, benefiting both software developers and researchers. © 2013 IEEE.

    Download full text (pdf)
    fulltext
  • 36.
    Al-Dhaqm, Arafat
    et al.
    Univ Teknol Malaysia UTM, MYS.
    Ikuesan, Richard Adeyemi
    Community Coll Qatar, QAT.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Abd Razak, Shukor
    Univ Teknol Malaysia UTM, MYS.
    Grispos, George
    Univ Nebraska, USA.
    Choo, Kim-Kwang Raymond
    Univ Texas San Antonio, USA.
    Al-Rimy, Bander Ali Saleh
    Univ Teknol Malaysia UTM, MYS.
    Alsewari, Abdulrahman A.
    Univ Malaysia Pahang, MYS.
    Digital Forensics Subdomains: The State of the Art and Future Directions2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 152476-152502Article in journal (Refereed)
    Abstract [en]

    For reliable digital evidence to be admitted in a court of law, it is important to apply scientifically proven digital forensic investigation techniques to corroborate a suspected security incident. Traditionally, digital forensic techniques have focused mainly on computer desktops and servers. However, recent advances in digital media and platforms have seen an increased need for the application of digital forensic investigation techniques to other subdomains. This includes mobile devices, databases, networks, cloud-based platforms, and the Internet of Things (IoT) at large. To assist forensic investigators in conducting investigations within these subdomains, academic researchers have attempted to develop several investigative processes. However, many of these processes are domain-specific or describe domain-specific investigative tools. Hence, in this paper, we hypothesize that the literature is saturated with ambiguities. To examine this hypothesis, a digital forensic model-orientated Systematic Literature Review (SLR) of the digital forensic subdomains has been undertaken. The purpose of this SLR is to identify the different and heterogeneous practices that have emerged within the specific digital forensics subdomains. A key finding from this review is that there are process redundancies and a high degree of ambiguity among investigative processes in the various subdomains. As a way forward, this study proposes a high-level abstract metamodel, which combines the common investigation processes, activities, techniques, and tasks for digital forensics subdomains. Using the proposed solution, an investigator can effectively organize the knowledge process for digital investigation.

    Download full text (pdf)
    fulltext
  • 37.
    Aleti, Siddharth Reddy
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kurakula, Karthik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluation of Lightweight CNN Architectures for Multi-Species Animal Image Classification2024Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: Deep learning models for animal image classification are often impractical to deploy due to their high computational demands, which limits their use in wildlife monitoring. This calls for lightweight models for real-time animal identification in resource-limited environments such as wildlife cameras and mobile devices. Achieving accurate animal classification in these settings is a crucial concern for advancing classification methods.

    Objectives: The goal of this research is to evaluate lightweight transfer learning models for classifying animal images while balancing computational efficiency and accuracy. The objectives include analyzing the models’ performance and providing model selection criteria, based on performance and efficiency, for resource-constrained environments. This study contributes to the advancement of machine learning in wildlife preservation and environmental monitoring, which is critical for accurate species identification.

    Methods: The proposed methodology involves conducting a thorough literature review to identify lightweight transfer learning models for image classification. The Animal-90 dataset was utilized, comprising images of ninety distinct animal species. The selected pre-trained models, MobileNetV2, EfficientNetB3, ShuffleNet, SqueezeNet, and MnasNet, were employed with custom classification heads and trained on the dataset. A 5-fold cross-validation technique was used to validate the models. A combined-metric approach is applied to rank the models based on the metrics accuracy, inference time, and number of parameters.

    Results: The experimental outcomes revealed EfficientNetB3 to be the most accurate, but at the same time it has the highest number of parameters among the compared models. Friedman’s test rejected the null hypothesis of the models having similar performance. The combined-metric approach ranked ShuffleNet as the top model among the compared models in terms of performance and efficiency.

    Conclusions: The research unveiled the commendable performance of all the models in animal image classification, with ShuffleNet achieving the top rank among all the models in terms of accuracy and efficiency. These lightweight models, especially ShuffleNet, show promise in managing limited resources while ensuring accurate animal classification, confirming their reliability in wildlife conservation.
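
    A small sketch of one plausible form of the combined-metric ranking (min-max normalize each metric, invert the lower-is-better ones, then average); the numbers are placeholders, not the thesis' measured values:

        import pandas as pd

        df = pd.DataFrame({
            "model": ["MobileNetV2", "EfficientNetB3", "ShuffleNet", "SqueezeNet", "MnasNet"],
            "accuracy": [0.88, 0.93, 0.90, 0.84, 0.87],   # higher is better
            "inference_ms": [25, 60, 18, 15, 22],         # lower is better
            "params_m": [3.5, 12.2, 2.3, 1.2, 4.4],       # lower is better
        }).set_index("model")

        norm = (df - df.min()) / (df.max() - df.min())
        norm[["inference_ms", "params_m"]] = 1 - norm[["inference_ms", "params_m"]]
        df["combined"] = norm.mean(axis=1)
        print(df.sort_values("combined", ascending=False)["combined"])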

    Download full text (pdf)
    fulltext
  • 38.
    Alkhabbas, Fahed
    et al.
    Malmö University.
    Alawadi, Sadi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ayyad, Majed
    Birzeit University, Palestine.
    Spalazzese, Romina
    Malmö University.
    Davidsson, Paul
    Malmö University.
    ART4FL: An Agent-based Architectural Approach for Trustworthy Federated Learning in the IoT2023In: 8th International Conference on Fog and Mobile Edge Computing, FMEC 2023 / [ed] Quwaider M., Awaysheh F.M., Jararweh Y., Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 270-275Conference paper (Refereed)
    Abstract [en]

    The integration of the Internet of Things (IoT) and Machine Learning (ML) technologies has opened up for the development of novel types of systems and services. Federated Learning (FL) has enabled the systems to collaboratively train their ML models while preserving the privacy of the data collected by their IoT devices and objects. Several FL frameworks have been developed; however, they do not enable FL in open, distributed, and heterogeneous IoT environments. Specifically, they do not support systems that collect similar data to dynamically discover each other, communicate, and negotiate about the training terms (e.g., accuracy, communication latency, and cost). Towards bridging this gap, we propose ART4FL, an end-to-end framework that enables FL in open IoT settings. The framework enables systems’ users to configure agents that participate in FL on their behalf. Those agents negotiate and make commitments (i.e., contractual agreements) to dynamically form federations. To perform FL, the framework deploys the needed services dynamically, monitors the training rounds, and calculates agents’ trust scores based on the established commitments. ART4FL exploits a blockchain network to maintain the trust scores, and it provides those scores to negotiating agents during the federation formation phase. © 2023 IEEE.

    Download full text (pdf)
    fulltext
  • 39.
    Alkhabbas, Fahed
    et al.
    Malmö University, SWE.
    Alsadi, Mohammed
    Norwegian University of Science and Technology, NOR.
    Alawadi, Sadi
    Uppsala University, SWE.
    Awaysheh, Feras M.
    University of Tartu, EST.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moghaddam, Mahyar T.
    University of Southern Denmark, DEN.
    ASSERT: A Blockchain-Based Architectural Approach for Engineering Secure Self-Adaptive IoT Systems2022In: Sensors, E-ISSN 1424-8220, Vol. 22, no 18, article id 6842Article in journal (Refereed)
    Abstract [en]

    Internet of Things (IoT) systems are complex systems that can manage mission-critical, costly operations or the collection, storage, and processing of sensitive data. Therefore, security represents a primary concern that should be considered when engineering IoT systems. Additionally, several challenges need to be addressed, including the following ones. IoT systems’ environments are dynamic and uncertain. For instance, IoT devices can be mobile or might run out of batteries, so they can become suddenly unavailable. To cope with such environments, IoT systems can be engineered as goal-driven and self-adaptive systems. A goal-driven IoT system is composed of a dynamic set of IoT devices and services that temporarily connect and cooperate to achieve a specific goal. Several approaches have been proposed to engineer goal-driven and self-adaptive IoT systems. However, none of the existing approaches enable goal-driven IoT systems to automatically detect security threats and autonomously adapt to mitigate them. Toward bridging these gaps, this paper proposes a distributed architectural Approach for engineering goal-driven IoT Systems that can autonomously SElf-adapt to secuRity Threats in their environments (ASSERT). ASSERT exploits techniques and adopts notions, such as agents, federated learning, feedback loops, and blockchain, for maintaining the systems’ security and enhancing the trustworthiness of the adaptations they perform. The results of the experiments that we conducted to validate the approach’s feasibility show that it performs and scales well when detecting security threats, performing autonomous security adaptations to mitigate the threats and enabling systems’ constituents to learn about security threats in their environments collaboratively. © 2022 by the authors.

    Download full text (pdf)
    fulltext
  • 40.
    Alkharabsheh, Khalid
    et al.
    Al-Balqa Applied University, Jordan.
    Alawadi, Sadi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Crespo, Yania
    Universidad de Valladolid, Spain.
    Taboada, José A.
    Universidad de Santiago de Compostela, Spain.
    Exploring the role of project status information in effective code smell detection2025In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543, Vol. 28, no 1, article id 29Article in journal (Refereed)
    Abstract [en]

    Repairing code smells detected in the code or design of a system is one of the activities that contribute to increasing software quality. In this study, we investigate the impact of non-numerical information about software, such as project status information combined with machine learning techniques, on improving code smell detection. For this purpose, we constructed a dataset consisting of 22 systems with various project statuses, 12,040 classes, and 18 features, which included 1935 large classes. A set of experiments was conducted with ten different machine learning techniques by dividing the dataset into training, validation, and testing sets to detect the large class code smell. Feature selection and data balancing techniques were applied. The classifiers’ performance was evaluated using six indicators: precision, recall, F-measure, MCC, ROC area, and Kappa tests. The preliminary experimental results reveal that feature selection and data balancing have little influence on the accuracy of the machine learning classifiers. Moreover, the classifiers vary in behavior when applied to sets with different values of the selected project status information. On average, classifiers perform better when fed with status information than without it. Random Forest achieved the best behavior according to all performance indicators (100%) with status information, while AdaBoostM1 and SMO achieved the worst in most of them (> 86%). According to the findings of this study, providing machine learning techniques with project status information about the classes to be analyzed can improve the results of large class detection. © The Author(s) 2024.

    Download full text (pdf)
    fulltext
  • 41.
    Alkharabsheh, Khalid
    et al.
    Al-Balqa Applied University, JOR.
    Alawadi, Sadi
    Uppsala University, SWE.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Crespo, Yania
    Universidad de Valladolid, ESP.
    Fernández-Delgado, Manuel
    Universidad de Santiago de Compostela, ESP.
    Taboada, José A.
    Universidad de Santiago de Compostela, ESP.
    A comparison of machine learning algorithms on design smell detection using balanced and imbalanced dataset: A study of God class2022In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 143, article id 106736Article in journal (Refereed)
    Abstract [en]

    Context: Design smell detection has proven to be a significant activity that aims not only to enhance software quality but also to extend its life cycle. Objective: This work investigates whether machine learning approaches can effectively be leveraged for software design smell detection. Additionally, this paper provides a comparative study, focused on using balanced datasets, which checks whether avoiding dataset balancing has any influence on accuracy and behavior during design smell detection. Method: A set of experiments was conducted using 28 machine learning classifiers aimed at detecting God classes. The experiments used a dataset formed from 12,587 classes of 24 software systems, in which 1,958 classes were manually validated. Results: Ultimately, most classifiers obtained high performance, with CatBoost performing best. It is also evident from the experiments that data balancing does not have any significant influence on the accuracy of detection. This reinforces the application of machine learning in real scenarios, where the data is usually imbalanced by the inherent nature of design smells. Conclusions: Machine learning approaches can effectively be leveraged for God class detection. While in this paper we employed the SMOTE technique for data balancing, it is worth noting that other data balancing methods exist, as do other design smells, and applying those other methods may improve the results; in our experiments, SMOTE did not improve God class detection. The results are not fully generalizable because only one design smell is studied, with projects developed in a single programming language, and only one balancing technique is compared with the imbalanced case. However, these results are promising for application in real design smell detection scenarios, as mentioned above, and other measures, such as Kappa, ROC, and MCC, have been used in the assessment of classifier behavior. © 2021 The Authors
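
    For concreteness, a minimal sketch of the SMOTE balancing step named above, using the imbalanced-learn implementation on a synthetic stand-in for the God class dataset:

        from collections import Counter
        from imblearn.over_sampling import SMOTE
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        # Synthetic imbalanced data standing in for class-level metrics/labels
        X, y = make_classification(n_samples=2000, weights=[0.92, 0.08], random_state=0)
        print("before:", Counter(y))

        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
        print("after :", Counter(y_bal))

        clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)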

    Download full text (pdf)
    fulltext
  • 42.
    Alladi, Sai Sumeeth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Prioritized Database Synchronization using Optimization Algorithms2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Download full text (pdf)
    Prioritized Database Synchronization using Optimization Algorithms
  • 43.
    Alluri, Gayathri Thanuja
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Evaluation of Apache Cassandra using AWS (Amazon Web Services) and GCP (Google Cloud Platform)2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: In the field of computer science and communication systems, cloud computing plays an important role in the information technology industry: it allows users to start small and increase resources when there is demand. AWS (Amazon Web Services) and GCP (Google Cloud Platform) are two different cloud platform providers. Many organizations still rely on structured databases such as MySQL. Structured databases cannot handle requests and data efficiently when the number of requests and the volume of data increase. To overcome this problem, organizations shift to NoSQL unstructured databases such as Apache Cassandra and MongoDB.

    Conclusions: From the literature review, I gained knowledge of cloud computing and its existing problems, which led to setting up this research to evaluate the performance of Cassandra on AWS and GCP. The conclusion from the experiment is that, as the thread count increases, throughput and latency increase gradually up to a thread count of 600 in both clouds. Comparing the throughput values of both clouds, AWS scales up better than GCP, while GCP scales up better than AWS in terms of latency.

    Keywords: Apache Cassandra, AWS, Google Cloud Platform, Cassandra Stress, Throughput, Latency
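
    A hypothetical driver for the thread-count sweep described above, following cassandra-stress command conventions; the node address, operation count and exact options are assumptions, not the thesis' configuration:

        import subprocess

        NODE = "10.0.0.5"  # placeholder address of a Cassandra node

        for threads in (100, 200, 400, 600):
            cmd = ["cassandra-stress", "write", "n=100000",
                   "-rate", f"threads={threads}", "-node", NODE]
            print(">>", " ".join(cmd))
            subprocess.run(cmd, check=True)  # throughput/latency appear in the tool's output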

    Download full text (pdf)
    Performance Evaluation of Apache Cassandra using AWS (Amazon Web Services) and GCP (Google Cloud Platform)
  • 44.
    Al-Mashahedi, Ahmad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ljung, Oliver
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Robust Code Generation using Large Language Models: Guiding and Evaluating Large Language Models for Static Verification2024Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: Generative AI has achieved rapid and widespread acclaim over a short period since the inception of recent models that have opened up opportunities not possible before. Large Language Models (LLMs), a subset of generative AI, have become an essential part of code generation for software development. However, there is always a risk that the generated code does not fulfill the programmer's intent and contains faults or bugs that can go unnoticed. To that end, we propose that verification of generated code should increase its quality and trust.

    Objectives: This thesis aims to research generation of code that is both functionally correct and verifiable by implementing and evaluating four prompting approaches and a reinforcement learning solution to increase robustness within code generation, using unit-test and verification rewards.

    Methods: We used a Rapid Literature Review (RLR) and Design Science methodology to get a solid overview of the current state of robust code generation. From the RLR and related works, we evaluated the following four prompting approaches: Base prompt, Documentation prompting, In-context learning, and Documentation + In-context learning on the two datasets: MBPP and HumanEval. Moreover, we fine-tuned one model using Proximal Policy Optimization (PPO) for the novel task.

    Results: We measured the functional correctness and static verification success rates, amongst other metrics, for the four proposed approaches on eight model configurations, including the PPO fine-tuned LLM. Our results show that for the MBPP dataset, on average, In-context learning had the highest functional correctness at 29.4% pass@1, Documentation prompting had the highest verifiability at 8.48% verifiable@1, and, finally, In-context learning had the highest rate of functionally correct verifiable code at 3.2% pass@1 & verifiable@1. Moreover, the PPO fine-tuned model showed an overall increase in performance across all approaches compared to the pre-trained base model.

    Conclusions: We found that In-context learning on the PPO fine-tuned model yielded the best overall results across most metrics compared to the other approaches. The PPO fine-tuned model with In-context learning achieved 32.0% pass@1, 12.8% verifiable@1, and 5.0% pass@1 & verifiable@1. Documentation prompting was better for verifiable@1 on MBPP; however, it did not perform as well on the other metrics. Documentation prompting + In-context learning sat, performance-wise, between Documentation prompting and In-context learning, while the Base prompt performed the worst overall. For future work, we envision several improvements to PPO training, including but not limited to training on Nagini documentation and utilizing expert iteration to create supervised fine-tuning datasets to improve the model iteratively.
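
    To make the reported metrics concrete: with a single sample per task, pass@1 reduces to the fraction of tasks whose generated program passes its unit tests, verifiable@1 to the fraction accepted by the static verifier, and the joint metric requires both. A sketch with illustrative outcomes:

        passed_tests = [True, False, True, True, False]   # unit-test outcome per task
        verified =     [True, False, False, True, False]  # static-verifier outcome per task

        n = len(passed_tests)
        pass_at_1 = sum(passed_tests) / n
        verifiable_at_1 = sum(verified) / n
        both = sum(p and v for p, v in zip(passed_tests, verified)) / n
        print(f"pass@1={pass_at_1:.1%}  verifiable@1={verifiable_at_1:.1%}  both={both:.1%}")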

    Download full text (pdf)
    fulltext
  • 45.
    Almeling, Marie
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Svedlund Ishii, Tomoko
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Förbättra användning av API: Ett case inom Business Intelligence med implementering av en anpassad portal-lösning2024Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    API, Application Program Interface, enables users to interact with data. However, access to an API often depends on programming proficiency, which can limit its usage. This study explored ways to improve API usage by developing and implementing a custom portal for Power BI’s API, called AdminPortal (AdP).        

    To address the first research question, which investigated whether AdP improved API usage compared to the existing portal, a System Usability Scale (SUS) survey was conducted. The second research question was studied through qualitative methods, including open-ended questions and documentation of use cases, to gain deeper insights into how AdP could be utilized and how it improved the workflow for Power BI administration.

    The results showed a statistically significant difference between AdP and the existing portal, with AdP receiving higher SUS scores. Additionally, comparing workflows with and without AdP highlighted two use cases, where AdP simplified tasks by providing a user-friendly interface, in contrast to the manual processes or coding requirements of the existing portal.  

    AdP demonstrated the feasibility of accessing API benefits without coding. While offering advantages like automation and flexibility, AdP also presented challenges such as information selection and user interface complexity. 
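
    The SUS comparison above relies on the standard scoring rule (odd items contribute score − 1, even items contribute 5 − score, and the sum is scaled by 2.5 to a 0-100 range); a small sketch with illustrative responses:

        def sus_score(responses):
            """responses: ten 1-5 Likert answers in questionnaire order."""
            assert len(responses) == 10
            total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd item)
                        for i, r in enumerate(responses))
            return total * 2.5

        print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0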

    Download full text (pdf)
    Uppsats
  • 46.
    Al-Saedi, Ahmed Abbas Mohsin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Resource-Aware and Personalized Federated Learning via Clustering Analysis2024Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Today’s advancement in Artificial Intelligence (AI) enables training Machine Learning (ML) models on the data produced daily by connected edge devices. To make the most of the data stored on the devices, conventional ML approaches require gathering all individual data sets and transferring them to a central location to train a common model. However, centralizing data incurs significant costs related to communication, network resource utilization, high volumes of traffic, and privacy issues. To address the aforementioned challenges, Federated Learning (FL) is employed as a novel approach to train a shared model on decentralized edge devices while preserving privacy. Despite the significant potential of FL, it still requires considerable resources such as time, computational power, energy, and bandwidth availability. More importantly, the computational capabilities of the training devices may vary over time. Furthermore, the devices involved in the training process may have distinct training datasets that differ in size and distribution. As a result, the convergence of the FL models may become unstable and slow. These differences can influence the FL process and ultimately lead to suboptimal model performance within a heterogeneous federated network.

    In this thesis, we have tackled several of the aforementioned challenges. Initially, an FL algorithm is proposed that utilizes cluster analysis to address the problem of communication overhead. This issue poses a major bottleneck in FL, particularly for complex models, large-scale applications, and frequent updates. The next research conducted in this thesis extended the previous study to wireless sensor networks (WSNs). In WSNs, achieving energy-efficient transmission is a significant challenge due to their limited resources. This motivated us to continue with a comprehensive overview and classification of the latest advancements in context-aware edge-based AI models, with a specific emphasis on sensor networks. The review also investigated the associated challenges and motivations for adopting AI techniques, along with an evaluation of current areas of research that need further investigation. To optimize the aggregation of the FL model and alleviate communication expenses, the initial study addressing communication overhead is extended with an FL-based cluster optimization approach. Furthermore, to reduce the detrimental effect of data heterogeneity among edge devices on FL, a new study of group-personalized FL models has been conducted. Finally, taking inspiration from the previously mentioned FL models, techniques for assessing clients' contribution by monitoring and evaluating their behavior during training are proposed. In comparison with most existing contribution evaluation solutions, the proposed techniques do not require significant computational resources.

    The FL algorithms presented in this thesis are assessed on a range of real-world datasets. The extensive experiments demonstrated that the proposed FL techniques are effective and robust. These techniques improve communication efficiency, resource utilization, model convergence speed, and aggregation efficiency, and also reduce data heterogeneity when compared to other state-of-the-art methods.

    Download full text (pdf)
    fulltext
  • 47.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Group-Personalized Federated Learning for Human Activity Recognition Through Cluster Eccentricity Analysis2023In: Engineering Applications of Neural Networks: 24th International Conference, EAAAI/EANN 2023, León, Spain, June 14–17, 2023, Proceedings / [ed] Iliadis L., Maglogiannis I., Alonso S., Jayne C., Pimenidis E., Springer Science+Business Media B.V., 2023, p. 505-519Conference paper (Refereed)
    Abstract [en]

    Human Activity Recognition (HAR) has played a significant role in recent years due to its applications in various fields, including health care and well-being. Traditional centralized methods reach very high recognition rates, but they incur privacy and scalability issues. Federated learning (FL) is a leading distributed machine learning (ML) paradigm for collaboratively training a global model on distributed data in a privacy-preserving manner. However, for HAR scenarios, existing activity recognition systems mainly focus on a unified model, i.e., they do not provide users with personalized recognition of activities. Furthermore, the heterogeneity of data across user devices can lead to degraded performance of traditional FL models in smart applications such as personalized health care. To this end, we propose a novel federated learning model that copes with a statistically heterogeneous federated learning environment by introducing a group-personalized FL (GP-FL) solution. The proposed GP-FL algorithm builds several global ML models, each one trained iteratively on a dynamic group of clients with homogeneous class probability estimations. The performance of the proposed FL scheme is studied and evaluated on real-world HAR data. The evaluation results demonstrate that our approach has advantages in terms of model performance and convergence speed with respect to two baseline FL algorithms used for comparison. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

  • 48.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Contribution Prediction in Federated Learning via Client Behavior Evaluation2025In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 166, article id 107639Article in journal (Refereed)
    Abstract [en]

    Federated learning (FL), a decentralized machine learning framework that allows edge devices (i.e., clients) to train a global model while preserving data/client privacy, has become increasingly popular recently. In FL, a shared global model is built by aggregating the updated parameters in a distributed manner. To incentivize data owners to participate in FL, it is essential for service providers to fairly evaluate the contribution of each data owner to the shared model during the learning process. To the best of our knowledge, most existing solutions are resource-demanding and usually run as an additional evaluation procedure, which incurs an expensive computational cost for large data owners. In this paper, we present simple and effective FL solutions that show how the clients' behavior can be evaluated during the training process with respect to reliability; this is demonstrated for two existing FL models, Cluster Analysis-based Federated Learning (CA-FL) and Group-Personalized FL (GP-FL), respectively. In the former model, CA-FL, we assess how frequently each client is selected as a cluster representative and thereby involved in the building of the shared model, which can eventually be considered a measure of the respective client's data reliability. In the latter model, GP-FL, we calculate how many times each client changes the cluster it belongs to during FL training, which can be interpreted as a measure of unstable behavior, i.e., the client can be considered less reliable. We validate our FL approaches on three LEAF datasets and benchmark their performance against two baseline contribution evaluation approaches. The experimental results demonstrate that by applying the two FL models we are able to obtain robust evaluations of clients' behavior during the training process. These evaluations can be used for further studying, comparing, understanding, and eventually predicting clients' contributions to the shared global model.

  • 49.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    FedCO: Communication-Efficient Federated Learning via Clustering Optimization †2022In: Future Internet, E-ISSN 1999-5903, Vol. 14, no 12, article id 377Article in journal (Refereed)
    Abstract [en]

    Federated Learning (FL) provides a promising solution for preserving privacy when learning shared models on distributed devices without sharing local data with a central server. However, most existing work shows that FL incurs high communication costs. To address this challenge, we propose a clustering-based federated solution, entitled Federated Learning via Clustering Optimization (FedCO), which optimizes model aggregation and reduces communication costs. To reduce the communication costs, we first divide the participating workers into groups based on the similarity of their model parameters and then select only one representative, the best-performing worker, from each group to communicate with the central server. Then, in each successive round, we apply the Silhouette validation technique to check whether each representative still fits tightly within its current cluster. If not, the representative is either moved into a more appropriate cluster or forms a singleton cluster. Finally, we use split optimization to update and improve the whole clustering solution. The updated clustering is used to select new cluster representatives. In that way, the proposed FedCO approach updates clusters by repeatedly evaluating and splitting them whenever doing so improves the workers' partitioning. The potential of the proposed method is demonstrated on publicly available datasets and LEAF datasets under both IID and non-IID data distribution settings. The experimental results indicate that our proposed FedCO approach is superior to the state-of-the-art FL approaches, i.e., FedAvg, FedProx, and CMFL, in reducing communication costs and achieving better accuracy in both the IID and non-IID cases. © 2022 by the authors.
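
    The Silhouette check on representatives described above can be illustrated with a short Python sketch. This is an editorial reconstruction under stated assumptions, not the FedCO code: the function name, the zero threshold, and the use of scikit-learn's silhouette_samples are hypothetical choices for exposition.

    import numpy as np
    from sklearn.metrics import silhouette_samples

    def check_representatives(param_matrix, labels, rep_indices, threshold=0.0):
        """Flag representatives that no longer fit their cluster (assumed test).

        param_matrix : (n_workers, n_params) array of flattened model parameters
        labels       : cluster label per worker
        rep_indices  : indices of the current cluster representatives
        """
        # Per-sample silhouette scores; low scores indicate a loose fit.
        scores = silhouette_samples(param_matrix, labels)
        misplaced = [i for i in rep_indices if scores[i] < threshold]
        # Flagged workers would then be moved to a closer cluster or split off
        # as singletons before new representatives are selected.
        return misplaced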

  • 50.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Reducing Communication Overhead of Federated Learning through Clustering Analysis2021In: 26th IEEE Symposium on Computers and Communications (ISCC 2021), Institute of Electrical and Electronics Engineers (IEEE), 2021Conference paper (Refereed)
    Abstract [en]

    Training machine learning models in a datacenter, with data originating from edge nodes, incurs high communication overheads and violates users' privacy. These challenges may be tackled by employing the Federated Learning (FL) technique, which trains a model across multiple decentralized edge devices (workers) using local data. In this paper, we explore an approach that identifies the most representative updates made by workers and uploads only those to the central server, reducing network communication costs. Based on this idea, we propose an FL model that mitigates communication overheads via clustering analysis of the workers' local updates. The resulting Cluster Analysis-based Federated Learning (CA-FL) model is studied and evaluated on human activity recognition (HAR) datasets. Our evaluation results show the robustness of CA-FL in comparison with traditional FL in terms of accuracy and communication costs in both the IID and non-IID cases.
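
    The representative-selection step described above can be sketched as follows. This Python snippet is an editorial illustration, not the CA-FL implementation: clustering the flattened updates with k-means and keeping the update closest to each centroid are assumptions made for exposition.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_representative_updates(updates, n_clusters=5):
        """Cluster workers' local updates; keep one representative per cluster.

        updates : (n_workers, n_params) array of flattened local model updates
        Returns indices of the updates that would be uploaded to the server.
        """
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(updates)
        reps = []
        for c in range(n_clusters):
            members = np.where(km.labels_ == c)[0]
            # Closest-to-centroid member as the representative (an assumption).
            dists = np.linalg.norm(updates[members] - km.cluster_centers_[c], axis=1)
            reps.append(members[np.argmin(dists)])
        return reps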
