1 - 50 of 744
  • 1.
    Abdsharifi, Mohammad Hossein
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Dhar, Ripan Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Service Management for P2P Energy Sharing Using Blockchain – Functional Architecture (2022). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Blockchain has become one of the most revolutionary technologies of the 21st century. In recent years, the concern of world energy is no longer only sustainability; energy systems must also be secure and reliable. Since information and energy security are central concerns for present and future services, this thesis focuses on the challenge of trading energy securely over distributed marketplaces. The core technology used in this thesis is the distributed ledger, specifically blockchain. Since this technology has recently gained much attention for functionalities such as transparency, immutability, irreversibility, and security, we propose a solution for implementing a secure peer-to-peer (P2P) energy trading network on a suitable blockchain platform. Furthermore, blockchain enables traceability of the origin of data, known as data provenance.

    In this work, we applied secure blockchain technology to a peer-to-peer energy sharing and trading system in which prosumers and consumers can trade energy through a secure channel or network. Furthermore, service management functionalities such as security, reliability, flexibility, and scalability are achieved through the implementation.

    This thesis focuses on current proposals for P2P energy trading using blockchain and on how to select a suitable blockchain technique to implement such a P2P energy trading network. In addition, we provide an implementation of such a secure network with blockchain and proper management functions. The choices of system model, blockchain technology, and consensus algorithm are based on a literature review and carried into an experimental implementation, where the feasibility of the system model has been validated through the output results.

    Download full text (pdf)
    Service Management for P2P Energy Sharing Using Blockchain – Functional Architecture
  • 2.
    Abghari, Shahrooz
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Data Mining Approaches for Outlier Detection Analysis (2020). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Outlier detection is studied and applied in many domains. Outliers arise due to different reasons, such as fraudulent activities, structural defects, health problems, and mechanical issues. The detection of outliers is a challenging task that can reveal system faults and fraud, and can save people's lives. Outlier detection techniques are often domain-specific. The main challenge in outlier detection relates to modelling the normal behaviour in order to identify abnormalities. The choice of model is important; an unsuitable data model can lead to poor results. This requires a good understanding and interpretation of the data, as well as of the constraints and requirements of the domain problem. Outlier detection is largely an unsupervised problem, since labeled data is often unavailable or expensive to obtain.

    In this thesis, we study and apply a combination of both machine learning and data mining techniques to build data-driven and domain-oriented outlier detection models. We focus on three real-world application domains: maritime surveillance, district heating, and online media and sequence datasets. We show the importance of data preprocessing as well as feature selection in building suitable methods for data modelling. We take advantage of both supervised and unsupervised techniques to create hybrid methods. 

    More specifically, we propose a rule-based anomaly detection system using open data for the maritime surveillance domain. We exploit sequential pattern mining for identifying contextual and collective outliers in online media data. We propose a minimum spanning tree clustering technique for detection of groups of outliers in online media and sequence data. We develop a few higher order mining approaches for identifying manual changes and deviating behaviours in the heating systems at the building level. The proposed approaches are shown to be capable of explaining the underlying properties of the detected outliers. This can facilitate domain experts in narrowing down the scope of analysis and understanding the reasons of such anomalous behaviours. We also investigate the reproducibility of the proposed models in similar application domains.

    Download full text (pdf)
    fulltext
  • 3.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Higher Order Mining Approach for the Analysis of Real-World Datasets (2020). In: Energies, E-ISSN 1996-1073, Vol. 13, no. 21, article id 5781. Article in journal (Refereed).
    Abstract [en]

    In this study, we propose a higher order mining approach that can be used for the analysis of real-world datasets. The approach can be used to monitor and identify the deviating operational behaviour of the studied phenomenon in the absence of prior knowledge about the data. The proposed approach consists of several data analysis techniques, such as sequential pattern mining, clustering analysis, consensus clustering and the minimum spanning tree (MST). Initially, a clustering analysis is performed on the extracted patterns to model the behavioural modes of the studied phenomenon for a given time interval. The generated clustering models, which correspond to every two consecutive time intervals, can further be assessed to determine changes in the monitored behaviour. In cases in which significant differences are observed, further analysis is performed by integrating the generated models into a consensus clustering and applying an MST to identify deviating behaviours. The validity and potential of the proposed approach are demonstrated on a real-world dataset originating from a network of district heating (DH) substations. The obtained results show that our approach is capable of detecting deviating and sub-optimal behaviours of DH substations.
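    The MST step described in the abstract above can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: the distance measure, the data, and the fixed number of cut edges are all invented for the example.

```python
# Toy sketch: isolate deviating behaviour groups by building a minimum
# spanning tree over pairwise pattern distances and cutting its longest edges.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import squareform, pdist

def mst_outlier_groups(patterns, n_cuts=2):
    """Split patterns into groups by removing the n_cuts longest MST edges."""
    dist = squareform(pdist(patterns))           # pairwise distance matrix
    mst = minimum_spanning_tree(dist).toarray()  # MST as dense adjacency
    edges = np.argwhere(mst > 0)                 # edge endpoints (row-major)
    weights = mst[mst > 0]                       # matching edge weights
    for idx in np.argsort(weights)[-n_cuts:]:    # drop the heaviest edges
        i, j = edges[idx]
        mst[i, j] = 0.0
    # Remaining connected components are candidate behaviour groups; small,
    # distant components are flagged as deviating.
    n_groups, labels = connected_components(mst, directed=False)
    return n_groups, labels

# Two tight clusters plus one isolated point:
data = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                 [5.0, 5.0], [5.1, 5.0], [9.0, 0.0]])
n_groups, labels = mst_outlier_groups(data, n_cuts=2)
```

    On this data the two cut edges are the inter-cluster links, leaving three components: the two dense groups and the lone point.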

    Download full text (pdf)
    fulltext
  • 4.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Multi-view Clustering Analyses for District Heating Substations (2020). In: DATA 2020 - Proceedings of the 9th International Conference on Data Science, Technology and Applications / [ed] Hammoudi S., Quix C., Bernardino J., SciTePress, 2020, p. 158-168. Conference paper (Refereed).
    Abstract [en]

    In this study, we propose a multi-view clustering approach for mining and analysing multi-view network datasets. The proposed approach is applied and evaluated on a real-world scenario for monitoring and analysing district heating (DH) network conditions and identifying substations with sub-optimal behaviour. Initially, geographical locations of the substations are used to build an approximate graph representation of the DH network. Two different analyses can further be applied in this context: step-wise and parallel-wise multi-view clustering. The step-wise analysis sequentially considers and analyses substations with respect to a few different views. At each step, a new clustering solution is built on top of the one generated by the previously considered view, which organizes the substations in a hierarchical structure that can be used for multi-view comparisons. The parallel-wise analysis, on the other hand, provides the opportunity to analyse substations with regard to two different views in parallel. Such analysis aims to represent and identify the relationships between substations by organizing them in a bipartite graph and analysing the substations’ distribution with respect to each view. The proposed data analysis and visualization approach arms domain experts with means for analysing DH network performance. In addition, it will facilitate the identification of substations with deviating operational behaviour based on comparative analysis with their closely located neighbours.

    Download full text (pdf)
    Multi-view Clustering Analyses for District Heating Substations
  • 5.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    District Heating Substation Behaviour Modelling for Annotating the Performance (2020). In: Communications in Computer and Information Science / [ed] Cellier, P., Driessens, K., Springer, 2020, Vol. 1168, p. 3-11. Conference paper (Refereed).
    Abstract [en]

    In this ongoing study, we propose a higher order data mining approach for modelling district heating (DH) substations’ behaviour and linking operational behaviour representative profiles with different performance indicators. We initially create substations’ operational behaviour models by extracting weekly patterns and clustering them into groups of similar patterns. The built models are further analyzed and integrated into an overall substation model by applying consensus clustering. The different operational behaviour profiles represented by the exemplars of the consensus clustering model are then linked to performance indicators. The labelled behaviour profiles are deployed over the whole heating season to derive diverse insights about the substation’s performance. The results show that the proposed method can be used for modelling, analyzing and understanding the deviating and sub-optimal behaviours of DH substations. © 2020, Springer Nature Switzerland AG.

  • 6.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Brage, Jens
    NODA Intelligent Systems AB, SWE.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lavesson, Niklas
    Jönköping University, SWE.
    Higher order mining for monitoring district heating substations (2019). In: Proceedings - 2019 IEEE International Conference on Data Science and Advanced Analytics, DSAA 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 382-391. Conference paper (Refereed).
    Abstract [en]

    We propose a higher order mining (HOM) approach for modelling, monitoring and analyzing district heating (DH) substations' operational behaviour and performance. HOM is concerned with mining over patterns rather than primary or raw data. The proposed approach uses a combination of different data analysis techniques such as sequential pattern mining, clustering analysis, consensus clustering and minimum spanning tree (MST). Initially, a substation's operational behaviour is modeled by extracting weekly patterns and performing clustering analysis. The substation's performance is monitored by assessing its modeled behaviour for every two consecutive weeks. In case some significant difference is observed, further analysis is performed by integrating the built models into a consensus clustering and applying an MST for identifying deviating behaviours. The results of the study show that our method is robust for detecting deviating and sub-optimal behaviours of DH substations. In addition, the proposed method can facilitate domain experts in the interpretation and understanding of the substations' behaviour and performance by providing different data analysis and visualization techniques. © 2019 IEEE.

    Download full text (pdf)
    Higher Order Mining for Monitoring District Heating Substations
  • 7.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exner, Peter
    Sony R&D Center Lund Laboratory, SWE.
    An Inductive System Monitoring Approach for GNSS Activation (2022). In: IFIP Advances in Information and Communication Technology / [ed] Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P., Springer Science+Business Media B.V., 2022, Vol. 647, p. 437-449. Conference paper (Refereed).
    Abstract [en]

    In this paper, we propose a Global Navigation Satellite System (GNSS) component activation model for mobile tracking devices that automatically detects indoor/outdoor environments using the radio signals received from Long-Term Evolution (LTE) base stations. We use an Inductive System Monitoring (ISM) technique to model environmental scenarios captured by a smart tracker by extracting clusters of corresponding value ranges from LTE base stations’ signal strength. The ISM-based model is built by using the tracker’s historical data labeled with GPS coordinates. The built model is further refined by applying it to additional data without GPS location collected by the same device. This procedure allows us to identify the clusters that describe semi-outdoor scenarios. In that way, the model discriminates between two outdoor environmental categories: open outdoor and semi-outdoor. The proposed ISM-based GNSS activation approach is studied and evaluated on a real-world dataset containing radio signal measurements collected by five smart trackers, together with their geographical locations in various environmental scenarios.
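    The clustering idea behind the abstract above, grouping signal-strength vectors into environment modes and activating GNSS only for outdoor-like modes, can be illustrated with a toy sketch. Everything here is an invented assumption: the signal values are synthetic, the number of base stations is arbitrary, and plain k-means stands in for the ISM technique the paper actually uses.

```python
# Toy sketch: cluster LTE signal-strength vectors into environment modes,
# then label new measurements by their nearest cluster centre.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic RSSI (dBm) from 3 base stations: outdoor readings are stronger,
# indoor readings are attenuated by walls.
outdoor = rng.normal([-70, -75, -80], 3, size=(100, 3))
indoor = rng.normal([-95, -100, -105], 3, size=(100, 3))
X = np.vstack([outdoor, indoor])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.predict(X)
# Each cluster centre summarises a signal-strength "value range" for one
# environment; GNSS would be activated only for the outdoor-like cluster.
outdoor_cluster = km.predict(np.array([[-70.0, -75.0, -80.0]]))[0]
```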

  • 8.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radio Electronics, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Reinforcement Learning for Anti-Ransomware Testing (2020). In: 2020 IEEE East-West Design and Test Symposium, EWDTS 2020 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2020, article id 9225141. Conference paper (Refereed).
    Abstract [en]

    In this paper, we verify the possibility of creating a ransomware simulation that uses an arbitrary combination of known tactics and techniques to bypass an anti-malware defense. To verify this hypothesis, we conducted an experiment in which an agent was trained with reinforcement learning to run the ransomware simulator in a way that bypasses an anti-ransomware solution and encrypts the target files. The novelty of the proposed method lies in applying reinforcement learning to anti-ransomware testing, which may help to identify weaknesses in the anti-ransomware defense and fix them before a real attack happens. © 2020 IEEE.

    Download full text (pdf)
    fulltext
  • 9.
    Adamov, Alexander
    et al.
    NioGuard Security Lab, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Surmacz, Tomasz
    Wrocław University of Science and Technology, POL.
    An analysis of LockerGoga ransomware (2019). In: 2019 IEEE East-West Design and Test Symposium, EWDTS 2019, Institute of Electrical and Electronics Engineers Inc., 2019. Conference paper (Refereed).
    Abstract [en]

    This paper contains an analysis of the LockerGoga ransomware that was used in a range of targeted cyberattacks in the first half of 2019 against Norsk Hydro, a world top-5 aluminum manufacturer, as well as the US chemical enterprises Hexion and Momentive; those companies are only the tip of the iceberg among the victims that reported the attack to the public. The ransomware was executed by attackers from inside a corporate network to encrypt the data on enterprise servers and thus take down the information control systems. The intruders asked for a ransom to release a master key and a decryption tool that could be used to decrypt the affected files. The purpose of the analysis is to find out the tactics and techniques used by the LockerGoga ransomware during the cryptolocker attack, as well as its encryption model, in order to answer the question of whether the encrypted files can be decrypted with or without paying the ransom. The scientific novelty of the paper lies in an analysis methodology based on various reverse engineering techniques, such as multi-process debugging and using the open source code of a cryptographic library to find out the ransomware's encryption model. © 2019 IEEE.

  • 10.
    Adeopatoye, Remilekun
    et al.
    Federal University of Technology, Nigeria.
    Ikuesan, Richard Adeyemi
    Zayed University, United Arab Emirates.
    Sookhak, Mehdi
    Texas A&M University, United States.
    Hungwe, Taurai
    Sefako Makgatho University of Health Sciences, South Africa.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards an Open-Source Based E-Mail Forensic Tool that uses Headers in Digital Investigation (2023). In: ACM International Conference Proceeding Series, ACM Digital Library, 2023. Conference paper (Refereed).
    Abstract [en]

    E-mail-related incidents and crimes are on the rise, owing to the fact that communication by electronic mail (e-mail) has become an important part of our daily lives. The technicality behind e-mail plays an important role when looking for digital evidence that can be used to build a hypothesis for litigation. During this process, it is necessary to have a tool that can help isolate an e-mail incident as a potential crime scene in the wake of a suspected attack. The problem this paper addresses is centered on realizing an open-source e-mail forensic tool that uses the header-analysis approach. One advantage of this approach is that it helps investigators to collect digital evidence from e-mail systems, organize the collected data, analyze and discover any discrepancies in the header fields of an e-mail, and generate an evidence report. The main contribution of this paper is a freshly computed hash that is attached to every generated report to ensure the verifiability, reliability, and integrity of the reports and to prove that they have not been modified in any way. Finally, this ensures that the sanctity and forensic soundness of the collected evidence are maintained. © 2023 ACM.
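    The report-integrity idea described above, a freshly computed hash attached to each generated report, can be sketched as follows. The report fields and function names are hypothetical illustrations, not the tool's actual code.

```python
# Hypothetical sketch: attach a SHA-256 digest to an e-mail analysis report
# so that any later modification of the report can be detected.
import hashlib
import json

def seal_report(report: dict) -> dict:
    """Serialize the report deterministically and attach its SHA-256 digest."""
    payload = json.dumps(report, sort_keys=True).encode("utf-8")
    return {"report": report, "sha256": hashlib.sha256(payload).hexdigest()}

def verify_report(sealed: dict) -> bool:
    """Recompute the digest and compare; False means the report was altered."""
    payload = json.dumps(sealed["report"], sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == sealed["sha256"]

sealed = seal_report({"from": "alice@example.com", "spf": "fail"})
assert verify_report(sealed)       # untouched report verifies
sealed["report"]["spf"] = "pass"   # tampering breaks verification
assert not verify_report(sealed)
```

    Deterministic serialization (sorted keys) matters here: without it, an unchanged report could re-serialize differently and fail verification.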

  • 11.
    Adurti, Devi Abhiseshu
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Battu, Mohit
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Optimization of Heterogeneous Parallel Computing Systems using Machine Learning (2021). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background: Heterogeneous parallel computing systems combine different resources, CPUs and GPUs, to achieve high performance as well as reduced latency and energy consumption. Programming applications that target various processing units requires employing different tools and programming models/languages. Furthermore, selecting the optimal implementation, which may either target different processing units (i.e., CPU or GPU) or implement different algorithms, is not trivial for a given context. In this thesis, we investigate the use of machine learning to address the problem of selecting among implementation variants for an application running on a heterogeneous system.

    Objectives: This study is focused on providing an approach for optimization of heterogeneous parallel computing systems at runtime by building the most efficient machine learning model to predict the optimal implementation variant of an application.

    Methods: Six machine learning models (KNN, XGBoost, DTC, Random Forest Classifier, LightGBM, and SVM) are trained and tested using stratified k-fold cross-validation on a dataset generated from a matrix multiplication application, for square matrix input dimensions ranging from 16x16 to 10992x10992.

    Results: The findings for each machine learning algorithm are presented through accuracy, a confusion matrix, and a classification report covering precision, recall, and F1 score; a comparison between the machine learning models in terms of accuracy, training time, and prediction time is provided to determine the best model.

    Conclusions: The XGBoost, DTC, and SVM algorithms achieved 100% accuracy. In comparison to the other machine learning models, the DTC is found to be the most suitable due to the low time it requires for training and prediction when predicting the optimal implementation variant of a heterogeneous system application. Hence the DTC is the most suitable algorithm for the optimization of heterogeneous parallel computing.
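    The evaluation protocol in the Methods section, training several classifiers and comparing them under stratified k-fold cross-validation, can be sketched as below. The dataset is synthetic and only three of the six models are shown; scikit-learn is assumed (XGBoost and LightGBM live in separate packages), so this is an illustration of the protocol, not the thesis experiment.

```python
# Sketch: compare classifiers with stratified 5-fold cross-validation on
# synthetic data standing in for matrix-multiplication measurements.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))  # invented features (e.g. matrix size, tiling)
# Invented label: which implementation variant is fastest for this input.
y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

models = {
    "KNN": KNeighborsClassifier(),
    "DTC": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
# Stratified folds preserve the class balance in every train/test split.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = {name: cross_val_score(m, X, y, cv=cv).mean()
          for name, m in models.items()}
```

    The mean fold accuracy per model is then the basis for the comparison; training and prediction times would be measured separately (e.g. with `time.perf_counter`).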

    Download full text (pdf)
    fulltext
  • 12.
    Ahlgren, Filip
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Local and Network Ransomware Detection Comparison (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Background. Ransomware is a malicious application encrypting important files on a victim's computer. The ransomware will ask the victim for a ransom to be paid through cryptocurrency. After the system is encrypted there is virtually no way to decrypt the files other than using the encryption key that is bought from the attacker.

    Objectives. In this practical experiment, we will examine how machine learning can be used to detect ransomware on a local and network level. The results will be compared to see which one has a better performance.

    Methods. Data is collected through malware and goodware databases and then analyzed in a virtual environment to extract system information and network logs. Different machine learning classifiers will be built from the extracted features in order to detect the ransomware. The classifiers will go through a performance evaluation and be compared with each other to find which one has the best performance.

    Results. According to the tests, local detection was both more accurate and stable than network detection. The local classifiers had an average accuracy of 96% while the best network classifier had an average accuracy of 89.6%.

    Conclusions. In this case the results show that local detection has better performance than network detection. However, this can be because the network features were not specific enough for a network classifier. The network performance could have been better if the ransomware samples consisted of fewer families so better features could have been selected.

    Download full text (pdf)
    BTH2019Ahlgren
  • 13.
    Ahlstrand, Jim
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Telenor Sverige AB, Sweden..
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Preliminary Results on the Use of Artificial Intelligence for Managing Customer Life Cycles (2023). In: 35th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2023 / [ed] Håkan Grahn, Anton Borg and Martin Boldt, Linköping University Electronic Press, 2023, p. 68-76. Conference paper (Refereed).
    Abstract [en]

    During the last decade we have witnessed how artificial intelligence (AI) has changed businesses all over the world. The customer life cycle framework is widely used in businesses, and AI plays a role in each stage. However, implementing and generating value from AI in the customer life cycle is not always simple. When evaluating AI against business impact and value, it is critical to consider both the model performance and the policy outcome. Proper analysis of AI-derived policies must not be overlooked in order to ensure ethical and trustworthy AI. This paper presents a comprehensive analysis of the literature on AI in customer life cycles from an industry perspective. The study included 31 of 224 analyzed peer-reviewed articles from a Scopus search result. The results show a significant research gap regarding outcome evaluations of AI implementations in practice. This paper proposes that policy evaluation is an important tool in the AI pipeline and emphasizes the significance of validating both policy outputs and outcomes to ensure reliable and trustworthy AI.

    Download full text (pdf)
    fulltext
  • 14.
    Ahlström, Frida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Karlsson, Janni
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Utvecklarens förutsättningar för säkerställande av tillgänglig webb [The developer's prerequisites for ensuring an accessible web] (2022). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Since 2019, all public websites in Sweden are legally bound to meet a certain degree of digital accessibility. An additional EU directive is being transposed into national law at the time of publication of this thesis, which will impose corresponding requirements on part of the private sector, such as banking services and e-commerce. This will likely cause increased demand which suppliers of web development and, in turn, their developers must be able to meet. 

    The aims of this study are to create an increased awareness of digital accessibility as well as to clarify, from the developer’s perspective, how this degree of accessibility is achieved and what could make application of digital accessibility more efficient. 

    In order to achieve this, eight qualitative interviews were conducted, transcribed and thematized in the results section. An inductive thematic analysis has been carried out related to the research questions. It compares the results of previous studies with the outcomes from this study, and shows clear similarities but also differences and new discoveries. 

    The study shows that developers have access to evaluation tools and guidelines that provide good support in their work, but that the responsibility often lies with individual developers rather than with the business as a whole. This is one of the main challenges, together with the fact that inaccessible development is still being carried out in parallel, and that time pressure leads to deprioritization of accessibility. However, the respondents agree that it does not take any more time to develop accessible websites than inaccessible ones, provided that this is taken into account from the outset. Success factors for digital accessibility are to sell the idea to the customer, to work in a structured way with knowledge sharing, and to document solutions in order to save time. In addition, it appears that the implementation of accessibility would benefit from the ownership being raised to a higher decision level, from the competence being broadened in the supplier's organization, and from developers gaining access to specialist competence and user tests to support their work. A basic knowledge of accessibility could be included in web development training to a greater extent, and an extension of the legal requirements could also create additional incentives for the customer.

    Download full text (pdf)
    fulltext
  • 15.
    Ahmad, Al Ghaith
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Abd ULRAHMAN, Ibrahim
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model (2023). Independent thesis, Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis.
    Abstract [en]

    Background: As the demand for cybersecurity professionals continues to rise, it is crucial to identify the key skills necessary to thrive in this field. This research project sheds light on the cybersecurity skills landscape by analyzing the recommendations provided by the European Cybersecurity Skills Framework (ECSF), examining the most required skills in the Swedish job market, and investigating the common skills identified through the findings. The project utilizes the large language model, ChatGPT, to classify common cybersecurity skills and evaluate its accuracy compared to human classification.

    Objective: The primary objective of this research is to examine the alignment between the European Cybersecurity Skills Framework (ECSF) and the specific skill demands of the Swedish cybersecurity job market. This study aims to identify common skills and evaluate the effectiveness of a Language Model (ChatGPT) in categorizing jobs based on ECSF profiles. Additionally, it seeks to provide valuable insights for educational institutions and policymakers aiming to enhance workforce development in the cybersecurity sector.

    Methods: The research begins with a review of the European Cybersecurity Skills Framework (ECSF) to understand its recommendations and methodology for defining cybersecurity skills, and to delineate the cybersecurity profiles along with their corresponding key cybersecurity skills as outlined by the ECSF. Subsequently, a Python-based web crawler was implemented to gather data on cybersecurity job announcements from the Swedish Employment Agency's website. These data are analyzed to identify the cybersecurity skills most frequently required by employers in Sweden. The language model (ChatGPT) is used to classify these positions according to ECSF profiles. Concurrently, two human agents manually categorize the jobs to serve as a benchmark for evaluating the accuracy of the language model, allowing for a comprehensive assessment of its performance.

    Results: The study thoroughly reviews and cites the skills recommended by the ECSF, offering a comprehensive European perspective on key cybersecurity skills (Tables 4 and 5). Additionally, it identifies the most in-demand skills in the Swedish job market, as illustrated in Figure 6. The research reveals the alignment between ECSF-prescribed skills in different profiles and those sought after in the Swedish cybersecurity market. The skills of the profiles 'Cybersecurity Implementer' and 'Cybersecurity Architect' emerge as particularly critical, representing over 58% of the market demand. This research further highlights shared skills across various profiles (Table 7).

    Conclusion: This study highlights the alignment between the European Cybersecurity Skills Framework (ECSF) recommendations and the evolving demands of the Swedish cybersecurity job market. Through a review of ECSF-prescribed skills and a thorough examination of the Swedish job landscape, this research identifies crucial areas of overlap. Significantly, the skills associated with the 'Cybersecurity Implementer' and 'Cybersecurity Architect' profiles emerge as central, collectively constituting over 58% of market demand. This emphasizes the urgent need for educational programs to adapt and harmonize with industry requisites. Moreover, the study advances our understanding of the Language Model's effectiveness in job categorization. The findings hold significant implications for workforce development strategies and educational policies within the cybersecurity domain, underscoring the pivotal role of informed skills development in meeting the evolving needs of the cybersecurity workforce.

    Download full text (pdf)
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model
  • 16.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Automated Context-aware Vulnerability Risk Management2023Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The information security landscape continually evolves with an increasing number of publicly known vulnerabilities (e.g., 25064 new vulnerabilities in 2022). Vulnerabilities play a prominent role in all types of security-related attacks, including ransomware and data breaches. Vulnerability Risk Management (VRM) is an essential cyber defense mechanism to eliminate or reduce attack surfaces in information technology. VRM is a continuous procedure of identification, classification, evaluation, and remediation of vulnerabilities. The traditional VRM procedure is time-consuming, as classification, evaluation, and remediation require skills and knowledge of specific computer systems, software, networks, and security policies. Activities requiring human input slow down the VRM process, increasing the risk of a vulnerability being exploited.

    The thesis introduces the Automated Context-aware Vulnerability Risk Management (ACVRM) methodology to improve VRM procedures by automating the entire VRM cycle and reducing the procedure time and experts' intervention. ACVRM focuses on the challenging stages (i.e., classification, evaluation, and remediation) of VRM to support security experts in promptly prioritizing and patching the vulnerabilities. 

    The ACVRM concept is designed and implemented in a test environment as a proof of concept. The efficiency of patch prioritization by ACVRM is compared against a commercial vulnerability management tool (i.e., Rudder). ACVRM prioritizes vulnerabilities based on the patch score (i.e., the numeric representation of the vulnerability characteristics and the risk), historical data, and dependencies. The experiments indicate that ACVRM can rank the vulnerabilities in the organization's context by weighting the criteria used in the patch score calculation. Automated patch deployment is implemented in three use cases to investigate the impact of learning from historical events and dependencies on the patch success rate and on human intervention. Our findings show that ACVRM reduced the need for human actions, increased the ratio of successfully patched vulnerabilities, and decreased the cycle time of the VRM process.

    Download full text (pdf)
    fulltext
  • 17.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Secure Collaborative AI Service Chains2019Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    At present, Artificial Intelligence (AI) systems have been adopted in many different domains, such as healthcare, robotics, automotive, telecommunication systems, security, and finance, to integrate intelligence into their services and applications. Intelligent personal assistants such as Siri and Alexa are examples of AI systems making an impact on our daily lives. Since many AI systems are data-driven, they require large volumes of data for training and validation, advanced algorithms, and computing power and storage in their development process. Collaboration in the AI development process (the AI engineering process) will reduce the cost and time to market of AI applications. However, collaboration introduces concerns about privacy and piracy of intellectual property, which can be caused by the actors who collaborate in the engineering process. This work investigates the non-functional requirements, such as privacy and security, for enabling collaboration in AI service chains. It proposes an architectural design approach for collaborative AI engineering and explores the concept of the pipeline (service chain) for chaining AI functions. In order to enable controlled collaboration between AI artefacts in a pipeline, this work makes use of virtualisation technology to define and implement Virtual Premises (VPs), which act as protection wrappers for AI pipelines. A VP is a virtual policy enforcement point for a pipeline and requires access permission and authenticity for each element in a pipeline before the pipeline can be used. Furthermore, the proposed architecture is evaluated with a use-case approach that enables quick detection of design flaws during the initial stage of implementation. To evaluate the security level and compliance with the security requirements, threat modeling was used to identify potential threats and vulnerabilities of the system and analyse their possible effects. The output of the threat modeling was used to define countermeasures to threats related to unauthorised access and execution of AI artefacts.

    Download full text (pdf)
    fulltext
  • 18.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Automated Context-Aware Vulnerability Risk Management for Patch Prioritization2022In: Electronics, E-ISSN 2079-9292, Vol. 11, no 21, article id 3580Article in journal (Refereed)
    Abstract [en]

    The information-security landscape continuously evolves, with new vulnerabilities discovered daily and increasingly sophisticated exploit tools. Vulnerability risk management (VRM) is the most crucial cyber defense for eliminating attack surfaces in IT environments. VRM is a cyclical practice of identifying, classifying, evaluating, and remediating vulnerabilities. The evaluation stage of VRM is neither automated nor cost-effective, as it demands great manual administrative effort to prioritize patches. Therefore, there is an urgent need to improve the VRM procedure by automating the entire VRM cycle in the context of a given organization. The authors propose automated context-aware VRM (ACVRM) to address the above challenges. This study defines the criteria to consider in the evaluation stage of ACVRM to prioritize patching. Moreover, patch prioritization is customized to an organization's context by allowing the organization to select the vulnerability management mode and weigh the selected criteria. Specifically, this study considers four vulnerability evaluation cases: (i) evaluation criteria are weighted homogeneously; (ii) attack complexity and availability are not considered important criteria; (iii) the security score is the only important criterion considered; and (iv) criteria are weighted based on the organization’s risk appetite. The results verify the proposed solution’s efficiency compared with the Rudder vulnerability management tool (CVE-plugin). While Rudder produces a ranking independent of the scenario, ACVRM can sort vulnerabilities according to the organization’s criteria and context. Moreover, while Rudder randomly sorts vulnerabilities with the same patch score, ACVRM sorts them according to their age, giving a higher security score to older publicly known vulnerabilities. © 2022 by the authors.
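    The weighted scoring and age-based tie-breaking described above can be illustrated with a short Python sketch; the `Vuln` record, criteria names, and weights are illustrative, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    criteria: dict  # evaluation criteria, each normalized to [0, 1]
    age_days: int   # time since public disclosure

def patch_score(v, weights):
    """Weighted sum of the selected evaluation criteria."""
    return sum(weights[c] * v.criteria[c] for c in weights)

def prioritize(vulns, weights):
    """Rank by patch score; ties are broken by age, older first."""
    return sorted(vulns,
                  key=lambda v: (patch_score(v, weights), v.age_days),
                  reverse=True)

# Case (iii): the security score is the only criterion considered.
queue = prioritize(
    [Vuln("CVE-A", {"severity": 0.9}, 30),
     Vuln("CVE-B", {"severity": 0.9}, 400),
     Vuln("CVE-C", {"severity": 0.5}, 10)],
    weights={"severity": 1.0},
)
```

    With equal patch scores, the older CVE-B outranks CVE-A, mirroring the deterministic tie-breaking that distinguishes ACVRM from Rudder's random ordering.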

    Download full text (pdf)
    fulltext
  • 19.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Sapienza University of Rome, Italy.
    Automated Patch Management: An Empirical Evaluation Study2023In: Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience, CSR 2023, IEEE, 2023, p. 321-328Conference paper (Refereed)
    Abstract [en]

    Vulnerability patch management is one of IT organizations' most complex issues due to the increasing number of publicly known vulnerabilities and explicit patch deadlines for compliance. Patch management requires human involvement in testing, deploying, and verifying the patch and its potential side effects. Hence, there is a need to automate the patch management procedure to meet patch deadlines with a limited number of available experts. This study proposed and implemented an automated patch management procedure to address the mentioned challenges. The method also includes logic to automatically handle errors that might occur during patch deployment and verification. Moreover, the authors added an automated review step before patch management to adjust the patch prioritization list if multiple cumulative patches or dependencies are detected. The results indicated that our method reduced the need for human intervention, increased the ratio of successfully patched vulnerabilities, and decreased the execution time of vulnerability risk management.
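    The error-handling logic can be sketched as a retry loop; this is a simplified illustration, with `apply` and `verify` standing in for the real deployment and verification steps:

```python
def deploy_with_retry(patch, apply, verify, max_attempts=3):
    """Apply and verify a patch, retrying on failure; unresolved patches
    are flagged for a human instead of blocking the cycle."""
    for attempt in range(1, max_attempts + 1):
        try:
            apply(patch)
            if verify(patch):
                return ("patched", attempt)
        except RuntimeError:
            continue  # e.g. a transient deployment error; retry
    return ("needs-human", max_attempts)

# A deployment that fails once with a transient error, then succeeds.
attempts = {"n": 0}
def flaky_apply(patch):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("transient deployment error")

status = deploy_with_retry("patch-001", flaky_apply, lambda p: True)
```

    Escalating only the patches that exhaust their retries is what keeps human intervention to a minimum while preserving the verification step.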

    Download full text (pdf)
    fulltext
  • 20.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Normalization Framework for Vulnerability Risk Management in Cloud2021In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021, IEEE, 2021, p. 99-106Conference paper (Refereed)
    Abstract [en]

    Vulnerability Risk Management (VRM) is a critical element of cloud security that directly impacts cloud providers’ security assurance levels. Today, VRM is a challenging process because of the dramatic increase in known vulnerabilities (+26% in the last five years) and because it is ever more dependent on the organization’s context. Moreover, a vulnerability’s severity score depends on the Vulnerability Database (VD) selected as a reference in VRM. All these factors introduce a new challenge for security specialists in evaluating and patching vulnerabilities. This study provides a framework to improve the classification and evaluation phases of vulnerability risk management while using multiple vulnerability databases as a reference. Our solution normalizes the severity score of each vulnerability based on the selected security assurance level. The results of our study highlight the role of the vulnerability databases in patch prioritization, showing the advantage of using multiple VDs.

    Download full text (pdf)
    fulltext
  • 21.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. City Network International AB, Sweden.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Normalization of Severity Rating for Automated Context-aware Vulnerability Risk Management2020In: Proceedings - 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion, ACSOS-C 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 200-205, article id 9196350Conference paper (Refereed)
    Abstract [en]

    In the last three years, the unprecedented increase in discovered vulnerabilities ranked with critical and high severity has raised new challenges in Vulnerability Risk Management (VRM). Indeed, identifying, analyzing and remediating this high rate of vulnerabilities is labour intensive, especially for enterprises dealing with complex computing infrastructures such as Infrastructure-as-a-Service providers. Hence there is a demand for new criteria to prioritize vulnerability remediation and for new automated/autonomic approaches to VRM.

    In this paper, we address the above challenge by proposing an Automated Context-aware Vulnerability Risk Management (ACVRM) methodology that aims to reduce the labour-intensive tasks of security experts and to prioritize vulnerability remediation on the basis of the organization's context rather than risk severity only. The proposed solution considers multiple vulnerability databases to achieve broad coverage of known vulnerabilities and to determine the vulnerability rank. After describing the new VRM methodology, we focus on the problem of obtaining a single vulnerability score through normalization and fusion of the ranks obtained from multiple vulnerability databases. Our solution is a parametric normalization that accounts for organization needs/specifications.
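    The normalization-and-fusion idea can be illustrated in Python; the databases, score ranges, and weights below are hypothetical examples of the organization-specific parameters, not the paper's actual scheme:

```python
def normalize(score, lo, hi):
    """Min-max normalize a database-specific severity score to [0, 1]."""
    return (score - lo) / (hi - lo)

def fused_severity(norm_scores, db_weights):
    """Parametric fusion: weighted average of normalized scores, with the
    per-database weights chosen by the organization."""
    total = sum(db_weights.values())
    return sum(db_weights[db] * s for db, s in norm_scores.items()) / total

# Hypothetical databases: db_a scores on a 0-10 scale, db_b ranks on 1-5.
scores = {"db_a": normalize(8.1, 0.0, 10.0),
          "db_b": normalize(4.0, 1.0, 5.0)}
fused = fused_severity(scores, {"db_a": 2.0, "db_b": 1.0})
```

    Because both the ranges and the weights are parameters, the same vulnerability can receive different fused scores in different organizations, which is the point of the parametric design.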

    Download full text (pdf)
    fulltext
  • 22.
    Ahmed Sheik, Kareem
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A Comparative Study on Optimization Algorithms and its efficiency2022Independent thesis Advanced level (degree of Master (Two Years)), 20 HE creditsStudent thesis
    Abstract [en]

    Background: In computer science, optimization can be defined as finding the most cost-effective or best achievable performance under given circumstances, maximizing desired factors and minimizing undesirable ones. Many problems in the real world are continuous, and it is not easy to find global solutions. However, advances in computing technology increase the speed of computations [1]. The optimization method, an efficient numerical simulator, and a realistic depiction of the physical operations we intend to describe and optimize are all interconnected components of the optimization process for any optimization problem [2].

    Objectives: A literature review of existing optimization algorithms is performed. Ten different benchmark functions are considered and implemented on the chosen existing algorithms, such as GA (Genetic Algorithm), the ACO (Ant Colony Optimization) method, and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure the efficiency of these approaches based on metrics such as CPU Time, Optimality, Accuracy, and Mean Best Standard Deviation.

    Methods: In this research work, a mixed-method approach is used. A literature review is performed on the existing optimization algorithms. In addition, an experiment is conducted using ten different benchmark functions with the chosen optimization algorithms, namely the PSO algorithm, the ACO algorithm, GA, and PIBO, to measure their efficiency based on four factors: CPU Time, Optimality, Accuracy, and Mean Best Standard Deviation. This tells us which optimization algorithms perform better.
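    As an illustration of such an experiment, the sketch below runs a minimal PSO (one of the compared algorithms) on the sphere benchmark function and records two of the metrics, CPU time and accuracy; it is a textbook PSO with assumed parameters, not the thesis code:

```python
import random
import time

def sphere(x):
    """Benchmark function: global minimum f(0, ..., 0) = 0."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, swarm=20, iters=200, seed=1):
    """Minimal particle swarm optimizer with standard coefficients."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

start = time.process_time()
best = pso(sphere)
cpu_time = time.process_time() - start  # metric 1: CPU time
accuracy = sphere(best)                 # metric 2: gap to the known optimum 0
```

    Repeating such runs over the ten benchmark functions and all four algorithms, and recording the mean and standard deviation of the best values found, yields the comparison data described in the Results section.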

    Results: The experiment findings are represented within this section. Using the standard functions on the suggested method and other methods, the various metrics like CPU Time, Optimality, Accuracy, and Mean Best Standard Deviation are considered, and the results are tabulated. Graphs are made using the data obtained.

    Analysis and Discussion: The research questions are addressed based on the experiment's results that have been conducted.

    Conclusion: We conclude the research by analyzing the existing optimization methods and the algorithms' performance. PIBO performs much better on the optimality, mean best, standard deviation, and accuracy metrics, but has a significant drawback in CPU time: its runtime is much higher than that of the PSO algorithm and close to that of GA, while it still performs much better than the ACO algorithm.

    Download full text (pdf)
    A Comparative Study on Optimization Algorithms and its efficiency
  • 23.
    Ahmed, Syed Saif
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arepalli, Harshini Devi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Auto-scaling Prediction using MachineLearning Algorithms: Analysing Performance and Feature Correlation2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Despite Covid-19’s drawbacks, it has recently contributed to highlighting the significance of cloud computing. The great majority of enterprises and organisations have shifted to a hybrid mode that enables users or workers to access their work environment from any location. This made it possible for businesses to save on-premises costs by moving their operations to the cloud. It has become essential to allocate resources effectively, especially through predictive auto-scaling. Although many algorithms have been studied regarding predictive auto-scaling, further analysis and validation need to be done. The objectives of this thesis are to implement machine-learning algorithms for predicting auto-scaling and to compare their performance on common grounds. The secondary objective is to find connections amongst features within the dataset and evaluate their correlation coefficients. The methodology adopted for this thesis is experimentation. Experimentation was selected so that the auto-scaling algorithms can be tested in practical situations and their results compared to identify the best algorithm using the selected metrics. This experiment can assist in determining whether the algorithms operate as predicted. Metrics such as Accuracy, F1-Score, Precision, Recall, Training Time and Root Mean Square Error (RMSE) are calculated for the chosen algorithms: Random Forest (RF), Logistic Regression, Support Vector Machine and Naive Bayes Classifier. The correlation coefficients of the features in the data are also measured, which helped in increasing the accuracy of the machine learning model. In conclusion, the features related to our target variable (CPU usage, p95_scaling) often had high correlation coefficients compared to other features. The relationships between these variables could potentially be influenced by other variables that are unrelated to the target variable. Also, from the experimentation, it can be seen that the optimal algorithm for determining how cloud resources should be scaled is the Random Forest Classifier.
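    The correlation analysis can be sketched with a plain Pearson coefficient; the two feature columns below are hypothetical values standing in for the dataset's CPU usage and p95_scaling features:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two feature columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical columns standing in for CPU usage and the scaling signal.
cpu_usage = [10.0, 20.0, 30.0, 40.0, 50.0]
p95_scaling = [12.0, 22.0, 29.0, 41.0, 52.0]
r = pearson(cpu_usage, p95_scaling)  # close to 1: strong linear relationship
```

    Features with coefficients near +1 or -1 carry the most signal about the target, which is why keeping them (and dropping near-zero ones) can raise model accuracy.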

    Download full text (pdf)
    Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation
  • 24.
    Ajjapu, Siva Babu
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lokireddy, Sasank Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Indoor VLC behaviour  with RGB spectral power distribution using simulation.2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In recent years, visible light communication (VLC) has been one of the fastest-growing technologies in this competitive world, breaking through into the wireless transmission of future mobile communications. VLC replaces radio frequency (RF) and has several important features, such as large bandwidth, low cost, and an unlicensed spectrum. In telecommunications, there is a need for high bandwidth and secure transmission of data through a network. Communication can be wired or wireless: wired media include coaxial cable, twisted pair, and fiber optics, while wireless options include RF, light fidelity (Li-Fi), and optical wireless communication (OWC). In our daily lives, we transfer data from one place to another through a network connection. The network is connected to multiple devices, and the bandwidth provided by VLC is higher than that of RF communications. When multiple devices are connected over RF, the latency is high; in the case of VLC, the latency is low. In this research, light emitting diode (LED) bulbs act as the transmitter (Tx), and an avalanche photodiode (APD) acts as the receiver (Rx).

    This research mainly focuses on creating a MATLAB simulation environment for a two-room VLC system with given spectral power distributions. We simulated two rooms with identical dimensions. The LEDs are placed in opposite positions in each room: in one room, the LED is placed at the middle of the ceiling and a photodiode (PD) is placed on top of a table under the light; in the other room, the light is placed on top of a table at the bottom and the PD is placed at the middle of the ceiling. Moreover, these two rooms are connected to the same network. The input parameters are taken from previous studies, but the transmitting power is calculated from the Red-Green-Blue (RGB or white) light spectral distribution using the OOK modulation technique. We obtained the responsivity of the APD at a single point and the bit error rates (BER) of the APD at multiple points inside both rooms.
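    For context, the received power at the photodiode in such indoor simulations is commonly computed from the Lambertian line-of-sight channel model; the sketch below uses that standard model with hypothetical parameter values, not the thesis's MATLAB setup:

```python
from math import cos, log, pi, radians

def lambertian_order(semi_angle_deg):
    """Lambertian order m from the LED's semi-angle at half power."""
    return -log(2) / log(cos(radians(semi_angle_deg)))

def los_channel_gain(area, dist, irradiance_deg, incidence_deg,
                     semi_angle_deg, fov_deg, Ts=1.0, g=1.0):
    """Line-of-sight DC gain of the standard Lambertian VLC channel model
    (Ts: optical filter gain, g: concentrator gain)."""
    if incidence_deg > fov_deg:
        return 0.0  # receiver sees nothing outside its field of view
    m = lambertian_order(semi_angle_deg)
    return ((m + 1) * area / (2 * pi * dist ** 2)
            * cos(radians(irradiance_deg)) ** m
            * Ts * g * cos(radians(incidence_deg)))

# Hypothetical geometry: 1 W LED directly above a 1 cm^2 photodiode at 2 m.
Pt = 1.0
H = los_channel_gain(area=1e-4, dist=2.0, irradiance_deg=0.0,
                     incidence_deg=0.0, semi_angle_deg=60.0, fov_deg=70.0)
Pr = Pt * H  # received optical power
```

    Sweeping the photodiode position over a grid of such gains, then converting received power to SNR, is how per-point BER maps like those in the thesis are typically produced.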

    Download full text (pdf)
    fulltext
  • 25.
    Aklilu, Yohannes T.
    et al.
    University of Skövde, SWE.
    Ding, Jianguo
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Survey on blockchain for smart grid management, control, and operation2022In: Energies, E-ISSN 1996-1073, Vol. 15, no 1, article id 193Article in journal (Refereed)
    Abstract [en]

    Power generation, distribution, transmission, and consumption face ongoing challenges such as smart grid management, control, and operation, resulting from high energy demand, the diversity of energy sources, and environmental or regulatory issues. This paper provides a comprehensive overview of blockchain-based solutions for smart grid management, control, and operations. We systematically summarize existing work on the use and implementation of blockchain technology in various smart grid domains. The paper compares related reviews and highlights the challenges in the management, control, and operation of a blockchain-based smart grid, as well as future research directions in five categories: collaboration among stakeholders; data analysis and data management; control of grid imbalances; decentralization of grid management and operations; and security and privacy. These aspects have not been covered in previous reviews. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

    Download full text (pdf)
    fulltext
  • 26.
    AKULA, SAI PANKAJ
    Blekinge Institute of Technology, Education Development Unit. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A critical evaluation on SRK STORE APP by using the Heuristic Principles of Usability2021Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The aim of this thesis is to perform a critical evaluation of the SRK STORE APP (a shopping app for Android) by applying the heuristic principles of usability, in order to identify the usability issues or problems of the mobile application. Another vital element of this thesis is to arrive at the necessary suggestions for the application by applying the heuristic evaluation principles required for mobile applications. Finally, the outcome should demonstrate whether the mobile application is flexible for users according to the heuristic principles.

    Background: To be aesthetic and attractive, a mobile application should offer an ideal user experience with good usability. We therefore decided to focus on this field of utility, and while looking through different articles, we came across one that discusses design principles and their concepts. The idea for the current thesis was obtained from the literature survey we conducted on the design principles of heuristic evaluation and its concepts. This thesis aims to arrive at the necessary suggestions, as well as complementary solutions/recommendations, for the specific mobile application by applying the principles of heuristic evaluation required for the mobile application.

    Objectives: The main objectives of this project are to examine the design principles, to identify the usability issues or problems of the mobile application, to compile a list of necessary suggestions for enhancing the mobile application, and to provide concrete recommendations for the existing application.

    Methods: To compile a list of necessary suggestions and provide concrete recommendations for the mobile application, we applied Jakob Nielsen's design principles. This method aids in determining the utility of design criteria and in transforming the interactive system by analyzing factors such as usability. Using this method, we provide a concise yet detailed overview of the importance of design principles in an interaction. The key aim of employing the design principles of usability is to ensure the performance and reliability of effective interaction design, to provide meaningful assistance for user interaction, and to deliver an acceptable and optimal user experience.

    Results: The results obtained are the usability issues of the mobile application, i.e., the SRK STORE APP, and the heuristic principles that the application does not satisfy. The severity levels of the violated heuristic principles are reported, together with the list of necessary suggestions for enhancing the mobile application and concrete recommendations for the existing application.

    Conclusions: This study was conducted to evaluate the mobile application. The heuristic evaluation methodology was used to evaluate the system, and Jakob Nielsen's design principles were used to identify the usability issues of the mobile application. The required suggestions and concrete recommendations/solutions are provided for the existing mobile application.

    Download full text (pdf)
    A critical evaluation on SRK STORE APP by using the Heuristic Principles of Usability
  • 27.
    Akurathi, Lakshmikanth
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Chilluguri, Surya Teja Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Decode and Forward Relay Assisting Active Jamming in NOMA System2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Non-orthogonal multiple access (NOMA), with its exceptional spectrum efficiency, is considered a promising technology for upcoming wireless communications. Physical layer security has also been investigated to improve the security performance of the system. Power-domain NOMA is considered in this paper, where multiple users can share the same spectrum, with the sharing based on distinct power levels. Power allocation assigns different power to the users based on their channel conditions. The data signals of different users are superimposed on the transmitter's side, and each receiver uses successive interference cancellation (SIC) to remove the unwanted signals before decoding its own signal. There exists an eavesdropper whose motive is to eavesdrop on the confidential information being shared with the users. The network model developed in this way consists of two links, one of which considers the relay transmission path from the source to the near user to the far user, and the other the direct transmission path from the source to the destination, both of which experience Nakagami-m fading. To degrade the eavesdropper's channel, a jamming technique is used against the eavesdropper, where the users are assumed to be in full-duplex mode, which aims to improve the security of the physical layer. Secrecy performance metrics such as the secrecy outage probability and secrecy capacity are evaluated and analyzed for the considered system. Mathematical analysis and simulation using MATLAB are performed to assess, analyze and visualize the system's performance in the presence of an eavesdropper when the jamming technique is applied. According to the simulation results, the active jamming approach enhances the secrecy performance of the entire system and leads to a positive improvement in the secrecy rate.
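    The rate and secrecy computations underlying such an analysis can be sketched as follows: a standard two-user power-domain NOMA formulation with perfect SIC, where the channel gains, power split, and noise level in the usage example are hypothetical numbers:

```python
from math import log2

def noma_rates(P, a_far, h_near, h_far, N0):
    """Achievable rates for two-user power-domain NOMA with perfect SIC.
    a_far is the power fraction given to the far (weaker) user."""
    a_near = 1.0 - a_far
    # The far user decodes its signal treating the near user's as noise.
    r_far = log2(1 + a_far * P * h_far / (a_near * P * h_far + N0))
    # The near user first removes the far user's signal via SIC.
    r_near = log2(1 + a_near * P * h_near / N0)
    return r_near, r_far

def secrecy_capacity(r_user, r_eve):
    """Non-negative gap between the legitimate and eavesdropper rates."""
    return max(0.0, r_user - r_eve)

# Hypothetical values: unit power, more power to the far, weaker user.
r_near, r_far = noma_rates(P=1.0, a_far=0.8, h_near=1.0, h_far=0.25, N0=0.1)
```

    Jamming enters this picture by raising the effective noise in the eavesdropper's rate, widening the gap that `secrecy_capacity` measures.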

    Download full text (pdf)
    Decode and Forward Relay Assisting Active Jamming in NOMA System
  • 28.
    Alanko Öberg, John
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Svensson, Carl
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Time-based Key for Coverless Audio Steganography: A Proposed Behavioral Method to Increase Capacity2023Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Coverless steganography is a relatively unexplored area of steganography in which the message is not embedded into a cover medium. Instead, the message is derived from one or several properties already existing in the carrier medium. This renders the steganalysis methods used for traditional steganography useless. Early coverless methods were applied to images or texts, but more recently the possibilities in the video and audio domains have been explored. The audio domain still remains relatively unexplored, however, with the earliest work being presented in 2022. In this thesis, we narrow the existing research gap by proposing an audio-compatible method which uses the timestamp marking when a carrier medium was received to generate a time-based key that can be applied to the hash produced by said carrier. This effectively allows one carrier to represent a range of different hashes depending on the timestamp specifying when it was received, increasing capacity.

    Objectives. The objectives of the thesis are to explore what features of audio are suitable for steganographic use, to establish a method for finding audio clips which can represent a specific message to be sent and to improve on the current state-of-the-art method, taking capacity, robustness and cost into consideration.

    Methods. A literature review was first conducted to gain insight on techniques used in previous works. This served both to illuminate features of audio that could be used to good effect in a coverless approach, and to identify coverless approaches which could work but had not been tested yet. Experiments were then performed on two datasets to show the effective capacity increase of the proposed method when used in tandem with the existing state-of-the-art method for coverless audio steganography. Additional robustness tests for said state-of-the-art method were also performed.

    Results. The results show that the proposed method can increase the per-message capacity from eight bits to 16 bits while still retaining 100% effective capacity using only 200 key permutations, given a database of 50 one-minute audio clips. They further show that the time cost added by the proposed method totals less than 0.1 seconds for 2048 key permutations. The robustness experiments show that the hashing algorithms used in the state-of-the-art method are highly robust against additive white Gaussian noise, low-pass filters, and resampling attacks, but weaker against compression and band-pass filters.

    Conclusions. We address the scientific gap and complete our objectives by proposing a method that can increase the capacity of existing coverless steganography methods. We demonstrate the capacity increase our method brings by using it in tandem with the state-of-the-art method for the coverless audio domain. We argue that our method is not limited to the audio domain, nor to the coverless method with which we performed our experiments. Finally, we discuss several directions for future work.

    Download full text (pdf)
    fulltext
  • 29.
    Al-Dhaqm, Arafat
    et al.
    Univ Teknol Malaysia UTM, MYS.
    Ikuesan, Richard Adeyemi
    Community Coll Qatar, QAT.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Abd Razak, Shukor
    Univ Teknol Malaysia UTM, MYS.
    Grispos, George
    Univ Nebraska, USA.
    Choo, Kim-Kwang Raymond
    Univ Texas San Antonio, USA.
    Al-Rimy, Bander Ali Saleh
    Univ Teknol Malaysia UTM, MYS.
    Alsewari, Abdulrahman A.
    Univ Malaysia Pahang, MYS.
    Digital Forensics Subdomains: The State of the Art and Future Directions (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 152476-152502. Article in journal (Refereed)
    Abstract [en]

    For reliable digital evidence to be admitted in a court of law, it is important to apply scientifically proven digital forensic investigation techniques to corroborate a suspected security incident. Traditionally, digital forensic techniques have focused on computer desktops and servers. However, recent advances in digital media and platforms have created an increased need to apply digital forensic investigation techniques to other subdomains, including mobile devices, databases, networks, cloud-based platforms, and the Internet of Things (IoT) at large. To assist forensic investigators in conducting investigations within these subdomains, academic researchers have attempted to develop several investigative processes. However, many of these processes are domain-specific or describe domain-specific investigative tools. Hence, in this paper, we hypothesize that the literature is saturated with ambiguities. To test this hypothesis, a digital forensic model-oriented Systematic Literature Review (SLR) of the digital forensic subdomains has been undertaken. The purpose of this SLR is to identify the different and heterogeneous practices that have emerged within the specific digital forensics subdomains. A key finding from this review is that there are process redundancies and a high degree of ambiguity among investigative processes in the various subdomains. As a way forward, this study proposes a high-level abstract metamodel, which combines the common investigation processes, activities, techniques, and tasks for digital forensics subdomains. Using the proposed solution, an investigator can effectively organize the knowledge process for a digital investigation.

    Download full text (pdf)
    fulltext
  • 30.
    Alkhabbas, Fahed
    et al.
    Malmö University.
    Alawadi, Sadi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ayyad, Majed
    Birzeit University, Palestine.
    Spalazzese, Romina
    Malmö University.
    Davidsson, Paul
    Malmö University.
    ART4FL: An Agent-based Architectural Approach for Trustworthy Federated Learning in the IoT (2023). In: 8th International Conference on Fog and Mobile Edge Computing, FMEC 2023 / [ed] Quwaider M., Awaysheh F.M., Jararweh Y., Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 270-275. Conference paper (Refereed)
    Abstract [en]

    The integration of Internet of Things (IoT) and Machine Learning (ML) technologies has opened up the development of novel types of systems and services. Federated Learning (FL) enables systems to collaboratively train their ML models while preserving the privacy of the data collected by their IoT devices and objects. Several FL frameworks have been developed; however, they do not enable FL in open, distributed, and heterogeneous IoT environments. Specifically, they do not support systems that collect similar data in dynamically discovering each other, communicating, and negotiating the training terms (e.g., accuracy, communication latency, and cost). Toward bridging this gap, we propose ART4FL, an end-to-end framework that enables FL in open IoT settings. The framework enables systems' users to configure agents that participate in FL on their behalf. Those agents negotiate and make commitments (i.e., contractual agreements) to dynamically form federations. To perform FL, the framework deploys the needed services dynamically, monitors the training rounds, and calculates agents' trust scores based on the established commitments. ART4FL exploits a blockchain network to maintain the trust scores, and it provides those scores to negotiating agents during the federations' formation phase. © 2023 IEEE.

    Download full text (pdf)
    fulltext
  • 31.
    Alkhabbas, Fahed
    et al.
    Malmö University, SWE.
    Alsadi, Mohammed
    Norwegian University of Science and Technology, NOR.
    Alawadi, Sadi
    Uppsala University, SWE.
    Awaysheh, Feras M.
    University of Tartu, EST.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moghaddam, Mahyar T.
    University of Southern Denmark, DEN.
    ASSERT: A Blockchain-Based Architectural Approach for Engineering Secure Self-Adaptive IoT Systems (2022). In: Sensors, E-ISSN 1424-8220, Vol. 22, no 18, article id 6842. Article in journal (Refereed)
    Abstract [en]

    Internet of Things (IoT) systems are complex systems that can manage mission-critical, costly operations or the collection, storage, and processing of sensitive data. Therefore, security represents a primary concern that should be considered when engineering IoT systems. Several further challenges also need to be addressed. IoT systems' environments are dynamic and uncertain: for instance, IoT devices can be mobile or might run out of batteries, so they can suddenly become unavailable. To cope with such environments, IoT systems can be engineered as goal-driven and self-adaptive systems. A goal-driven IoT system is composed of a dynamic set of IoT devices and services that temporarily connect and cooperate to achieve a specific goal. Several approaches have been proposed to engineer goal-driven and self-adaptive IoT systems. However, none of the existing approaches enable goal-driven IoT systems to automatically detect security threats and autonomously adapt to mitigate them. Toward bridging these gaps, this paper proposes a distributed architectural Approach for engineering goal-driven IoT Systems that can autonomously SElf-adapt to secuRity Threats in their environments (ASSERT). ASSERT exploits techniques and adopts notions, such as agents, federated learning, feedback loops, and blockchain, for maintaining the systems' security and enhancing the trustworthiness of the adaptations they perform. The results of the experiments that we conducted to validate the approach's feasibility show that it performs and scales well when detecting security threats, performing autonomous security adaptations to mitigate the threats, and enabling systems' constituents to learn about security threats in their environments collaboratively. © 2022 by the authors.

    Download full text (pdf)
    fulltext
  • 32.
    Alkharabsheh, Khalid
    et al.
    Al-Balqa Applied University, JOR.
    Alawadi, Sadi
    Uppsala University, SWE.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Crespo, Yania
    Universidad de Valladolid, ESP.
    Fernández-Delgado, Manuel
    Universidad de Santiago de Compostela, ESP.
    Taboada, José A.
    Universidad de Santiago de Compostela, ESP.
    A comparison of machine learning algorithms on design smell detection using balanced and imbalanced dataset: A study of God class (2022). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 143, article id 106736. Article in journal (Refereed)
    Abstract [en]

    Context: Design smell detection has proven to be a significant activity that aims not only to enhance software quality but also to extend its life cycle. Objective: This work investigates whether machine learning approaches can effectively be leveraged for software design smell detection. Additionally, this paper provides a comparative study, focused on balanced datasets, which checks whether avoiding dataset balancing has any influence on accuracy and behavior during design smell detection. Method: A set of experiments was conducted using 28 machine learning classifiers aimed at detecting God classes. The experiments used a dataset formed from 12,587 classes of 24 software systems, in which 1,958 classes were manually validated. Results: Ultimately, most classifiers obtained high performance, with CatBoost showing the highest. The experiments also make it evident that data balancing does not have any significant influence on detection accuracy. This reinforces the application of machine learning in real scenarios, where the data is usually imbalanced by the inherent nature of design smells. Conclusions: Machine learning approaches can effectively be leveraged for God class detection. While this paper employs the SMOTE technique for data balancing, other balancing methods exist, as do other design smells, and applying those other methods might improve the results; in our experiments, SMOTE did not improve God class detection. The results are not fully generalizable because only one design smell is studied, the projects are developed in a single programming language, and only one balancing technique is compared with the imbalanced case.
But these results are promising for application in real design smell detection scenarios, and additional measures, such as Kappa, ROC, and MCC, have been used to assess classifier behavior. © 2021 The Authors
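    The SMOTE technique mentioned in the abstract synthesizes minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbours. A self-contained numpy sketch of that core interpolation step (not the paper's experimental setup, and simplified relative to full SMOTE):

    ```python
    import numpy as np

    def smote_like_oversample(X_min: np.ndarray, n_new: int, k: int = 3,
                              seed: int = 0) -> np.ndarray:
        """Generate n_new synthetic minority samples by SMOTE-style interpolation."""
        rng = np.random.default_rng(seed)
        synthetic = []
        for _ in range(n_new):
            i = rng.integers(len(X_min))
            # distances from sample i to all other minority samples
            d = np.linalg.norm(X_min - X_min[i], axis=1)
            d[i] = np.inf  # exclude the sample itself
            neighbours = np.argsort(d)[:k]
            j = rng.choice(neighbours)
            gap = rng.random()  # interpolation factor in [0, 1)
            synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
        return np.array(synthetic)

    # Hypothetical minority class (e.g. metric vectors of God classes)
    X_min = np.array([[10.0, 200.0], [12.0, 250.0], [11.0, 220.0], [13.0, 240.0]])
    X_new = smote_like_oversample(X_min, n_new=4)

    # Interpolated points stay inside the minority samples' bounding box
    assert X_new.shape == (4, 2)
    assert (X_new >= X_min.min(0) - 1e-9).all()
    assert (X_new <= X_min.max(0) + 1e-9).all()
    ```

    In practice one would use a maintained implementation such as imbalanced-learn's `SMOTE` rather than this sketch; the point here is only to show why the synthetic samples stay plausible, since they lie on segments between real minority samples.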

    Download full text (pdf)
    fulltext
  • 33.
    Alladi, Sai Sumeeth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Prioritized Database Synchronization using Optimization Algorithms (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Download full text (pdf)
    Prioritized Database Synchronization using Optimization Algorithms
  • 34.
    Alluri, Gayathri Thanuja
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Evaluation of Apache Cassandra using AWS (Amazon Web Services) and GCP (Google Cloud Platform) (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: In the field of computer science and communication systems, cloud computing plays an important role in the information technology industry; it allows users to start small and increase resources on demand. AWS (Amazon Web Services) and GCP (Google Cloud Platform) are two different cloud platform providers. Many organizations still rely on structured databases such as MySQL. Structured databases cannot handle requests and data efficiently as the number of requests and the volume of data grow. To overcome this problem, organizations shift to NoSQL unstructured databases such as Apache Cassandra and MongoDB.

    Conclusions: The literature review provided knowledge of cloud computing and its open problems, which motivated this research into evaluating the performance of Cassandra on AWS and GCP. The conclusion from the experiment is that throughput and latency increase gradually with the thread count, up to 600 threads, in both clouds. Comparing the two clouds, AWS scales better than GCP in terms of throughput, whereas GCP scales better than AWS in terms of latency.

    Keywords: Apache Cassandra, AWS, Google Cloud Platform, Cassandra Stress, Throughput, Latency

    Download full text (pdf)
    Performance Evaluation of Apache Cassandra using AWS (Amazon Web Services) and GCP (Google Cloud Platform)
  • 35.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Group-Personalized Federated Learning for Human Activity Recognition Through Cluster Eccentricity Analysis (2023). In: Engineering Applications of Neural Networks: 24th International Conference, EAAAI/EANN 2023, León, Spain, June 14–17, 2023, Proceedings / [ed] Iliadis L., Maglogiannis I., Alonso S., Jayne C., Pimenidis E., Springer Science+Business Media B.V., 2023, p. 505-519. Conference paper (Refereed)
    Abstract [en]

    Human Activity Recognition (HAR) has played a significant role in recent years due to its applications in various fields, including health care and well-being. Traditional centralized methods reach very high recognition rates, but they incur privacy and scalability issues. Federated learning (FL) is a leading distributed machine learning (ML) paradigm for training a global model collaboratively on distributed data in a privacy-preserving manner. However, for HAR scenarios, existing action recognition systems mainly focus on a unified model, i.e., they do not provide users with personalized recognition of activities. Furthermore, the heterogeneity of data across user devices can lead to degraded performance of traditional FL models in smart applications such as personalized health care. To this end, we propose a novel federated learning model that copes with a statistically heterogeneous federated learning environment by introducing a group-personalized FL (GP-FL) solution. The proposed GP-FL algorithm builds several global ML models, each one trained iteratively on a dynamic group of clients with homogeneous class probability estimations. The performance of the proposed FL scheme is studied and evaluated on real-world HAR data. The evaluation results demonstrate that our approach has advantages in terms of model performance and convergence speed with respect to two baseline FL algorithms used for comparison. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

  • 36.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    FedCO: Communication-Efficient Federated Learning via Clustering Optimization † (2022). In: Future Internet, E-ISSN 1999-5903, Vol. 14, no 12, article id 377. Article in journal (Refereed)
    Abstract [en]

    Federated Learning (FL) provides a promising solution for preserving privacy in learning shared models on distributed devices without sharing local data on a central server. However, most existing work shows that FL incurs high communication costs. To address this challenge, we propose a clustering-based federated solution, entitled Federated Learning via Clustering Optimization (FedCO), which optimizes model aggregation and reduces communication costs. In order to reduce the communication costs, we first divide the participating workers into groups based on the similarity of their model parameters and then select only one representative, the best-performing worker, from each group to communicate with the central server. Then, in each successive round, we apply the Silhouette validation technique to check whether each representative still fits tightly within its current cluster. If not, the representative is either moved into a more appropriate cluster or forms a cluster singleton. Finally, we use split optimization to update and improve the whole clustering solution. The updated clustering is used to select new cluster representatives. In that way, the proposed FedCO approach updates clusters by repeatedly evaluating and splitting clusters if doing so is necessary to improve the workers' partitioning. The potential of the proposed method is demonstrated on publicly available datasets and LEAF datasets under the IID and Non-IID data distribution settings. The experimental results indicate that our proposed FedCO approach is superior to the state-of-the-art FL approaches, i.e., FedAvg, FedProx, and CMFL, in reducing communication costs and achieving better accuracy in both the IID and Non-IID cases. © 2022 by the authors.
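    The grouping-and-representative step described in the abstract can be sketched as a toy numpy illustration: cluster worker parameter vectors, then upload only the best-performing worker per cluster. This is an assumption-laden sketch (plain k-means, tiny synthetic data), not the FedCO implementation, and the Silhouette-based cluster maintenance and split optimization are omitted.

    ```python
    import numpy as np

    def select_representatives(params, accuracy, k=2, iters=10, seed=0):
        """Cluster worker model-parameter vectors with plain k-means and
        return the best-performing worker index per cluster."""
        rng = np.random.default_rng(seed)
        centroids = params[rng.choice(len(params), size=k, replace=False)]
        for _ in range(iters):
            # assign each worker to its nearest centroid
            labels = np.argmin(
                np.linalg.norm(params[:, None] - centroids[None], axis=2), axis=1)
            for c in range(k):
                if (labels == c).any():
                    centroids[c] = params[labels == c].mean(axis=0)
        reps = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                reps.append(members[np.argmax(accuracy[members])])
        return sorted(reps)

    # Two well-separated groups of worker updates; one upload per group
    params = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
    accuracy = np.array([0.80, 0.90, 0.85, 0.70])
    assert select_representatives(params, accuracy) == [1, 2]
    ```

    The communication saving is the point: with four workers and two clusters, only two model updates reach the server per round instead of four.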

    Download full text (pdf)
    fulltext
  • 37.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Reducing Communication Overhead of Federated Learning through Clustering Analysis (2021). In: 26th IEEE Symposium on Computers and Communications (ISCC 2021), Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper (Refereed)
    Abstract [en]

    Training machine learning models in a datacenter, with data originating from edge nodes, incurs high communication overheads and may violate a user's privacy. These challenges can be tackled by employing the Federated Learning (FL) technique to train a model across multiple decentralized edge devices (workers) using local data. In this paper, we explore an approach that identifies the most representative updates made by workers, and only those are uploaded to the central server, reducing network communication costs. Based on this idea, we propose an FL model that can mitigate communication overheads via clustering analysis of the workers' local updates. The Cluster Analysis-based Federated Learning (CA-FL) model is studied and evaluated on human activity recognition (HAR) datasets. Our evaluation results show the robustness of CA-FL in comparison with traditional FL in terms of accuracy and communication costs in both IID and non-IID cases.

    Download full text (pdf)
    fulltext
  • 38.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exner, Peter
    Sony, R&D Center Europe, SWE.
    Context-Aware Edge-Based AI Models for Wireless Sensor Networks - An Overview (2022). In: Sensors, E-ISSN 1424-8220, Vol. 22, no 15, article id 5544. Article, review/survey (Refereed)
    Abstract [en]

    Recent advances in sensor technology are expected to lead to greater use of wireless sensor networks (WSNs) in industry, logistics, healthcare, etc. At the same time, advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL) are becoming dominant solutions for processing large amounts of data from edge-synthesized heterogeneous sensors and drawing accurate conclusions with a better understanding of the situation. The integration of the two areas, WSN and AI, has resulted in more accurate measurements and in context-aware analysis and prediction useful for smart sensing applications. In this paper, a comprehensive overview of the latest developments in context-aware intelligent systems using sensor technology is provided. The paper also discusses the areas in which they are used, related challenges, and motivations for adopting AI solutions, focusing on edge computing, i.e., sensor and AI techniques, along with an analysis of existing research gaps. Another contribution of this study is the use of a semantic-aware approach to extract survey-relevant subjects. This approach identifies eleven main research topics supported by the articles included in the work, which are analyzed from various angles to answer five main research questions. Finally, potential future research directions are also discussed.

    Download full text (pdf)
    fulltext
  • 39.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    An Energy-aware Multi-Criteria Federated Learning Model for Edge Computing (2021). In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021 / [ed] Younas M., Awan I., Unal P., IEEE, 2021, p. 134-143. Conference paper (Refereed)
    Abstract [en]

    The successful convergence of Internet of Things (IoT) technology and distributed machine learning has been leveraged to realise the concept of Federated Learning (FL) through the collaborative efforts of a large number of low-powered and small-sized edge nodes. In wireless networks (WN), energy-efficient transmission is a fundamental challenge since the energy resources of edge nodes are restricted. In this paper, we propose an Energy-aware Multi-Criteria Federated Learning (EaMC-FL) model for edge computing. The proposed model enables collaborative training of a shared global model by aggregating locally trained models from selected representative edge nodes (workers). The involved workers are initially partitioned into a number of clusters with respect to the similarity of their local model parameters. At each training round, a small set of representative workers is selected on the basis of a multi-criteria evaluation that scores each node's representativeness (importance) by taking into account the trade-off among the node's local model performance, consumed energy, and battery lifetime. We have demonstrated through experimental results that the proposed EaMC-FL model is capable of reducing the energy consumed by the edge nodes by lowering the amount of transmitted data.
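    The multi-criteria trade-off described above can be illustrated with a toy weighted score. The weights, normalization bounds, and the linear form itself are hypothetical choices for illustration; the paper's actual scoring function is not reproduced here.

    ```python
    def representativeness(accuracy, energy_j, battery_h,
                           weights=(0.5, 0.3, 0.2),
                           max_energy_j=10.0, max_battery_h=24.0):
        """Score a worker: high accuracy and long battery lifetime are rewarded,
        high consumed energy is penalized (all terms normalized to [0, 1])."""
        w_acc, w_energy, w_batt = weights
        return (w_acc * accuracy
                + w_energy * (1.0 - energy_j / max_energy_j)
                + w_batt * (battery_h / max_battery_h))

    workers = {
        "A": representativeness(accuracy=0.90, energy_j=8.0, battery_h=6.0),
        "B": representativeness(accuracy=0.85, energy_j=2.0, battery_h=20.0),
    }
    # B wins despite lower accuracy: it is cheaper to run and lives longer
    assert workers["B"] > workers["A"]
    ```

    Selecting the highest-scoring worker per cluster under such a score is what lets the scheme trade a little model performance for substantially lower energy consumption.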

    Download full text (pdf)
    fulltext
  • 40.
    Al-Shuwaili, Mustafa
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering.
    Helo, Zeid
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Optimization of the Internal Material Flow at Scandinavian Stone [Optimering av interna materialflödet på Scandinavian Stone] (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The thesis focuses on the analysis and development of the material flow at a quarry belonging to the company Scandinavian Stone. The material flow depends on heavy vehicles, which cause high costs and carbon dioxide emissions, and the company strives for a development that reduces both. The process was examined through several activities, such as study visits, interviews, and observations. This survey forms the basis of a process analysis that brings development needs and shortage factors to the surface. The analysis was based on the Lean philosophy, distinguishing value-adding from non-value-adding parameters from a customer perspective. The results of the analysis showed that the haul from the lowest point of the pit to the processing station contributes the most to energy consumption. A calculation model was created to compute the energy consumption resulting from product transports within the process. The calculation was made on a simplified driving cycle with varied product weights, and showed that an increase in product weight leads to only small energy increases. This formed the basis for a concept that reduces the number of runs along the identified route. To maintain process productivity, it is necessary to transport several products and heavier waste material at a time. The proposed concept changes the layout of the process: the processing station is moved down into the pit instead of its current location outside it. This enables the transport of several products at a time, as the products lose about 50% of their weight after processing. The waste material is temporarily stored in a container and transported up once its weight reaches a maximum level, observed by means of a scale on which the container is placed.
The scale indicates when that limit has been reached and it is time for emptying. The concept showed energy savings of up to 40%.

    Download full text (pdf)
    fulltext
  • 41.
    Alsolai, Hadeel
    et al.
    Princess Nourah Bint Abdulrahman Univ, SAU.
    Qureshi, Shahnawaz
    Natl Univ Comp & Emerging Sci, PAK.
    Iqbal, Syed Muhammad Zeeshan
    BrightWare LLC, SAU.
    Ameer, Asif
    Natl Univ Comp & Emerging Sci, PAK.
    Cheaha, Dania
    Prince Songkla Univ, THA.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Karrila, Seppo
    Prince Songkla Univ, THA.
    Employing a Long-Short-Term Memory Neural Network to Improve Automatic Sleep Stage Classification of Pharmaco-EEG Profiles (2022). In: Applied Sciences, E-ISSN 2076-3417, Vol. 12, no 10, article id 5248. Article in journal (Refereed)
    Abstract [en]

    An increasing problem in today's society is the spiraling number of people suffering from various sleep disorders. The research results presented in this paper support the use of a novel method that employs sleep-stage classification techniques for more accurate scoring, which will assist researchers in analyzing subject profiles to recommend prescriptions or alleviate sleep disorders. In biomedical research, animal models are required to experimentally test the safety and efficacy of a drug in the pre-clinical stage. We have developed a novel LSTM recurrent neural network to process pharmaco-EEG profiles of rats and automatically score their sleep-wake stages. The results indicate improvements over current methods: for combined channels, model accuracy improved by 1% and 3% in binary and multiclass classification, respectively, to accuracies of 93% and 82%. LSTM models for identifying rodent sleep stages from single or multiple electrode positions, for binary or multiclass problems, have not been evaluated in the prior literature. The results reveal that both single and combined channels, and both binary and multiclass classification tasks, can be applied in the automatic sleep scoring of rodents.
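    For readers unfamiliar with the recurrent unit named above: one LSTM time step follows the standard gating equations, sketched here in plain numpy. This is a generic illustration with made-up dimensions (e.g. four input features standing in for EEG-derived features), not the paper's network.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        """One LSTM time step. W, U, b hold the four gates stacked
        in the order: input (i), forget (f), cell candidate (g), output (o)."""
        n = h_prev.shape[0]
        z = W @ x + U @ h_prev + b          # pre-activations, shape (4n,)
        i = sigmoid(z[0:n])                 # input gate
        f = sigmoid(z[n:2 * n])             # forget gate
        g = np.tanh(z[2 * n:3 * n])         # candidate cell state
        o = sigmoid(z[3 * n:4 * n])         # output gate
        c = f * c_prev + i * g              # new cell state
        h = o * np.tanh(c)                  # new hidden state
        return h, c

    rng = np.random.default_rng(0)
    n_in, n_hid = 4, 3                      # illustrative feature/hidden sizes
    W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
    U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
    b = np.zeros(4 * n_hid)

    h = c = np.zeros(n_hid)
    for x in rng.standard_normal((5, n_in)):   # a 5-step feature sequence
        h, c = lstm_step(x, h, c, W, U, b)

    assert h.shape == (n_hid,)
    assert np.all(np.abs(h) < 1.0)  # bounded by the output gate and tanh
    ```

    The cell state `c` carried across steps is what lets the network exploit temporal context between consecutive EEG epochs, which fixed-window classifiers cannot.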

    Download full text (pdf)
    fulltext
  • 42.
    Alsolai, Hadeel
    et al.
    Princess Nourah bint Abdulrahman University, SAU.
    Qureshi, Shahnawaz
    National University of Computing and Emerging Sciences, PAK.
    Iqbal, Syed Muhammad Zeeshan
    Research and Development, BrightWare LLC, SAU.
    Vanichayobon, Sirirut
    Prince of Songkla University, THA.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lindley, Craig
    CSIRO Data, AUS.
    Karrila, Seppo
    Prince of Songkla University, THA.
    A Systematic Review of Literature on Automated Sleep Scoring (2022). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 79419-79443. Article, review/survey (Refereed)
    Abstract [en]

    Sleep is a period of rest that is essential for functional learning ability, mental health, and even the performance of normal activities. Insomnia, sleep apnea, and restless legs are all examples of sleep-related issues that are growing more widespread. When appropriately analyzed, the recording of bio-electric signals, such as the Electroencephalogram, can tell how well we sleep. Improved analyses are possible due to recent improvements in machine learning and feature extraction, and they are commonly referred to as automatic sleep analysis to distinguish them from sleep data analysis by a human sleep expert. This study outlines a Systematic Literature Review and the results it provided to assess the present state-of-the-art in automatic analysis of sleep data. A search string was organized according to the PICO (Population, Intervention, Comparison, and Outcome) strategy in order to determine what machine learning and feature extraction approaches are used to generate an Automatic Sleep Scoring System. The American Academy of Sleep Medicine and Rechtschaffen & Kales are the two main scoring standards used in contemporary research, according to the report. Other types of sensors, such as Electrooculography, are employed in addition to Electroencephalography to automatically score sleep. Furthermore, the existing research on parameter tuning for machine learning models that was examined proved to be incomplete. Based on our findings, different sleep scoring standards, as well as numerous feature extraction and machine learning algorithms with parameter tuning, have a high potential for developing a reliable and robust automatic sleep scoring system for supporting physicians. In the context of the sleep scoring problem, there are evident gaps that need to be investigated in terms of automatic feature engineering techniques and parameter tuning in machine learning algorithms.

    Download full text (pdf)
    fulltext
  • 43.
    Andersson, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Employing gamification to enhance the engagement of video education (2022). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
  • 44.
    Andersson, Jonathan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Effects of Menu Systems, Interaction Methods, and Posture on User Experience in Virtual Reality (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. In recent years, Virtual Reality (VR) has emerged as an important technology in both commercial and industrial use. This has prompted large investments from major corporations, and some have even shifted their focus toward this rising technology. As VR goes mainstream, emphasis has been put on the content itself, while the surrounding user experience of the UIs and the interaction methods in the VR environment has been set aside.

    Objectives. The objectives of this thesis are to explore different menu systems together with interaction methods, and to evaluate the effect of these, along with the user's posture, on user experience and simulator sickness in VR applications. The collected data could provide useful observations on how menus, interaction methods, and posture can best be designed for VR applications.

    Methods. A VR application with two different menu systems and two different interaction methods was implemented, and a survey based on the System Usability Scale (SUS), After-Scenario Questionnaire (ASQ), and Simulator Sickness Questionnaire (SSQ) was created. These questionnaires address matters relating to user experience and cybersickness, and were chosen for their ease of use in addition to being used in similar works. Together these formed the basis for an experiment which was carried out with 20 participants. The study measured the differences in user experience, time taken, and simulator sickness for the different combinations of controls, menus, and postures.
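    The System Usability Scale used in the methods is scored with a fixed, well-known formula: ten items on a 1–5 Likert scale, odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch of that scoring (the function name is ours):

    ```python
    def sus_score(responses):
        """System Usability Scale score (0-100) from ten Likert
        responses, each an integer from 1 (strongly disagree)
        to 5 (strongly agree)."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten responses in the range 1-5")
        total = 0
        for i, r in enumerate(responses):
            # Odd-numbered items (index 0, 2, ...) are positive statements:
            # they contribute (score - 1). Even-numbered items are negative
            # statements: they contribute (5 - score).
            total += (r - 1) if i % 2 == 0 else (5 - r)
        return total * 2.5

    best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])   # best possible answers
    neutral = sus_score([3] * 10)                      # all-neutral answers
    ```

    Scores from the 20 participants would then be compared across the control/menu/posture combinations.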

    Results. The results show significant differences in both user experience and simulator sickness depending on the controls, menu system, and posture. The study showed that participants reported fewer simulator sickness symptoms when seated, and that the overall best control and menu combination was a traditional panel menu together with motion controls.

    Conclusions. Among the options explored in the study, traditional, top-down panel menus together with motion controls form the best combination with regard to user experience in VR applications. A sitting posture provides the overall best environment in VR applications with regard to less severe simulator sickness symptoms.

    Download full text (pdf)
    Effects of Menu Systems, Interaction Methods, and Posture on User Experience in Virtual Reality
  • 45.
    Andersson, Jonathan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Using Gamification to Improve User Experience and Health Effects in Mobile Applications2021Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. According to the World Health Organization, over 264 million people suffer from depression. A recent trend in treating and combating depression is e-health applications such as Headspace, which build on mindfulness or meditation. The rise of new treatment methods based on these concepts is seen as a promising alternative to traditional methods like cognitive behavioural therapy and medication.

    Objectives. The objective of this study is to build a new mobile application, in the form of a mobile e-health prototype. The application, called MindBud, is designed to help the user reduce depressive thoughts by using a daily schedule to plan the day and, in turn, reduce depressive thoughts and procrastination through structure. The study then compares two versions of this application: one with gamification elements and one without them. The comparison measures overall user experience through the System Usability Scale, and additionally measures the effectiveness of the application on depressive thoughts.

    Methods. Two versions of MindBud were implemented: one basic app and one with gamification elements added. The applications were then tested in an experiment with sixteen participants. Each participant tested both versions of the application and then answered a questionnaire about the app. The answers were used to compare scores between the two versions, to see whether gamification had any impact on overall user experience and which gamification elements could be used to reduce depressive thoughts through the application.

    Results. The results show a slight increase in overall user experience score when comparing the gamified app with the basic one. The most notable increases came in questions about frequency of use and complexity of the application. Additionally, the gamified application scored significantly better when participants were asked how much they thought the app version would reduce depressive thoughts.

    Conclusions. The added gamification elements were found to increase overall user experience and to help reduce depressive thoughts more than the basic version. The gamification elements used were an in-game avatar, a reward system, and an experience and level system.

    Download full text (pdf)
    fulltext
  • 46.
    Andersson, Jonathan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hu, Yan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exploring the Impact of Menu Systems, Interaction Methods, and Sitting or Standing Posture on User Experience in Virtual Reality2023Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) has become an increasingly crucial aspect in both commercial and industrial settings. However, the user experience of the user interfaces and interaction methods in the VR environment is often overlooked. This paper aims to explore different menu systems, interaction methods, and the user’s sitting or standing posture on user experience and cybersickness in VR applications. An experiment with two menu systems and two interaction methods in an implemented VR application was conducted with 20 participants. The results found that traditional, top-down, panel menus with motion controls are the best combination regarding the user experience. Sitting posture provides less severe simulator sickness symptoms than standing.

    Download full text (pdf)
    fulltext
  • 47.
    Andersson, Klara
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Landén, Erik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    The Impact of Foveated Rendering as Used for Head-Mounted Displays in Interactive Video Games2023Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Introduction: In this study, foveated rendering and its impact on people have been explored in a virtual reality (VR) video game setting. Foveated rendering has the potential to decrease the performance cost of virtual reality gaming by rendering at full quality only the part of the scene where the user is looking, which it achieves with the use of an eye tracker. This study, however, does not focus on the performance gain.

    Related Work: Previous work has mostly focused on performance. Studies that concentrate on how foveated rendering affects people have only used a scene with a still or moving target in it. This work expands upon this by including it in a whole game instead.

    Method: A user study is conducted to test the perceived visual quality by playing a fast-paced game that requires a lot of eye and head movements. This is tested as a first-person shooter game, which is played thrice with different types of foveation (no foveation, static foveated rendering, and dynamic foveated rendering), comparing them with each other. The user study had 20 participants who played the game and afterward answered a questionnaire regarding the quality. The participants were experienced in normal gaming but had little previous experience with virtual reality.

    Results and Analysis: The results show that the majority do not notice a difference in quality between the game types. However, the type most people preferred was foveated rendering without the use of an eye tracker, called static foveation.

    Discussion: The results demonstrate that video games can effectively incorporate foveated rendering, resulting in significant performance improvements with minimal drawbacks. However, it is important to note that foveated techniques have certain hardware requirements that may limit their widespread adoption. One such requirement is Variable Rate Shading, which is becoming increasingly prevalent as it is supported by all new graphics cards. This means that the market may more easily adopt foveated rendering techniques that do not rely on an eye tracker. Additionally, a built-in eye tracker in headsets is another hardware requirement that might make it harder for dynamic foveation to be adopted.

    Conclusion: Foveated rendering has almost no impact on the perceived quality of VR games, which can lead to performance gains and makes it easier for people to feel more immersed in them.
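    The distinction the study draws between static and dynamic foveation comes down to which focal point the shading rate falls off from: a fixed screen center versus the eye tracker's gaze point. A minimal sketch of that idea (the tier thresholds and names are illustrative assumptions, not parameters from the study):

    ```python
    from math import hypot

    def shading_rate(px, py, fx, fy, inner=0.3, outer=0.6):
        """Pick a coarseness tier for a pixel (px, py) given a focal
        point (fx, fy), with all coordinates normalized to [0, 1].
        Returns 1 (full rate), 2 (half rate), or 4 (quarter rate),
        mimicking the tiers offered by Variable Rate Shading."""
        d = hypot(px - fx, py - fy)
        if d < inner:
            return 1
        if d < outer:
            return 2
        return 4

    # Static foveation: the focal point is fixed at the screen center.
    def static(px, py):
        return shading_rate(px, py, 0.5, 0.5)

    # Dynamic foveation: the focal point follows the eye tracker's gaze.
    def dynamic(px, py, gaze):
        return shading_rate(px, py, *gaze)
    ```

    With static foveation the corners of the screen are always shaded coarsely, which is why it needs no eye tracker; dynamic foveation keeps full quality wherever the gaze lands.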

    Download full text (pdf)
    The Impact of Foveated Rendering as Used for Head-Mounted Displays in Interactive Video Games
  • 48.
    Andersson, Oliver
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Vad motiverar en utvecklare: Och hur får man utvecklaren till att stanna inom företaget2022Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    How can companies retain developers in the workplace when developers have more options than ever? In what ways can a company work to retain developers and avoid the costs associated with new hires? What actually motivates a developer, and how do you get the individual to stay at the workplace? This study begins by discussing why high staff turnover can be a problem for both the company and the developer. The goal and the research questions the study aims to investigate are presented in the method section. The results were obtained through a qualitative study and are analyzed using a theoretical framework consisting of motivation theory. The author also reviews the agile way of working, which is extremely widespread within the sector, and covers the results of previous research on developer motivation. The study shows that economic incentives act as motivators, which cannot be considered consistent with parts of the theory. Work tasks, the product, and salary are of great importance for this study and for the developer's motivation. Personal development has proven to be very important to the developer, and it can be fostered through the design of work tasks. The author discusses how attachment to a product can make a developer stay at the workplace, and also shares thoughts on how a company can prevent employees from leaving, and on how culture and ways of working can be used to increase efficiency and thereby the productivity and motivation of a developer.

    Download full text (pdf)
    fulltext
  • 49.
    Andreasson, Simon
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Östergaard, Linus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Applying spatially and temporally adaptive techniques for faster DEM-based snow simulation2023Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Physically-based snow simulation is computationally expensive and not yet applicable to real-time applications. Some of the prime factors for this cost are the complex physics, the large number of particles, and the small time step required for a high-quality and stable simulation. Simplified methods, such as height maps, are used instead to emulate snow accumulation. A way of improving performance is finding ways of doing fewer computations. In the field of computer graphics, adaptive methods have been developed to focus computation where it is most needed. These works serve as inspiration for this thesis.

    Objectives. This thesis aims to reduce the total particle workload of an existing Discrete Element Method (DEM) application, thereby improving performance. The aim consists of the following objectives. Integrate a spatial method, thereby lessening the total number of particles through particle merging and splitting, and implement a temporal method, thereby lessening the workload by freezing certain particles in time. The performance of both these techniques will then be tested and analyzed in multiple scenarios.

    Methods. Spatially and temporally adaptive methods were implemented in an existing snow simulator. The methods were both measured and compared using quantitative tests in three different scenes with varying particle counts.

    Results. Performance tests show that both the spatial and temporal adaptivity reduce the execution time compared to the base method. The improvements from temporal adaptivity are consistently around 1.25x while the spatial adaptivity shows a larger range of improvements between 1.23x and 2.86x. Combining both adaptive techniques provides an improvement of up to 3.58x.

    Conclusions. Both spatially and temporally adaptive techniques are viable ways to improve the performance of a DEM-based snow simulation. The current implementation has some issues with performance overhead and with the visual results while using spatial adaptivity, but there is a lot of potential for the future.
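    The temporal adaptivity described above (freezing certain particles in time) can be sketched as a simple per-particle bookkeeping step that excludes long-resting particles from the expensive DEM update. The velocity threshold, rest-frame count, and names below are illustrative assumptions, not the thesis's actual implementation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Particle:
        pos: tuple
        vel: tuple
        rest_frames: int = 0
        frozen: bool = False

    def update_temporal(particles, speed_eps=1e-3, freeze_after=10):
        """Return the particles that still need a full DEM step.
        Particles that have been (nearly) at rest for several
        consecutive frames are frozen; any impulse wakes them."""
        active = []
        for p in particles:
            speed2 = sum(v * v for v in p.vel)
            if speed2 < speed_eps ** 2:
                p.rest_frames += 1
                if p.rest_frames >= freeze_after:
                    p.frozen = True
            else:
                p.rest_frames = 0
                p.frozen = False   # motion re-activates the particle
            if not p.frozen:
                active.append(p)   # only these get the expensive DEM step
        return active

    resting = Particle(pos=(0, 0, 0), vel=(0, 0, 0))
    moving = Particle(pos=(1, 0, 0), vel=(1, 0, 0))
    for _ in range(10):
        active = update_temporal([resting, moving])
    ```

    The spatial counterpart (merging and splitting particles) would additionally change the particle count itself rather than just skipping updates.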

    Download full text (pdf)
    fulltext
  • 50.
    Andres, Bustamante
    et al.
    Tecnológico de Monterrey, MEX.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Jimenez-Perez, Julio Cesar
    Tecnológico de Monterrey, MEX.
    Rodriguez-Garcia, Alejandro
    Tecnológico de Monterrey, MEX.
    Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model2021In: Photonics, ISSN 2304-6732, Vol. 8, no 4, article id 118Article in journal (Refereed)
    Abstract [en]

    Machine learning (ML) has an impressive capacity to learn and analyze a large volume of data. This study aimed to train different algorithms to discriminate between healthy and pathologic corneal images by evaluating digitally processed spectral-domain optical coherence tomography (SD-OCT) corneal images. A set of 22 SD-OCT images belonging to a random set of corneal pathologies was compared to 71 healthy corneas (control group). A binary classification method was applied where three approaches of ML were explored. Once all images were analyzed, representative areas from every digital image were also extracted, processed and analyzed for a statistical feature comparison between healthy and pathologic corneas. The best performance was obtained from transfer learning-support vector machine (TL-SVM) (AUC = 0.94, SPE 88%, SEN 100%) and transfer learning-random forest (TL-RF) method (AUC = 0.92, SPE 84%, SEN 100%), followed by convolutional neural network (CNN) (AUC = 0.84, SPE 77%, SEN 91%) and random forest (AUC = 0.77, SPE 60%, SEN 95%). The highest diagnostic accuracy in classifying corneal images was achieved with the TL-SVM and the TL-RF models. In image classification, CNN was a strong predictor. This pilot experimental study developed a systematic mechanized system to discern pathologic from healthy corneas using a small sample.
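    The SEN and SPE figures reported above follow the standard confusion-matrix definitions for a binary classifier. A minimal sketch (the example counts are illustrative, echoing the 22-pathologic/71-healthy sample sizes, not the paper's actual confusion matrix):

    ```python
    def binary_metrics(tp, fp, tn, fn):
        """Sensitivity (recall of the pathologic class), specificity
        (recall of the healthy class), and accuracy, computed from the
        four confusion-matrix counts of a binary classifier."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        return sensitivity, specificity, accuracy

    # Illustrative: all 22 pathologic corneas caught (SEN 100%),
    # 60 of 71 healthy corneas correctly rejected.
    sen, spe, acc = binary_metrics(tp=22, fp=11, tn=60, fn=0)
    ```

    AUC, by contrast, is threshold-free: it summarizes sensitivity/specificity trade-offs over all decision thresholds rather than at one operating point.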

    Download full text (pdf)
    fulltext