  • 1.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Xingru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An approach for performance requirements verification and test environments generation (2023). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, no. 1, p. 117-144. Article in journal (Refereed)
    Abstract [en]

    Model-based testing (MBT) is a method that supports the design and execution of test cases by models that specify the intended behaviors of a system under test. While systematic literature reviews on MBT in general exist, the state of the art on modeling and testing performance requirements has seen much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural language software requirements specifications in order to understand which and how performance requirements are typically specified. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies from the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRSs, and illustrate that with PRO-TEST we can model performance requirements, find issues in those requirements, and detect missing ones. We detected three non-quantifiable requirements, 43 non-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, it allows generating parameters for test environments.

    Download full text (pdf)
    fulltext
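
As a rough illustration of the kind of requirement checking PRO-TEST performs, the sketch below flags non-quantifiable, non-quantified, and underspecified performance requirements. It is a hypothetical Python model, not the authors' tool; the field names and checks are assumptions.

```python
# Hypothetical sketch of performance-requirement verification (not PRO-TEST itself).
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerfRequirement:
    text: str
    metric: Optional[str] = None   # e.g. "response_time"
    bound: Optional[float] = None  # numeric threshold
    unit: Optional[str] = None     # e.g. "ms"
    load: Optional[int] = None     # assumed concurrent users

def verify(req: PerfRequirement) -> list[str]:
    """Return a list of issues found in one requirement."""
    issues = []
    if req.metric is None:
        issues.append("not quantifiable: no measurable metric named")
    elif req.bound is None:
        issues.append("not quantified: metric has no numeric bound")
    for param in ("unit", "load"):
        if getattr(req, param) is None:
            issues.append(f"underspecified parameter: {param}")
    return issues

reqs = [
    PerfRequirement("The system shall be fast"),
    PerfRequirement("Search shall answer within 200 ms under 100 users",
                    metric="response_time", bound=200.0, unit="ms", load=100),
]
for r in reqs:
    print(r.text, "->", verify(r) or "OK")
```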
  • 2.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    García Martín, Eva
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Trend analysis to automatically identify heat program changes (2017). In: Energy Procedia, Elsevier, 2017, Vol. 116, p. 407-415. Conference paper (Refereed)
    Abstract [en]

    The aim of this study is to improve the monitoring and control of heating systems located in customer buildings through the use of a decision support system. To achieve this, the proposed system applies a two-step classifier to detect manual changes of the temperature of the heating system. We use data from the Swedish company NODA, active in energy optimization and services for energy efficiency, to train and test the suggested system. The decision support system is evaluated through an experiment, and the results are validated by experts at NODA. The results show that the decision support system can detect changes within three days of their occurrence, using only daily average measurements.

    Download full text (pdf)
    fulltext
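
The paper's two-step classifier is not reproduced here, but the underlying idea of spotting manual setpoint changes from daily averages can be sketched with a simple rolling-mean shift test; the window size and threshold below are invented values.

```python
# Illustrative sketch only: flag days where the mean of the next `window` days
# differs from the mean of the previous `window` days by more than `threshold`.
import numpy as np

def detect_changes(daily_avg, window=3, threshold=2.0):
    changes = []
    for i in range(window, len(daily_avg) - window):
        before = np.mean(daily_avg[i - window:i])
        after = np.mean(daily_avg[i:i + window])
        if abs(after - before) > threshold:
            changes.append(i)
    return changes

# Synthetic series: setpoint manually lowered from 21 C to 18 C on day 10.
series = np.concatenate([np.full(10, 21.0), np.full(10, 18.0)])
series += np.random.normal(0, 0.2, series.size)  # measurement noise
print(detect_changes(series))  # flags days around index 10
```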
  • 3.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radioelectronics, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cloud incident response model (2016). In: Proceedings of 2016 IEEE East-West Design and Test Symposium, EWDTS 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of incident response in clouds. A conventional incident response model is formulated to be used as a basis for the cloud incident response model. Minimization of incident handling time is considered a key criterion of the proposed cloud incident response model; it can be achieved at the expense of embedding redundancy into the cloud infrastructure, represented by Network and Security Controllers, and introducing a Security Domain for threat analysis and cloud forensics. These architectural changes are discussed and applied within the cloud incident response model. © 2016 IEEE.

  • 4.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radioelectronics, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    The state of ransomware: Trends and mitigation techniques (2017). In: Proceedings of 2017 IEEE East-West Design and Test Symposium, EWDTS 2017, Institute of Electrical and Electronics Engineers Inc., 2017, article id 8110056. Conference paper (Refereed)
    Abstract [en]

    This paper contains an analysis of the payloads of popular ransomware for the Windows, Android, Linux, and Mac OS X platforms. Namely, the VaultCrypt (CrypVault), TeslaCrypt, NanoLocker, Trojan-Ransom.Linux.Cryptor, Android Simplelocker, OSX/KeRanger-A, WannaCry, Petya, NotPetya, Cerber, Spora, and Serpent ransomware were put under the microscope. A set of characteristics was proposed to be used for the analysis. The purpose of the analysis is a generalization of the collected data that describes the behavior and design trends of modern ransomware. The objective is to suggest ransomware threat mitigation techniques based on the obtained information. The novelty of the paper is the analysis methodology, based on the chosen set of 13 key characteristics, that helps to determine similarities and differences throughout the list of ransomware put under analysis. Most of the ransomware samples presented were manually analyzed by the authors, eliminating contradictions in descriptions of ransomware behavior published by different malware research laboratories through verification of the payload of the latest versions of ransomware. © 2017 IEEE.

  • 5.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Automated Context-Aware Vulnerability Risk Management for Patch Prioritization (2022). In: Electronics, E-ISSN 2079-9292, Vol. 11, no. 21, article id 3580. Article in journal (Refereed)
    Abstract [en]

    The information-security landscape continuously evolves, with new vulnerabilities discovered daily and increasingly sophisticated exploit tools. Vulnerability risk management (VRM) is the most crucial cyber defense for eliminating attack surfaces in IT environments. VRM is a cyclical practice of identifying, classifying, evaluating, and remediating vulnerabilities. The evaluation stage of VRM is neither automated nor cost-effective, as it demands great manual administrative effort to prioritize patches. Therefore, there is an urgent need to improve the VRM procedure by automating the entire VRM cycle in the context of a given organization. The authors propose automated context-aware VRM (ACVRM) to address the above challenges. This study defines the criteria to consider in the evaluation stage of ACVRM to prioritize patching. Moreover, patch prioritization is customized to an organization's context by allowing the organization to select the vulnerability management mode and weigh the selected criteria. Specifically, this study considers four vulnerability evaluation cases: (i) evaluation criteria are weighted homogeneously; (ii) attack complexity and availability are not considered important criteria; (iii) the security score is the only important criterion considered; and (iv) criteria are weighted based on the organization's risk appetite. The results verify the proposed solution's efficiency compared with the Rudder vulnerability management tool (CVE-plugin). While Rudder produces a ranking independent of the scenario, ACVRM can sort vulnerabilities according to the organization's criteria and context. Moreover, while Rudder randomly sorts vulnerabilities with the same patch score, ACVRM sorts them according to their age, giving a higher security score to older publicly known vulnerabilities. © 2022 by the authors.

    Download full text (pdf)
    fulltext
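
A minimal sketch of the prioritization behavior described above: context-dependent criterion weights, with vulnerability age as a tiebreaker among equal scores. The criterion names, weights, and data are illustrative, not ACVRM's actual implementation.

```python
# Sketch of context-weighted patch prioritization with an age tiebreak.
from datetime import date

vulns = [
    {"id": "CVE-A", "security_score": 9.8, "attack_complexity": 0.7,
     "availability": 0.5, "published": date(2018, 3, 1)},
    {"id": "CVE-B", "security_score": 9.8, "attack_complexity": 0.7,
     "availability": 0.5, "published": date(2022, 6, 1)},
]

# Organization context: weights chosen per risk appetite (case iv above).
weights = {"security_score": 0.6, "attack_complexity": 0.2, "availability": 0.2}

def patch_score(v):
    return sum(w * v[c] for c, w in weights.items())

# Sort by weighted score; among equal scores, older vulnerabilities come first.
ranked = sorted(vulns, key=lambda v: (-patch_score(v), v["published"]))
for v in ranked:
    print(v["id"], round(patch_score(v), 2), v["published"])
```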
  • 6.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Normalization Framework for Vulnerability Risk Management in Cloud (2021). In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021, IEEE, 2021, p. 99-106. Conference paper (Refereed)
    Abstract [en]

    Vulnerability Risk Management (VRM) is a critical element in cloud security that directly impacts cloud providers' security assurance levels. Today, VRM is a challenging process because of the dramatic increase in known vulnerabilities (+26% in the last five years) and because it is ever more dependent on the organization's context. Moreover, a vulnerability's severity score depends on the Vulnerability Database (VD) selected as a reference in VRM. All these factors introduce a new challenge for security specialists in evaluating and patching vulnerabilities. This study provides a framework to improve the classification and evaluation phases in vulnerability risk management when using multiple vulnerability databases as a reference. Our solution normalizes the severity score of each vulnerability based on the selected security assurance level. The results of our study highlight the role of the vulnerability databases in patch prioritization, showing the advantage of using multiple VDs.

    Download full text (pdf)
    fulltext
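
The normalization step can be pictured as rescaling database-specific severity scales onto one common range and then adjusting for the chosen assurance level. This is a hypothetical sketch; the database scales and assurance weights are assumptions, not the paper's framework.

```python
# Assumed min-max rescaling of severity scores from databases with different scales.
def normalize(score, src_min, src_max, target_max=10.0):
    """Rescale a database-specific score onto [0, target_max]."""
    return (score - src_min) / (src_max - src_min) * target_max

nvd_like = normalize(7.5, 0.0, 10.0)   # CVSS-style 0-10 scale
vendor = normalize(3.0, 1.0, 5.0)      # hypothetical vendor 1-5 scale

# Assumed adjustment for the selected security assurance level.
assurance_weight = {"low": 0.8, "medium": 1.0, "high": 1.2}
def adjusted(score, level):
    return min(10.0, score * assurance_weight[level])

print(nvd_like, vendor, adjusted(vendor, "high"))  # scores now comparable
```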
  • 7.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. City Network International AB, Sweden.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Normalization of Severity Rating for Automated Context-aware Vulnerability Risk Management (2020). In: Proceedings - 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion, ACSOS-C 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 200-205, article id 9196350. Conference paper (Refereed)
    Abstract [en]

    In the last three years, the unprecedented increase in discovered vulnerabilities ranked critical and high severity has raised new challenges in Vulnerability Risk Management (VRM). Indeed, identifying, analyzing and remediating this high rate of vulnerabilities is labour intensive, especially for enterprises dealing with complex computing infrastructures such as Infrastructure-as-a-Service providers. Hence, there is a demand for new criteria to prioritize vulnerability remediation and for new automated/autonomic approaches to VRM.

    In this paper, we address the above challenge by proposing an Automated Context-aware Vulnerability Risk Management (AC-VRM) methodology that aims to reduce the labour-intensive tasks of security experts and to prioritize vulnerability remediation on the basis of the organization's context rather than risk severity only. The proposed solution considers multiple vulnerability databases to obtain broad coverage of known vulnerabilities and to determine the vulnerability rank. After the description of the new VRM methodology, we focus on the problem of obtaining a single vulnerability score by normalization and fusion of the ranks obtained from multiple vulnerability databases. Our solution is a parametric normalization that accounts for organization needs/specifications.

    Download full text (pdf)
    fulltext
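
To make the rank-fusion idea concrete, the sketch below combines per-database ranks with organization-chosen weights into one score. The database names and weights are invented for illustration; the paper's parametric normalization may differ.

```python
# Sketch of weighted fusion of vulnerability ranks from multiple databases.
ranks = {
    "CVE-X": {"nvd": 1, "vendor_db": 3, "osint_db": 2},
    "CVE-Y": {"nvd": 2, "vendor_db": 1, "osint_db": 1},
}
db_weights = {"nvd": 0.5, "vendor_db": 0.3, "osint_db": 0.2}  # org parameters

def fused_rank(per_db):
    return sum(db_weights[db] * r for db, r in per_db.items())

# Lower fused rank = remediate first.
for cve, per_db in sorted(ranks.items(), key=lambda kv: fused_rank(kv[1])):
    print(cve, round(fused_rank(per_db), 2))
```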
  • 8.
    Ahmed, Syed Saif
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arepalli, Harshini Devi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Despite Covid-19's drawbacks, it has recently contributed to highlighting the significance of cloud computing. The great majority of enterprises and organisations have shifted to a hybrid mode that enables users or workers to access their work environment from any location. This made it possible for businesses to save on-premises costs by moving their operations to the cloud. It has become essential to allocate resources effectively, especially through predictive auto-scaling. Although many algorithms have been studied regarding predictive auto-scaling, further analysis and validation need to be done. The objectives of this thesis are to implement machine learning algorithms for predicting auto-scaling and to compare their performance on common grounds. The secondary objective is to find data connections amongst features within the dataset and evaluate their correlation coefficients. The methodology adopted for this thesis is experimentation. Experimentation was selected so that the auto-scaling algorithms could be tested in practical situations and their results compared to identify the best algorithm using the selected metrics. This experiment can assist in determining whether the algorithms operate as predicted. Metrics such as accuracy, F1-score, precision, recall, training time and root mean square error (RMSE) are calculated for the chosen algorithms: Random Forest (RF), Logistic Regression, Support Vector Machine and Naive Bayes Classifier. The correlation coefficients of the features in the data are also measured, which helped in increasing the accuracy of the machine learning model. In conclusion, the features related to our target variable (CPU usage, p95_scaling) often had high correlation coefficients compared to other features. The relationships between these variables could potentially be influenced by other variables that are unrelated to the target variable. Also, the experimentation shows that the optimal algorithm for determining how cloud resources should be scaled is the Random Forest Classifier.

    Download full text (pdf)
    Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation
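
The experimental pipeline described above can be outlined with scikit-learn: inspect feature correlations against the target, then train and score one of the compared classifiers (Random Forest). The data below is synthetic; only the target name p95_scaling is borrowed from the abstract.

```python
# Sketch: feature correlation inspection + Random Forest scaling prediction.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cpu_usage": rng.uniform(0, 100, 500),
    "mem_usage": rng.uniform(0, 100, 500),
})
df["p95_scaling"] = (df["cpu_usage"] > 70).astype(int)  # scale-out label

print(df.corr()["p95_scaling"])  # correlation of each feature with the target

X_tr, X_te, y_tr, y_te = train_test_split(
    df[["cpu_usage", "mem_usage"]], df["p95_scaling"], random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision", precision_score(y_te, pred),
      "recall", recall_score(y_te, pred),
      "F1", f1_score(y_te, pred))
```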
  • 9.
    Al burhan, Mohammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Differences between Dockerized Containers and Virtual Machines: A performance analysis for hosting web-applications in a virtualized environment (2020). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This is a bachelor thesis regarding the performance differences of hosting a web application in a virtualized environment. We compare virtual machines against containers and observe their resource usage in categories such as CPU, RAM and disk storage in idle state, and perform a range of computation experiments in which response times are measured over a series of request intervals. Response times are measured with the help of a web application created in Python. The experiments are performed under both normal and stressed conditions to give a better indication as to which virtualized environment outperforms the other under different scenarios.

    The results show that virtual machines and containers remained close to each other in response times during the first request interval, but the containers outperformed virtual machines in terms of resource usage while in idle state, placing less of a burden on the host computer. They were also significantly more rapid in terms of response times. This is most noticeable under stressed conditions, during which the virtual machine's response times almost doubled.

    Download full text (pdf)
    Differences between Dockerized Containers and Virtual Machines A performance analysis for hosting web-applications in a virtualized environment
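
A minimal sketch of the measurement loop implied by the method: issue a batch of requests against each environment and record mean response times. The URLs are placeholder addresses, not the thesis setup.

```python
# Sketch of a response-time measurement loop using only the standard library.
import statistics
import time
import urllib.request

def measure(url, n_requests=20):
    """Return the mean wall-clock response time over n_requests GETs."""
    times = []
    for _ in range(n_requests):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        times.append(time.perf_counter() - start)
    return statistics.mean(times)

# Placeholder endpoints standing in for the VM and container hosts.
for name, url in [("vm", "http://192.0.2.10:8000/"),
                  ("container", "http://192.0.2.11:8000/")]:
    print(name, f"{measure(url):.4f}s mean response time")
```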
  • 10.
    Alexandre, Rui Carlos Josino
    et al.
    UNIFESP, Brazil.
    Martins, Luiz Eduardo Galvao
    UNIFESP, Brazil.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cybersecurity Risk Assessment for Medium-Risk Drones: A Systematic Literature Review (2023). In: IEEE Aerospace and Electronic Systems Magazine, ISSN 0885-8985, E-ISSN 1557-959X, Vol. 38, no. 6, p. 28-43. Article, review/survey (Refereed)
    Abstract [en]

    The increased demand for Remotely Piloted Aircraft Systems (RPAS) in Beyond Visual Line-Of-Sight (BVLOS) operations gives rise to a set of concerns regarding cybersecurity that, if not addressed, can lead to the unsafe operation of RPASs. To assist the airworthiness evaluation performed by Civil Aviation Authorities (CAAs), we identified several processes that are used to evaluate the cybersecurity of RPAS. We conducted a Systematic Literature Review (SLR), selecting 30 papers (out of 211 screened) published during the past five years. The results of our SLR indicate the importance of cybersecurity to the safe operation of RPAS. It is evident that there is a lack of a systematic process to enable a cybersecurity review of RPAS. We observe that common cyber threats to RPAS are related to jamming, spoofing, and DoS/DDoS (Denial of Service/Distributed Denial of Service). Processes relevant to the assessment of RPAS cybersecurity exist; however, from our perspective they differ in their safety concerns. In addition, with only one exception, the methods have not been used, or their use has not been reported, in industrial applications. The most frequently cited vulnerabilities are those related to GPS and datalinks.

  • 11.
    Alluri, Gayathri Thanuja
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Evaluation of Apache Cassandra using AWS (Amazon Web Services) and GCP (Google Cloud Platform) (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: In the field of computer science and communication systems, cloud computing plays an important role in the Information Technology industry; it allows users to start small and increase resources when there is demand. AWS (Amazon Web Services) and GCP (Google Cloud Platform) are two different cloud platform providers. Many organizations still rely on structured databases like MySQL. Structured databases cannot handle huge numbers of requests and large volumes of data efficiently as requests and data increase. To overcome this problem, organizations shift to unstructured NoSQL databases like Apache Cassandra and MongoDB.

    Conclusions: From the literature review, I gained knowledge regarding cloud computing and the problems that exist in the cloud, which led to setting up this research to evaluate the performance of Cassandra on AWS and GCP. The conclusion from the experiment is that, as the thread count increases, throughput and latency increase gradually up to a thread count of 600 in both clouds. Comparing throughput values, AWS scales up better than GCP; in terms of latency, GCP scales up better than AWS.

    Keywords: Apache Cassandra, AWS, Google Cloud Platform, Cassandra Stress, Throughput, Latency

    Download full text (pdf)
    Performance Evaluation of Apache Cassandra using AWS (Amazon Web Services) and GCP (Google Cloud Platform)
  • 12.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mattsson, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey (2020). In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 28, no. 4, p. 1675-1707. Article in journal (Refereed)
    Abstract [en]

    Modern software development relies on a combination of development and re-use of technical assets, e.g. software components, libraries and APIs. In the past, re-use was mostly conducted with internal assets, but today external assets (open source, commercial off-the-shelf (COTS), and assets developed through outsourcing) are also common. This access to more asset alternatives presents new challenges regarding which assets to optimally choose and how to make this decision. To support decision-makers, decision theory has been used to develop decision models for asset selection. However, very little industrial data has been presented in the literature about the usefulness, or even perceived usefulness, of these models. Additionally, only limited information has been presented about which model characteristics determine practitioner preference towards one model over another.

    Objective: The objective of this work is to evaluate which characteristics of decision models for asset selection determine industrial practitioner preference for a model, when given the choice between a decision model of high precision and a model of high speed.

    Method: An industrial questionnaire survey is performed in which a total of 33 practitioners, of varying roles, from 18 companies are tasked to compare two decision models for asset selection. Textual analysis and formal and descriptive statistics are then applied to the survey responses to answer the study's research questions.

    Results: The study shows that the practitioners had a clear preference for the decision model that emphasised speed over the one that emphasised decision precision. This was because the preferred model was perceived as faster, had lower complexity, was more flexible in use for different decisions, was more agile in how it could be used in operation, emphasised people, emphasised "good enough" precision, and had the ability to fail fast if a decision turned out to be a failure: seven characteristics that the practitioners considered important for their acceptance of the model.

    Conclusion: Industrial practitioner preference, which relates to acceptance, of decision models for asset selection depends on multiple characteristics that must be considered when developing a model for different types of decisions, such as operational day-to-day decisions as well as more critical tactical or strategic decisions. The main contribution of this work is the seven identified characteristics, which can serve as industrial requirements for future research on decision models for asset selection.

    Download full text (pdf)
    fulltext
  • 13.
    Andersson, Dennis
    et al.
    Blekinge Institute of Technology.
    Artale, Jacques
    Blekinge Institute of Technology.
    Tracing Integration Errors to Upstream Development Activities: An exploratory study (2023). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The Eiffel Protocol provides traceability downstream and upstream of all activities that transpire inside the CI/CD pipeline. The traceability achieved by the Eiffel Protocol comes with great benefits, even though it does not cover all development activities, as it pertains only to the CI/CD pipeline. Our research aims to explore the idea of extending the Eiffel Protocol to cover all activities and to discuss what benefits could be seen, especially in the scope of reducing the number of integration failures. A literature study was first carried out to find the root causes of these failures. After the literature study, we conducted a focus group session to gather data about the potential benefits and problems of an extension, what analyses could be drawn, and how it could affect integration errors. Our results show that an extension is beneficial, as analyses made with the generated data can tackle some of the biggest issues found in software development teams, especially in larger organizations. The complexity, the cost involved, and the time needed to see a return on investment do, however, weigh it down. Thus, while it is beneficial, it is not enough for organizations to consider it a priority to integrate with their environments given the costs involved in doing so. Further implementation solutions need to be researched before it shows its worth.

    Download full text (pdf)
    Tracing Integration Errors to Upstream Development Activities - An exploratory study
  • 14.
    Andersson, Jonathan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hu, Yan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exploring the Impact of Menu Systems, Interaction Methods, and Sitting or Standing Posture on User Experience in Virtual Reality (2023). Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) has become increasingly important in both commercial and industrial settings. However, the user experience of the user interfaces and interaction methods in VR environments is often overlooked. This paper explores the impact of different menu systems, interaction methods, and the user's sitting or standing posture on user experience and cybersickness in VR applications. An experiment with two menu systems and two interaction methods in an implemented VR application was conducted with 20 participants. The results show that traditional, top-down panel menus with motion controls are the best combination regarding user experience. A sitting posture produces less severe simulator sickness symptoms than standing.

    Download full text (pdf)
    fulltext
  • 15.
    Andres, Bustamante
    et al.
    Tecnológico de Monterrey, MEX.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Jimenez-Perez, Julio Cesar
    Tecnológico de Monterrey, MEX.
    Rodriguez-Garcia, Alejandro
    Tecnológico de Monterrey, MEX.
    Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model (2021). In: Photonics, ISSN 2304-6732, Vol. 8, no. 4, article id 118. Article in journal (Refereed)
    Abstract [en]

    Machine learning (ML) has an impressive capacity to learn and analyze a large volume of data. This study aimed to train different algorithms to discriminate between healthy and pathologic corneal images by evaluating digitally processed spectral-domain optical coherence tomography (SD-OCT) corneal images. A set of 22 SD-OCT images belonging to a random set of corneal pathologies was compared to 71 healthy corneas (control group). A binary classification method was applied where three approaches of ML were explored. Once all images were analyzed, representative areas from every digital image were also extracted, processed and analyzed for a statistical feature comparison between healthy and pathologic corneas. The best performance was obtained from transfer learning-support vector machine (TL-SVM) (AUC = 0.94, SPE 88%, SEN 100%) and transfer learning-random forest (TL-RF) method (AUC = 0.92, SPE 84%, SEN 100%), followed by convolutional neural network (CNN) (AUC = 0.84, SPE 77%, SEN 91%) and random forest (AUC = 0.77, SPE 60%, SEN 95%). The highest diagnostic accuracy in classifying corneal images was achieved with the TL-SVM and the TL-RF models. In image classification, CNN was a strong predictor. This pilot experimental study developed a systematic mechanized system to discern pathologic from healthy corneas using a small sample.

    Download full text (pdf)
    fulltext
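
The TL-SVM pipeline can be pictured as a pretrained CNN acting as a feature extractor whose outputs feed a classical SVM. In the sketch below the CNN features are simulated with random vectors so the snippet runs standalone; the class sizes (71 vs. 22) mirror the abstract, everything else is an assumption.

```python
# Sketch of the transfer-learning + SVM idea with simulated CNN features.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, (71, 512))     # stand-in features, 71 controls
pathologic = rng.normal(0.8, 1.0, (22, 512))  # stand-in features, 22 pathologies
X = np.vstack([healthy, pathologic])
y = np.array([0] * 71 + [1] * 22)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
svm = SVC(probability=True).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
```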
  • 16.
    Andres, Bustamante
    et al.
    Tecnológico de Monterrey, MEX.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Rodriguez-Garcia, Alejandro
    Tecnológico de Monterrey, MEX.
    Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model (2019). Conference paper (Other academic)
  • 17.
    Anwar, Mahwish
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    A comparison of Unsupervised Learning Algorithms for Intrusion Detection in IEC 104 SCADA Protocol (2021). In: Proceedings - International Conference on Machine Learning and Cybernetics, IEEE Computer Society, 2021. Conference paper (Refereed)
    Abstract [en]

    The power grid is a mesh of thousands of sensors, embedded devices, and terminal units that communicate over different media. The heterogeneity of modern and legacy equipment calls for attention to diverse network security measures. Critical infrastructure employs different security measures to detect and prevent adversaries, e.g., through signature-based tools. These approaches lack the potential to identify unknown attacks. Machine learning has the prospect of addressing novel attack vectors. This paper systematically evaluates the efficacy of learning algorithms from different families for intrusion detection in the IEC 60870-5-104 protocol. One-class SVM and k-Nearest Neighbour unsupervised learning models show small potential when tested on the unseen IEC 104 dataset, with Area Under the Curve scores of 0.64 and 0.59 and Matthews Correlation Coefficient values of 0.3 and 0.2, respectively. The experimental results suggest little feasibility of the evaluated unsupervised learning approaches for anomaly detection in IEC 104 communication and recommend coupling them with other anomaly detection techniques. © 2021 IEEE.

    Download full text (pdf)
    fulltext
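
One of the evaluated setups can be sketched as a One-Class SVM fitted on normal traffic only and scored with the Matthews correlation coefficient. The features below are synthetic stand-ins for the IEC 60870-5-104 traffic attributes used in the study.

```python
# Sketch: One-Class SVM anomaly detection scored with MCC.
import numpy as np
from sklearn.metrics import matthews_corrcoef
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
normal_train = rng.normal(0, 1, (500, 4))       # benign traffic features
test = np.vstack([rng.normal(0, 1, (100, 4)),   # benign
                  rng.normal(4, 1, (20, 4))])   # attack-like outliers
y_true = np.array([1] * 100 + [-1] * 20)        # 1 = normal, -1 = anomaly

ocsvm = OneClassSVM(nu=0.05).fit(normal_train)  # trained on normal data only
y_pred = ocsvm.predict(test)                    # returns +1 / -1 per sample
print("MCC:", matthews_corrcoef(y_true, y_pred))
```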
  • 18.
    Anwar, Mahwish
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Improving anomaly detection in SCADA network communication with attribute extension (2022). In: Energy Informatics, E-ISSN 2520-8942, Vol. 5, no. 1, article id 69. Article in journal (Refereed)
    Abstract [en]

    Network anomaly detection for critical infrastructure supervisory control and data acquisition (SCADA) systems is the first line of defense against cyber-attacks. Often hybrid methods, such as machine learning combined with signature-based intrusion detection, are employed to improve detection results. Here an attempt is made to enhance the support vector-based outlier detection method by leveraging behavioural attribute extension of the network nodes. The network nodes are modeled as graph vertices to construct related attributes that enhance network characterisation and potentially improve unsupervised anomaly detection ability for SCADA networks. IEC 104 SCADA protocol communication data with good domain fidelity is utilised for empirical testing. The results demonstrate that the proposed approach achieves significant improvements over the baseline approach (average F1 score increased from 0.6 to 0.9, and Matthews correlation coefficient (MCC) from 0.3 to 0.8). The achieved outcome also surpasses the unsupervised scores of related literature. For critical networks, the identification of attacks is indispensable. The results show an insignificant missed-alert rate (0.3% on average), the lowest among related works. The gathered results show that the proposed approach can expose rogue SCADA nodes reasonably well and assist in further pruning the identified unusual instances.

    Download full text (pdf)
    fulltext
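
The attribute-extension idea can be illustrated with networkx: model the nodes as graph vertices and derive structural attributes that are appended to each node's traffic features before outlier detection. The edge list is invented; the paper derives the graph from IEC 104 communication flows, and its exact attribute set may differ.

```python
# Sketch: graph-derived behavioural attributes for network nodes.
import networkx as nx

edges = [("rtu1", "scada"), ("rtu2", "scada"), ("rtu3", "scada"),
         ("rtu1", "rtu2"), ("intruder", "rtu1")]
G = nx.Graph(edges)

betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)
for node in G.nodes:
    # Structural attributes to append to the node's traffic feature vector.
    features = [G.degree[node], betweenness[node], clustering[node]]
    print(node, features)
```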
  • 19. Arlebrink, Ludvig
    et al.
    Linde, Fredrik
    Image Quality-Driven Level of Detail Selection on a Triangle Budget (2018). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Level of detail is an optimization technique used by several modern games. Level of detail systems use simplified triangular meshes to determine the optimal combination of 3D models to use in order to meet a user-defined criterion for achieving fast performance. Prior work has also pre-computed level of detail settings to apply only the most optimal settings for any given view in a 3D scene.

    Objectives. The aim of this thesis is to determine the difference in image quality between the custom level of detail pre-processing approach proposed in this paper and the level of detail system built into the game engine Unity. This is investigated by implementing a framework in Unity for the proposed level of detail pre-processing approach and designing representative test scenes to collect all data samples. Once the data is collected, the image quality produced by the proposed approach is compared to that of Unity's existing level of detail approach using perceptual-based metrics.

    Methods. The method used is an experiment. Unity's method was chosen because of the popularity of the engine, and the proposed level of detail pre-processing approach was also implemented in Unity to allow the fairest possible comparison with Unity's implementation. The two approaches differ only in how the level of detail is selected; the rest of the rendering pipeline is exactly the same.

    Results. The pre-processing time ranged from 13 to 30 hours. The results showed only a small difference in image quality between the two approaches; Unity's built-in system provides better overall image quality in two out of three test scenes.

    Conclusions. Due to the pre-processing time and the lack of overall improvement, it was concluded that the proposed level of detail pre-processing approach is not feasible.

    Download full text (pdf)
    BTH2018Arlebrink
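
For intuition, a generic greedy selector under a triangle budget is sketched below: every model starts at its coarsest level, and the remaining triangles go to the refinement with the best quality gain per triangle. This is not the thesis' pre-processed, image-quality-driven selector, just the standard budgeted-LOD idea it builds on.

```python
# Greedy LOD selection under a triangle budget (illustrative only).
def select_lods(models, budget):
    # models: {name: [(triangles, quality), ...]} ordered coarse -> fine
    choice = {m: 0 for m in models}              # start at coarsest level
    used = sum(models[m][0][0] for m in models)
    while True:
        best = None
        for m, lvl in choice.items():
            if lvl + 1 < len(models[m]):
                dt = models[m][lvl + 1][0] - models[m][lvl][0]  # extra triangles
                dq = models[m][lvl + 1][1] - models[m][lvl][1]  # quality gain
                if used + dt <= budget and (best is None or dq / dt > best[0]):
                    best = (dq / dt, m, dt)
        if best is None:
            return choice, used
        _, m, dt = best
        choice[m] += 1
        used += dt

models = {"statue": [(500, 0.3), (2000, 0.7), (8000, 0.9)],
          "tree":   [(200, 0.4), (1000, 0.8)]}
print(select_lods(models, budget=9000))
```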
  • 20.
    Asritha, Kotha Sri Lakshmi Kamakshi
    Blekinge Institute of Technology, Faculty of Computing.
    Comparing Random forest and Kriging Methods for Surrogate Modeling (2020). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The issue with conducting real experiments in design engineering is the cost of finding an optimal design that fulfills all design requirements and constraints. An alternative to real experiments is computer-aided design modeling combined with computer-simulated experiments. These simulations are conducted to understand functional behavior and to predict possible failure modes in design concepts. However, these simulations may take minutes, hours, or days to finish. In order to reduce the time consumption and the number of simulations required for design space exploration, surrogate modeling is used.

    The motive of surrogate modeling is to replace the original system with an approximation function of the simulations that can be computed quickly. The process of surrogate model generation includes sample selection, model generation, and model evaluation. Using surrogate models in design engineering can help reduce design cycle times and cost by enabling rapid analysis of alternative designs.

    Selecting a suitable surrogate modeling method for a given function with specific requirements is possible by comparing different surrogate modeling methods. These methods can be compared using different application problems and evaluation metrics. In this thesis, we compare the random forest model and the kriging model based on prediction accuracy. The comparison is performed using mathematical test functions.

    This thesis conducted quantitative experiments to investigate the performance of the methods. The experimental analysis found that the kriging models have higher accuracy compared to random forests. Furthermore, the random forest models have shorter execution times compared to kriging for the studied mathematical test problems.

    Download full text (pdf)
    Comparing Random forest and Kriging Methods for Surrogate Modeling
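
The comparison can be reproduced in miniature with scikit-learn, where kriging corresponds to Gaussian process regression; the 1-D sine test function below is an assumption standing in for the thesis' mathematical test functions.

```python
# Sketch: random forest vs. kriging (Gaussian process) surrogates on a test function.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X_train = rng.uniform(0, 10, (40, 1))            # sampled design points
y_train = np.sin(X_train).ravel()                # "simulation" responses
X_test = np.linspace(0, 10, 200).reshape(-1, 1)
y_test = np.sin(X_test).ravel()

for name, model in [("random forest", RandomForestRegressor(random_state=3)),
                    ("kriging", GaussianProcessRegressor())]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: MSE = {mse:.4f}")
```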
  • 21.
    Avdic, Adnan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ekholm, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models: An unsupervised learning approach in time-series data (2019). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background: Detecting anomalies in time-series data is a task that can be done with the help of data-driven machine learning models. This thesis investigates if, and how well, different machine learning models, with an unsupervised approach, can detect anomalies in the e-transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system.

    Objectives: The objective of this thesis is to compare four different machine learning models in order to find the most relevant one. The best performing models are decided by the evaluation metric F1-score. An intersection of the best models is also evaluated in order to decrease the number of false positives and make the model more precise.

    Methods: A relevant time-series data sample with 10-minute interval data points from the Ericsson Wallet Platform was used. A number of steps were taken, such as handling data, pre-processing, normalization, training and evaluation. Two features relevant to finding delays in the system were trained separately as one-dimensional data sets: Mean wait (ms) and Mean * N, where N is the number of calls to the system. The evaluation metrics used are true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score and the Jaccard index. The Jaccard index is a metric which reveals how similar the detections of the algorithms are. Since the detection is binary, each data point in the time-series data is classified.

    Results: The results reveal the two best performing models with regard to the F1-score. The intersection evaluation reveals if and how well a combination of the two best performing models can reduce the number of false positives.

    Conclusions: The conclusion of this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.

    Download full text (pdf)
    BTH2019EkholmAvdic
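
Two of the stated objectives, intersecting the best models to cut false positives and comparing detections with the Jaccard index, reduce to a few lines on binary flag vectors; the flags below are made up.

```python
# Sketch: intersecting two detectors and measuring their Jaccard similarity.
import numpy as np

det_a = np.array([1, 1, 0, 1, 0, 0, 1])   # model A anomaly flags per data point
det_b = np.array([1, 0, 0, 1, 0, 1, 1])   # model B anomaly flags per data point

intersection = det_a & det_b               # both models must agree -> fewer FPs
jaccard = (det_a & det_b).sum() / (det_a | det_b).sum()
print("combined flags:", intersection, "Jaccard:", round(jaccard, 2))
```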
  • 22.
    Avritzer, Alberto
    et al.
    eSulab Solutions, USA.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Trubiani, Catia
    Gran Sasso Science Institute, ITA.
    Camilli, Matteo
    Free University of Bozen-Bolzano, ITA.
    Janes, Andrea
    Free University of Bozen-Bolzano, ITA.
    Russo, Barbara
    Free University of Bozen-Bolzano, ITA.
    van Hoorn, André
    University of Hamburg, DEU.
    Heinrich, Robert
    Karlsruhe Institute of Technology, DEU.
    Rapp, Martina
    FZI Forschungszentrum Informatik, DEU.
    Henß, Jörg
    FZI Forschungszentrum Informatik, DEU.
    Chalawadi, Ram Kishan
    Ericsson AB, SWE.
    Scalability testing automation using multivariate characterization and detection of software performance antipatterns (2022). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 193, article id 111446. Article in journal (Refereed)
    Abstract [en]

    Context: Software Performance Antipatterns (SPAs) research has focused on algorithms for their characterization, detection, and solution. Existing algorithms are based on the analysis of runtime behavior to detect trends in several monitored variables, such as system response time and CPU utilization. However, the lack of computationally efficient methods currently limits their integration into modern agile practices for detecting SPAs in large-scale systems. Objective: In this paper, we extended our previously proposed approach for automated SPA characterization and detection, designed to support continuous integration/delivery/deployment (CI/CDD) pipelines, with the goal of addressing the lack of computationally efficient algorithms. Method: We introduce a machine learning-based approach to improve the detection of SPAs and the interpretation of the approach's results. The approach is complemented with a simulation-based methodology to analyze different architectural alternatives and measure the precision and recall of our approach. Our approach includes SPA statistical characterization using a multivariate analysis of load testing experimental results to identify the services that have the largest impact on system scalability. Results: To show the effectiveness of our approach, we have applied it to a large complex telecom system at Ericsson. We have built a simulation model of the Ericsson system and evaluated the introduced methodology using simulation-based SPA injection. For this system, we are able to automatically identify the top five services that represent scalability choke points. We applied two machine learning algorithms for the automated detection of SPAs. Conclusion: We contribute to the state of the art by introducing a novel approach to support computationally efficient SPA characterization and detection, which has been applied to a large complex system using performance testing data. We have compared the computational efficiency of the proposed approach with state-of-the-art heuristics and found that the cost of the approach introduced in this paper grows linearly, which is a significant improvement over existing techniques. © 2022 Elsevier Inc.

  • 23.
    Bakhtyar, Shoaib
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Designing Electronic Waybill Solutions for Road Freight Transport (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In freight transportation, a waybill is an important document that contains essential information about a consignment. The focus of this thesis is on a multi-purpose electronic waybill (e-Waybill) service, which can provide the functions of a paper waybill and is capable of storing, at least, the information present in a paper waybill. In addition, the service can be used to support other existing Intelligent Transportation System (ITS) services by utilizing synergies with them. Additionally, information entities from the e-Waybill service are investigated for the purpose of knowledge-building concerning freight flows.

    A systematic review of the state of the art of the e-Waybill service reveals several limitations, such as a limited focus on supporting ITS services. Five different conceptual e-Waybill solutions (which can be seen as abstract system designs for implementing the e-Waybill service) are proposed. The solutions are investigated for functional and technical (non-functional) requirements, which can potentially impose constraints on a potential system for implementing the e-Waybill service. Further, the service is investigated for information and functional synergies with other ITS services. For the information synergy analysis, the required input information entities for different ITS services are identified; if at least one information entity can be provided by an e-Waybill at the right location, we regard it as a synergy. Additionally, a service design method has been proposed for supporting the process of designing new ITS services, which primarily utilizes functional synergies between the e-Waybill and different existing ITS services. The suggested method is applied to designing a new ITS service, the Liability Intelligent Transport System (LITS) service. The purpose of the LITS service is to support the process of identifying when and where a consignment has been damaged and who was responsible when the damage occurred. Furthermore, information entities from e-Waybills are utilized for building improved knowledge concerning freight flows. A freight and route estimation method has been proposed for building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.

    The results from this thesis can be used to support the choice of a practical e-Waybill service implementation that has the potential to provide high synergy with ITS services. This may lead to higher utilization of ITS services and more sustainable transport, e.g., in terms of reduced congestion and emissions. Furthermore, the implemented e-Waybill service can be an enabler for collecting consignment and traffic data and converting the data into useful traffic information. In particular, the service can lead to increasing amounts of digitally stored data about consignments, which can lead to improved knowledge of the movement of freight and trucks. This knowledge may be helpful when making decisions concerning road taxes, fees, and infrastructure investments.

    Download full text (pdf)
    fulltext
  • 24.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Electronic Waybill Solutions: A Systematic Review. Manuscript (preprint) (Other academic)
    Abstract [en]

    A critical component in freight transportation is the waybill, a transport document that contains essential information about a consignment. Actors within the supply chain handle not only the freight but also vast amounts of information, which are often unclear due to various errors. An electronic waybill (e-Waybill) solution replaces the paper waybill and improves on it, e.g., by ensuring error-free storage and flow of information. In this paper, a systematic review using the snowball method is conducted to investigate the state of the art of e-Waybill solutions. After performing three iterations of the snowball process, we identified eleven studies for further evaluation and analysis due to their strong relevance. The studies are mapped in relation to each other and a classification of the e-Waybill solutions is constructed. Most of the studies identified in our review support the benefits of electronic documents, including e-Waybills. Most research papers reviewed support EDI (Electronic Data Interchange) for implementing e-Waybills. However, limitations exist due to high costs, which make EDI less affordable for small organizations. Recent studies point to alternative technologies, which we list in this paper. Additionally, we show that most studies focus on the administrative benefits, while few studies investigate the potential of e-Waybill information for achieving services such as estimated time of arrival and real-time tracking and tracing.

  • 25.
    Benjaminsson, Axel
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Blockchain Applicability in IoT Systems (2021). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. The Internet of Things (IoT) combines sensors and connectivity and can be applied to a wide variety of things. The security of IoT devices is usually constrained by their limited hardware. When IoT handles sensitive data, security becomes an important challenge. Blockchain technologies enable safe transactions between two parties without involving any third party. The technology provides integrity to the two parties, ensuring that the transaction is valid.

    Objectives. The objective of this paper is to examine the applicability of a blockchain in an IoT system. The paper aims to map the current threats to IoT, identify the security benefits of blockchain technologies, and investigate an IoT home security system on the market.

    Methods. The threat map of IoT and the benefits of blockchain technologies were determined by studying existing literature. IoT vulnerabilities were assessed with respect to exploitation difficulty and severity. The security level of the IoT home security system was evaluated with a penetration test.

    Results. The most critical vulnerabilities of IoT today are weak guessable passwords, insecure updating mechanisms, insecure ecosystem interfaces, insecure data transfer and storage, and the use of insecure or outdated components. A big reason for the poor security is the lack of integrity in IoT systems. Blockchains can provide integrity, authenticity, transparency, decentralisation and more. The investigated IoT home security system has countermeasures implemented to protect against a basic level of threats.

    Conclusion. The majority of the current threats to IoT can be mitigated with the integrity provided by a blockchain. From a security perspective, blockchain technologies prove to be worth considering.

    Download full text (pdf)
    fulltext
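
A toy hash chain illustrates the integrity property the thesis leans on: tampering with any recorded IoT event breaks verification of the chain. Real blockchains add consensus and distribution; this sketch shows only the linkage idea.

```python
# Minimal hash chain: each block commits to its event and the previous hash.
import hashlib
import json

def make_block(event: dict, prev_hash: str) -> dict:
    body = {"event": event, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def block_hash(block: dict) -> str:
    body = {"event": block["event"], "prev_hash": block["prev_hash"]}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def valid(chain) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block content was altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # chain link broken
    return True

chain = [make_block({"sensor": "door", "state": "locked"}, "0" * 64)]
chain.append(make_block({"sensor": "door", "state": "open"}, chain[-1]["hash"]))

print(valid(chain))                       # True
chain[0]["event"]["state"] = "open"       # tamper with recorded history
print(valid(chain))                       # False
```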
  • 26.
    Bergenholtz, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moss, Andrew
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Finding a needle in a haystack: A comparative study of IPv6 scanning methods (2019). In: 2019 International Symposium on Networks, Computers and Communications (ISNCC 2019), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    It has previously been assumed that the size of an IPv6 network would make it impossible to scan the network for vulnerable hosts. Recent work has shown this to be false, and several methods for scanning IPv6 networks have been suggested. However, most of these are based on external information like DNS, or pattern inference, which requires large amounts of known IP addresses. In this paper, DeHCP, a novel approach based on delimiting IP ranges with closely clustered hosts, is presented and compared to three previously known scanning methods. The method is shown to work in an experimental setting with results comparable to those of the previously suggested methods, and is also shown to have the advantage of not being limited to a specific protocol or probing method. Finally, we show that the scan can be executed across multiple VLANs.

    Download full text (pdf)
    isncc2019-ipv6
  • 27.
    Bexell, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Source Code Readability: A Mapping Study (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background: Building software systems is an iterative and collaborative endeavor, requiring developers not only to write code but also to maintain, expand, fix and enhance code already written. In order to do so, reading code is a central activity, and it is therefore important that code is written in a manner that makes it readable.

    Objectives: To map the state of the art of software source code readability, find the definitions and methods used to measure it, provide an overview of the kinds of factors considered to impact software source code readability, and compare this to practitioners' experiences of software source code readability.

    Methods: A systematic literature review of 76 studies in 72 papers from the last 40 years, explicitly concerning software source code readability, is compared with the results of five interviews with practitioners, of which three are case studies of commits explicitly targeting readability.

    Results: While individual factors' contribution towards readability is studied with some success, more general modelling studies often suffer from methodological problems, making them difficult to apply in practice or in studies of the correlation between software source code readability and other metrics.

    Conclusions: Key elements of the state-of-the-art have been implemented in practice, however, readability models are not used by the practitioners in this study. Several factors mentioned by practitioners are not considered by the studies included, and further qualitative study of software development practitioners may be needed.

    Download full text (pdf)
    Software Source Code Readability A Mapping Study
  • 28. Bhupathiraju, Praneeth Varma
    Deep Neural Networks Based Disaggregation of Swedish Household Energy Consumption (2020). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: In recent years, households have increased their energy consumption to levels that are no longer sustainable. This increase has created a dire need to find ways to use energy more sustainably. One of the main causes of this unsustainable usage is that users are not well acquainted with the energy consumed by the smart appliances (dishwasher, refrigerator, washing machine, etc.) in their households, so household users need to be informed of the energy their appliances consume. For energy analytics companies, this means analyzing the energy consumed by the smart appliances present in a house. To achieve this, Kelly et al. [7] performed energy disaggregation using deep neural networks and produced good results. Zhang et al. [8] went a step further in improving the deep neural networks proposed by Kelly et al. The task was performed using the non-intrusive load monitoring (NILM) technique.

    Objectives: The thesis aims to assess the performance of the deep neural networks proposed by Kelly et al. [7] and Zhang et al. [8]. We use these deep neural networks to disaggregate dishwasher energy consumption, in the presence of vampire loads such as electric heaters, in a Swedish household setting. We also try to identify the training time of the proposed deep neural networks.

    Methods: An intensive literature review was conducted to identify state-of-the-art deep neural network techniques used for energy disaggregation. All experiments were performed on a dataset provided by the energy analytics company Eliq AB, collected from 4 households in Sweden. All the households contain a vampire load, an electric heater, whose power consumption is visible in the main power sensor. A separate smart plug was used to collect the dishwasher power consumption data. Each algorithm was trained on the data from two houses, with the remaining two houses used for testing. The metrics used for analyzing the algorithms are Accuracy, Recall, Precision, Root mean square error (RMSE), and F1 measure. These metrics help us identify the algorithm best suited for the disaggregation of dishwasher energy in our case.

    Results: The results of our study show that the Gated recurrent unit (GRU) performed best compared to the other neural networks in our study: Simple recurrent network (SRN), Convolutional neural network (CNN), Long short-term memory (LSTM) and Recurrent convolutional neural network (RCNN). The Accuracy, RMSE and F1 score of the GRU algorithm are higher than those of the other algorithms. However, if training time rather than F1 score and RMSE is the deciding metric, the Simple recurrent network outperforms all the other networks with an average training time of 19.34 minutes.
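
    For reference, the metrics reported above follow their standard definitions; the following is a minimal TypeScript sketch of RMSE and F1 (an illustration of the formulas only, not the thesis code):

        // Root mean square error between predicted and actual power readings.
        function rmse(predicted: number[], actual: number[]): number {
          const sumSq = predicted.reduce((acc, p, i) => acc + (p - actual[i]) ** 2, 0);
          return Math.sqrt(sumSq / predicted.length);
        }

        // F1 measure from binary on/off activations, using F1 = 2TP / (2TP + FP + FN).
        function f1(predicted: boolean[], actual: boolean[]): number {
          let tp = 0, fp = 0, fn = 0;
          predicted.forEach((p, i) => {
            if (p && actual[i]) tp++;
            else if (p) fp++;
            else if (actual[i]) fn++;
          });
          return tp === 0 ? 0 : (2 * tp) / (2 * tp + fp + fn);
        }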

    Download full text (pdf)
    Deep Neural Networks Based Disaggregation of Swedish Household Energy Consumption
  • 29.
    Bigdan, Andrii
    et al.
    Taras Shevchenko National University of Kyiv, Ukraine.
    Babenko, Tetiana
    Taras Shevchenko National University of Kyiv, Ukraine.
    Hnatiienko, Hryhorii
    Taras Shevchenko National University of Kyiv, Ukraine.
    Baranovskyi, Oleksii
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Myrutenko, Larysa
    Taras Shevchenko National University of Kyiv, Ukraine.
    Detection of Cybersecurity Events Based on Entropy Analysis2022In: CEUR Workshop Proceedings / [ed] Khikmetov A., Daineko Y., Ipalakova M., Technical University of Aachen , 2022Conference paper (Refereed)
    Abstract [en]

    As a rule, modern approaches to protecting against cyberattacks cannot guarantee that applications and operating systems will not be compromised. Therefore, detecting and identifying vulnerabilities, and acting to avoid or mitigate their impact on businesses and cybersecurity processes, are critical for the operation of information systems and the information security management system. To identify a possible attack vector, two kinds of methods are typically applied: those that detect misuse and those that detect anomalies. This paper investigates the possibility of identifying the alleged attack vector based on entropy analysis of cybersecurity events. The research results presented in the paper allow us to determine the required width of the sliding window and confirm that such entropy analysis detects exceeded security thresholds and anomalies in the operation of operating systems and applications and, accordingly, probable attack vectors. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
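
    As an illustration of the sliding-window entropy idea described above, here is a generic TypeScript sketch (not the authors' method; the event encoding, baseline and threshold are assumptions):

        // Shannon entropy of a window of discrete event identifiers.
        function entropy(win: string[]): number {
          const counts = new Map<string, number>();
          for (const e of win) counts.set(e, (counts.get(e) ?? 0) + 1);
          let h = 0;
          for (const c of counts.values()) {
            const p = c / win.length;
            h -= p * Math.log2(p);
          }
          return h;
        }

        // Slide a fixed-width window over the event stream and flag positions
        // where entropy deviates from a baseline by more than a threshold.
        function flagAnomalies(events: string[], width: number,
                               baseline: number, threshold: number): number[] {
          const flagged: number[] = [];
          for (let i = 0; i + width <= events.length; i++) {
            if (Math.abs(entropy(events.slice(i, i + width)) - baseline) > threshold) {
              flagged.push(i);
            }
          }
          return flagged;
        }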

    Download full text (pdf)
    fulltext
  • 30.
    Bjäreholt, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    RISC-V Compiler Performance:A Comparison between GCC and LLVM/clang2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    RISC-V is a new open-source instruction set architecture (ISA) whose first mass-produced processors appeared in December 2016. It focuses on both efficiency and performance, and differs from other open-source architectures by not having a copyleft license, permitting vendors to freely design, manufacture and sell RISC-V chips without any fees and without having to share their modifications to the reference implementations of the architecture. The goal of this thesis is to evaluate the performance of the GCC and LLVM/clang compilers' support for the RISC-V target and their ability to optimize for the architecture. The performance is evaluated by executing the CoreMark and Dhrystone benchmarks, both popular industry-standard programs for evaluating performance on embedded processors. They are run with both the GCC and LLVM/clang compilers at different optimization levels and compared in performance per clock to the ARM architecture, which is mature yet rather similar to RISC-V. Compiler support for the RISC-V target is still in development, and the focus of this thesis is the current performance differences between the GCC and LLVM compilers on this architecture. The platforms we execute the benchmarks on are the Freedom E310 processor on the SiFive HiFive1 board for RISC-V, and an ARM Cortex-M4 processor by Freescale on the Teensy 3.6 board. The Freedom E310 is almost identical to the reference Berkeley Rocket RISC-V design, and the ARM Cortex-M4 processor has a similar clock speed and is aimed at a similar target audience. The results show that the -O2 and -O3 optimization levels on GCC for RISC-V performed very well in comparison to our ARM reference. On the lower optimization levels (-O1; -O0, which applies no optimizations; and -Os, which adds optimizations for smaller executable code size) GCC performs much worse than ARM: 46% of the ARM performance at -O1, 8.2% at -Os and 9.3% at -O0 on the CoreMark benchmark, with similar results in Dhrystone except at -O1, where it performed as well as ARM. With optimizations turned off (-O0), GCC for RISC-V reached 9.2% of the ARM performance in CoreMark and 11% in Dhrystone, which was unexpected and needs further investigation. LLVM/clang, on the other hand, crashed when trying to compile our CoreMark benchmark, and on Dhrystone the optimization options had a very minor impact on performance, reaching 6.0% of the performance of GCC at -O3 and 5.6% of the performance of ARM at -O3; even with optimizations it was still slower than GCC without optimizations. In conclusion, the performance of RISC-V with the GCC compiler at the higher optimization levels is very good considering how young the RISC-V architecture is. There does, however, seem to be room for improvement at the lower optimization levels, which in turn could also increase the performance of the higher optimization levels. For the LLVM/clang compiler, on the other hand, a lot of work remains to make it competitive with the GCC compiler and other architectures in both performance and stability. Why -O0 is so considerably slower on RISC-V than on ARM was also very unexpected and needs further investigation.

    Download full text (pdf)
    fulltext
  • 31.
    Bonnier, Victor
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparison between OpenStack virtual machines and Docker containers in regards to performance2020Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Cloud computing is a fast-growing technology that more and more companies have started to use over the years. When deploying a cloud computing application it is important to know what kind of technology you should use. Two popular technologies are containers and virtual machines.

    The objective of this study was to find out how the performance differs between Docker containers and OpenStack virtual machines with regard to memory usage, CPU utilization, boot time and throughput from a scalability perspective, when scaling between two and four instances of containers and virtual machines.

    The comparison was done by having two different virtual machines running: one with Docker that ran the containers, and another with OpenStack that ran a stack of virtual machines. To gather the data from the virtual machines I used the command ”htop”, and to get the data from the containers I used the command ”docker stats”.

    The results from the experiment favored the Docker containers: the boot time of the virtual machines was between 280 and 320 seconds, while the containers booted in 5 to 8 seconds. The memory usage of the virtual machines was more than double that of the containers. CPU utilization and throughput favored the containers, and the performance gap increased when scaling the application out to four instances in all cases except for throughput when adding information to a database.

    The conclusion that can be drawn from this is that Docker containers are favored over OpenStack virtual machines from a performance perspective. There are still other aspects to consider when choosing which technology to use when deploying a cloud application, such as security.

    Download full text (pdf)
    fulltext
  • 32.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Knowledge Classification for Supporting Effort Estimation in Global Software Engineering Projects2015Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Global Software Engineering (GSE) has become a widely applied operational model for the development of software systems; it can increase profits and decrease time-to-market. However, there are many challenges associated with developing software in a globally distributed fashion. There is evidence that these challenges affect many processes related to software development, such as effort estimation. To the best of our knowledge, there are no empirical studies gathering evidence on effort estimation in the GSE context. In addition, there is no common terminology for classifying GSE scenarios with a focus on effort estimation.

    Objective: The main objective of this thesis is to support effort estimation in the GSE context by providing a taxonomy to classify the existing knowledge in this field.

    Method: Systematic literature review (to identify and analyze the state of the art), survey (to identify and analyze the state of the practice), systematic mapping (to identify practices to design software engineering taxonomies), and literature survey (to complement the states of the art and practice) were the methods employed in this thesis.

    Results: The results on the states of the art and practice show that the effort estimation techniques employed in the GSE context are the same techniques used in the collocated context. It was also identified that global aspects, e.g. temporal, geographical and socio-cultural distances, are accounted for as cost drivers, although it is not clear how they are measured. As a result of the conducted mapping study, we report a method that can be used to design new SE taxonomies. The aforementioned results were combined to extend and specialize an existing GSE taxonomy, making it suitable for effort estimation. The usage of the specialized GSE effort estimation taxonomy was illustrated by classifying 8 finished GSE projects. The results show that the specialized taxonomy proposed in this thesis is comprehensive enough to classify GSE projects with a focus on effort estimation.

    Conclusions: The taxonomy presented in this thesis will help researchers and practitioners to report new research on effort estimation in the GSE context; researchers and practitioners will be able to gather evidence, compare new studies and find new gaps more easily. The findings of this thesis show that more research must be conducted on effort estimation in the GSE context. For example, the way the cost drivers are measured should be further investigated. It is also necessary to conduct further research to clarify the role and impact of sourcing strategies on the accuracy of effort estimates. Finally, we believe that it is possible to design an instrument, based on the specialized GSE effort estimation taxonomy, that helps practitioners to perform the effort estimation process in a way tailored to the specific needs of the GSE context.

    Download full text (pdf)
    fulltext
  • 33.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Container Orchestration: A Survey2019In: Systems Modeling: Methodologies and Tools / [ed] Antonio Puliafito, Kishor S. Trivedi, Springer , 2019, p. 221-235Chapter in book (Refereed)
    Abstract [en]

    Container technologies are changing the way cloud platforms and distributed applications are architected and managed. Containers are used to run enterprise, scientific and big data applications, to architect IoT and edge/fog computing systems, and by cloud providers to internally manage their infrastructure and services. However, we are far away from the maturity stage and there are still many research challenges to be solved. One of them is container orchestration that makes it possible to define how to select, deploy, monitor, and dynamically control the configuration of multi-container packaged applications in the cloud. This paper surveys the state-of-the-art solutions and discusses research challenges in autonomic orchestration of containers. A reference architecture of an autonomic container orchestrator is also proposed. © 2019, Springer International Publishing AG, part of Springer Nature.

  • 34.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    Spindox S.p.A, ITA.
    Auto-scaling of Containers: The Impact of Relative and Absolute Metrics2017In: 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems, FAS*W 2017 / [ed] IEEE, IEEE, 2017, p. 207-214, article id 8064125Conference paper (Refereed)
    Abstract [en]

    Today, the cloud industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of run-time resource management. This paper addresses the problem of selecting the most appropriate performance metrics to activate auto-scaling actions. Specifically, we investigate the use of relative and absolute metrics. Results demonstrate that, for CPU-intensive workloads, the use of absolute metrics enables more accurate scaling decisions. We propose and evaluate the performance of a new auto-scaling algorithm that can reduce the response time by a factor of 0.5 to 0.66 compared to the current Kubernetes horizontal auto-scaling algorithm.
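
    For context, horizontal scaling decisions of the kind discussed above commonly follow the publicly documented Kubernetes rule desired = ceil(current * metric / target); a minimal TypeScript sketch, where the example numbers are assumptions:

        // Kubernetes-style horizontal scaling rule (public HPA formula).
        function desiredReplicas(current: number, metric: number, target: number): number {
          return Math.ceil(current * (metric / target));
        }

        // Relative metric: CPU utilization as a fraction of the container's allocation.
        desiredReplicas(2, 0.9, 0.6);  // 90% observed vs. 60% target -> 3 replicas

        // Absolute metric: CPU time actually consumed, e.g. in millicores.
        desiredReplicas(2, 450, 300);  // 450m observed vs. 300m target -> 3 replicas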

  • 35.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    University of Rome, ITA.
    Measuring Docker Performance: What a Mess!!!2017In: ICPE 2017 - Companion of the 2017 ACM/SPEC International Conference on Performance Engineering, ACM , 2017, p. 11-16Conference paper (Refereed)
    Abstract [en]

    Today, a new technology is changing the way platforms for the internet of services are designed and managed. This technology is the container (e.g. Docker and LXC). The internet of services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of run-time resource management, for example: auto-scaling, optimal deployment and monitoring. Monitoring of container-based systems is the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.

    Download full text (pdf)
    fulltext
  • 36.
    Cavallin, Fritjof
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Pettersson, Timmie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time View-dependent Triangulation of Infinite Ray Cast Terrain2019Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and does not generate a mesh representation, which may be useful in certain use cases.

    Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time without making use of any preprocessed data, and compare the performance and visual quality of the implementation with that of a ray marched solution.

    Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm.

    Results. In all tests performed, the proposed method achieves a higher frame rate than the ray marched version. The visual similarity between the two methods depends heavily on the quality setting of the triangulation.

    Conclusions. The proposed method can perform better than a ray marched version, but is more reliant on CPU processing, and can suffer from visual popping artifacts as the terrain is refined.
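
    For reference, the ray marching baseline described in the Background can be sketched as follows in TypeScript (a generic height-field march; heightAt, the step size and the constant-step strategy are illustrative assumptions, not the thesis implementation):

        type Vec3 = { x: number; y: number; z: number };

        // March along the ray until it dips below the terrain height field.
        function rayMarchTerrain(
          origin: Vec3, dir: Vec3,
          heightAt: (x: number, z: number) => number,  // assumed height field
          maxDist: number, step: number,
        ): number | null {
          for (let t = step; t < maxDist; t += step) {
            const x = origin.x + dir.x * t;
            const y = origin.y + dir.y * t;
            const z = origin.z + dir.z * t;
            if (y <= heightAt(x, z)) return t;  // surface intersected at distance t
          }
          return null;  // ray escaped to the sky
        }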

    Download full text (pdf)
    fulltext
  • 37.
    Chandrashekhar, Shobha Biligerepalya
    et al.
    Blekinge Institute of Technology.
    Ida Mutia, Rafika
    Blekinge Institute of Technology.
    Cloud Computing Impacts on Business Models: A study based on Osterwalder’s Business Model Canvas2023Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Cloud computing is a model that provides on-demand access to a range of computing services and resources such as servers, networks, storage, and applications through the internet. Companies utilizing cloud computing can use different services based on their needs without initial capital expenditure. For companies adopting the cloud, business models have changed with the competitive edge it offers to the business. In this thesis, we study the impacts of cloud transformation on business models, using Osterwalder’s business model canvas as a framework. To give a holistic view of cloud transformation, the study also presents the motivations, challenges and factors that should be considered by companies for their transformation journey. We carried out this study by using qualitative research, i.e., by interviewing experienced leaders from software development units in companies from different industries. In our findings, we conclude that cloud transformation brings positive impacts on customer relationships, as it offers promising and advantageous value propositions due to the technological advancement in the key activities and resources. Therefore, it could attract new customer segments. Furthermore, with the new potential customers and the value propositions it offers, cloud computing could unleash numerous new revenue streams. However, companies must be aware of the challenges they might encounter during the cloud transformation journey. Prioritization of software assets, clear communication and compatibility, and security are the main factors that need to be in focus. For future research, we recommend studying the financial impact of cloud transformation, such as additional revenue and profitability, as these aspects are the real main drivers of any business innovation. We also recommend studying the impact of data protection laws such as GDPR or regulations that could impact organizational businesses.

    Download full text (pdf)
    Cloud Computing Impacts on Business Models: A study based on Osterwalder’s Business Model Canvas
  • 38.
    Cheddad, Abbas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Object recognition using shape growth pattern2017In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, ISPA, IEEE Computer Society Digital Library, 2017, p. 47-52, article id 8073567Conference paper (Refereed)
    Abstract [en]

    This paper proposes a preprocessing stage to augment the bank of features that one can retrieve from binary images to help increase the accuracy of pattern recognition algorithms. To this end, by applying successive dilations to a given shape, we can capture a new dimension of its vital characteristics, which we term hereafter the shape growth pattern (SGP). This work investigates the feasibility of such a notion and also builds upon our prior work on structure-preserving dilation using Delaunay triangulation. Experiments on two public data sets are conducted, including comparisons to existing algorithms. We deployed two renowned machine learning methods in the classification process (i.e., convolutional neural networks (CNN) and random forests (RF)), since they perform well in pattern recognition tasks. The results show a clear improvement in the proposed approach's classification accuracy (especially for data sets with limited training samples) as well as robustness against noise when compared to existing methods.

    Download full text (pdf)
    fulltext
  • 39.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance Analysis of an Adaptive Rate Scheme for QoE-Assured Mobile VR Video Streaming2022In: Computers, E-ISSN 2073-431X, Vol. 11, no 5, article id 69Article in journal (Refereed)
    Abstract [en]

    The emerging 5G mobile networks are essential enablers for mobile virtual reality (VR) video streaming applications assuring high quality of experience (QoE) at the end-user. In addition, mobile edge computing brings computational resources closer to the user equipment (UE), which allows offloading computationally intensive processing. In this paper, we consider a network architecture for mobile VR video streaming applications consisting of a server that holds the VR video content, a mobile edge virtualization with prefetching (MVP) unit that handles the VR video packets, and a head-mounted display along with a buffer, which together serve as the UE. Several modulation and coding schemes with different rates are provided by the MVP unit to adaptively cope with the varying wireless link conditions to the UE and the state of the UE buffer. The UE buffer caches VR video packets as needed to compensate for the adaptive rates. A performance analysis is conducted in terms of blocking probability, throughput, queueing delay, and average packet error rate. To capture the effect of fading severity, the analytical expressions for these performance metrics are derived for Nakagami-m fading on the wireless link from the MVP unit to the UE. Numerical results show that the proposed system meets the network requirements needed to assure the QoE levels of different mobile VR video streaming applications. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.

    Download full text (pdf)
    fulltext
  • 40.
    Dallora Moraes, Ana Luiza
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Supplementary Material of: “Prognosis of dementia with machine learning and microssimulation techniques: a systematic literature review”.2016Other (Other academic)
    Abstract [en]

     This document contains the supplementary material regarding the systematic literature review entitled: “Prognosis of dementia with machine learning and microssimulation techniques: a systematic literature review”.

    Download (pdf)
    attachment
  • 41.
    Danesh, Parisasadat
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Qualcomm.
    Efficient CNN-based Object ID Association Model for Multiple Object Tracking2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Download full text (pdf)
    Efficient CNN-based Object ID Association Model for Multiple Object Tracking
  • 42.
    Danielsson, Max
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sievert, Thomas
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Comparing Two Generations of Embedded GPUs Running a Feature Detection AlgorithmManuscript (preprint) (Other academic)
    Abstract [en]

    Graphics processing units (GPUs) in embedded mobile platforms are reaching performance levels where they may be useful for computer vision applications. We compare two generations of embedded GPUs for mobile devices when running a state-of-the-art feature detection algorithm, i.e., Harris-Hessian/FREAK. We compare architectural differences, execution time, temperature, and frequency on Sony Xperia Z3 and Sony Xperia XZ mobile devices. Our results indicate that the performance soon is sufficient for real-time feature detection, the GPUs have no temperature problems, and support for large work-groups is important.

    Download full text (pdf)
    fulltext
  • 43.
    Djärv Karltorp, Johan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Skoglund, Eric
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Performance of Multi-threaded Web Applications using Web Workers in Client-side JavaScript2020Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context - Software applications on the web are more commonly used nowadays than before. As a result of this, the performance needed to run the applications is increasing. One method to increase performance is writing multi-threaded code using Web Workers in JavaScript.

    Objectives - We investigate how using Web Workers can increase responsiveness and raw computational power, and decrease load time. Additionally, we conduct a survey that targets software developers to find out their opinions about performance in web applications, multi-threading and, more specifically, Web Workers.

    Realization (Method) - We created three experiments that concentrate on the areas mentioned above. The experiments are hosted on a web server inside an isolated Docker container to eliminate external factors as much as possible. To complement the experiments, we sent out a survey to collect information about developers' opinions on Web Workers. The criterion for selecting developers was some JavaScript experience. The survey contained questions about their opinions on switching to a multi-threaded workflow on the web. Do they experience performance issues in today's web applications? Could Web Workers be useful in their projects?

    Results - Responsiveness shifted from freezing the website to perfect responsiveness when using Web Workers. Raw computational power increased by at best 67% when using eight workers for tasks that took between 100 milliseconds and 15 seconds. Over 15 seconds, sixteen workers improved computational power by a further 3% - 9% compared to eight workers. At best, the completion time decreased by 74% in Firefox and 72% in Chrome. Using Web Workers to improve load time gave a big improvement but is somewhat restricted to specific use cases.

    Conclusions - Using Web Workers to increase responsiveness made an immense difference when moving tasks that affect the user's responsiveness to background threads. Completion time for big computational tasks was shorter in use cases where the workload can be split into separate portions and multiple threads used in parallel to complete the tasks. Load time can be improved with Web Workers by completing some tasks after the page is done loading, instead of waiting for all tasks to complete before loading the page. The survey indicated that many developers have performance in mind and would consider writing code in a multi-threaded way. Knowledge about multi-threading and Web Workers was low; still, most of the participants believe that Web Workers would be useful in their current and future projects, and are worth the effort to implement.
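
    As an illustration of the offloading pattern evaluated above, here is a minimal TypeScript sketch using the standard Web Worker API (the worker script name, message shape and summation task are assumptions):

        // main.ts: move a heavy computation off the UI thread.
        const worker = new Worker("worker.js");  // hypothetical worker script
        worker.onmessage = (e: MessageEvent<number>) => {
          console.log("result:", e.data);        // the UI stayed responsive meanwhile
        };
        worker.postMessage(1_000_000);           // size of the task to run

        // worker.js: receive the task, compute, post the result back.
        // self.onmessage = (e) => {
        //   let sum = 0;
        //   for (let i = 0; i < e.data; i++) sum += i;
        //   self.postMessage(sum);
        // };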

    Download full text (pdf)
    Performance of Multi-threaded Web Applications using Web Workers in Client-side JavaScript
  • 44.
    Eivazzadeh, Shahryar
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Sanmartin Berglund, Johan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Anderberg, Peter
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Larsson, Tobias
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering.
    Design of a Semi-Automated and Continuous Evaluation System: Customized for Application in e-HealthManuscript (preprint) (Other academic)
    Abstract [en]

    Background and Objectives

    Survey-based evaluation of a system, such as measuring users' satisfaction or patient-reported outcomes, entails a set of burdens that limits the feasibility, frequency, extendability, and continuity of the evaluation. Automating the evaluation process, that is, reducing the burden on evaluators in questionnaire curation or minimizing the need for explicit user attention when collecting their attitudes, can make the evaluation more feasible, repeatable, extendible, continuous, and even flexible for improvement. An automated evaluation process can be enhanced with features such as the ability to handle heterogeneity in evaluation cases. Here, we present the design of a semi-automated evaluation system. The design is presented and partially implemented in the context of health information systems, but it can be applied to other contexts of information system usage as well.

    Method

    The system was divided into four components. We followed a design research methodology to design the system, where each component reached a certain level of maturity. Already implemented and validated methods from previous studies were embedded within components, while they were extended with improved automation proposals or new features.

    Results

    A system was designed, comprised of four major components: Evaluation Aspects Elicitation, User Survey, Benchmark Path Model, and Alternative Metrics Replacement. All components have reached the essential maturity of identification of the problem, identification of solution objectives, and overall design. In the overall design, the primary flow, process-entities, data-entities, and events for each component are identified and illustrated. Parts of some components have already been verified and demonstrated in real-world cases.

    Conclusion

    A system can be developed to minimize the human burden, both for evaluators and respondents, in survey-based evaluation. This system automates finding items to evaluate, creating a questionnaire based on those items, surveying the users' attitudes about those items, modeling the relations between the evaluation items, and incrementally changing the model to rely on automatically collected metrics, usually implicit indicators collected from the users, instead of requiring explicit expression of their attitudes. The system provides the possibility of minimal human burden, frequent repetition, continuity and real-time reporting, incremental upgrades in response to environmental changes, proper handling of heterogeneity, and a higher degree of objectivity.

  • 45.
    Elfström, Carl-Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Industry preferred implementation practices for security in cloud systems: An exploratory and qualitative analysis of implementation practices for cloud security2023Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Since the introduction of cloud computing as a standard solution for data storage and software hosting, new security measures have been developed, and data laws and compliance regulations have become more stringent. In this thesis, the exploration of compliance and regulatory documents, interviews of industry professionals, and thematic analysis of the interview data uncover some industry-preferred implementation practices that will help to ensure compliance and cloud security for applications and data storage on the cloud. Industry professionals' opinions are put into context and compared to compliance regulations.

    Key findings include a list of implementation practices, derived through thematic analysis of the interview data grouped into themes. These include encryption, security controls, life-cycle management, and audits. The importance of the findings is a list of practices or actions that any developer can proactively adopt for the cloud, knowing that it is a viable implementation to remain secure. The limitations and scope are related to cloud systems and software engineering, and the list extracted from the interview data is not an all-encompassing solution for cloud security. Furthermore, the raw interview data will assist future research into this topic.

    Download full text (pdf)
    fulltext
  • 46.
    Engert, Marcus
    Blekinge Institute of Technology, Faculty of Engineering, Department of Industrial Economics.
    Working towards Circular Economy using Blockchain of Things: An exploratory thesis; The Challenges of implementing Blockchain of Things for a Circular Economy2023Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: The internet of things (IoT) is transforming industries by enabling data-driven decision-making, and it plays a crucial role in businesses' transition to a Circular Economy. However, the intrinsic features of the IoT, such as centralization, poor interoperability, and privacy and security vulnerabilities, pose several challenges. Blockchain technology offers potential solutions to these challenges and can enhance traceability and information reliability, while also creating new avenues of interaction and value creation. A combination of these technologies is called Blockchain of Things (BCoT). Technology such as BCoT can in turn assist the adoption of the Circular Economy, a paradigm of sustainable and resource-optimizing activities.

    Purpose: This study aims to explore the potential challenges of adopting Blockchain of Things and Circular Economy practices, and the potential barriers to adopting Blockchain of Things for a Circular Economy.

    Method: The study uses an explorative-qualitative research method where primary data was gathered through semi-structured interviews. The data was then transcribed and categorized through the use of the Gioia-method, which was then applied to the TOE-framework.  

    Result: This study finds several barriers regarding Blockchain of Things and Circular Economy, such as lack of standardization, lack of data & information, lack of knowledge, lack of regulations, and finally, slow mind-shift of decision makers. 

    Conclusion: The results of the study point toward greater systemic challenges of knowledge and standards. A conclusion that can be derived from this is that industries are still in the early adoption phase of digitalization, which explains the lack of insight and knowledge regarding new technologies. Additionally, the identified challenges of a lack of standards, regulations, and know-how regarding the Circular Economy showcase a lack of knowledge and confidence in being able to create competitive, sustainable circular activities.

    Download full text (pdf)
    fulltext
  • 47.
    Eriksson, Adam
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Runtime of WebAssembly: A study into WebAssembly runtime2023Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    WebAssembly is assembly-like code created by compiling other languages into Wasm. The Wasm file can then be run on the web at near-native speed. The objective of this study is to find out how WebAssembly's runtime compares to JavaScript and native code. The study also examines whether different browsers affect WebAssembly runtime.

    To gather this information, two different methods were used. First, literature and articles were used to gather data on JavaScript and native runtime compared to WebAssembly. Second, an empirical study was conducted to compare the WebAssembly runtime of four different browsers.

    When comparing WebAssembly and JavaScript, it was found that WebAssembly isn't always the faster alternative, for several reasons; major ones are how the two are compiled and optimised.

    When looking at WebAssembly compared to native we could clearly see that WebAssembly was slower. These slowdowns came primarily from the increase in code size but the virtual environment and security checks also contributed to this. 

    After the empirical study we could see some differences between browsers, both in compilation speed and execution time. Among the Chromium browsers the difference in execution time was very small, and Firefox was always faster. When looking at compilation time, however, Chrome was faster, with the other browsers showing varying results.

    The research could conclude that WebAssembly can provide a useful boost to runtime on websites when used correctly. It is not something that is going to replace JavaScript but can be used together with it. We could also conclude that the user's choice of browser has a small impact on WebAssembly and can cause differences in runtime.
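
    To make the JavaScript/Wasm interplay concrete, here is a minimal TypeScript sketch using the standard browser WebAssembly API (the module name "module.wasm" and its exported "sum" function are assumptions):

        // Stream, compile and instantiate a Wasm module, then call into it.
        async function runWasm(): Promise<void> {
          const { instance } = await WebAssembly.instantiateStreaming(
            fetch("module.wasm"), {});
          const sum = instance.exports.sum as (a: number, b: number) => number;
          console.log(sum(2, 3));  // the compute-heavy part runs as Wasm
        }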

    Download full text (pdf)
    Runtime of WebAssembly - A study into WebAssembly runtime
  • 48.
    Erlandsson, Adam
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Johnsson Thunell, Karl-Manne
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Differences of SMS gateway services: A performance analysis of two communication platforms implemented on an infrastructure based on ASP.NET Core 62022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: When we use our phone to make a purchase, booking, or simply contact someone, we expect a quick response to acknowledge that our request has been sent and received. Today’s traffic requests are higher than ever, and will most likely continue to grow. This puts pressure on the communication platforms to keep up with the demand and continue to perform and deliver the requests within short time frames. Twilio and 46elks are two communication platforms that offer an SMS gateway service, and this thesis will take a deeper look at how they perform when implemented on an ASP.NET Core 6 web application. 

    Objectives: The goal of this thesis is to evaluate whether there are any disparities or similarities between the two communication platforms' SMS gateway services regarding performance. The performance quality attributes in focus are time behaviour, CPU utilization, and RAM usage.

    Method: The two communication platforms are compared using a quasi-experiment. A web application was developed with ASP.NET Core 6 to handle incoming SMS bookings. With the data provided in the SMS, it created and stored the booking. Once done, a confirmation SMS was delivered to the sender. The performance quality attributes were collected and stored for evaluation for each incoming SMS during the experiments.

    Results: Overall, Twilio had a longer time behaviour and higher RAM usage compared to 46elks, but Twilio had a lower CPU utilization compared to 46elks.

    Conclusions: The time behaviour and CPU utilization of the two communication platforms were significantly different. An interesting finding was that when injecting a higher workload on the web application, performance improved in two quality attributes, RAM usage and time behaviour, for both communication platforms.

    Download full text (pdf)
    Differences of SMS gateway services. A performance analysis of two communication platforms implemented on an infrastructure based on ASP.NET Core 6.
  • 49.
    Fakhouri, Hussam N.
    et al.
    University of Petra, Jordan.
    Alawadi, Sadi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Awaysheh, Feras M.
    Tartu University, Estonia.
    Hamad, Faten
    Sultan Qaboos University, Oman.
    Novel hybrid success history intelligent optimizer with Gaussian transformation: application in CNN hyperparameter tuning2023In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543Article in journal (Refereed)
    Abstract [en]

    This research proposes a novel Hybrid Success History Intelligent Optimizer with Gaussian Transformation (SHIOGT) for solving optimization problems of different complexity levels and for Convolutional Neural Network (CNN) hyperparameter tuning. The SHIOGT algorithm is designed to balance exploration and exploitation phases through the addition of a Gaussian Transformation to the original Success History Intelligent Optimizer. The inclusion of the Gaussian Transformation enhances solution diversity and enables SHIO to avoid local optima. SHIOGT also demonstrates robustness and adaptability by dynamically adjusting its search strategy based on problem characteristics. Furthermore, the combination of the Gaussian Transformation and SHIO facilitates faster convergence, accelerating the discovery of optimal or near-optimal solutions. Moreover, the hybridization of these two techniques brings a synergistic effect, enabling SHIOGT to overcome individual limitations and achieve superior performance in hyperparameter optimization tasks. SHIOGT was thoroughly assessed against an array of benchmark functions of varying complexities, demonstrating its ability to efficiently locate optimal or near-optimal solutions across different problem categories. Its robustness in tackling multimodal and deceptive landscapes and high-dimensional search spaces was particularly notable. SHIOGT was benchmarked on 43 challenging optimization problems and compared with state-of-the-art algorithms. Further, the SHIOGT algorithm is applied to the domain of deep learning, with a case study focusing on hyperparameter tuning of CNNs. With the intelligent exploration–exploitation balance of SHIOGT, we hypothesized it could effectively optimize the CNN's hyperparameters. We evaluated the performance of SHIOGT across a variety of datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100, with the aim of optimizing CNN model hyperparameters. The results show an impressive accuracy rate of 98% on the MNIST dataset. Similarly, the algorithm achieved a 92% accuracy rate on Fashion-MNIST, 76% on CIFAR-10, and 70% on CIFAR-100, underscoring its effectiveness across diverse datasets. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

  • 50.
    Fiati-Kumasenu, Albert
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Extracting Customer Sentiments from Email Support Tickets: A case for email support ticket prioritisation2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background

    Daily, companies generate enormous amounts of customer support tickets which are grouped and placed in specialised queues, based on some characteristics, from where they are resolved by the customer support personnel (CSP) on a first-in-first-out basis. Given that these tickets require different levels of urgency, a logical next step to improving the effectiveness of the CSPs is to prioritise the tickets based on business policies. Among the several heuristics that can be used in prioritising tickets is sentiment polarity.

    Objectives

    This study investigates how machine learning methods and natural language processing techniques can be leveraged to automatically predict the sentiment polarity of customer support tickets.

    Methods

    Using a formal experiment, the study examines how well Support Vector Machine (SVM), Naive Bayes (NB) and Logistic Regression (LR) based sentiment polarity prediction models, built for product and movie reviews, can be used to make sentiment predictions on email support tickets. Due to the limited amount of annotated email support tickets, Valence Aware Dictionary and sEntiment Reasoner (VADER) and a cluster ensemble (using k-means, affinity propagation and spectral clustering) are investigated for making sentiment polarity predictions.

    Results

    Compared to NB and LR, SVM performed better, scoring an average f1-score of .71, whereas NB scored lowest with a .62 f1-score. SVM combined with the presence vector outperformed the frequency and TF-IDF vectors with an f1-score of .73, while NB recorded an f1-score of .63. With an average f1-score of .23, the models transferred from the movie and product reviews performed inadequately, even when compared with a dummy classifier averaging an f1-score of .55. Finally, the cluster ensemble method outperformed VADER, with f1-scores of .61 and .53, respectively.

    Conclusions

    Given the results, SVM combined with a presence vector of bigrams and trigrams is a candidate solution for extracting sentiments from email support tickets. Additionally, transferring sentiment models from the movie and product reviews domain to email support tickets is not feasible. Finally, given that only a limited dataset exists for conducting sentiment analysis studies in the Swedish and customer support contexts, a cluster ensemble is recommended as a sample selection method for generating annotated data.
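
    For illustration, the presence vector of bigrams and trigrams recommended above is a standard text representation; a minimal TypeScript sketch (the tokenization and vocabulary handling are assumptions, not the study's code):

        // Collect the n-grams of a token sequence, e.g. bigrams for n = 2.
        function ngrams(tokens: string[], n: number): string[] {
          const grams: string[] = [];
          for (let i = 0; i + n <= tokens.length; i++) {
            grams.push(tokens.slice(i, i + n).join(" "));
          }
          return grams;
        }

        // Binary presence vector: 1 if the vocabulary n-gram occurs in the text.
        function presenceVector(text: string, vocabulary: string[]): number[] {
          const tokens = text.toLowerCase().split(/\s+/).filter(Boolean);
          const seen = new Set([...ngrams(tokens, 2), ...ngrams(tokens, 3)]);
          return vocabulary.map((g) => (seen.has(g) ? 1 : 0));
        }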

    Download full text (pdf)
    Extracting Customer Sentiments