  • 1.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exner, Peter
    Sony R&D Center Lund Laboratory, SWE.
    An Inductive System Monitoring Approach for GNSS Activation (2022). In: IFIP Advances in Information and Communication Technology / [ed] Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P., Springer Science+Business Media B.V., 2022, Vol. 647, p. 437-449. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a Global Navigation Satellite System (GNSS) component activation model for mobile tracking devices that automatically detects indoor/outdoor environments using the radio signals received from Long-Term Evolution (LTE) base stations. We use an Inductive System Monitoring (ISM) technique to model environmental scenarios captured by a smart tracker, by extracting clusters of corresponding value ranges from the LTE base stations' signal strength. The ISM-based model is built using the tracker's historical data labeled with GPS coordinates. The built model is further refined by applying it to additional data, without GPS location, collected by the same device. This procedure allows us to identify the clusters that describe semi-outdoor scenarios. In that way, the model discriminates between two outdoor environmental categories: open outdoor and semi-outdoor. The proposed ISM-based GNSS activation approach is studied and evaluated on a real-world dataset that contains radio signal measurements collected by five smart trackers, together with their geographical location, in various environmental scenarios.
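
    A minimal, hypothetical Python sketch of the clustering idea above (scikit-learn, synthetic data; the paper's actual ISM formulation works on clusters of value ranges, which KMeans only approximates): signal-strength vectors are clustered, each cluster is tagged with the majority environment label of its GPS-labeled members, and GNSS is activated only for outdoor-tagged clusters.

    # Hypothetical sketch of an ISM-style indoor/outdoor decision
    # built on clusters of LTE signal-strength measurements.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Toy data: rows = measurements, columns = RSRP from 3 base stations (dBm).
    outdoor = rng.normal(-80, 5, size=(100, 3))   # stronger signals
    indoor = rng.normal(-110, 5, size=(100, 3))   # attenuated signals
    X = np.vstack([outdoor, indoor])
    labels = np.array([1] * 100 + [0] * 100)      # 1 = outdoor (GPS fix), 0 = indoor

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    # Tag each cluster with the majority environment label of its members.
    cluster_tag = {c: int(np.round(labels[km.labels_ == c].mean()))
                   for c in range(km.n_clusters)}

    def should_activate_gnss(measurement):
        """Activate the GNSS component only when the signal pattern
        falls in a cluster tagged as outdoor."""
        c = km.predict(np.asarray(measurement).reshape(1, -1))[0]
        return cluster_tag[c] == 1

    print(should_activate_gnss([-82, -79, -85]))  # likely True (outdoor-like)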

  • 2.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Automated Context-Aware Vulnerability Risk Management for Patch Prioritization (2022). In: Electronics, E-ISSN 2079-9292, Vol. 11, no 21, article id 3580. Article in journal (Refereed)
    Abstract [en]

    The information-security landscape evolves continuously, with new vulnerabilities discovered daily and increasingly sophisticated exploit tools. Vulnerability risk management (VRM) is the most crucial cyber defense for eliminating attack surfaces in IT environments. VRM is a cyclical practice of identifying, classifying, evaluating, and remediating vulnerabilities. The evaluation stage of VRM is neither automated nor cost-effective, as it demands great manual administrative effort to prioritize patches. Therefore, there is an urgent need to improve the VRM procedure by automating the entire VRM cycle in the context of a given organization. The authors propose automated context-aware VRM (ACVRM) to address the above challenges. This study defines the criteria to consider in the evaluation stage of ACVRM to prioritize patching. Moreover, patch prioritization is customized to an organization's context by allowing the organization to select the vulnerability management mode and weigh the selected criteria. Specifically, this study considers four vulnerability evaluation cases: (i) evaluation criteria are weighted homogeneously; (ii) attack complexity and availability are not considered important criteria; (iii) the security score is the only important criterion considered; and (iv) criteria are weighted based on the organization's risk appetite. The results verify the proposed solution's efficiency compared with the Rudder vulnerability management tool (CVE-plugin). While Rudder produces a ranking independent of the scenario, ACVRM can sort vulnerabilities according to the organization's criteria and context. Moreover, while Rudder randomly sorts vulnerabilities with the same patch score, ACVRM sorts them according to their age, giving a higher security score to older publicly known vulnerabilities. © 2022 by the authors.

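    A minimal sketch of the weighted, context-dependent prioritization described above, assuming an invented criteria schema and weights (not the paper's actual ones); only the weighted-sum scoring and the age-based tie-break follow the abstract.

    # Hypothetical sketch of context-aware patch prioritization: each
    # vulnerability gets a weighted score, and ties are broken in favor
    # of older publicly known vulnerabilities.
    from dataclasses import dataclass

    @dataclass
    class Vulnerability:
        cve_id: str
        security_score: float     # e.g. normalized CVSS base score in [0, 1]
        attack_complexity: float  # 1.0 = low complexity (easier to exploit)
        availability: float       # exploit/asset availability indicator
        age_days: int             # days since public disclosure

    def priority(v, weights):
        """Weighted score; the weights encode the organization's context."""
        return (weights["security_score"] * v.security_score
                + weights["attack_complexity"] * v.attack_complexity
                + weights["availability"] * v.availability)

    # Case (ii) from the abstract: complexity/availability not considered.
    weights = {"security_score": 1.0, "attack_complexity": 0.0, "availability": 0.0}

    vulns = [
        Vulnerability("CVE-A", 0.9, 0.8, 0.5, 400),
        Vulnerability("CVE-B", 0.9, 0.2, 0.9, 30),
        Vulnerability("CVE-C", 0.4, 0.9, 0.1, 900),
    ]
    # Sort by score, then by age: older vulnerabilities first on ties.
    ranked = sorted(vulns, key=lambda v: (priority(v, weights), v.age_days),
                    reverse=True)
    print([v.cve_id for v in ranked])  # CVE-A before CVE-B (same score, older)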
  • 3.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Normalization Framework for Vulnerability Risk Management in Cloud (2021). In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021, IEEE, 2021, p. 99-106. Conference paper (Refereed)
    Abstract [en]

    Vulnerability Risk Management (VRM) is a critical element in cloud security that directly impacts cloud providers' security assurance levels. Today, VRM is a challenging process because of the dramatic increase in known vulnerabilities (+26% in the last five years), and because VRM is increasingly dependent on the organization's context. Moreover, a vulnerability's severity score depends on the Vulnerability Database (VD) selected as a reference in VRM. All these factors introduce new challenges for security specialists in evaluating and patching vulnerabilities. This study provides a framework to improve the classification and evaluation phases in vulnerability risk management while using multiple vulnerability databases as a reference. Our solution normalizes the severity score of each vulnerability based on the selected security assurance level. The results of our study highlight the role of the vulnerability databases in patch prioritization, showing the advantage of using multiple VDs.

  • 4.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. City Network International AB, Sweden.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Normalization of Severity Rating for Automated Context-aware Vulnerability Risk Management (2020). In: Proceedings - 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion, ACSOS-C 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 200-205, article id 9196350. Conference paper (Refereed)
    Abstract [en]

    In the last three years, the unprecedented increase in discovered vulnerabilities ranked with critical and high severity has raised new challenges in Vulnerability Risk Management (VRM). Indeed, identifying, analyzing, and remediating this high rate of vulnerabilities is labour-intensive, especially for enterprises dealing with complex computing infrastructures such as Infrastructure-as-a-Service providers. Hence, there is a demand for new criteria to prioritize vulnerability remediation and new automated/autonomic approaches to VRM.

    In this paper, we address the above challenge by proposing an Automated Context-aware Vulnerability Risk Management (AC-VRM) methodology that aims to reduce the labour-intensive tasks of security experts and to prioritize vulnerability remediation on the basis of the organization's context rather than risk severity alone. The proposed solution considers multiple vulnerability databases to achieve broad coverage of known vulnerabilities and to determine the vulnerability rank. After describing the new VRM methodology, we focus on the problem of obtaining a single vulnerability score by normalization and fusion of the ranks obtained from multiple vulnerability databases. Our solution is a parametric normalization that accounts for the organization's needs/specifications.

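    A hypothetical illustration of the normalization-and-fusion step: each database's score is min-max normalized onto a common [0, 1] scale and the normalized values are fused with organization-chosen weights. The database names, scales, and weights below are invented for the example.

    # Sketch of severity-score normalization and fusion across multiple
    # vulnerability databases (VDs) with different native scales.
    def normalize(score, lo, hi):
        """Min-max normalization of a VD-specific score onto [0, 1]."""
        return (score - lo) / (hi - lo)

    vd_scales = {"NVD": (0.0, 10.0), "VendorDB": (0.0, 100.0)}  # native ranges
    vd_weights = {"NVD": 0.7, "VendorDB": 0.3}  # organization's trust weights

    def fused_score(scores):
        """Weighted fusion over the VDs that actually rank this vulnerability."""
        total_w = sum(vd_weights[db] for db in scores)
        return sum(vd_weights[db] * normalize(s, *vd_scales[db])
                   for db, s in scores.items()) / total_w

    # A CVE scored 7.5/10 by NVD and 90/100 by the vendor database:
    print(round(fused_score({"NVD": 7.5, "VendorDB": 90.0}), 3))  # 0.795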
  • 5.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Contribution Prediction in Federated Learning via Client Behavior Evaluation (2025). In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 166, article id 107639. Article in journal (Refereed)
    Abstract [en]

    Federated learning (FL), a decentralized machine learning framework that allows edge devices (i.e., clients) to train a global model while preserving data/client privacy, has recently become increasingly popular. In FL, a shared global model is built by aggregating the updated parameters in a distributed manner. To incentivize data owners to participate in FL, it is essential for service providers to fairly evaluate the contribution of each data owner to the shared model during the learning process. To the best of our knowledge, most existing solutions are resource-demanding and usually run as an additional evaluation procedure, which imposes a high computational cost on large data owners. In this paper, we present simple and effective FL solutions that show how the clients' behavior can be evaluated during the training process with respect to reliability, demonstrated for two existing FL models: Cluster Analysis-based Federated Learning (CA-FL) and Group-Personalized FL (GP-FL). In the former model, CA-FL, we assess how frequently each client is selected as a cluster representative and is thereby involved in building the shared model; this can be considered a measure of the respective client's data reliability. In the latter model, GP-FL, we calculate how many times each client changes the cluster it belongs to during FL training, which can be interpreted as a measure of the client's unstable behavior, i.e., an indication that the client is not very reliable. We validate our FL approaches on three LEAF datasets and benchmark their performance against two baseline contribution evaluation approaches. The experimental results demonstrate that by applying the two FL models we are able to obtain robust evaluations of clients' behavior during the training process. These evaluations can be used for further studying, comparing, understanding, and eventually predicting clients' contributions to the shared global model.

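    The two behavior signals described above reduce to simple counting, sketched here on invented per-round logs: how often a client serves as a cluster representative (the CA-FL signal) and how often it changes cluster between rounds (the GP-FL signal).

    # Sketch of the two client-behavior signals; logs are illustrative only.
    from collections import Counter

    representatives = [["c1", "c4"], ["c1", "c3"], ["c2", "c4"], ["c1", "c4"]]
    assignments = [  # round -> {client: cluster}
        {"c1": 0, "c2": 1, "c3": 0, "c4": 1},
        {"c1": 0, "c2": 0, "c3": 0, "c4": 1},
        {"c1": 0, "c2": 1, "c3": 1, "c4": 1},
    ]

    # CA-FL: frequency of selection as cluster representative.
    rep_count = Counter(c for round_reps in representatives for c in round_reps)

    # GP-FL: number of cluster changes between consecutive rounds.
    switches = Counter()
    for prev, cur in zip(assignments, assignments[1:]):
        for client in cur:
            switches[client] += prev[client] != cur[client]

    print(rep_count.most_common())  # reliable, frequently selected clients
    print(switches)                 # unstable clients change clusters often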
  • 6.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    FedCO: Communication-Efficient Federated Learning via Clustering Optimization (2022). In: Future Internet, E-ISSN 1999-5903, Vol. 14, no 12, article id 377. Article in journal (Refereed)
    Abstract [en]

    Federated Learning (FL) provides a promising solution for preserving privacy in learning shared models on distributed devices without sharing local data on a central server. However, most existing work shows that FL incurs high communication costs. To address this challenge, we propose a clustering-based federated solution, entitled Federated Learning via Clustering Optimization (FedCO), which optimizes model aggregation and reduces communication costs. In order to reduce the communication costs, we first divide the participating workers into groups based on the similarity of their model parameters and then select only one representative, the best-performing worker, from each group to communicate with the central server. Then, in each successive round, we apply the Silhouette validation technique to check whether each representative still fits tightly within its current cluster. If not, the representative is either moved into a more appropriate cluster or forms a cluster singleton. Finally, we use split optimization to update and improve the whole clustering solution. The updated clustering is used to select new cluster representatives. In that way, the proposed FedCO approach updates clusters by repeatedly evaluating and splitting them when doing so improves the workers' partitioning. The potential of the proposed method is demonstrated on publicly available datasets and LEAF datasets under both IID and non-IID data distribution settings. The experimental results indicate that our proposed FedCO approach is superior to state-of-the-art FL approaches, i.e., FedAvg, FedProx, and CMFL, in reducing communication costs and achieving better accuracy in both the IID and non-IID cases. © 2022 by the authors.

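    A compact, hypothetical sketch of one FedCO-style round using scikit-learn: workers are clustered by the similarity of their flattened model parameters, the best-performing worker per cluster is chosen as representative, and a silhouette check flags representatives that no longer fit their cluster tightly. Data, dimensions, and the threshold are illustrative only.

    # Sketch of representative selection plus the Silhouette check.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_samples

    rng = np.random.default_rng(1)
    n_workers, dim = 20, 50
    params = rng.normal(size=(n_workers, dim))    # flattened local models
    accuracy = rng.uniform(0.5, 0.95, n_workers)  # local validation accuracy

    km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(params)

    # One representative per cluster: its best-performing worker; only
    # representatives communicate with the central server this round.
    reps = {c: max(np.flatnonzero(km.labels_ == c), key=lambda w: accuracy[w])
            for c in range(km.n_clusters)}

    # Representatives with a poor silhouette no longer fit their cluster
    # tightly and should be reassigned or split off into a singleton.
    sil = silhouette_samples(params, km.labels_)
    stale = [int(w) for w in reps.values() if sil[w] < 0.0]
    print("representatives:", sorted(int(w) for w in reps.values()),
          "to reassign:", stale)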
  • 7.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Reducing Communication Overhead of Federated Learning through Clustering Analysis (2021). In: 26th IEEE Symposium on Computers and Communications (ISCC 2021), Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper (Refereed)
    Abstract [en]

    Training machine learning models in a datacenter, with data originating from edge nodes, incurs high communication overheads and violates users' privacy. These challenges may be tackled by employing the Federated Learning (FL) technique to train a model across multiple decentralized edge devices (workers) using local data. In this paper, we explore an approach that identifies the most representative updates made by workers and uploads only those to the central server, reducing network communication costs. Based on this idea, we propose an FL model that can mitigate communication overheads via clustering analysis of the workers' local updates. The Cluster Analysis-based Federated Learning (CA-FL) model is studied and evaluated on human activity recognition (HAR) datasets. Our evaluation results show the robustness of CA-FL in comparison with traditional FL in terms of accuracy and communication costs in both IID and non-IID cases.

  • 8.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Exner, Peter
    Sony, R&D Center Europe, SWE.
    Context-Aware Edge-Based AI Models for Wireless Sensor Networks - An Overview (2022). In: Sensors, E-ISSN 1424-8220, Vol. 22, no 15, article id 5544. Article, review/survey (Refereed)
    Abstract [en]

    Recent advances in sensor technology are expected to lead to a greater use of wireless sensor networks (WSNs) in industry, logistics, healthcare, etc. At the same time, advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL) are becoming dominant solutions for processing large amounts of data from edge-synthesized heterogeneous sensors and drawing accurate conclusions with a better understanding of the situation. Integration of the two areas, WSN and AI, has resulted in more accurate measurements and in context-aware analysis and prediction useful for smart sensing applications. This paper provides a comprehensive overview of the latest developments in context-aware intelligent systems using sensor technology. It also discusses the areas in which such systems are used, related challenges, and motivations for adopting AI solutions, focusing on edge computing, i.e., sensor and AI techniques, along with an analysis of existing research gaps. Another contribution of this study is the use of a semantic-aware approach to extract survey-relevant subjects. The latter specifically identifies eleven main research topics supported by the articles included in the work. These are analyzed from various angles to answer five main research questions. Finally, potential future research directions are also discussed.

  • 9.
    Al-Saedi, Ahmed Abbas Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    An Energy-aware Multi-Criteria Federated Learning Model for Edge Computing (2021). In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021 / [ed] Younas M., Awan I., Unal P., IEEE, 2021, p. 134-143. Conference paper (Refereed)
    Abstract [en]

    The successful convergence of Internet of Things (IoT) technology and distributed machine learning has made it possible to realise the concept of Federated Learning (FL) through the collaborative efforts of a large number of low-powered and small-sized edge nodes. In Wireless Networks (WN), energy-efficient transmission is a fundamental challenge since the energy resources of edge nodes are restricted. In this paper, we propose an Energy-aware Multi-Criteria Federated Learning (EaMC-FL) model for edge computing. The proposed model enables collaborative training of a shared global model by aggregating locally trained models from selected representative edge nodes (workers). The involved workers are initially partitioned into a number of clusters with respect to the similarity of their local model parameters. At each training round, a small set of representative workers is selected on the basis of a multi-criteria evaluation that scores each node's representativeness (importance) by taking into account the trade-off among the node's local model performance, consumed energy, and battery lifetime. We demonstrate through experimental results that the proposed EaMC-FL model is capable of reducing the energy consumed by the edge nodes by lowering the amount of transmitted data.

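    A minimal sketch of the multi-criteria scoring idea, with invented weights and inputs: a node's representativeness trades off local model performance against consumed energy and remaining battery lifetime.

    # Hypothetical multi-criteria worker score; all inputs in [0, 1].
    def node_score(perf, energy, battery, w=(0.5, 0.3, 0.2)):
        """Lower consumed energy is better, so it enters as (1 - energy)."""
        w_perf, w_energy, w_batt = w
        return w_perf * perf + w_energy * (1.0 - energy) + w_batt * battery

    workers = {
        "node-a": node_score(perf=0.90, energy=0.70, battery=0.20),
        "node-b": node_score(perf=0.85, energy=0.30, battery=0.80),
    }
    # Per round, the higher-scoring node in each cluster uploads its model.
    print(max(workers, key=workers.get))  # node-b: almost as accurate, far cheaper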
  • 10.
    Anwar, Mahwish
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    The feasibility of Blockchain solutions in the maritime industry (2019). Conference paper (Other academic)
    Abstract [en]

    Purpose / Value

    The concept of Blockchain technology in supply chain management is well discussed, yet inadequately theorized in terms of its applicability, especially within the maritime industry, which forms a fundamental node of the entire supply chain network. More so, the assumptive grounds associated with the technology have not been openly articulated, leading to unclear ideas about its applicability.

    Design/methodology/approach

    The research is designed in two stages. This paper (Stage One) uses an enhanced literature review for data collection, in order to gauge the properties of Blockchain technology and to understand and map those characteristics onto the Bill of Lading process within the maritime industry. In Stage Two, an online questionnaire is conducted to assess the feasibility of Blockchain technology for different maritime use-cases.

    Findings

    The research, collected and analysed partly from a deliverable of the Connect2SmallPort Project and partly from other literature, suggests that Blockchain can be an enabler for improving the maritime supply chain. The use-case presented in this paper highlights the practicality of the technology. It was identified that Blockchain possesses characteristics suitable to mitigate the risks and issues pertaining to the paper-based Bill of Lading process.

    Research limitations

    The study will mature further after the execution of Stage Two. By the end of both stages, a framework for Blockchain adoption with a focus on the maritime industry will be proposed.

    Practical implications

    The proposed outcome indicates the practicality of the technology, which could be beneficial for port stakeholders that wish to use Blockchain in processing Bills of Lading or contracts.

    Social implications

    The study may influence decision makers to consider the benefits of using Blockchain technology, thereby creating opportunities for the maritime industry to leverage the technology with government support.

  • 11.
    Bergenholtz, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moss, Andrew
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Detection of Metamorphic Malware Packers Using Multilayered LSTM Networks (2020). In: Lecture Notes in Computer Science / [ed] Weizhi Meng, Dieter Gollmann, Christian D. Jensen, and Jianying Zhou, Springer Science and Business Media Deutschland GmbH, 2020, Vol. 12282, p. 36-53. Conference paper (Refereed)
    Abstract [en]

    Malware authors do their best to conceal their malicious software to increase its probability of spreading and to slow down analysis. One method used to conceal malware is packing, in which the original malware is completely hidden through compression or encryption, only to be reconstructed at run-time. In addition, packers can be metamorphic, meaning that the output of the packer will never be exactly the same, even if the same file is packed again. As the use of known off-the-shelf malware packers is declining, it is becoming increasingly important to implement methods of detecting packed executables without having any known samples of a given packer. In this study, we evaluate the use of recurrent neural networks as a means to classify whether or not a file is packed by a metamorphic packer. We show that even with quite simple networks, it is possible to correctly distinguish packed executables from non-packed executables with an accuracy of up to 89.36% when trained on a single packer, even for samples packed by previously unseen packers. Training the network on more packers raises this number to up to 99.69%.

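    For illustration, a small multilayered LSTM classifier over raw byte sequences in PyTorch; the architecture, input length, and hyperparameters are assumptions for this sketch, not the paper's configuration.

    # Hypothetical packed/non-packed classifier over the first bytes of a file.
    import torch
    import torch.nn as nn

    class PackerDetector(nn.Module):
        def __init__(self, vocab=256, embed=32, hidden=64, layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab, embed)  # one embedding per byte value
            self.lstm = nn.LSTM(embed, hidden, num_layers=layers, batch_first=True)
            self.head = nn.Linear(hidden, 1)         # packed / not packed logit

        def forward(self, x):                        # x: (batch, seq_len) byte ids
            h, _ = self.lstm(self.embed(x))
            return self.head(h[:, -1, :]).squeeze(-1)

    model = PackerDetector()
    bytes_batch = torch.randint(0, 256, (4, 512))    # 4 files, first 512 bytes
    print(torch.sigmoid(model(bytes_batch)))         # packed probability per file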
  • 12.
    Bergenholtz, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moss, Andrew
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Finding a needle in a haystack: A comparative study of IPv6 scanning methods (2019). In: 2019 International Symposium on Networks, Computers and Communications (ISNCC 2019), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    It has previously been assumed that the size of an IPv6 network would make it impossible to scan the network for vulnerable hosts. Recent work has shown this to be false, and several methods for scanning IPv6 networks have been suggested. However, most of these are based on external information like DNS, or pattern inference which requires large amounts of known IP addresses. In this paper, DeHCP, a novel approach based on delimiting IP ranges with closely clustered hosts, is presented and compared to three previously known scanning methods. The method is shown to work in an experimental setting with results comparable to that of the previously suggested methods, and is also shown to have the advantage of not being limited to a specific protocol or probing method. Finally we show that the scan can be executed across multiple VLANs.

  • 13.
    Cardellini, Valeria
    et al.
    Universita degli Studi di Roma Tor Vergata, ITA.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grassi, Vincenzo
    Universita degli Studi di Roma Tor Vergata, ITA.
    Iannucci, Stefano
    Universita degli Studi di Roma Tor Vergata, ITA.
    Lo Presti, F.
    Universita degli Studi di Roma Tor Vergata, ITA.
    Mirandola, Raffaela
    Politecnico di Milano, ITA.
    MOSES: A platform for experimenting with QoS-driven self-adaptation policies for service-oriented systems (2017). In: Lecture Notes in Computer Science, Springer Verlag, 2017, Vol. 9640, p. 409-433. Conference paper (Refereed)
    Abstract [en]

    Architecting software systems according to the service-oriented paradigm and designing runtime self-adaptable systems are two relevant research areas in today's software engineering. In this chapter we present MOSES, a software platform supporting QoS-driven adaptation of service-oriented systems. It has been conceived for service-oriented systems architected as composite services that receive requests generated by different classes of users. MOSES integrates different adaptation mechanisms within a unified framework. In this way it achieves greater flexibility in facing various operating environments and the possibly conflicting QoS requirements of several concurrent users. Besides providing its own self-adaptation functionalities, MOSES lends itself to experimentation with alternative approaches to QoS-driven adaptation of service-oriented systems thanks to its modular architecture. © Springer International Publishing AG 2017.

  • 14.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A study on performance measures for auto-scaling CPU-intensive containerized applications (2019). In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543, Vol. 22, no 3, p. 995-1006 (special issue). Article in journal (Refereed)
    Abstract [en]

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure to activate auto-scaling actions aiming at guaranteeing QoS constraints. First, the correlation between absolute and relative usage measures, and how a resource allocation decision can be influenced by them, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share that each container has of the resources used. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses the absolute usage measures to scale in/out containers, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state-of-the-art is presented.

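    The distinction between relative and absolute usage measures can be made concrete with a small sketch; the quota, host size, and the HPA-style scaling rule below are illustrative assumptions, not the paper's experimental setup.

    # The same container load can look saturated relative to its own quota
    # while consuming little of the host's capacity.
    import math

    def scale_decision(cpu_used_cores, cpu_quota_cores, host_cores,
                       target=0.7, replicas=2):
        relative = cpu_used_cores / cpu_quota_cores  # share of the container quota
        absolute = cpu_used_cores / host_cores       # share of the host capacity
        # Kubernetes-HPA-style rule driven by the absolute measure:
        desired = max(1, math.ceil(replicas * absolute / target))
        return relative, absolute, desired

    # Each of 2 replicas uses 1.4 of its 1.5-core quota on an 8-core host:
    rel, absu, desired = scale_decision(1.4, 1.5, 8.0)
    print(f"relative={rel:.2f} absolute={absu:.2f} -> desired replicas={desired}")
    # relative ~0.93 (looks saturated) vs absolute ~0.18 (host has headroom):
    # the two measures can drive opposite scaling decisions.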
  • 15.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Container Orchestration: A Survey (2019). In: Systems Modeling: Methodologies and Tools / [ed] Antonio Puliafito, Kishor S. Trivedi, Springer, 2019, p. 221-235. Chapter in book (Refereed)
    Abstract [en]

    Container technologies are changing the way cloud platforms and distributed applications are architected and managed. Containers are used to run enterprise, scientific and big data applications, to architect IoT and edge/fog computing systems, and by cloud providers to internally manage their infrastructure and services. However, we are far away from the maturity stage and there are still many research challenges to be solved. One of them is container orchestration that makes it possible to define how to select, deploy, monitor, and dynamically control the configuration of multi-container packaged applications in the cloud. This paper surveys the state-of-the-art solutions and discusses research challenges in autonomic orchestration of containers. A reference architecture of an autonomic container orchestrator is also proposed. © 2019, Springer International Publishing AG, part of Springer Nature.

  • 16.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cardellini, Valeria
    University of Rome, ITA.
    Interino, Gianluca
    University of Rome, ITA.
    Palmirani, Monica
    University of Bologna, ITA.
    Research challenges in legal-rule and QoS-aware cloud service brokerage (2018). In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 78, Part 1, p. 211-223. Article in journal (Refereed)
    Abstract [en]

    The ICT industry, and specifically critical sectors such as healthcare, transportation, energy, and government, require as mandatory the compliance of ICT systems and services with legislation and regulation, as well as with standards. In the era of cloud computing, this compliance management issue is exacerbated by the distributed nature of the system and by the limited control that customers have on the services. Today, the cloud industry is aware of this problem (as evidenced by the compliance programs of many cloud service providers), and the research community is addressing the many facets of the legal-rule compliance checking and quality assurance problem. Cloud service brokerage plays an important role in legislation compliance and QoS management of cloud services. In this paper we discuss our experience in designing a legal-rule and QoS-aware cloud service broker, and we explore related research issues. Specifically, we provide three main contributions to the literature: first, we describe the detailed design architecture of the legal-rule and QoS-aware broker. Second, we discuss our design choices, which rely on the state-of-the-art solutions available in the literature. We cover four main research areas: cloud broker service deployment, seamless cloud service migration, cloud service monitoring, and legal rule compliance checking. Finally, from the literature review in these research areas, we identify and discuss research challenges.

  • 17.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Gualandi, Gabriele
    Sapienza University of Rome, ITA.
    ASiMOV: A self-protecting control application for the smart factory (2021). In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 115, p. 213-235. Article in journal (Refereed)
    Abstract [en]

    The evolution of manufacturing systems into the smart factory brings advantages but also increased cyber-risks. This paper investigates the problem of intrusion detection and autonomous response to cyber-attacks targeting the control logic of industrial control applications for the smart factory. Specifically, we propose ASiMOV (Asynchronous Modular Verification), a self-protecting architecture for cyber-physical systems realizing a verifiable control application. ASiMOV is inspired by modular redundancy and leverages virtualization technologies to respond to and prevent cyber-attacks on the control logic. Using simulation experiments, we evaluate: the effects of an attack on an industrial control application enhanced by ASiMOV; the delay introduced by ASiMOV within a control loop; and the cyber-attack detection delay. Results show that, in the simulated scenario, the controller can work with a sampling rate of up to 200 Hertz. Any tampering with the control logic is detected without false positives/negatives in a time equal to the latency between the proposed control application and the proposed IDS (e.g., tens to hundreds of milliseconds). © 2020

  • 18.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Iannucci, Stefano
    Mississippi State University.
    The state-of-the-art in container technologies: Application, orchestration and security (2020). In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 32, no 17, article id e5668. Article in journal (Refereed)
    Abstract [en]

    Containerization is a lightweight virtualization technology enabling the deployment and execution of distributed applications on cloud, edge/fog, and Internet-of-Things platforms. Container technologies are evolving at the speed of light, and there are many open research challenges. In this paper, an extensive literature review is presented that identifies the challenges related to the adoption of container technologies in High Performance Computing, Big Data analytics, and geo-distributed (Edge, Fog, Internet-of-Things) applications. From our study, it emerges that performance, orchestration, and cyber-security are the main issues. For each challenge, the state-of-the-art solutions are then analyzed. Performance is related to the assessment of the performance footprint of containers and comparison with the footprint of virtual machines and bare metal deployments, the monitoring, the performance prediction, the I/O throughput improvement. Orchestration is related to the selection, the deployment, and the dynamic control of the configuration of multi-container packaged applications on distributed platforms. The focus of this work is on run-time adaptation. Cyber-security is about container isolation, confidentiality of containerized data, and network security. From the analysis of 97 papers, it emerged that the state-of-the-art is more mature in the area of performance evaluation and run-time adaptation than in security solutions. However, the main unsolved challenges are I/O throughput optimization, performance prediction, multilayer monitoring, isolation, and data confidentiality (at rest and in transit). © 2020 John Wiley & Sons, Ltd.

  • 19.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Energy-Aware Adaptation in Managed Cassandra Datacenters (2016). In: Proceedings - 2016 International Conference on Cloud and Autonomic Computing, ICCAC / [ed] Gupta I., Diao Y., IEEE, 2016, p. 60-71. Conference paper (Refereed)
    Abstract [en]

    Today, Apache Cassandra, a highly scalable and available NoSQL datastore, is largely used by enterprises of every size and for application areas that range from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine-tuning, and operation of Cassandra datacenters. As for all complex services, human-assisted management of a multi-tenant Cassandra datacenter is unrealistic; rather, there is a growing demand for autonomic management solutions. In this paper, we present an optimal energy-aware adaptation model for managed Cassandra datacenters that modifies the system configuration by orchestrating three different actions: horizontal scaling, vertical scaling, and energy-aware placement. The model is built from a real case based on real application data from Ericsson AB. We compare the performance of the optimal adaptation with two heuristics that avoid system perturbations due to re-configuration actions triggered by the subscription of new tenants and/or changes in the SLA. One heuristic is local optimisation; the second is a best-fit-decreasing algorithm, selected as a reference point because it is representative of a wide range of research and practical solutions. The main finding is that the heuristics' performance depends on the scenario and workload, and neither dominates in all cases. Besides, in high-load scenarios, the suboptimal system configuration obtained with a heuristic adaptation policy introduces a penalty in electric energy consumption in the range [+25%, +50%] compared with the energy consumed by an optimal system configuration.

  • 20.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Optimal adaptation for Apache Cassandra (2016). In: SoSeMC workshop at 13th IEEE International Conference on Autonomic Computing / [ed] IEEE, IEEE Computer Society, 2016. Conference paper (Refereed)
  • 21.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Energy-Aware Adaptation Model for Big Data Platforms (2016). In: 2016 IEEE International Conference on Autonomic Computing (ICAC) / [ed] IEEE, IEEE, 2016, p. 349-350. Conference paper (Refereed)
    Abstract [en]

    Platforms for big data include mechanisms and tools to model, organize, store, and access big data (e.g., Apache Cassandra, HBase, Amazon SimpleDB, Dynamo, Google BigTable). Resource management for those platforms is a complex task and must also account for multi-tenancy and infrastructure scalability. Human-assisted control of Big Data platforms is unrealistic, and there is a growing demand for autonomic solutions. In this paper we propose a QoS- and energy-aware adaptation model designed to cope with the real case of a Cassandra-as-a-Service provider.

  • 22.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Magliarisi, Danilo
    Sapienza University of Rome, Italy.
    Decentralized Task Scheduling in Satellite Edge Computing (2024). In: 2024 9th International Conference on Fog and Mobile Edge Computing, FMEC 2024 / [ed] Quwaider M., Alawadi S., Jararweh Y., Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 154-161. Conference paper (Refereed)
    Abstract [en]

    Satellite Edge Computing has recently been introduced to deploy innovative computational services in space, using Low Earth Orbit (LEO) satellite constellations as a distributed computational platform. Running a distributed computing platform in space introduces new challenges to traditional problems like computation offloading, task scheduling, mobility management, fault detection, and recovery. This research focuses on the problem of task scheduling, proposing a system model that accounts for the dynamics of the Satellite Edge Computing environment and a formulation of the scheduling problem as an optimization problem that minimizes the average task response time under constraints on available resources and task completion deadlines. Then, we propose a decentralized algorithm that estimates the task response time and computes a scheduling solution in a fixed time, which depends only on the number of Inter Satellite Links a satellite has (typically four). Finally, we estimate and compare the overhead of the decentralized versus the centralized solutions, showing the advantages of the proposed approach. Simulation experiments allow us to compare the performance of the decentralized approach with the performance of baseline decentralized and centralized solutions. Results show that, in all scenarios considered, the proposed decentralized algorithm performs better than the baseline centralized and decentralized solutions and is more scalable and highly available. © 2024 IEEE.
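
    A simplified sketch of the decentralized rule suggested by the abstract, with invented node parameters: each satellite estimates response times for itself and its (typically four) ISL neighbors and assigns the task to the feasible node with the smallest estimate, rejecting tasks whose deadline cannot be met.

    # Hypothetical decentralized scheduling step on one satellite.
    def response_time(task_cycles, node):
        """Queueing delay + execution time, plus a one-hop ISL transfer
        delay when the task leaves the local satellite."""
        return node["queue_s"] + task_cycles / node["cpu_hz"] + node["isl_delay_s"]

    def schedule(task_cycles, deadline_s, local, isl_neighbors):
        candidates = [local] + isl_neighbors  # fixed-size set: self + ISL links
        feasible = [n for n in candidates
                    if response_time(task_cycles, n) <= deadline_s]
        if not feasible:
            return None                       # reject: deadline cannot be met
        return min(feasible, key=lambda n: response_time(task_cycles, n))

    local = {"name": "sat-0", "cpu_hz": 1e9, "queue_s": 0.8, "isl_delay_s": 0.0}
    nbrs = [{"name": f"sat-{i}", "cpu_hz": 1e9, "queue_s": 0.1 * i,
             "isl_delay_s": 0.02} for i in range(1, 5)]
    best = schedule(task_cycles=2e8, deadline_s=1.0, local=local, isl_neighbors=nbrs)
    print(best["name"] if best else "rejected")  # sat-1: shortest queue + one hop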

  • 23.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    University of Rome, ITA.
    Measuring Docker Performance: What a Mess!!! (2017). In: ICPE 2017 - Companion of the 2017 ACM/SPEC International Conference on Performance Engineering, ACM, 2017, p. 11-16. Conference paper (Refereed)
    Abstract [en]

    Today, a new technology is changing the way platforms for the internet of services are designed and managed. This technology is the container (e.g., Docker and LXC). The internet-of-services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of run-time resource management, for example autoscaling, optimal deployment, and monitoring. Specifically, monitoring of container-based systems is the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.

  • 24.
    García Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Energy-Aware Very Fast Decision Tree (2021). In: International Journal of Data Science and Analytics, ISSN 2364-415X, Vol. 11, no 2, p. 105-126. Article in journal (Refereed)
    Abstract [en]

    Recently, machine learning researchers have been designing algorithms that can run on embedded and mobile devices, which introduces additional constraints compared to traditional algorithm design approaches. One of these constraints is energy consumption, which directly translates to battery capacity for these devices. Streaming algorithms, such as the Very Fast Decision Tree (VFDT), are designed to run on such devices due to their high velocity and low memory requirements. However, they have not been designed with an energy efficiency focus. This paper addresses this challenge by presenting the nmin adaptation method, which reduces the energy consumption of the VFDT algorithm with only minor effects on accuracy. nmin adaptation allows the algorithm to grow faster in those branches where there is more confidence to create a split, and delays the split on the less confident branches. This removes unnecessary computations related to checking for splits but maintains similar levels of accuracy. We have conducted extensive experiments on 29 public datasets, showing that the VFDT with nmin adaptation consumes up to 31% less energy than the original VFDT, and up to 96% less energy than the CVFDT (VFDT adapted for concept drift scenarios), trading off up to 1.7 percent of accuracy.

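    The mechanism can be illustrated with the Hoeffding bound underlying VFDT split decisions: the sketch below (with invented gain values) computes the smallest number of examples at which the current gain gap would justify a split, which is the quantity an nmin-style adaptation waits for instead of re-checking the leaf on a fixed schedule.

    # Hoeffding-bound arithmetic behind an nmin-style adaptation.
    import math

    def hoeffding_bound(value_range, delta, n):
        """epsilon such that the observed mean is within epsilon of the
        true mean with probability 1 - delta after n observations."""
        return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

    def adapted_nmin(g_best, g_second, value_range, delta, tie_threshold=0.05):
        """Smallest n at which the current gain gap justifies a split
        (or a tie-break); checking the leaf before then wastes energy."""
        gap = max(g_best - g_second, tie_threshold)
        return math.ceil(value_range ** 2 * math.log(1.0 / delta) / (2.0 * gap ** 2))

    # Clear winner: the split is confirmed quickly, so keep nmin small.
    print(adapted_nmin(g_best=0.30, g_second=0.10, value_range=1.0, delta=1e-7))
    # Near tie: defer the next split attempt much longer.
    print(adapted_nmin(g_best=0.15, g_second=0.14, value_range=1.0, delta=1e-7))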
  • 25.
    García Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Hoeffding Trees with nmin adaptation (2018). In: The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2018), IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    Machine learning software accounts for a significant amount of the energy consumed in data centers. These algorithms are usually optimized towards predictive performance, i.e. accuracy, and scalability. This is the case for data stream mining algorithms. Although these algorithms are adaptive to the incoming data, they have fixed parameters from the beginning of the execution. We have observed that having fixed parameters leads to unnecessary computations, thus making the algorithm energy inefficient. In this paper we present the nmin adaptation method for Hoeffding trees. This method adapts the value of the nmin parameter, which significantly affects the energy consumption of the algorithm. The method reduces unnecessary computations and memory accesses, thus reducing the energy, while the accuracy is only marginally affected. We experimentally compared VFDT (Very Fast Decision Tree, the first Hoeffding tree algorithm) and CVFDT (Concept-adapting VFDT) with VFDT-nmin (VFDT with nmin adaptation). The results show that VFDT-nmin consumes up to 27% less energy than the standard VFDT, and up to 92% less energy than CVFDT, trading off a few percent of accuracy in a few datasets.

  • 26.
    García Martín, Eva
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    How to Measure Energy Consumption in Machine Learning Algorithms (2019). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): ECMLPKDD 2018: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases Workshops, Springer, Cham, 2019, Vol. 11329, p. 243-255. Conference paper (Refereed)
    Abstract [en]

    Machine learning algorithms are responsible for a significant amount of computations, and these computations are increasing with advancements in different machine learning fields. For example, fields such as deep learning require algorithms to run for weeks, consuming vast amounts of energy. While there is a trend towards optimizing machine learning algorithms for performance and energy consumption, there is still little knowledge on how to estimate an algorithm's energy consumption. Currently, a straightforward cross-platform approach to estimate energy consumption for different types of algorithms does not exist. For that reason, well-known researchers in computer architecture have published extensive works on approaches to estimate energy consumption. This study presents a survey of methods to estimate energy consumption and maps them to specific machine learning scenarios. Finally, we illustrate our mapping suggestions with a case study, where we measure energy consumption in a big data stream mining scenario. Our ultimate goal is to bridge the current gap that exists in estimating energy consumption in machine learning scenarios.

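    As one concrete instance of the hardware-counter methods such surveys map to ML scenarios, the sketch below reads the Linux Intel RAPL energy counter around a workload. The sysfs path varies per machine, reading it may require elevated privileges, and counter wrap-around is ignored, so this is an assumption-laden illustration rather than the paper's method.

    # Energy measurement via the Intel RAPL sysfs interface on Linux.
    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    def measure(workload):
        """Return (energy in joules, wall time in seconds) for workload()."""
        e0, t0 = read_uj(), time.time()
        workload()
        e1, t1 = read_uj(), time.time()
        return (e1 - e0) / 1e6, t1 - t0

    joules, seconds = measure(lambda: sum(i * i for i in range(10 ** 7)))
    print(f"{joules:.1f} J over {seconds:.1f} s")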
  • 27.
    Nardelli, Matteo
    et al.
    Università degli Studi di Roma Tor Vergata, ITA.
    Cardellini, Valeria
    Università degli Studi di Roma Tor Vergata, ITA.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Multi-Level Elastic Deployment of Containerized Applications in Geo-Distributed Environments (2018). In: Proceedings - 2018 IEEE 6th International Conference on Future Internet of Things and Cloud, FiCloud 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    Containers are increasingly adopted because they simplify the deployment and management of applications. Moreover, the ever-increasing presence of IoT devices and Fog computing resources calls for new approaches to decentralizing application execution, so as to improve application performance. Although several solutions for orchestrating containers exist, most of them do not efficiently exploit the characteristics of the emerging computing environment. In this paper, we propose Adaptive Container Deployment (ACD), a general model of the deployment and adaptation of containerized applications, expressed as an Integer Linear Programming problem. Besides acquiring and releasing geo-distributed computing resources, ACD can optimize multiple run-time deployment goals by exploiting horizontal and vertical elasticity of containers. We show the flexibility of the ACD model and, using it as a benchmark, we evaluate the behavior of several greedy heuristics for determining the container deployment. © 2018 IEEE.
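
    As an illustration of the greedy heuristics evaluated against the ILP model, here is one hypothetical policy (all node and container figures are invented): place the largest containers first, each on the feasible node with the lowest network delay.

    # Greedy container placement: biggest-first, lowest-delay feasible node.
    def greedy_deploy(containers, nodes):
        placement = {}
        for c in sorted(containers, key=lambda c: -c["cpu"]):
            feasible = [n for n in nodes
                        if n["free_cpu"] >= c["cpu"] and n["free_mem"] >= c["mem"]]
            if not feasible:
                return None                  # no node can host this container
            best = min(feasible, key=lambda n: n["delay_ms"])
            best["free_cpu"] -= c["cpu"]
            best["free_mem"] -= c["mem"]
            placement[c["name"]] = best["name"]
        return placement

    nodes = [
        {"name": "cloud-1", "free_cpu": 16.0, "free_mem": 64.0, "delay_ms": 80},
        {"name": "fog-1",   "free_cpu": 4.0,  "free_mem": 8.0,  "delay_ms": 10},
    ]
    containers = [{"name": "ingest", "cpu": 2.0, "mem": 4.0},
                  {"name": "analytics", "cpu": 8.0, "mem": 32.0}]
    print(greedy_deploy(containers, nodes))
    # analytics -> cloud-1 (fog node too small); ingest -> fog-1 (lower delay)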

  • 28.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Comparison between Horizontal Scaling of Hypervisor and Container Based Virtualization using Cassandra NoSQL Database (2018). In: Proceedings of the 3rd International Conference on Virtualization Application and Technology, 2018, p. 6. Conference paper (Refereed)
    Abstract [en]

    Cloud computing promises customers the on-demand ability to scale in the face of workload variations. There are different ways to accomplish scaling: vertical scaling and horizontal scaling. Vertical scaling refers to buying more power (CPU, RAM), i.e., a more expensive and robust server, which is less challenging to implement but exponentially expensive. Horizontal scaling refers to adding more servers with fewer processors and less RAM, which is usually cheaper overall and can scale very well. The majority of cloud providers prefer the horizontal scaling approach, and for them it is very important to know the advantages and disadvantages of both technologies from the perspective of application performance at scale. In this paper, we compare performance differences caused by scaling of the different virtualization technologies in terms of CPU utilization, latency, and the number of transactions per second. The workload is Apache Cassandra, a leading NoSQL distributed database for Big Data platforms. Our results show that running multiple instances of the Cassandra database concurrently affected the performance of read and write operations differently: for both VMware and Docker, the maximum number of read operations was reduced when we ran several instances concurrently, whereas the maximum number of write operations increased when we ran instances concurrently.

  • 29.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Evaluation of Container and Virtual Machine Running Cassandra Workload (2017). In: Proceedings of 2017 3rd International Conference of Cloud Computing Technologies and Applications (CloudTech) / [ed] Essaaidi, M., Zbakh, M., 2017, p. 24-31. Conference paper (Refereed)
    Abstract [en]

    Today, scalable and high-available NoSQL distributed databases are largely used as Big Data platforms. Such distributed databases typically run on a virtualized infrastructure that could be implemented using hypervisor-based virtualization or container-based virtualization. Hypervisor-based virtualization is a mature technology but imposes overhead on CPU, memory, networking, and disk. Recently, by sharing operating system resources and simplifying the deployment of applications, container-based virtualization has been getting more popular. Container-based virtualization is lightweight in resource consumption while also providing isolation; however, its disadvantages are security issues and I/O performance. As a result, these two technologies are competing to provide virtual instances for running big data platforms, and a key issue becomes the assessment of the performance of those virtualization technologies while running distributed databases. This paper presents an extensive performance comparison between VMware and Docker containers, while running Apache Cassandra as the workload. Apache Cassandra is a leading NoSQL distributed database when it comes to Big Data platforms. As a baseline for comparison we used Cassandra's performance when running on a physical infrastructure. Our study shows that Docker had lower overhead compared to VMware when running Cassandra; in fact, Cassandra's performance on the Dockerized infrastructure was as good as on the non-virtualized one.

  • 30.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Performance evaluation of containers and virtual machines when running Cassandra workload concurrently (2020). In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 32, no 17, article id e5693. Article in journal (Refereed)
    Abstract [en]

    NoSQL distributed databases are often used as Big Data platforms. To provide efficient resource sharing and cost effectiveness, such distributed databases typically run concurrently on a virtualized infrastructure that could be implemented using hypervisor-based virtualization or container-based virtualization. Hypervisor-based virtualization is a mature technology but imposes overhead on CPU, networking, and disk. Recently, by sharing the operating system resources and simplifying the deployment of applications, container-based virtualization is getting more popular. This article presents a performance comparison between multiple instances of VMware VMs and Docker containers running concurrently. Our workload models a real-world Big Data Apache Cassandra application from Ericsson. As a baseline, we evaluated the performance of Cassandra when running on the nonvirtualized physical infrastructure. Our study shows that Docker has lower overhead compared with VMware; the performance on the container-based infrastructure was as good as on the nonvirtualized. Our performance evaluations also show that running multiple instances of a Cassandra database concurrently affected the performance of read and write operations differently; for both VMware and Docker, the maximum number of read operations was reduced when we ran several instances concurrently, whereas the maximum number of write operations increased when we ran instances concurrently.

  • 31.
    Sundstedt, Veronica
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Abghari, Shahrooz
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Hu, Yan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Garro, Valeria
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    HINTS: Human-Centered Intelligent Realities (2023). In: 35th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2023 / [ed] Håkan Grahn, Anton Borg and Martin Boldt, Linköping University Electronic Press, 2023, p. 9-17. Conference paper (Refereed)
    Abstract [en]

    During the last decade, we have witnessed a rapid development of extended reality (XR) technologies such as augmented reality (AR) and virtual reality (VR). Further, there have been tremendous advancements in artificial intelligence (AI) and machine learning (ML). These two trends will have a significant impact on future digital societies. The vision of an immersive, ubiquitous, and intelligent virtual space opens up new opportunities for creating an enhanced digital world in which the users are at the center of the development process, so-called intelligent realities (IRs). The "Human-Centered Intelligent Realities" (HINTS) profile project will develop concepts, principles, methods, algorithms, and tools for human-centered IRs, thus leading the way for future immersive, user-aware, and intelligent interactive digital environments. The HINTS project is centered around an ecosystem combining XR and communication paradigms to form novel intelligent digital systems. HINTS will provide users with new ways to understand, collaborate with, and control digital systems. These novel ways will be based on visual and data-driven platforms which enable tangible, immersive cognitive interactions within real and virtual realities, thus exploiting digital systems in a more efficient, effective, engaging, and resource-aware manner. Moreover, the systems will be equipped with cognitive features based on AI and ML, which allow users to engage with digital realities and data in novel forms. This paper describes the HINTS profile project and its initial results. © 2023, Copyright held by the authors.
