  • 1.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Designing a Secure IoT System Architecture from a Virtual Premise for a Collaborative AI Lab (2019). Conference paper (Refereed)
    Abstract [en]

    IoT systems are increasingly composed of flexible, programmable, virtualised, and arbitrarily chained IoT elements and services using portable code. Moreover, they might be sliced, i.e. allowing multiple logical IoT systems (network + application) to run on top of a shared physical network and compute infrastructure. However, designing and implementing security mechanisms in particular for such IoT systems is challenging since a) promising technologies are still maturing, and b) the relationships among the many requirements, technologies and components are difficult to model a priori.

    The aim of the paper is to define design cues for the security architecture and mechanisms of future, virtualised, arbitrarily chained, and eventually sliced IoT systems. Our focus is on the authorisation and authentication of users, hosts, and code integrity in these virtualised systems. The design cues are derived from the design and implementation of a secure virtual environment for distributed and collaborative AI system engineering using so-called AI pipelines. The pipelines apply chained virtual elements and services and facilitate the slicing of the system. The virtual environment is referred to, for short, as the virtual premise (VP). The use case of the VP for AI design provides insight into the complex interactions in the architecture, leading us to believe that the VP concept can be generalised to the IoT systems mentioned above. In addition, the use case allows us to derive, implement, and test solutions. This paper describes the flexible architecture of the VP and the design and implementation of access and execution control in virtual and containerised environments.

    Download full text (pdf)
    fulltext
  • 2.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Privacy and DRM Requirements for Collaborative Development of AI Application (2019). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2019, article id 3233268. Conference paper (Refereed)
    Abstract [en]

    The use of data is essential for the capabilities of data-driven Artificial Intelligence (AI), Deep Learning and Big Data analysis techniques. This data usage, however, intrinsically raises concerns about data privacy. In addition, supporting collaborative development of AI applications across organisations has become a major need in AI system design. Digital Rights Management (DRM) is required to protect intellectual property in such collaboration. As a consequence of DRM, privacy threats and privacy-enforcing mechanisms will interact with each other.

    This paper describes the privacy and DRM requirements in collaborative AI system design using AI pipelines. It describes the relationships between DRM and privacy and outlines the threats against these non-functional features. Finally, the paper provides a first security architecture to protect against the threats to DRM and privacy in collaborative AI design using AI pipelines.

    Download full text (pdf)
    fulltext
  • 3.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Towards Privacy Requirements for Collaborative Development of AI Applications (2018). In: 14th Swedish National Computer Networking Workshop (SNCNW), 2018. Conference paper (Refereed)
    Abstract [en]

    The use of data is essential for the capabilities of data-driven Artificial Intelligence (AI), Deep Learning and Big Data analysis techniques. The use of data, however, intrinsically raises concerns about data privacy, in particular for the individuals who provide the data. Hence, data privacy is considered one of the main non-functional features of the Next Generation Internet. This paper describes the privacy challenges and requirements for collaborative AI application development. We investigate the constraints of using digital rights management for supporting collaboration to address the privacy requirements in the regulation.

    Download full text (pdf)
    fulltext
  • 4.
    Bergenholtz, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moss, Andrew
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Detection of Metamorphic Malware Packers Using Multilayered LSTM Networks (2020). In: Lecture Notes in Computer Science / [ed] Weizhi Meng, Dieter Gollmann, Christian D. Jensen, and Jianying Zhou, Springer Science and Business Media Deutschland GmbH, 2020, Vol. 12282, p. 36-53. Conference paper (Refereed)
    Abstract [en]

    Malware authors do their best to conceal their malicious software to increase its probability of spreading and to slow down analysis. One method used to conceal malware is packing, in which the original malware is completely hidden through compression or encryption, only to be reconstructed at run-time. In addition, packers can be metamorphic, meaning that the output of the packer will never be exactly the same, even if the same file is packed again. As the use of known off-the-shelf malware packers is declining, it is becoming increasingly important to implement methods of detecting packed executables without having any known samples of a given packer. In this study, we evaluate the use of recurrent neural networks as a means to classify whether or not a file is packed by a metamorphic packer. We show that even with quite simple networks, it is possible to correctly distinguish packed executables from non-packed executables with an accuracy of up to 89.36% when trained on a single packer, even for samples packed by previously unseen packers. Training the network on more packers raises this number to up to 99.69%.

    Download full text (pdf)
    fulltext
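
    As an illustration of the classification idea described above, the following is a minimal sketch of a multilayered LSTM that labels a raw byte sequence as packed or not packed, written in Python with PyTorch. It is not the architecture from the paper: the byte-embedding input, layer count, hidden size and sigmoid output are assumptions chosen only to show the general shape of such a detector.

    import torch
    import torch.nn as nn

    class PackerDetector(nn.Module):
        """Multilayered LSTM that scores a byte sequence as packed (1) or not (0)."""
        def __init__(self, vocab_size=256, embed_dim=32, hidden_dim=64, num_layers=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, 1)

        def forward(self, byte_seq):                     # byte_seq: (batch, seq_len), values 0..255
            emb = self.embed(byte_seq)
            _, (h_n, _) = self.lstm(emb)                 # h_n: (num_layers, batch, hidden_dim)
            logits = self.classifier(h_n[-1])            # final hidden state of the top layer
            return torch.sigmoid(logits).squeeze(-1)

    # Toy usage with random bytes standing in for executable contents.
    model = PackerDetector()
    sample = torch.randint(0, 256, (8, 512))             # 8 sequences of 512 bytes each
    print(model(sample))                                  # probabilities in (0, 1)

    In practice such a model would be trained with a binary cross-entropy loss on labelled packed and non-packed samples.
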
  • 5.
    Bergenholtz, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Moss, Andrew
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Finding a needle in a haystack: A comparative study of IPv6 scanning methods (2019). In: 2019 International Symposium on Networks, Computers and Communications (ISNCC 2019), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    It has previously been assumed that the size of an IPv6 network would make it impossible to scan the network for vulnerable hosts. Recent work has shown this to be false, and several methods for scanning IPv6 networks have been suggested. However, most of these are based on external information like DNS, or pattern inference which requires large amounts of known IP addresses. In this paper, DeHCP, a novel approach based on delimiting IP ranges with closely clustered hosts, is presented and compared to three previously known scanning methods. The method is shown to work in an experimental setting with results comparable to that of the previously suggested methods, and is also shown to have the advantage of not being limited to a specific protocol or probing method. Finally, we show that the scan can be executed across multiple VLANs.

    Download full text (pdf)
    isncc2019-ipv6
  • 6. Constantinescu, Doru
    et al.
    Erman, David
    Ilie, Dragos
    Popescu, Adrian
    Congestion and Error Control in Overlay Networks (2007). Report (Other academic)
    Abstract [en]

    In recent years, the Internet has experienced unprecedented growth, which, in turn, has led to an increased demand for real-time and multimedia applications that have high Quality-of-Service (QoS) demands. This evolution has created difficult challenges for Internet Service Providers (ISPs): to provide good QoS for their clients, as well as the ability to offer differentiated service subscriptions for those clients who are willing to pay more for value-added services. Furthermore, several types of overlay networks have recently seen tremendous development in the Internet. Overlay networks can be viewed as networks operating at an inter-domain level. The overlay hosts learn of each other and form loosely-coupled peer relationships. The major advantage of overlay networks is their ability to establish subsidiary topologies on top of the underlying network infrastructure, acting as brokers between an application and the required network connectivity. Moreover, new services that cannot be implemented (or are not yet supported) in the existing network infrastructure are much easier to deploy in overlay networks. In this context, multicast overlay services have become a feasible solution for applications and services that need (or benefit from) multicast-based functionality. Nevertheless, multicast overlay networks need to address several issues related to efficient and scalable congestion control schemes to attain widespread deployment and acceptance from both end-users and various service providers. This report aims at presenting an overview and taxonomy of currently proposed solutions that provide congestion control in overlay multicast environments. The report describes several protocols and algorithms that are able to offer a reliable communication paradigm in unicast, multicast as well as multicast overlay environments. Further, several error control techniques and mechanisms operating in these environments are also presented. In addition, this report forms the basis for further research work on reliable and QoS-aware multicast overlay networks. The research work is part of a bigger research project, "Routing in Overlay Networks (ROVER)". The ROVER project was granted in 2006 by the EuroNGI Network of Excellence (NoE) to the Dept. of Telecommunication Systems at Blekinge Institute of Technology (BTH).

    Download full text (pdf)
    FULLTEXT01
  • 7.
    Erman, David
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Ilie, Dragos
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Popescu, Adrian
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    BitTorrent Session Characteristics and Models (2005). Conference paper (Refereed)
    Abstract [en]

    The paper reports on a modeling and evaluation study of session characteristics of BitTorrent traffic. BitTorrent is a second generation Peer-to-Peer (P2P) application recently developed as an alternative to the classical client-server model to reduce the load burden on content servers and networks. Results are reported on measuring, modeling and analysis of application layer traces collected at the Blekinge Institute of Technology (BIT) and a local ISP. To this end, a dedicated measurement infrastructure has been developed at BIT to collect P2P traffic. A dedicated modeling methodology has been put forth as well. New results are reported on session characteristics of BitTorrent, and it is observed that session interarrivals can be accurately modeled by the hyper-exponential distribution while session durations and sizes can be reasonably well modeled by the lognormal distribution.

    Download full text (pdf)
    FULLTEXT01
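
    The distributional findings above translate directly into a synthetic workload generator. The sketch below, using NumPy, draws session inter-arrival times from a two-phase hyper-exponential distribution and session durations from a lognormal distribution; all numerical parameters are invented placeholders, not the values fitted in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def hyperexp_interarrivals(n, probs=(0.7, 0.3), means=(30.0, 600.0)):
        """n inter-arrival times (s) from a 2-phase hyper-exponential:
        with probability probs[i], draw from an exponential with mean means[i]."""
        phase = rng.choice(len(probs), size=n, p=probs)
        return rng.exponential(np.asarray(means)[phase])

    def lognormal_durations(n, mu=6.0, sigma=1.5):
        """n session durations (s) from a lognormal distribution."""
        return rng.lognormal(mean=mu, sigma=sigma, size=n)

    start_times = np.cumsum(hyperexp_interarrivals(1000))
    durations = lognormal_durations(1000)
    print(f"mean inter-arrival: {np.diff(start_times).mean():.1f} s, "
          f"median duration: {np.median(durations):.1f} s")
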
  • 8. Erman, David
    et al.
    Ilie, Dragos
    Popescu, Adrian
    Measuring and Modeling the BitTorrent Content Distribution System (2010). In: Computer Communications, ISSN 0140-3664, E-ISSN 1873-703X, Vol. 33, no Sp. Iss. SI Suppl. 1, p. S22-S29. Article in journal (Refereed)
    Abstract [en]

    The paper reports on a detailed study of the BitTorrent content distribution system. We first present a measurement infrastructure designed to allow detailed, message-level capture and analysis of P2P traffic. An associated modeling methodology is presented as well. These tools have been used to measure and model the BitTorrent protocol, whose session interarrival times are observed to exhibit exponential characteristics. We also observe that session durations and sizes are modeled with a lognormal distribution.

    Download full text (pdf)
    FULLTEXT01
  • 9. Erman, David
    et al.
    Ilie, Dragos
    Popescu, Adrian
    Nilsson, Arne A.
    Measurement and Analysis of BitTorrent Signaling Traffic (2004). Conference paper (Refereed)
    Abstract [en]

    BitTorrent is a second generation Peer-to-Peer application that has been recently developed as an alternative to the classical client-server model to reduce the load burden on content servers and networks. The protocol relies on the use of swarming techniques for distributing content. No search functionality is built into the protocol, and the signaling is geared only towards an efficient dissemination of data. The paper reports on measurement and analysis of BitTorrent traffic collected at the Blekinge Institute of Technology (BIT), Karlskrona, Sweden. We measure and analyze data from local BitTorrent client sessions at BIT. The characteristics of the signaling traffic exchanged among the participating peers in a BitTorrent distribution swarm are investigated. A dedicated approach based on combining instrumentation at the application layer with flow identification and extraction at the transport layer is used for traffic measurements.

    Download full text (pdf)
    FULLTEXT01
  • 10. Forsman, Mattias
    et al.
    Glad, Andreas
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Algorithms for Automated Live Migration of Virtual Machines (2015). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 101, p. 110-126. Article in journal (Refereed)
    Abstract [en]

    We present two strategies to balance the load in a system with multiple virtual machines (VMs) through automated live migration. When the push strategy is used, overloaded hosts try to migrate workload to less loaded nodes. On the other hand, when the pull strategy is employed, the light-loaded hosts take the initiative to offload overloaded nodes. The performance of the proposed strategies was evaluated through simulations. We have discovered that the strategies complement each other, in the sense that each strategy comes out as “best” under different types of workload. For example, the pull strategy is able to quickly re-distribute the load of the system when the load is in the range low-to-medium, while the push strategy is faster when the load is medium-to-high. Our evaluation shows that when adding or removing a large number of virtual machines in the system, the “best” strategy can re-balance the system in 4–15 minutes.

    Download full text (pdf)
    fulltext
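
    A stripped-down sketch of the push strategy described above is given below in Python: hosts above a high-load threshold hand off a virtual machine to the least-loaded host below a low-load threshold. The thresholds, the smallest-VM-first choice and the data model are illustrative assumptions, not the algorithms evaluated in the article.

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        capacity: float                               # normalized CPU capacity
        vms: dict = field(default_factory=dict)       # VM name -> load

        @property
        def load(self):
            return sum(self.vms.values()) / self.capacity

    def push_rebalance(hosts, high=0.8, low=0.5):
        """Push strategy sketch: every host above `high` migrates its smallest
        VM to the currently least-loaded host below `low`."""
        migrations = []
        for src in [h for h in hosts if h.load > high]:
            targets = [h for h in hosts if h is not src and h.load < low]
            if not targets or not src.vms:
                continue
            dst = min(targets, key=lambda h: h.load)
            vm = min(src.vms, key=src.vms.get)        # smallest VM first
            dst.vms[vm] = src.vms.pop(vm)
            migrations.append((vm, src.name, dst.name))
        return migrations

    hosts = [Host("h1", 1.0, {"vm1": 0.5, "vm2": 0.45}), Host("h2", 1.0, {"vm3": 0.2})]
    print(push_rebalance(hosts))                      # [('vm2', 'h1', 'h2')]

    The pull variant would invert the initiative: lightly loaded hosts would look for overloaded ones and request a VM from them.
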
  • 11. Ilie, Dragos
    Gnutella Network Traffic: Measurements and Characteristics (2006). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Wide availability of computing resources at the edge of the network has led to the appearance of new services based on peer-to-peer architectures. In a peer-to-peer network, nodes have the capability to act both as client and server. They self-organize and cooperate with each other to perform operations related to peer discovery, content search and content distribution more efficiently. The main goal of this thesis is to obtain a better understanding of the network traffic generated by Gnutella peers. Gnutella is a well-known, heavily decentralized file-sharing peer-to-peer network. It is based on open protocol specifications for peer signaling, which enable detailed measurements and analysis down to individual messages. File transfers are performed using HTTP. An 11-day-long Gnutella link-layer packet trace collected at BTH is systematically decoded and analyzed. Analysis results include various traffic characteristics and statistical models. The emphasis for the characteristics has been on accuracy and detail, while for the traffic models the emphasis has been on analytical tractability and ease of simulation. To the author's best knowledge, this is the first work on Gnutella that presents statistics down to message level. The results show that incoming requests to open a session follow a Poisson distribution. Incoming messages of mixed types can be described by a compound Poisson distribution. Mixture distribution models for message transfer rates include a heavy-tailed component.

    Download full text (pdf)
    FULLTEXT01
  • 12. Ilie, Dragos
    On Unicast QoS Routing in Overlay Networks (2008). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In the last few years the Internet has witnessed a tremendous growth in the area of multimedia services. For example, YouTube, used for video sharing [1], and Skype, used for Internet telephony [2], enjoy huge popularity, counting their users in the millions. Traditional media services, such as telephony, radio and TV, which once used dedicated networks, are now deployed over the Internet at an accelerating pace. The triple play and quadruple play business models, which consist of combined broadband access, (fixed and mobile) telephony and TV over a common access medium, are evidence for this development. Multimedia services often have strict requirements on quality of service (QoS) metrics such as available bandwidth, packet delay, delay jitter and packet loss rate. Existing QoS architectures (e.g., IntServ and DiffServ) are typically used within the service provider network, but have not seen a wide Internet deployment. Consequently, Internet applications are still forced to rely on the Internet Protocol (IP)’s best-effort service. Furthermore, wide availability of computing resources at the edge of the network has led to the appearance of services implemented in overlay networks. The overlay networks are typically spawned between end-nodes that share resources with each other in a peer-to-peer (P2P) fashion. Since these services do not rely on dedicated resources provided by a third party, they can be deployed with little effort and low cost. On the other hand, they require mechanisms for handling resource fluctuations when nodes join and leave the overlay. This dissertation addresses the problem of unicast QoS routing implemented in overlay networks. More precisely, we are investigating methods for providing a QoS-aware service on top of IP’s best-effort service, with minimal changes to existing Internet infrastructure. A framework named Overlay Routing Protocol (ORP) was developed for this purpose. The framework is used for handling QoS path discovery and path restoration. ORP’s performance was evaluated through a comprehensive simulation study. The study showed that QoS paths can be established and maintained as long as one is willing to accept a protocol overhead of at most 1.5% of the network capacity. We studied the Gnutella P2P network as an example of an overlay network. An 11-day-long Gnutella link-layer packet trace collected at Blekinge Institute of Technology (BTH) was systematically decoded and analyzed. Analysis results include various traffic characteristics and statistical models. The emphasis for the characteristics has been on accuracy and detail, while for the traffic models the emphasis has been on analytical tractability and ease of simulation. To the author’s best knowledge, this is the first work on Gnutella that presents statistics down to message level. The models for Gnutella’s session arrival rate and session duration were further used to generate churn within the ORP simulations. Finally, another important contribution is the evaluation of the GNU Linear Programming Toolkit (GLPK)’s performance in solving linear optimization problems for flow allocation with the simplex method and the interior point method, respectively. Based on the results of the evaluation, the simplex method was selected to be integrated with ORP’s path restoration capability.

    Download full text (pdf)
    FULLTEXT01
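
    The flow-allocation optimization mentioned at the end of the abstract can be illustrated with a toy linear program. The sketch below uses SciPy's linprog (HiGHS solver) rather than GLPK, and the three candidate paths, their costs and residual capacities are made-up numbers; it only shows the kind of problem being solved, not the thesis's actual formulation.

    import numpy as np
    from scipy.optimize import linprog

    # Split a 10 Mbps demand over three candidate overlay paths so that the
    # total cost (e.g. delay-weighted usage) is minimal and no path exceeds
    # its residual capacity.
    cost = np.array([1.0, 1.5, 3.0])            # per-Mbps cost of each path
    capacity = np.array([6.0, 5.0, 8.0])        # residual capacity per path (Mbps)
    demand = 10.0

    result = linprog(
        c=cost,
        A_eq=np.ones((1, 3)), b_eq=[demand],    # allocations must sum to the demand
        bounds=list(zip([0.0] * 3, capacity)),  # each path limited by its capacity
        method="highs",
    )
    print(result.x)                             # e.g. [6. 4. 0.]: cheapest paths are filled first
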
  • 13. Ilie, Dragos
    Optimization Algorithms with Applications to Unicast QoS Routing in Overlay Networks (2007). Report (Other academic)
    Abstract [en]

    The research report is focused on optimization algorithms with application to quality of service (QoS) routing. A brief theoretical background is provided for mathematical tools related to optimization theory. The rest of the report provides a survey of different types of optimization algorithms: several numerical methods, a heuristic and a metaheuristic. In particular, we discuss basic descent methods, gradient-based methods, particle swarm optimization (PSO) and a constrained-path selection algorithm called the Self-Adaptive Multiple Constraints Routing Algorithm (SAMCRA).

    Download full text (pdf)
    FULLTEXT01
  • 14.
    Ilie, Dragos
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Datta, Vishnubhotla Venkata Krishna Sai
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On Designing a Cost-Aware Virtual CDN for the Federated Cloud (2016). In: 2016 International Conference on Communications (COMM 2016), IEEE, 2016, p. 255-260. Conference paper (Refereed)
    Abstract [en]

    We have developed a prototype for a cost-aware, cloud-based content delivery network (CDN) suitable for a federated cloud scenario. The virtual CDN controller spawns and releases virtual caching proxies according to variations in user demand. A cost-based heuristic algorithm is used for selecting data centers where proxies are spawned. The functionality and performance of our virtual CDN prototype were evaluated in the XIFI federated OpenStack cloud. Initial results indicate that the virtual CDN can offer reliable and prompt service. Multimedia providers can use this virtual CDN solution to regulate expenses and have greater freedom in choosing the placement of virtual proxies as well as more flexibility in configuring the hardware resources available to the proxy (e.g., CPU cores, memory and storage).
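
    The cost-based selection of data centers can be illustrated with a simple greedy sketch: proxies are spawned in the cheapest data centers first until the expected demand is covered. The cost figures, per-proxy capacity and per-site proxy limits below are invented, and the prototype's heuristic is not necessarily greedy; this only illustrates the idea of cost-aware placement.

    def select_datacenters(demand, datacenters, capacity_per_proxy):
        """Greedy sketch: spawn caching proxies in the cheapest data centers
        first until aggregate capacity covers the current user demand.
        `datacenters` maps name -> (cost per proxy-hour, max proxies)."""
        placement = []
        remaining = demand
        for name, (cost, max_proxies) in sorted(datacenters.items(), key=lambda kv: kv[1][0]):
            for _ in range(max_proxies):
                if remaining <= 0:
                    return placement
                placement.append(name)
                remaining -= capacity_per_proxy
        return placement                          # demand may remain uncovered if capacity runs out

    sites = {"dc-a": (0.12, 4), "dc-b": (0.08, 2)}
    print(select_datacenters(demand=950, datacenters=sites, capacity_per_proxy=400))
    # ['dc-b', 'dc-b', 'dc-a']: the cheapest site fills up first, overflow goes to dc-a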

  • 15. Ilie, Dragos
    et al.
    Erman, David
    Peer-to-peer Traffic Measurements (2007). Report (Other academic)
    Abstract [en]

    The global Internet has emerged to become an integral part of everyday life. The Internet is now as fundamental a part of the infrastructure as the telephone system or the road network. Peer-to-Peer (P2P) is the logical antithesis of the Client-Server (CS) paradigm that has been the ostensibly predominant paradigm for IP-based networks since their inception. Current research indicates that P2P applications are responsible for a substantial part of the Internet traffic. New P2P services are developed and released at a high pace. The number of users embracing new P2P technology is also increasing fast. It is therefore important to understand the impact of the new P2P services on the existing Internet infrastructure and on legacy applications. This report describes a measurement infrastructure geared towards P2P network traffic collection and analysis, and presents measurement results for two P2P applications: Gnutella and BitTorrent.

    Download full text (pdf)
    FULLTEXT01
  • 16. Ilie, Dragos
    et al.
    Erman, David
    Popescu, Adrian
    Transfer Rate Models for Gnutella Signaling Traffic (2006). Conference paper (Refereed)
    Abstract [en]

    This paper reports on transfer rate models for the Gnutella signaling protocol. New results on message-level and IP-level rates are presented. The models are based on traffic captured at the Blekinge Institute of Technology (BTH) campus in Sweden and offer several levels of granularity: message type, application layer and network layer. The aim is to obtain parsimonious models suitable for analysis and simulation of P2P workload.

    Download full text (pdf)
    FULLTEXT01
  • 17. Ilie, Dragos
    et al.
    Erman, David
    Popescu, Adrian
    Nilsson, Arne A.
    Measurement and Analysis of Gnutella Signaling Traffic (2004). Conference paper (Refereed)
    Abstract [en]

    The paper reports on in-depth measurements and analysis of Gnutella signaling traffic collected at the Blekinge Institute of Technology (BIT), Karlskrona, Sweden. The measurements are based on a week-long packet trace collected with the help of the well-known tcpdump application. Furthermore, a novel approach has been used to measure and analyze Gnutella signaling traffic. Associated with this, a dedicated tcptrace module has been developed and used to decode the packet trace, down to individual Gnutella messages. The measurement infrastructure consists of a Gnutella node running in ultrapeer mode and protocol decoding software. Detailed traffic characteristics have been collected and analyzed, such as session durations and interarrival times, and Gnutella message sizes and durations. Preliminary results show a high degree of variability of the Gnutella signaling traffic, which is mostly created by the QUERY messages. Furthermore, the Gnutella session interarrival times are observed to resemble the exponential distribution.

    Download full text (pdf)
    FULLTEXT01
  • 18. Ilie, Dragos
    et al.
    Erman, David
    Popescu, Adrian
    Nilsson, Arne A.
    Traffic Measurements of P2P Systems (2004). Conference paper (Refereed)
    Abstract [en]

    The paper reports on a measurement infrastructure developed at the Blekinge Institute of Technology (BIT) with the purpose of performing traffic measurements and analysis on Peer-to-Peer (P2P) traffic. The measurement methodology is based on using application logging as well as link-layer packet capture. This offers the possibility to measure application layer information with link-layer accuracy. Details are reported on this methodology, together with a description of the BIT measurement infrastructure. The paper also reports on traffic measurements done on the BitTorrent and Gnutella protocols from an end-client perspective, together with some measurement results of salient protocol characteristics. Preliminary results show a high degree of variability of the BitTorrent and Gnutella traffic, where in the case of Gnutella a large contribution is given by the signaling traffic.

    Download full text (pdf)
    FULLTEXT01
  • 19.
    Ilie, Dragos
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Westerhagen, Alexander
    Saab AB, Surveillance.
    Topology Control for Directed DataLinks between Airborne Platforms: Directed Air Data Link: WP3 report (2023). Report (Other academic)
    Abstract [en]

    Contemporary airborne radio networks are usually implemented using omnidirectional antennas. Unfortunately, such networks suffer from disadvantages such as easy detection by hostile aircraft and potential information leakage. In addition, tactical links used for military communication rely on NATO-specific standards such as Link 16, which are becoming outdated. 

    To this end, we are investigating the feasibility of replacing omnidirectional communication with directed communication, which will address the disadvantages mentioned above. In addition, we define a communication architecture based on the conventional Ethernet and TCP/IP protocol stack, which will ease management and interoperability with existing Internet-based systems.

    In this report, we briefly review the TCP/IP stack and the services offered at each layer of the stack. Furthermore, we review existing literature involving mobile ad hoc network (MANET) protocols used for airborne networks along with various performance studies in the same area. Finally, we propose a novel MANET routing protocol based on directional antennas and situation awareness data that utilizes adaptive multihop routing to avoid sending information in directions where hostile nodes are present.

    Our protocol is implemented in the OMNEST simulator and evaluated using two realistic flight scenarios involving 8 and 24 aircraft, respectively. The results show that our protocol has significantly fewer leaked packets than comparative protocols, but at a slightly higher cost in terms of longer packet lifetime. 

    Download full text (pdf)
    fulltext
  • 20.
    Ilie, Dragos
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Westerhagen, Alexander
    Saab AB, Surveillance, Karlskrona, Sweden..
    Granbom, Bo
    Saab AB, Aeronaut, Linkoping, Sweden..
    Höök, Anders
    Saab AB, Surveillance, Gothenburg, Sweden..
    Avoiding Detection by Hostile Nodes in Airborne Tactical Networks (2023). In: Future Internet, E-ISSN 1999-5903, Vol. 15, no 6, article id 204. Article in journal (Refereed)
    Abstract [en]

    Contemporary airborne radio networks are usually implemented using omnidirectional antennas. Unfortunately, such networks suffer from disadvantages such as easy detection by hostile aircraft and potential information leakage. In this paper, we present a novel mobile ad hoc network (MANET) routing protocol based on directional antennas and situation awareness data that utilizes adaptive multihop routing to avoid sending information in directions where hostile nodes are present. Our protocol is implemented in the OMNEST simulator and evaluated using two realistic flight scenarios involving 8 and 24 aircraft, respectively. The results show that our protocol has significantly fewer leaked packets than comparative protocols, but at a slightly higher cost in terms of longer packet lifetime.

    Download full text (pdf)
    fulltext
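
    The core idea of the protocol, avoiding transmissions towards hostile nodes, can be sketched as a simple next-hop filter driven by situation awareness data. The geometry below (2D positions, a fixed beam width, bearing comparison) is an assumption made for illustration and is far simpler than the routing logic evaluated in OMNEST.

    import math

    def bearing(src, dst):
        """Bearing in degrees from src to dst, positions given as (x, y)."""
        return math.degrees(math.atan2(dst[1] - src[1], dst[0] - src[0])) % 360

    def safe_next_hops(own_pos, neighbours, hostiles, beam_width=30.0):
        """Discard neighbours whose direction from us lies within half a beam
        width of the bearing towards any known hostile node, since a directed
        transmission towards them could be detected."""
        safe = []
        for name, pos in neighbours.items():
            b = bearing(own_pos, pos)
            exposed = any(
                min(abs(b - bearing(own_pos, h)), 360 - abs(b - bearing(own_pos, h))) < beam_width / 2
                for h in hostiles
            )
            if not exposed:
                safe.append(name)
        return safe

    own = (0.0, 0.0)
    neighbours = {"blue-1": (10.0, 1.0), "blue-2": (-5.0, 8.0)}
    hostiles = [(20.0, 0.0)]                            # hostile roughly due east
    print(safe_next_hops(own, neighbours, hostiles))    # ['blue-2']
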
  • 21. Ilie, Dragos
    et al.
    Popescu, Adrian
    Statistical Models for Gnutella Signaling Traffic (2007). In: Computer Networks, ISSN 1389-1286, E-ISSN 1872-7069, Vol. 51, no 17, p. 4816-4835. Article in journal (Refereed)
    Abstract [en]

    The paper is focused on signaling traffic between Gnutella peers that implement the latest Gnutella protocol specifications (v0.6). In particular, we provide analytically tractable statistical models at session level, message level and IP datagram level for traffic crossing a Gnutella ultrapeer at Blekinge Institute of Technology (BTH) in Karlskrona, Sweden. To the best of our knowledge this is the first work that provides Gnutella v0.6 statistical models at this level of detail. These models can be implemented straightforwardly in network simulators such as ns2 and OMNeT++. The results show that incoming requests to open a session follow a Poisson distribution. Incoming Gnutella messages across all established sessions can be described by a compound Poisson distribution. Mixture distribution models for message transfer rates include a heavy-tailed component.

    Download full text (pdf)
    FULLTEXT01
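
    The session-level and message-level findings above can be mimicked with a small simulation: session openings arrive as a Poisson process and each second's message count is a compound Poisson sum. The rates below are arbitrary example values, not the fitted parameters from the article, and the batch-size distribution is simplified to a Poisson for brevity.

    import numpy as np

    rng = np.random.default_rng(7)

    def message_counts(session_rate, mean_msgs_per_session, seconds):
        """Per-second message counts when sessions open as a Poisson process
        (session_rate per second) and each new session contributes a
        Poisson-distributed batch of messages: a compound Poisson process."""
        counts = np.zeros(seconds, dtype=int)
        for t in range(seconds):
            new_sessions = rng.poisson(session_rate)
            counts[t] = rng.poisson(mean_msgs_per_session, size=new_sessions).sum()
        return counts

    counts = message_counts(session_rate=2.0, mean_msgs_per_session=15, seconds=600)
    print(counts.mean(), counts.var())         # variance exceeds the mean: over-dispersed traffic
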
  • 22. Ilie, Dragos
    et al.
    Popescu, Adrian
    Unicast QoS Routing in Overlay Networks (2011). In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 5233, p. 1017-1038. Article in journal (Refereed)
    Abstract [en]

    The goal of quality of service (QoS) routing in overlay networks is to address deficiencies in today's Internet Protocol (IP) routing. This is achieved by application-layer protocols executed on end-nodes, which search for alternate paths that can provide better QoS for the overlay hosts. In the first part of this paper we introduce fundamental concepts of QoS routing and the current state-of-the-art in overlay networks for QoS. In the remaining part of the paper we report performance results for the Overlay Routing Protocol (ORP) framework developed at Blekinge Institute of Technology (BTH) in Karlskrona, Sweden. The results show that QoS paths can be established and maintained as long as one is willing to accept a protocol overhead of at most 1.5% of the network capacity.

    Download full text (pdf)
    FULLTEXT01
  • 23. Kassahun, Solomon
    et al.
    Demessie, Atinkut
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    A PMIPv6 Approach to Maintain Network Connectivity During VM Live Migration Over the Internet (2014). In: 2014 IEEE 3rd International Conference on Cloud Networking (CloudNet), Luxembourg: IEEE, 2014. Conference paper (Refereed)
    Abstract [en]

    We present a live migration solution based on Proxy Mobile IPv6 (PMIPv6), a light-weight mobility protocol standardized by IETF. PMIPv6 handles node mobility without requiring any support from the moving nodes. In addition, PMIPv6 works with IPv4, IPv6 and dual-stack nodes. Our results from a real testbed show that network connectivity is successfully maintained with little signaling overhead and with short virtual machine (VM) downtime. As far as we know, this is the first time PMIPv6 is used to enable live migration beyond the scope of a LAN.

    Download full text (pdf)
    fulltext
  • 24.
    Lundberg, Lars
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Melander, Christian
    Compuverde AB.
    Cache Support in a High Performance Fault-Tolerant Distributed Storage System for Cloud and Big Data (2015). In: 2015 IEEE 29th International Parallel and Distributed Processing Symposium Workshops, IEEE Computer Society, 2015, p. 537-546. Conference paper (Refereed)
    Abstract [en]

    Due to the trends towards Big Data and Cloud Computing, one would like to provide large storage systems that are accessible by many servers. A shared storage can, however, become a performance bottleneck and a single point of failure. Distributed storage systems provide a shared storage to the outside world, but internally they consist of a network of servers and disks, thus avoiding the performance bottleneck and single-point-of-failure problems. We introduce a cache in a distributed storage system. The cache system must be fault tolerant so that no data is lost in case of a hardware failure. This requirement excludes the use of the common write-invalidate cache consistency protocols. The cache is implemented and evaluated in two steps. The first step focuses on design decisions that improve the performance when only one server uses the same file. In the second step we extend the cache with features that focus on the case when more than one server accesses the same file. The cache improves the throughput significantly compared to having no cache. The two-step evaluation approach makes it possible to quantify how different design decisions affect the performance of different use cases.

  • 25.
    Martinkauppi, Louise Bergman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. student.
    He, Qiuping
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Student.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    On the Design and Performance of Chinese OSCCA-approved Cryptographic Algorithms (2020). In: 2020 13th International Conference on Communications, COMM 2020 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 119-124, article id 9142035. Conference paper (Refereed)
    Abstract [en]

    SM2, SM3, and SM4 are cryptographic standards authorized to be used in China. To comply with Chinese cryptography laws, standard cryptographic algorithms in products targeting the Chinese market may need to be replaced with the algorithms mentioned above. It is important to know beforehand whether the replacement algorithms impact performance. Bad performance may degrade user experience and increase future system costs.

    We present a performance study of the standard cryptographic algorithms (RSA, ECDSA, SHA-256, and AES-128) and corresponding Chinese cryptographic algorithms.

    Our results indicate that the digital signature algorithms SM2 and ECDSA have similar designs and also similar performance. SM2 and RSA have fundamentally different designs. SM2 performs better than RSA when generating keys and signatures. The hash algorithms SM3 and SHA-256 have many design similarities, but SHA-256 performs slightly better than SM3. AES-128 and SM4 share some similarities in their design. In the controlled experiment, AES-128 outperforms SM4 by a significant margin.

    Download full text (pdf)
    fulltext
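
    A minimal throughput harness in the spirit of the comparison above is sketched below for the two hash functions, using Python's hashlib. Note that the "sm3" algorithm is only exposed when the underlying OpenSSL build provides it, and that the payload size and round count are arbitrary choices rather than the setup used in the paper.

    import hashlib
    import os
    import time

    def throughput_mb_s(hash_name, payload, rounds=200):
        """Hash `payload` `rounds` times and return throughput in MB/s."""
        start = time.perf_counter()
        for _ in range(rounds):
            digest = hashlib.new(hash_name)
            digest.update(payload)
            digest.digest()
        elapsed = time.perf_counter() - start
        return len(payload) * rounds / elapsed / 1e6

    payload = os.urandom(1 << 20)                       # 1 MiB of random data
    for name in ("sha256", "sm3"):
        try:
            print(f"{name}: {throughput_mb_s(name, payload):.1f} MB/s")
        except ValueError:
            print(f"{name}: not available in this Python/OpenSSL build")
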
  • 26. Mugga, Charles
    et al.
    Sun, Dong
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Comparison of IPv6 Multihoming and Mobility Protocols (2014). Conference paper (Refereed)
    Abstract [en]

    Multihoming and mobility protocols enable computing devices to stay always best connected (ABC) to the Internet. The focus of our study is on handover latency and rehoming time required by such protocols. We used simulations in OMNeT++ to study the performance of the following protocols that support multihoming, mobility or a combination thereof: Mobile IPv6 (MIPv6), Multiple Care-of Address Registration (MCoA), Stream Control Transmission Protocol (SCTP), and Host Identity Protocol (HIP). Our results indicate that HIP shows the best performance in all scenarios considered.

    Download full text (pdf)
    FULLTEXT01
  • 27.
    Popescu, Adrian
    et al.
    Blekinge Institute of Technology, School of Computing.
    Erman, David
    Blekinge Institute of Technology, School of Computing.
    Ilie, Dragos
    Blekinge Institute of Technology, School of Computing.
    Fiedler, Markus
    Blekinge Institute of Technology, School of Computing.
    Popescu, Alexandru
    Blekinge Institute of Technology, School of Computing.
    Vogeleer, Karel De
    Blekinge Institute of Technology, School of Computing.
    Seamless Roaming: Developments and Challenges (2011). In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 5233, p. 795-807. Article in journal (Refereed)
    Abstract [en]

    The chapter reports on recent developments and challenges focused on seamless handover. These are the subject of the research projects MOBICOME and PERIMETER, recently granted by EU EUREKA and EU STREP FP7, respectively. The research projects are considering the recently advanced IP Multimedia Subsystem (IMS), which is a set of technology standards put forth by the Internet Engineering Task Force (IETF) and two Third Generation Partnership Project groups, namely 3GPP and 3GPP2. The foundation of seamless handover is provided by several components, the most important ones being handover, mobility management, connectivity management and Internet mobility. The chapter provides an in-depth analysis of these components.

    Download full text (pdf)
    FULLTEXT01
  • 28. Popescu, Adrian
    et al.
    Erman, David
    Ilie, Dragos
    Popescu, Alexandru
    Fiedler, Markus
    Vogeleer, Karel De
    Seamless Roaming: Developments and Challenges (2008). Conference paper (Other academic)
    Abstract [en]

    The paper reports on recent developments and challenges focused on seamless handover. These are the subject of the research projects MOBICOME and PERIMETER, recently granted by EU EUREKA and EU STREP FP7, respectively. The research projects are considering the recently advanced IP Multimedia Subsystem (IMS), which is a set of technology standards put forth by the Internet Engineering Task Force (IETF) and two Third Generation Partnership Project groups, namely 3GPP and 3GPP2. The foundation of seamless handover is provided by several components, the most important ones being handover, mobility management, connectivity management and Internet mobility. The paper provides an in-depth analysis of these components.

    Download full text (pdf)
    FULLTEXT01
  • 29. Popescu, Adrian
    et al.
    Ilie, Dragos
    Erman, David
    Fiedler, Markus
    Popescu, Alexandru
    Vogeleer, Karel de
    An Application Layer architecture for Seamless Roaming (2009). Conference paper (Refereed)
    Abstract [en]

    The paper advances a new architecture for seamless roaming, which is implemented at the application layer. This architecture is the subject of the research projects PERIMETER and MOBICOME, recently granted by EU STREP FP7 and EUREKA, respectively. The research challenges are in mobility management, security, QoE management, overlay routing, node positioning, mobility modeling and prediction, middleware and handover. The foundation of seamless handover is provided by several components, the most important ones being handover, mobility management, connectivity management and Internet mobility. The paper provides an analysis of these components.

    Download full text (pdf)
    FULLTEXT01
  • 30.
    Popescu, Adrian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Yao, Yong
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Video Distribution Networks: Architectures and System Requirements (2018). In: Greening Video Distribution Networks: Energy-Efficient Internet Video Delivery / [ed] Adrian Popescu, Springer, 2018. Chapter in book (Refereed)
    Abstract [en]

    The creation of video content and its distribution over the Internet Protocol (IP) are sophisticated processes that follow a chain model from the acquisition of the video source, through production and packaging and transport, to the final distribution to viewers. Video distribution networks comprise several parts, namely content contribution, primary distribution, secondary distribution, and video consumers. The chapter focuses on video distribution systems over IP and categories of architectural solutions, and gives a short presentation of several important applications associated with video distribution networks.

  • 31. Popescu, Alexandru
    et al.
    Ilie, Dragos
    Kouvatsos, Demetres
    On the Implementation of a Content-Addressable Network (2008). Conference paper (Refereed)
    Abstract [en]

    Over the last years, the Internet has evolved towards becoming the dominant platform for deployment of new services and applications such as real time and multimedia services, application-level multicast communication and large-scale data sharing. A consequence of this evolution is that features like robust routing, efficient search, scalability, decentralization, fault tolerance, trust and authentication have become of paramount importance. We consider in our paper the specific case of structured peer-to-peer (P2P) overlay networks with a particular focus on Content-Addressable Networks (CANs). An investigation of the existing approaches for structured P2P overlay networks is provided, where we point out their advantages and drawbacks. The essentials of CAN architecture are presented and based on that, we report on the implementation of a CAN simulator. Our initial goal is to use the simulator for investigating the performance of the CAN with focus on the routing algorithm. Preliminary results obtained in our experiments are reported as well.

    Download full text (pdf)
    FULLTEXT01
  • 32. Pruthi, Parag
    et al.
    Ilie, Dragos
    Popescu, Adrian
    Application Level Performance of Multimedia Services (1999). Conference paper (Refereed)
    Abstract [en]

    Quality of Service (QoS) is a difficult term to define for multimedia applications. The main reason is that both audio and video quality are subjective and difficult to quantify. Much work has been done in the past to map the subjective quality of video and audio into measurable quantities. Unfortunately, when it comes to IP environments, not much experience and mathematical work exists that can be used to define robust metrics for measurement of QoS. In this paper, we report on measurements of multimedia QoS and try to map subjective criteria to discrete measurables in terms of packet loss rates, packet delays, and other quantities. We report the results of measurements done at the application level and show how network characteristics affect the perceived quality of multimedia applications. In particular, we analyze the application traffic generated by MBone clients in a distributed network education scenario. In order to measure the traffic, we have implemented software on a non-intrusive probe developed by NIKSUN Inc. (http://www.niksun.com) that can accurately monitor all traffic from a variety of networks. We have developed MBone-aware software modules which can not only play back the recorded streams but also provide the essential statistics in real-time. We report in detail the results of our study of a particular end-to-end MBone session.

    Download full text (pdf)
    FULLTEXT01
  • 33. Pruthi, Parag
    et al.
    Ilie, Dragos
    Popescu, Adrian
    Application level performance of multimedia services (1999). Conference paper (Refereed)
    Abstract [en]

    Quality of Service (QoS) is a difficult term to define for multimedia applications. The main reason is that both audio and video quality are subjective and difficult to quantify. Much work has been done in the past to map the subjective quality of video and audio into measurable quantities. Unfortunately, when it comes to IP environments, not much experience and mathematical work exists that can be used to define robust metrics for measurement of QoS. In this paper, we report on measurements of multimedia QoS and try to map subjective criteria to discrete measurables in terms of packet loss rates, packet delays, and other quantities. We report the results of measurements done at the application level and show how network characteristics affect the perceived quality of multimedia applications. In particular, we analyze the application traffic generated by MBone clients in a distributed network education scenario. In order to measure the traffic, we have implemented software on a non-intrusive probe developed by NIKSUN Inc. that can accurately monitor all traffic from a variety of networks. We have developed MBone-aware software modules which can not only play back the recorded streams but also provide the essential statistics in real-time. We report in detail the results of our study of a particular end-to-end MBone session.

  • 34.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Comparison of KVM, VMware and XenServer using a Large Telecommunication Application (2014). Conference paper (Refereed)
    Abstract [en]

    One of the most important technologies in cloud computing is virtualization. This paper presents the results from a performance comparison of three well-known virtualization hypervisors: KVM, VMware and XenServer. In this study, we measure performance in terms of CPU utilization, disk utilization and response time of a large industrial real-time application. The application is running inside a virtual machine (VM) controlled by the KVM, VMware and XenServer hypervisors, respectively. Furthermore, we compare the three hypervisors based on downtime and total migration time during live migration. The results show that the Xen hypervisor results in higher CPU utilization and thus also lower maximum performance compared to VMware and KVM. However, VMware causes more write operations to disk than KVM and Xen, and Xen causes less downtime than KVM and VMware during live migration. This means that no single hypervisor has the best performance for all aspects considered here.

    Download full text (pdf)
    FULLTEXT01
  • 35.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Robert, Remi
    Ericsson Research, Sweden.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    On the Performance and Scalability of Consensus Mechanisms in Privacy-Enabled Decentralized Renewable Energy Marketplace (2023). In: Annales des télécommunications, ISSN 0003-4347, E-ISSN 1958-9395. Article in journal (Refereed)
    Abstract [en]

    Renewable energy sources were introduced as an alternative to fossil fuel sources to make electricity generation cleaner. However, today's renewable energy markets face a number of limitations, such as inflexible pricing models and inaccurate consumption information. These limitations can be addressed with a decentralized marketplace architecture. Such an architecture requires a mechanism to guarantee that all marketplace operations are executed according to predefined rules and regulations. One of the ways to establish such a mechanism is blockchain technology. This work defines a decentralized blockchain-based peer-to-peer (P2P) energy marketplace which addresses actors' privacy and the performance of consensus mechanisms. The defined marketplace utilizes the private permissioned Ethereum-based blockchain client Hyperledger Besu (HB) and its smart contracts to automate the P2P trade settlement process. Also, to make the marketplace compliant with energy trade regulations, it includes a regulator actor, which manages the issuance and consumption of guarantees of origin and certifies the renewable energy sources used to generate traded electricity. Finally, the proposed marketplace incorporates privacy-preserving features, allowing it to generate private transactions and store them within a designated group of actors. Performance evaluation results of the HB-based marketplace with three main consensus mechanisms for private networks, i.e., Clique, IBFT 2.0, and QBFT, demonstrate a lower throughput than another popular private permissioned blockchain platform, Hyperledger Fabric (HF). However, the lower throughput is a side effect of the Byzantine Fault Tolerant characteristics of HB's consensus mechanisms, i.e., IBFT 2.0 and QBFT, which provide increased security compared to HF's Crash Fault Tolerant consensus, RAFT.

    Download full text (pdf)
    fulltext
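
    Consensus throughput of the kind reported above is often estimated by counting transactions over recent blocks. The sketch below does this with web3.py against a JSON-RPC endpoint of an Ethereum-compatible node such as Hyperledger Besu; the endpoint URL and window size are placeholders, and this is not the measurement methodology used in the article.

    from web3 import Web3

    # Assumed: a reachable JSON-RPC endpoint of a private Besu/Ethereum node.
    w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

    def tx_throughput(last_n_blocks=50):
        """Rough throughput estimate: transactions per second averaged over
        the most recent `last_n_blocks` blocks."""
        head = w3.eth.get_block("latest")
        start = w3.eth.get_block(max(head.number - last_n_blocks, 0))
        txs = sum(len(w3.eth.get_block(n).transactions)
                  for n in range(start.number + 1, head.number + 1))
        elapsed = head.timestamp - start.timestamp
        return txs / elapsed if elapsed else float("nan")

    print(f"{tx_throughput():.2f} tx/s")
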
  • 36.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Robert, Remi
    Ericsson Research, Stockholm, Sweden.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    On the Performance of Consensus Mechanisms in Privacy-Enabled Decentralized Peer-to-Peer Renewable Energy Marketplace (2023). In: Proceedings of the 26th Conference on Innovation in Clouds, Internet and Networks, ICIN 2023 / [ed] Lopez D., Montpetit M.-J., Cerroni W., Di Mauro M., Borylo P., Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 179-186. Conference paper (Refereed)
    Abstract [en]

    This work defines a decentralized blockchain-based peer-to-peer (P2P) energy marketplace which addresses actors' privacy and the performance of consensus mechanisms. The defined marketplace utilizes the private permissioned Ethereum-based blockchain client Hyperledger Besu (HB) and its smart contracts to automate the P2P trade settlement process. Also, to make the marketplace compliant with energy trade regulations, it includes a regulator actor, which manages the issuance and generation of guarantees of origin and certifies the renewable energy sources used to generate traded electricity. Finally, the proposed marketplace incorporates privacy-preserving features, allowing it to generate private transactions and store them within a designated group of actors. Performance evaluation results of the HB-based marketplace with three main consensus mechanisms for private networks, i.e., Clique, IBFT 2.0, and QBFT, demonstrate a lower throughput than another popular private permissioned blockchain platform, Hyperledger Fabric (HF). However, the lower throughput is a side effect of the Byzantine Fault Tolerant characteristics of HB's consensus mechanisms, i.e., IBFT 2.0 and QBFT, which provide increased security compared to HF's Crash Fault Tolerant consensus, RAFT.

  • 37.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Robert, Remi
    Ericsson Research, Sweden.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Efficient Privacy and Trust in Decentralized Blockchain-Based Peer-to-Peer Renewable Energy Marketplace2023In: Sustainable Energy, Grids and Networks, E-ISSN 2352-4677, Vol. 35, article id 101146Article in journal (Refereed)
    Abstract [en]

    Renewable energy sources are becoming increasingly important as a substitute for fossil energy production. However, distributed renewable energy production faces several challenges regarding trading and management, such as inflexible pricing models and inaccurate green consumption information. A decentralized peer-to-peer (P2P) electricity marketplace may address these challenges. It enables prosumers to market their self-produced electricity. However, such a marketplace needs to guarantee that the transactions follow market rules and government regulations, cannot be manipulated, and are consistent with the generated electricity. One of the ways to provide these guarantees is to leverage blockchain technology.

    This work describes a decentralized blockchain-based P2P energy marketplace addressing privacy, trust, and governance issues. It uses the private permissioned blockchain Hyperledger Fabric (HF) and its smart contracts to perform energy trading settlements. The suggested P2P marketplace includes a dedicated regulator actor acting as a governmental representative overseeing marketplace operations. In this way, the suggested P2P marketplace can address the governance requirements of electricity marketplaces. Further, the proposed marketplace ensures actors' data privacy by employing HF's private data collections while preserving the integrity and auditability of all operations. We present an in-depth performance evaluation and provide insights into the security and privacy challenges emerging from such a marketplace. The results demonstrate that partial centralization through the regulator does not limit P2P energy trade settlement execution. Blockchain technology allows for automated marketplace operations, enabling better incentives for prosumer electricity production. Finally, the suggested marketplace preserves the users' privacy when P2P energy trade settlements are conducted.

    Download full text (pdf)
    fulltext
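
    A minimal sketch of the trade-settlement logic described in this abstract, with state split between a public ledger and a private collection. Fabric chaincode is normally written in Go, Node.js, or Java; this Python analogue, with invented field names and amounts, only illustrates the idea of keeping prices in a private data collection while recording the settlement event publicly.

        # Sketch of a trade-settlement rule in the spirit of marketplace
        # chaincode: match an offer and a bid, publish the settlement event,
        # and keep the commercially sensitive details in a private collection.
        from dataclasses import dataclass

        @dataclass
        class Offer:
            seller: str
            kwh: float
            price_per_kwh: float        # EUR

        @dataclass
        class Bid:
            buyer: str
            kwh: float
            max_price_per_kwh: float

        public_ledger = []              # visible to all marketplace actors
        private_collection = []         # visible only to the trading parties + regulator

        def settle(offer: Offer, bid: Bid):
            if bid.max_price_per_kwh < offer.price_per_kwh:
                raise ValueError("bid below asking price")
            kwh = min(offer.kwh, bid.kwh)
            total = round(kwh * offer.price_per_kwh, 2)
            public_ledger.append({"event": "settled", "kwh": kwh})     # no prices exposed
            private_collection.append({"seller": offer.seller, "buyer": bid.buyer,
                                       "kwh": kwh, "total_eur": total})
            return total

        if __name__ == "__main__":
            print(settle(Offer("A", 10, 0.20), Bid("B", 8, 0.25)))     # -> 1.6
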
  • 38.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Robert, Remi
    Ericsson Research, Sweden.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kebande, Victor R.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    On the Application of Enterprise Blockchains in Decentralized Renewable Energy MarketplacesManuscript (preprint) (Other academic)
    Abstract [en]

    The energy distribution infrastructure is a vital part of any modern society. Thus, renewable energy sources are becoming increasingly important as a substitute for energy produced with fossil fuels. However, renewable energy production faces several challenges in the energy market and its management, such as inflexible pricing models and inaccurate green consumption information. A decentralized electricity marketplace may address these challenges. However, such a platform must guarantee that the transactions follow the market rules and regulations, cannot be manipulated, and are consistent with the energy generated. One of the ways to provide these guarantees is to leverage blockchain technology. Our previous studies demonstrate that the current energy trade regulations result in partial marketplace centralization around a governmental authority. The governmental authority, i.e., the regulator, oversees marketplace operations and requires energy providers to share private data about electricity generation and energy trade settlement. This study proposes amendments to the D2018/2001 legislation and to the governmental regulator actor to improve marketplace flexibility and data privacy. Further, we propose a new blockchain-based P2P energy marketplace model with increased flexibility and scalability that addresses actors' privacy and trust requirements. The marketplace utilizes the private permissioned blockchain Hyperledger Fabric (HF) due to its privacy-preserving and trust-enabling capabilities. This study compares HF with its Ethereum-based competitor Hyperledger Besu (HB). Further, based on the identified advantages and limitations, we discuss the rationale for the choice of HF. We utilize HF's smart contracts to enable P2P energy trade settlement orchestration and management. Based on previous studies, we propose an improvement to HF's security by utilizing a Byzantine Fault Tolerant (BFT) consensus mechanism, which protects against malicious system actors. The results demonstrate that, while protecting the blockchain network from malicious system actors, the BFT mechanism shows a throughput similar to the RAFT Crash Fault Tolerant consensus in the context of the P2P energy marketplace. Finally, the BFT consensus enables the proposed legislation enhancements, resulting in increased flexibility and data privacy in the energy trade marketplace.
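
    The regulator's handling of guarantees of origin mentioned above can be pictured as an issue-then-retire register, sketched below. The class and its rules are invented for illustration and are not the paper's implementation.

        # Toy guarantee-of-origin (GO) register: the regulator issues one GO per
        # unit of certified renewable production and retires it when the energy
        # is sold, so the same green unit cannot be claimed twice.  A
        # simplification of the regulator role, not the paper's implementation.
        class GoRegister:
            def __init__(self):
                self._next_id = 0
                self._active = {}            # go_id -> producer

            def issue(self, producer: str, certified: bool) -> int:
                if not certified:
                    raise PermissionError("source not certified as renewable")
                self._next_id += 1
                self._active[self._next_id] = producer
                return self._next_id

            def retire(self, go_id: int) -> None:
                if go_id not in self._active:
                    raise KeyError("GO unknown or already consumed")
                del self._active[go_id]

        if __name__ == "__main__":
            reg = GoRegister()
            go = reg.issue("solar-farm-1", certified=True)
            reg.retire(go)                   # consumed together with the trade
            try:
                reg.retire(go)               # double counting is rejected
            except KeyError as e:
                print("rejected:", e)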

  • 39.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Building a Framework for Automated Security Testbeds in Cloud Infrastructures2020In: Proceedings of SNCNW 2020: 16th Swedish National Computer Networking Workshop, SNCNW, Kristianstad, 2020Conference paper (Refereed)
    Abstract [en]

    When connected to a network, applications and devices are exposed to constant security risks. This puts pressure on hardware and software vendors to test, more thoroughly than before, how secure their applications and devices are before they are released to customers.

    We have worked towards defining and developing a framework for automated security testbeds. Testbeds comprise both the ability to build on-demand virtual isolated networks that emulate corporate networks and the ability to automate security breach scenarios, which accelerates the testing process. In order to accomplish both features of the testbed, we have based the framework on well-established cloud and orchestration technologies, e.g., OpenStack and Ansible. Although many of these technologies are powerful, they are also complex, leading to a steep learning curve for new users. Thus, one of the main goals of the developed framework is to hide the underlying complexities through a template approach and a simplified user interface that shortens the initial training time.

    In this paper, we present the full stack of technologies that were used for constructing the testbed framework. The framework allows us to create entire virtual networks and to manipulate the network devices started in them, via comprehensive yet simple interfaces. Also, we describe a specific testbed solution, developed as part of the Test Arena Blekinge project.

    Download full text (pdf)
    SNCNW_2020
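
    A rough sketch of the template approach described in the abstract: a compact scenario description is expanded into lower-level inventory text for an orchestration tool such as Ansible. The scenario fields and output layout below are hypothetical, not the framework's actual format.

        # Sketch of the "template approach" idea: the user supplies a small
        # scenario description, and the framework expands it into the
        # lower-level inventory text an orchestration tool would consume.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Scenario:
            name: str
            attacker_hosts: List[str]
            victim_hosts: List[str]

        def render_inventory(s: Scenario) -> str:
            lines = [f"# generated for scenario: {s.name}", "[attackers]"]
            lines += s.attacker_hosts
            lines += ["", "[victims]"]
            lines += s.victim_hosts
            return "\n".join(lines)

        if __name__ == "__main__":
            demo = Scenario("phishing-lateral-movement",
                            attacker_hosts=["attacker01"],
                            victim_hosts=["workstation01", "fileserver01"])
            print(render_inventory(demo))
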
  • 40.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Decentralized Blockchain-based Telecommunication Services Marketplaces: Tutorial presentation2021In: IEEE International Conference on Network Softwarization (IEEE NetSoft 2021), 2021Conference paper (Other academic)
    Download full text (pdf)
    fulltext
  • 41.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Orchestrating Future Service Chains in the Next Generation of Clouds2019In: Proceedings of SNCNW 2019: The 15th Swedish National Computer Networking Workshop, Luleå, 2019, p. 18-22Conference paper (Refereed)
    Abstract [en]

    Service Chains have developed into an important concept in service provisioning in today's and future Clouds. Cloud systems, e.g., Amazon Web Services (AWS), permit new applications, services, and service chains to be implemented and deployed rapidly and flexibly. They employ the idea of Infrastructure as Code (IaC), which is the process of managing and provisioning computing infrastructure and its configuration through machine-processable definition files.

    In this paper, we first detail future service chains with a particular focus on Network Function Virtualization (NFV) and machine learning in AI. Afterwards, we analyze and summarize the capabilities of today's IaC tools for orchestrating Cloud infrastructures and service chains. We compare the functionality of the five major IaC tools: Puppet, Chef, SaltStack, Ansible, and Terraform. In addition, we demonstrate how to analyze the functional capabilities of one of the tools. Finally, we give an outlook on future research issues in using IaC tools across multiple operators, data center domains, and different stakeholders that collaborate on service chains.

    Download full text (pdf)
    fulltext
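
    The IaC principle summarized above, declaring the desired service chain as data and converging towards it idempotently, can be illustrated with a toy apply loop. The services and the in-memory "cloud" state are invented; real IaC tools such as Terraform or Ansible operate against provider APIs.

        # Minimal illustration of the Infrastructure-as-Code idea: the desired
        # service chain is declared as data, and an idempotent "apply" step
        # converges the running state towards it.  Purely a toy model.
        desired_chain = ["firewall", "load-balancer", "ai-inference"]
        running = {"firewall"}          # current state of the (simulated) cloud

        def apply(desired, state):
            for service in desired:
                if service in state:
                    print(f"unchanged: {service}")      # idempotent: no action taken
                else:
                    state.add(service)
                    print(f"created:   {service}")

        if __name__ == "__main__":
            apply(desired_chain, running)   # first run converges the state
            apply(desired_chain, running)   # second run changes nothing
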
  • 42.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards a Secure Proxy-based Architecture for Collaborative AI Engineering2020In: CANDAR 2020: International Symposium on Computing and Networking, IEEE, 2020, p. 373-379, article id 9355887Conference paper (Refereed)
    Abstract [en]

    In this paper, we investigate how to design the security architecture of a Platform-as-a-Service (PaaS) solution, denoted as Secure Virtual Premise (SVP), for collaborative and distributed AI engineering using AI artifacts and Machine Learning (ML) pipelines. Artifacts are re-usable software objects which a) are tradeable in marketplaces, b) are implemented by containers, c) offer AI functions as microservices, and d) can form service chains, denoted as AI pipelines. Collaborative engineering is facilitated by trading and (re-)using artifacts, thus accelerating AI application design.

    The security architecture of the SVP is built around the security needs of collaborative AI engineering and uses a proxy concept for microservices. The proxy shields the AI artifacts and pipelines from outside adversaries as well as from misbehaving users, thus building trust among the collaborating parties. We identify the security needs of collaborative AI engineering, derive the security challenges, outline the SVP's architecture, and describe its security capabilities and its implementation, which is currently in use with several AI developer communities. Furthermore, we evaluate the SVP's Technology Readiness Level (TRL) with regard to collaborative AI engineering and data security.

    Download full text (pdf)
    fulltext
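
    The proxy concept described in this abstract can be reduced to a small gatekeeper that authenticates callers before forwarding requests to an artifact microservice and logs each invocation for traceability. The token scheme and artifact interface below are invented for illustration; the SVP's actual mechanisms are more elaborate.

        # Sketch of the proxy concept: every call to an AI artifact passes
        # through a proxy that authenticates the caller and hides the artifact
        # from direct access.  Tokens and the artifact function are invented.
        VALID_TOKENS = {"token-alice": "alice", "token-bob": "bob"}

        def artifact_infer(features):
            """Stand-in for a containerised AI function exposed as a microservice."""
            return sum(features) / len(features)

        def proxy(token, features):
            user = VALID_TOKENS.get(token)
            if user is None:
                raise PermissionError("unknown caller rejected at the proxy")
            print(f"audit: {user} invoked the artifact")    # traceability builds trust
            return artifact_infer(features)

        if __name__ == "__main__":
            print(proxy("token-alice", [0.2, 0.4, 0.9]))
            try:
                proxy("stolen-token", [1.0])
            except PermissionError as e:
                print("blocked:", e)
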
  • 43.
    Tkachuk, Roman-Valentyn
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Robert, Remi
    Ericsson Research, SWE.
    A Survey on Blockchain-based Telecommunication Services Marketplaces2022In: IEEE Transactions on Network and Service Management, ISSN 1932-4537, E-ISSN 1932-4537, Vol. 19, no 1, p. 228-255Article in journal (Refereed)
    Abstract [en]

    Digital marketplaces were created recently to accelerate the delivery of applications and services to customers. Their appealing feature is to activate and stimulate the demand, supply, and development of digital goods, applications, or services. By being an intermediary between producer and consumer, the primary business model for a marketplace is to charge the producer a commission on the amount paid by the consumer. However, most of the time, the commission is dictated by the marketplace facilitator itself and creates an imbalance in value distribution, where the producer and consumer sides suffer monetarily. In order to eliminate the need for a centralized entity between the producer and consumer, a blockchain-based decentralized digital marketplace concept was introduced. It provides marketplace actors with the tools to perform business transactions in a trusted manner and without the need for an intermediary. In this work, we provide a survey on Telecommunication Services Marketplaces (TSMs) which employ blockchain technology as the main trust-enabling entity in order to avoid any intermediaries. We provide an overview of scientific and industrial proposals on blockchain-based online digital marketplaces at large, and TSMs in particular. In this study, we consider telecommunication services to be any services enabling the capability for information transfer and, increasingly, information processing, provided to a group of users by a telecommunications system. We discuss the main standardization activities around the concept of TSMs and provide particular use-cases for TSM business transactions, such as SLA settlement. Also, we provide insights into the main foundational services provided by a TSM, as well as a survey of the scientific and industrial proposals for such services. Finally, a prospect for future developments is given.

    Download full text (pdf)
    fulltext
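
    One of the TSM use-cases named above, SLA settlement, can be pictured as deterministic marketplace logic: compare the measured service level against the agreed one and record a penalty. The thresholds and penalty formula below are invented and only illustrate the shape of such a settlement.

        # Toy version of an SLA-settlement step: the agreed availability and the
        # measured availability are compared, and a penalty is computed and
        # appended to a shared record.  Formula and values are illustrative only.
        ledger = []

        def settle_sla(provider, agreed_availability, measured_availability,
                       monthly_fee):
            shortfall = max(0.0, agreed_availability - measured_availability)
            penalty = round(monthly_fee * shortfall * 10, 2)   # toy penalty formula
            entry = {"provider": provider, "measured": measured_availability,
                     "penalty": penalty}
            ledger.append(entry)
            return entry

        if __name__ == "__main__":
            print(settle_sla("carrier-X", 0.999, 0.990, monthly_fee=1000))
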
  • 44. Vogeleer, Karel De
    et al.
    Ilie, Dragos
    Popescu, Adrian
    Constrained-Path Discovery by Selective Diffusion2008Conference paper (Refereed)
    Abstract [en]

    The demand for live and interactive multimedia services over the Internet raises questions on how well the Internet Protocol (IP)'s best-effort service can be adapted to provide adequate end-to-end quality of service (QoS) for users. Although the Internet community has developed two different IP-based QoS architectures, neither has been widely deployed. Overlay networks are seen as a step to address the demand for end-to-end QoS until a better solution can be obtained. As part of the telecommunication research at Blekinge Institute of Technology (BTH) in Karlskrona, Sweden, we are investigating new theories and algorithms concerning QoS routing. We are in the process of developing the Overlay Routing Protocol (ORP), a framework for overlay QoS routing consisting of two protocols: the Route Discovery Protocol (RDP) and the Route Management Protocol (RMP). In this paper, we describe RDP and provide preliminary simulation results for it.

    Download full text (pdf)
    FULLTEXT01
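
    Constrained-path discovery, the problem RDP targets, can be illustrated generically: prune overlay links that violate a QoS constraint and then search for the lowest-delay path. The sketch below uses a plain Dijkstra search on an invented four-node overlay; it does not implement RDP's selective-diffusion mechanism.

        # Generic constrained-path example: find the lowest-delay overlay path
        # that also satisfies a bandwidth constraint, by pruning infeasible
        # links before running Dijkstra.  Topology and values are invented.
        import heapq

        # overlay links: (delay_ms, bandwidth_mbps)
        links = {
            ("a", "b"): (10, 100), ("b", "d"): (10, 100),
            ("a", "c"): (5, 20),   ("c", "d"): (5, 20),
        }

        def constrained_path(src, dst, min_bw):
            graph = {}
            for (u, v), (delay, bw) in links.items():
                if bw >= min_bw:                        # prune links violating QoS
                    graph.setdefault(u, []).append((v, delay))
                    graph.setdefault(v, []).append((u, delay))
            queue, seen = [(0, src, [src])], set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == dst:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt, delay in graph.get(node, []):
                    if nxt not in seen:
                        heapq.heappush(queue, (cost + delay, nxt, path + [nxt]))
            return None

        if __name__ == "__main__":
            print(constrained_path("a", "d", min_bw=50))   # low-delay a-c-d is pruned
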