151 - 200 of 435
  • 151.
    Guo, Yang
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Yao, Yong
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bai, Guohua
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    eHASS: A Smart Appointment Scheduling System for eHealth (2016). Conference paper (Peer-reviewed)
    Abstract [en]

    In the eHealth system, appointment scheduling is an important task in the delivery of healthcare services among different healthcare actors. The key procedure is making decisions on the selection of suitable appointments between the care providers and the care receivers. The appointment decision is a sophisticated problem in terms of how to efficiently deal with the various parameters of the involved healthcare actors. To solve this problem, we suggest a smart system called the eHealth Appointment Scheduling System (eHASS). eHASS takes into account both the heterogeneous aspects and the interoperability requirements of the eHealth system. As such, eHASS is capable of jointly considering various appointment characterizations and decision-making algorithms for conducting appointment scheduling. The paper reports the eHASS architecture as well as related work in progress.
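The core decision step the abstract describes, selecting a suitable appointment between a provider and a receiver, can be sketched minimally as finding the earliest mutually available slot. This is an illustrative toy, not the eHASS algorithm; the slot data below are hypothetical:

```python
def schedule_appointment(provider_slots, receiver_slots):
    """Return the earliest slot available to both provider and receiver,
    or None if no common slot exists. ISO timestamps compare correctly
    as strings, so min() yields the chronologically earliest slot."""
    common = set(provider_slots) & set(receiver_slots)
    return min(common) if common else None

# Hypothetical availability data
provider = ["2016-05-02T09:00", "2016-05-02T13:00", "2016-05-03T10:00"]
receiver = ["2016-05-02T13:00", "2016-05-03T10:00"]
print(schedule_appointment(provider, receiver))  # → 2016-05-02T13:00
```

A real scheduler would score candidate slots against the actors' parameters rather than simply taking the earliest match.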

  • 152.
    Gustafsson, Kevin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Sundstedt, Emil
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Automated file extraction in a cloud environment for forensic analysis (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Student thesis
    Abstract [en]

    The possibility of using the snapshot functionality of OpenStack as a method of securing evidence has been examined in this paper. In addition, the possibility of extracting evidence automatically using an existing operations tool has been investigated. The usability of snapshots in a forensic investigation was examined by conducting a series of tests on both snapshots and physical disk images. The results of the tests were then compared to evaluate the usefulness of the snapshot. Automatic extraction of evidence was investigated by implementing a solution using Ansible and evaluating the algorithm against the existing standard ISO 27037. It was concluded that the snapshots created by OpenStack behave similarly enough to disks to be useful in a forensic investigation. The algorithm proposed to extract evidence automatically does not appear to breach the standard.
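The comparison described, checking whether a snapshot yields the same evidence as a physical disk image, comes down to comparing cryptographic hashes of the two artifacts. A minimal sketch (the file paths in the comment are hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its SHA-256 hex digest,
    so arbitrarily large disk images fit in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# A snapshot is forensically equivalent to the disk image if the digests match:
# identical = sha256_of("snapshot.img") == sha256_of("disk.img")
```

In practice a forensic workflow would also record the digests in the chain-of-custody documentation.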

  • 153.
    Gustafsson, Marcus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ramverk vs Vanilla JavaScript: Vilken teknik bör väljas för en modern webbapplikation? [Frameworks vs Vanilla JavaScript: Which technique should be chosen for a modern web application?] (2018). Independent thesis, Basic level (university diploma), 10 poäng / 15 hp. Student thesis
    Abstract [en]

    This study is a second-year thesis in Software Engineering at the Blekinge Institute of Technology. It investigates differences between frameworks and Vanilla JavaScript with respect to the requirements of modern web applications. Region Blekinge, a municipal institution, wanted to research a prototype of a search function for their future website. Using the prototype, young people would be able to retrieve information about schools and educations in their area to be better able to make a good choice. The objective is to find out what a JavaScript framework has to contribute, especially when it comes to maintainability. A comparative analysis focusing on the code implementation was therefore made between two prototypes of the application. The results of the study show that Vanilla JavaScript is more popular and has a higher maturity, while the framework Vue.js is more maintainable when it comes to reusability of code components, data binding, readability of code and code size. A drawback of frameworks is that they tend to evolve quickly, and some of them even become obsolete. The choice between the competing techniques was hard, but in the end Vanilla JavaScript was chosen for the application. The main reason is that the future is estimated to be more stable for Vanilla JavaScript, and for a municipal institution stability is important, since one needs to appear trustworthy and build systems that will remain as stable as possible in the long term.

  • 154.
    Gustafsson, Robin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ordering Classifier Chains using filter model feature selection techniques (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 80 poäng / 120 hp. Student thesis
    Abstract [en]

    Context: Multi-label classification concerns classification with multi-dimensional output. The Classifier Chain breaks the multi-label problem into multiple binary classification problems, chaining the classifiers to exploit dependencies between labels. Consequently, its performance is influenced by the chain's order. Approaches to finding advantageous chain orders have been proposed, though they are typically costly.

    Objectives: This study explored the use of filter model feature selection techniques to order Classifier Chains. It examined how feature selection techniques can be adapted to evaluate label dependence, how such information can be used to select a chain order and how this affects the classifier's performance and execution time.

    Methods: An experiment was performed to evaluate the proposed approach. The two proposed algorithms, Forward-Oriented Chain Selection (FOCS) and Backward-Oriented Chain Selection (BOCS), were tested with three different feature evaluators. 10-fold cross-validation was performed on ten benchmark datasets. Performance was measured in accuracy, 0/1 subset accuracy and Hamming loss. Execution time was measured during chain selection, classifier training and testing.

    Results: Both proposed algorithms led to improved accuracy and 0/1 subset accuracy (Friedman & Hochberg, p < 0.05). FOCS also improved the Hamming loss while BOCS did not. Measured effect sizes ranged from 0.20 to 1.85 percentage points. Execution time was increased by less than 3 % in most cases.

    Conclusions: The results showed that the proposed approach can improve the Classifier Chain's performance at a low cost. The improvements appear similar to comparable techniques in magnitude but at a lower cost. It shows that feature selection techniques can be applied to chain ordering, demonstrates the viability of the approach and establishes FOCS and BOCS as alternatives worthy of further consideration.
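The general idea of ordering a chain by label dependence can be sketched as a greedy forward selection driven by pairwise mutual information between binary labels. This illustrates the concept only; it is not the thesis's exact FOCS or BOCS algorithm, nor its feature evaluators:

```python
from math import log2
from collections import Counter

def mutual_info(xs, ys):
    """Mutual information (bits) between two binary label vectors."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def greedy_chain_order(label_columns):
    """Start with the label sharing the most total dependence with the
    others, then repeatedly append the label most dependent on those
    already in the chain (a forward-oriented idea in the FOCS spirit)."""
    def dep(i, pool):
        return sum(mutual_info(label_columns[i], label_columns[j])
                   for j in pool if j != i)
    remaining = list(range(len(label_columns)))
    order = [max(remaining, key=lambda i: dep(i, remaining))]
    remaining.remove(order[0])
    while remaining:
        nxt = max(remaining, key=lambda i: dep(i, order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Two identical labels (strong dependence) and one independent label:
labels = [[0, 0, 1, 1, 0, 1, 0, 1],
          [0, 0, 1, 1, 0, 1, 0, 1],
          [0, 1, 0, 1, 0, 1, 0, 1]]
print(greedy_chain_order(labels))  # → [0, 1, 2]
```

A production version would use a proper feature evaluator (as the thesis does) rather than raw empirical mutual information on small samples.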

  • 155.
    Gutiérrez, Enrique García
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Outdoor localization system based on Android and ZigBee capable devices (2014). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    Context. Localization and positioning services are nowadays very widespread, and their growth is still continuing. Many places already provide wireless tracking systems to monitor the movement of people or material, especially indoors. The emerging ZigBee wireless technology provides efficient network management and low battery consumption, making it appropriate for localization purposes in portable devices like mobile phones.

    Objectives. The aim is to locate a ZigBee device, placed inside a golf ball, that has been lost within an outdoor area. An Android phone connected to a ZigBee device via USB serves as the coordinator of the localization network, giving on-screen instructions and guidance provided by the conceptual Decision Support System (DSS).

    Methods. The measurement used in the localization process is the Received Signal Strength (RSS). From this data, the distance between the sensors can be estimated. However, to obtain an accurate position, several readings from different sensors might be needed. This paper tests the precision of the ZigBee modules, varying the number of sensors in the localization network and using the triangulation method.

    Results. Precision is the main variable measured in the results, reaching a distance variation of less than 1 meter in cases where the triangulation approach can be applied. For the localization process, the use of fewer than three sensors leads to very poor results, with a wrong localization in around 30% of the cases. Also, movement patterns were discovered to improve the localization process. All this data can be used as input to the DSS for future improvements.

    Conclusions. This study shows that outdoor positioning with ZigBee devices is possible if the required level of precision is not very high. However, more studies concerning localization with fewer than three sensors have to be conducted to try to reach the goal of one-on-one localization. This study opens the door for further investigations in this matter.
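The distance-from-RSS estimation and the three-sensor triangulation described in the Methods can be sketched with the standard log-distance path-loss model and a linearized trilateration. The calibration constants are hypothetical, not values from the thesis:

```python
def rss_to_distance(rss_dbm, rss_at_1m_dbm=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate distance (m) from RSS.
    rss_at_1m_dbm is a hypothetical calibration value."""
    return 10 ** ((rss_at_1m_dbm - rss_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Solve for (x, y) from three anchor positions and three distance
    estimates by subtracting the first circle equation from the other
    two, which yields a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, 65 ** 0.5, 45 ** 0.5]   # true position (3, 4)
print(trilaterate(anchors, dists))    # ≈ (3.0, 4.0)
```

Noisy RSS readings would make the three circles inconsistent, which is why the thesis averages several readings per sensor.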

  • 156.
    Hassan, Mohamed
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A literature study of bottlenecks in 2D and 3D Big Data visualization (2017). Independent thesis, Basic level (degree of Bachelor), 10 poäng / 15 hp. Student thesis
    Abstract [en]

    Context. Big data visualization is a vital part of today's technological advancement. It is about visualizing different variables on graphs, maps, or by other means, often in real time.

    Objectives. This study aims to determine what challenges there are for big data visualization, whether significant amounts of data impact the visualization, and finding existing solutions for the problems.

    Methods. The databases used in this systematic literature review include Inspec, IEEE Xplore, and BTH Summon. Papers are included in the review if certain criteria are met.

    Results. Six solutions are found to reduce large data sets and to reduce latency when viewing 2D and 3D graphs.

    Conclusions. Many solutions exist in various forms to improve the visualization of graphs of different dimensions. Future growth of data might change this, however, and might require new solutions for the growing data.

  • 157.
    Hellman, Felix
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Hellmann, Pierre
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Implications of vulnerable internet-connected smart home devices (2018). Independent thesis, Basic level (degree of Bachelor), 10 poäng / 15 hp. Student thesis
    Abstract [en]

    Background. With the rise of the Internet of Things and Internet-connected devices, many things become convenient and efficient, but these products also carry risks. Even though many people own such devices, few think of the consequences if these devices are not secure.

    Objectives. Given this, our thesis aims to discover the implications of vulnerable devices, and also at what rate devices are insecure and unpatched compared to their patched, secure counterparts.

    Methods. The approach implemented uses Shodan to find these devices on the Internet and to find version information about each device. After the devices are found, the objective is to calculate a CVSS score for the vulnerabilities and for any exploit that can abuse them, if one exists.

    Results. We found that 71.85% of a smart home server brand was running an insecure version. The consequences of having an insecure device can be severe.

    Conclusions. We found that, for instance, an attacker can without much difficulty shut off the alarms in your smart home and then proceed to break into your house.

    Keywords: Vulnerability; Shodan; Internet of Things (IoT); Patching
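The rate computation behind a figure like 71.85% (the share of devices running an insecure version) can be sketched as a pure function over observed version strings. The version data and the patched-release cutoff below are hypothetical; the thesis gathered real data via Shodan:

```python
def insecure_rate(observed_versions, first_patched_version):
    """Fraction of devices running a version older than the first
    patched release. Versions are dotted numeric strings compared
    component-wise (a toy scheme; real version schemes vary)."""
    def key(v):
        return tuple(int(part) for part in v.split("."))
    insecure = sum(1 for v in observed_versions
                   if key(v) < key(first_patched_version))
    return insecure / len(observed_versions)

# Hypothetical scan: three of four devices predate the patched 2.4.0 release
devices = ["2.3.1", "2.1.0", "2.4.0", "1.9.8"]
print(f"{insecure_rate(devices, '2.4.0'):.2%}")  # → 75.00%
```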

  • 158.
    Henriksson, Oscar
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Falk, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Static Vulnerability Analysis of Docker Images (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Student thesis
    Abstract [en]

    Docker is a popular tool for virtualization that allows fast and easy deployment of applications and has been growing increasingly popular among companies. Docker also includes a large library of images from the Docker Hub repository, which is mainly user-created and uncontrolled. This leads to a low frequency of updates, which results in vulnerabilities in the images. In this thesis we develop a tool for determining what vulnerabilities exist inside Docker images with a Linux distribution. This is done by using our own tool for downloading and retrieving the necessary data from the images and then utilizing Outpost24's scanner for finding vulnerabilities in Linux packages. With the help of this tool we also publish statistics of vulnerabilities from the top downloaded images on Docker Hub. The result is a tool that can successfully scan a Docker image for vulnerabilities in certain Linux distributions. A survey of the top 1000 Docker images has also shown that the number of vulnerabilities has increased in comparison to earlier surveys of Docker images.
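Conceptually, the scanning step reduces to matching the packages extracted from an image against a vulnerability database. A toy sketch; the in-memory `vuln_db` dict stands in for a real scanner's database such as Outpost24's, and the version comparison is deliberately simplified:

```python
def scan_packages(installed, vuln_db):
    """Report CVEs affecting installed packages whose version is at or
    below a known-vulnerable version. Versions are dotted numeric
    strings compared component-wise (toy scheme)."""
    def key(v):
        return tuple(int(part) for part in v.split("."))
    findings = []
    for pkg, ver in installed.items():
        for cve, vulnerable_up_to in vuln_db.get(pkg, []):
            if key(ver) <= key(vulnerable_up_to):
                findings.append((pkg, ver, cve))
    return findings

# Hypothetical package list extracted from an image, and a tiny database
installed = {"openssl": "1.0.1", "bash": "4.4.0"}
vuln_db = {"openssl": [("CVE-2014-0160", "1.0.1")]}  # simplified range
print(scan_packages(installed, vuln_db))
```

Real scanners must also handle distribution-specific version epochs and backported fixes, which this comparison ignores.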

  • 159.
    Hjerpe, David
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bengtsson, Henrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Digital forensics - Performing virtual primary memory extraction in cloud environments using VMI (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Student thesis
    Abstract [en]

    Infrastructure as a Service and memory forensics are two subjects which have recently gained increasing amounts of attention. Combining these topics poses new challenges when performing forensic investigations. Forensics targeting virtual machines in a cloud environment is problematic since the devices are virtual, and memory forensics is a newer branch of forensics which is hard to perform and not well documented.

    It is, however, an area of utmost importance, since virtual machines may be targets of, or participate in, suspicious activity to the same extent as physical machines. Should such activity require an investigation to be conducted, some data which could be used as evidence may only be found in the primary memory. This thesis aims to further examine memory forensics in cloud environments, expand the academic field of these subjects, and help cloud hosting organisations.

    The objective of this thesis was to study if Virtual Machine Introspection is a valid technique to acquire forensic evidence from the virtual primary memory of a virtual machine. Virtual Machine Introspection is a method of monitoring and analysing a guest via the hypervisor.

    In order to verify whether Virtual Machine Introspection is a valid forensic technique, the first task was to attempt extracting data from primary memory which had been acquired using Virtual Machine Introspection. Once extracted, the integrity of the data had to be authenticated. This was done by comparing a hash sum of a file located on a guest with a hash sum of the extracted data. The experiment showed that the two hashes were an exact match. Next, the solidity of the extracted data was tested by changing the memory of a guest while acquiring the memory via Virtual Machine Introspection. This showed that the solidity is heavily compromised, because the memory acquisition process used was too slow. The final task was to compare Virtual Machine Introspection to acquiring the physical memory of the host. By setting up two virtual machines and examining the primary memory, data from both machines was found, whereas Virtual Machine Introspection only targets one machine, providing an advantage regarding privacy.

  • 160.
    Hoeft, Robert
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Nieznanska, Agnieszka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Empirical evaluation of procedural level generators for 2D platform games (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. Procedural content generation (PCG) refers to the algorithmic creation of game content (e.g. levels, maps, characters). Since PCG generators are able to produce huge amounts of game content, it becomes impractical for humans to evaluate them manually. Thus it is desirable to automate the process of evaluation.

    Objectives. This work presents an automatic method for the evaluation of procedural level generators for 2D platform games. The method was used for a comparative evaluation of four procedural level generators developed within the research community.

    Methods. The evaluation method relies on simulating the human player's behaviour in a 2D platform game environment. It is made up of three components: (1) the 2D platform game Infinite Mario Bros with levels generated by the compared generators, (2) a human-like bot and (3) quantitative models of player experience. The bot plays the levels and collects the data which are input to the models. The generators are evaluated based on the values output by the models. A method based on the simple moving average (SMA) is suggested for testing whether the number of performed simulations is sufficient.

    Results. The bot played all 6000 evaluated levels in less than ten minutes. The method based on the SMA showed that the number of simulations was sufficiently large.

    Conclusions. It has been shown that the automatic method is much more efficient than the traditional evaluation made by humans, while being consistent with human assessments.
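A sufficiency check based on the simple moving average, in the spirit of the one described (not necessarily the authors' exact criterion; window and tolerance are assumed), can be sketched as testing whether successive moving averages of the collected scores have stabilized:

```python
def sma(values, window):
    """Simple moving averages over a sliding window."""
    return [sum(values[i - window:i]) / window
            for i in range(window, len(values) + 1)]

def enough_simulations(scores, window=50, tolerance=0.01):
    """Deem the number of runs sufficient when the last two moving
    averages differ by less than `tolerance`, i.e. adding more runs
    no longer moves the estimate noticeably."""
    averages = sma(scores, window)
    return len(averages) >= 2 and abs(averages[-1] - averages[-2]) < tolerance
```

With stable per-level scores the check passes quickly; with a drifting score sequence it keeps demanding more simulations.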

  • 161.
    Holmgren, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Nikopoulou, Zoi
    Ramstedt, Linda
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Woxenius, Johan
    Modelling modal choice effects of regulation on low-sulphur marine fuels in Northern Europe (2014). In: Transportation Research Part D: Transport and Environment, ISSN 1361-9209, E-ISSN 1879-2340, Vol. 28, no. S1, pp. 62-73. Journal article (Peer-reviewed)
    Abstract [en]

    The implementation of MARPOL Annex VI in the North and Baltic Sea Sulphur Emission Control Area (SECA) has raised economic concerns among shippers and shipowners, as well as spurred policymakers to appeal to various interests, such as citizen health, export industry competitiveness, and consumer prices. To justify their cases, policymakers and stakeholders have commissioned various agencies to monitor the implementation’s effects upon sustainability, especially regarding a potential modal shift from sea to road transport. This article thus reviews some of these commissioned studies in order to analyse the effects of the implementation and the possibility of modal shift. It also provides an agent-based simulation study of route choice for comparatively high-value cargo from Lithuania in the east to the United Kingdom in the west. Ultimately, the results of our TAPAS study do not provide concrete evidence supporting a modal shift from sea to road transport and indeed, they indicate that a shift is unlikely to occur.

  • 162.
    Holmgren, Johan
    et al.
    Malmo Univ., SWE.
    Persson, Marie
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    An Optimization Model for Sequence Dependent Parallel Operating Room Scheduling (2016). In: Health Care Systems Engineering for Scientists and Practitioners, 2016, pp. 41-51. Conference paper (Peer-reviewed)
  • 163.
    Holmgren, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ramstedt, Linda
    Davidsson, Paul
    A study on quantitative freight transport analysis models in Denmark and Sweden (2014). Conference paper (Peer-reviewed)
    Abstract [en]

    Purpose. The aim of this paper is to present a study on freight transport analysis models. The purpose is to identify different stakeholders' perceptions of existing models, e.g., strengths and weaknesses, and their requirements and views on future models.

    Design/methodology/approach. The study is based on a questionnaire and interviews with representatives of public authorities, consultancy companies, and universities in Sweden and Denmark.

    Findings. The study shows that there is a need for freight analysis models to support transport planning in public authorities, including impact assessment of actions and estimation of freight flows. The respondents work mainly with macro-level models, whose main strength is their large geographic scope, which allows comparative studies on, e.g., the national level using one model. Weaknesses include poor quality, missing functionality, and inadequate user-friendliness. In order to achieve improved freight transport analysis, the respondents wish to include more detailed logistics aspects in their analyses, which could possibly be achieved by combining macro-level and agent-based models.

    Research limitations/implications. The limitation of this study is that we only included Danish and Swedish respondents, who mainly work with macro-level models. Moreover, only one Danish person answered the questionnaire. However, the respondent group represents wide knowledge of freight and passenger transport models, and the study largely concerns model types, not only particular models. Therefore, we argue that our findings have a wider geographic applicability.

    Practical implications. The outcome of our study might be used by researchers and public authorities in order to, e.g., guide decision-making on future model development: the views of the model users and clients are important to consider in order to ensure that the development and research efforts lead to fulfilling their needs.

    Originality/value. The presented work provides insight into the needs and attitudes of model users and clients involved in freight transport analysis. This knowledge is important, e.g., for researchers involved in model development. To the best of our knowledge, there is no previous study like the one presented.

  • 164.
    Holmgren, Johan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ramstedt, Linda
    Davidsson, Paul
    Edwards, Henrik
    Persson, Jan A.
    Combining macro-level and agent-based modeling for improved freight transport analysis (2014). In: Procedia Computer Science, Elsevier, 2014, Vol. 32, pp. 380-387. Conference paper (Peer-reviewed)
    Abstract [en]

    Macro-level models are the dominant type of freight transport analysis model for supporting decision-making in public authorities. Recently, agent-based models have also been used for this purpose. These two model types have complementary characteristics: macro-level models make it possible to study large geographic regions at a low level of detail, whereas agent-based models make it possible to study entities at a high level of detail, but typically in smaller regions. In this paper, we suggest and discuss three approaches for combining macro-level and agent-based modeling: exchanging data between models, conducting supplementary sub-studies, and integrating macro-level and agent-based modeling. We partly evaluate these approaches using two case studies and by elaborating on existing freight transport analysis approaches based on executing models in sequence.

  • 165.
    Holmqvist, Andreas
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lycke, Fredrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Vulnerability Analysis of Vagrant Boxes (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Student thesis
    Abstract [en]

    Virtual machines are often considered more secure than regular machines due to the abstraction from the hardware layer. Abstraction does provide some extra security benefits, but many vulnerabilities that exist on a regular machine still exist on virtual machines. Moreover, the sheer number of virtual machines running on many systems makes it difficult to analyse potential vulnerabilities.

    Vagrant is a management tool for virtual machines packaged in what are called boxes. There is currently no way to automatically scan these Vagrant boxes for vulnerabilities or insecure configurations to determine whether or not they are secure. Therefore we want to establish a method to detect the vulnerabilities of these boxes automatically, without launching the box or executing code.

    There are two main parts in the method used to investigate the boxes. First, there is the base box scanning. A base box is an image upon which the final box is built. This base box is launched, a list of packages is extracted, and the information is then sent to a vulnerability scanner. Second, there is the analysis of the Vagrantfile. The Vagrantfile is the file used to ready the base box with the needed software and configurations. The configuration file is written in Ruby, and in order to extract information from this file a static code analysis is performed.

    The result for each box scanned is a list of all the vulnerabilities present on the base box, as well as security configurations, like SSH settings and shared folders, retrieved from the Vagrantfile. The results are not completely accurate, because the base box is used for the scan rather than the box itself. Some of the configurations in the Vagrantfiles could not be retrieved, because doing so would require code execution or support for configurations done by other means, like bash. The method does, however, provide a good indication of how many vulnerabilities a given box possesses.
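The Vagrantfile analysis described, extracting configuration without executing Ruby, can be sketched with regular expressions over the file text. This toy version handles only simple literal assignments, far less than a real static analysis would:

```python
import re

def extract_settings(vagrantfile_text):
    """Pull a few security-relevant settings out of a Vagrantfile with
    regular expressions instead of executing the Ruby code. Covers only
    literal assignments; computed values would need real static analysis."""
    settings = {}
    box = re.search(r'config\.vm\.box\s*=\s*["\']([^"\']+)["\']',
                    vagrantfile_text)
    if box:
        settings["base_box"] = box.group(1)
    settings["insert_key"] = bool(
        re.search(r'config\.ssh\.insert_key\s*=\s*true', vagrantfile_text))
    settings["synced_folders"] = re.findall(
        r'config\.vm\.synced_folder\s+["\']([^"\']+)["\']', vagrantfile_text)
    return settings

example = '''
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.ssh.insert_key = true
  config.vm.synced_folder "./data", "/vagrant_data"
end
'''
print(extract_settings(example))
```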

  • 166.
    Horyachyy, Oleh
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Comparison of Wireless Communication Technologies used in a Smart Home: Analysis of wireless sensor node based on Arduino in home automation scenario (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Student thesis
    Abstract [en]

    Context. The Internet of Things (IoT) is an extension of the Internet which now includes physical objects of the real world. The main purpose of the Internet of Things is to increase the quality of people's daily lives. A smart home is one of the promising areas of the Internet of Things, and one which is growing rapidly. It allows users to control their home devices anytime, from any location in the world, using Internet connectivity, and to automate their operation based on physical environment conditions and user preferences. The main issues in deploying the IoT architecture are the security of the communication between constrained low-power devices in the home network and device performance. Battery lifetime is a key QoS parameter of a battery-powered IoT device which limits the level of security and affects the performance of the communication. These issues have been deepened by the spread of cheap and easy-to-use microcontrollers, which are used by electronics enthusiasts to build their own home automation projects.

    Objectives. In this study, we investigated wireless communication technologies used in low-power and low-bandwidth home area networks to determine which of them are most suitable for smart home applications. We also investigated the correlation between security, the power consumption of a constrained IoT device, and the performance of wireless communication, based on a model of a home automation system with a sensor node. The sensor node was implemented using an Arduino Nano microcontroller and an RF 433 MHz wireless communication module.

    Methods. To achieve the stated objectives of this research, the following methods were chosen: a literature review to define common applications and communication technologies used in a smart home scenario and their requirements, a comparison of wireless communication technologies in the smart home, a study of Arduino microcontroller technology, the design and simulation of part of a home automation project based on Arduino, experimental measurements of the execution time and power consumption of the Arduino microcontroller with the RF 433 MHz wireless module when transmitting data with different levels of security, and an analysis of the experimental results.

    Results. In this research, we presented a detailed comparison of ZigBee, WiFi, Bluetooth, Z-Wave, and ANT communication technologies used in a smart home in terms of the main characteristics. Furthermore, we considered performance, power consumption, and security. A model of a home automation system with a sensor node based on Arduino Nano was described with sleep management and performance evaluation. The results show that the battery lifetime of Arduino in a battery-powered sensor node scenario is determined by the communication speed, sleep management, and affected by encryption.

    Conclusions. An advanced communication strategy can be used to minimize the power consumption of the device and increase the efficiency of the communication. In that case, our security measures do not significantly reduce the productivity and lifetime of the sensor node. It is also possible to use symmetric encryption with a smaller block size.
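The relationship the Results describe between sleep management and battery lifetime can be sketched with the usual duty-cycle average-current estimate. The current and timing figures below are hypothetical, not measurements from the thesis:

```python
def battery_lifetime_hours(capacity_mah, active_ma, sleep_ma,
                           active_s, sleep_s):
    """Estimate battery lifetime from a duty-cycled node's average
    current draw. Idealized: ignores battery self-discharge,
    temperature effects, and voltage-dependent capacity."""
    avg_ma = (active_ma * active_s + sleep_ma * sleep_s) / (active_s + sleep_s)
    return capacity_mah / avg_ma

# Hypothetical node: 1000 mAh cell, 20 mA for 1 s of transmit work,
# then 0.02 mA deep sleep for 59 s of every minute.
hours = battery_lifetime_hours(1000, 20.0, 0.02, 1, 59)
print(f"{hours:.0f} h (~{hours / 24:.0f} days)")
```

The formula makes the abstract's point concrete: lifetime is dominated by the active share of the duty cycle, so longer encryption-related transmissions shorten it directly.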

  • 167.
    Hussain, Syed Asad
    et al.
    COMSATS Institute of Information Technology, PAK.
    Fatima, Mehwish
    COMSATS Institute of Information Technology, PAK.
    Saeed, Atif
    Lancaster University, GBR.
    Raza, Imran
    COMSATS Institute of Information Technology, PAK.
    Shahzad, Raja Khurram
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Multilevel classification of security concerns in cloud computing2017Inngår i: Applied Computing and Informatics, ISSN 1578-4487, E-ISSN 2210-8327, Vol. 13, nr 1, s. 57-65Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Threats jeopardize some basic security requirements in a cloud. These threats generally constitute privacy breaches, data leakage, and unauthorized data access at different cloud layers. This paper presents a novel multilevel classification model of different security attacks across different cloud services at each layer. It also identifies attack types and risk levels associated with different cloud services at these layers. The risks are ranked as low, medium, and high. The intensity of these risk levels depends upon the position of the cloud layers. The attacks become more severe for lower layers, where infrastructure and platform are involved. The intensity of these risk levels is also associated with the security requirements of data encryption, multi-tenancy, data privacy, authentication, and authorization for different cloud services. The multilevel classification model leads to the provision of a dynamic security contract for each cloud layer that dynamically decides the security requirements for the cloud consumer and provider. © 2016 King Saud University

  • 168.
    Ilie, Dragos
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kommunikationssystem.
    Datta, Vishnubhotla Venkata Krishna Sai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    On Designing a Cost-Aware Virtual CDN for the Federated Cloud2016Inngår i: 2016 INTERNATIONAL CONFERENCE ON COMMUNICATIONS (COMM 2016), IEEE, 2016, s. 255-260Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We have developed a prototype for a cost-aware, cloud-based content delivery network (CDN) suitable for a federated cloud scenario. The virtual CDN controller spawns and releases virtual caching proxies according to variations in user demand. A cost-based heuristic algorithm is used for selecting data centers where proxies are spawned. The functionality and performance of our virtual CDN prototype were evaluated in the XIFI federated OpenStack cloud. Initial results indicate that the virtual CDN can offer reliable and prompt service. Multimedia providers can use this virtual CDN solution to regulate expenses and have greater freedom in choosing the placement of virtual proxies as well as more flexibility in configuring the hardware resources available to the proxy (e.g., CPU cores, memory and storage).
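
    The cost-based placement idea in this abstract can be sketched as a greedy selection over candidate data centers. The data-center names, capacity check, and cost model below are hypothetical illustrations, not the paper's actual heuristic:

    ```python
    # Illustrative greedy cost heuristic for choosing where to spawn a
    # virtual caching proxy. The fields and cost model are assumptions
    # for illustration, not the paper's algorithm parameters.

    def pick_datacenter(datacenters, demand_region):
        """Choose the feasible data center minimizing hourly VM price
        plus per-GB transfer cost toward the demand region."""
        feasible = [dc for dc in datacenters if dc["free_vcpus"] >= 2]
        return min(
            feasible,
            key=lambda dc: dc["hourly_price"] + dc["transfer_cost"][demand_region],
        )

    dcs = [
        {"name": "dc-a", "free_vcpus": 8, "hourly_price": 0.12,
         "transfer_cost": {"eu": 0.01, "us": 0.05}},
        {"name": "dc-b", "free_vcpus": 1, "hourly_price": 0.08,
         "transfer_cost": {"eu": 0.02, "us": 0.02}},
        {"name": "dc-c", "free_vcpus": 4, "hourly_price": 0.10,
         "transfer_cost": {"eu": 0.04, "us": 0.01}},
    ]
    # dc-b is cheapest but infeasible (too few free vCPUs), so the
    # heuristic falls back to the cheapest feasible option per region.
    print(pick_datacenter(dcs, "eu")["name"])
    ```

    A real controller would re-run such a selection whenever user demand rises, then release proxies as demand falls.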

  • 169.
    Indukuri, Pavan Sutha Varma
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance comparison of Linux containers(LXC) and OpenVZ during live migration: An experiment2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: Cloud computing is one of the most widely used technologies all over the world and provides numerous products and IT services. Virtualization is one of the innovative technologies in cloud computing, with the advantages of improved resource utilisation and management. Live migration is an innovative feature of virtualization that allows a virtual machine or container to be transferred from one physical server to another. Live migration is a complex process which can have a significant impact on cloud computing when used by cloud-based software.

    Objectives: In this study, live migration of LXC and OpenVZ containers has been performed. Later, the performance of LXC and OpenVZ has been compared in terms of total migration time and downtime. Furthermore, the CPU utilisation, disk utilisation, and average load of the servers are also evaluated during the process of live migration. The main aim of this research is to compare the performance of LXC and OpenVZ during live migration of containers.

    Methods: A literature study has been done to gain knowledge about the process of live migration and the metrics that are required to compare the performance of LXC and OpenVZ during live migration of containers. Further, an experiment has been conducted to compute and evaluate the performance metrics identified in the literature study. The experiment was done to investigate and evaluate the migration process for both LXC and OpenVZ. Experiments were designed and conducted based on the objectives to be met.

    Results: The results of the experiments include the migration performance of both LXC and OpenVZ. The performance metrics identified in the literature review, total migration time and downtime, were evaluated for LXC and OpenVZ. Further, graphs were plotted for the CPU utilisation, disk utilisation, and average load during the live migration of containers. The results were analysed to compare the performance differences between OpenVZ and LXC during live migration of containers.

    Conclusions. Several conclusions can be drawn from the experiment. LXC has shown higher utilisation, and thus lower performance, when compared with OpenVZ. However, LXC has a shorter migration time and downtime when compared to OpenVZ.

  • 170.
    Iqbal, Mubashir
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Multi-agent Based Model for Inter Terminal Transportation2015Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: Due to an increase in inter-terminal transportation (ITT) volume within a container port, the complexity of transportation processes between the terminals has also increased. Problems with the current way of handling ITT resources are expected to rise in the near future. Different types of vehicles are already in place for transporting containers between different terminals in a container port. However, there needs to be an efficient and effective use of these vehicle types in order to get maximum benefit out of these resources.

    Objectives: In this thesis, we investigate and propose a solution model for ITT considering the combination of both manned (MTS, trucks) and unmanned (AGV) vehicles. An agent-based model is proposed for ITT focusing on three ITT vehicle types. The objective of the proposed model is to investigate the capabilities and combinations of different vehicles for transporting containers between different container terminals in a port.

    Methods: A systematic literature review is conducted to identify the problems, and the methods and approaches for solving those problems, in the domain of container transportation. As a case, an agent-based model is proposed for the Maasvlakte area of the Port of Rotterdam. Simulations are performed on different scenarios to compare three different road vehicle types, i.e., AGV, MTS, and truck, in a network comprising ten terminals.

    Results: The literature review results indicate that heuristics have been the most commonly used method to solve different container transportation problems in the recent past. The review also shows that limited research has been published focusing on ITT compared to intra-terminal transportation. Simulation results of our proposed model indicate that AGVs outperform trucks in terms of loading/unloading time and the number of vehicles required to handle the given volume in all scenarios. In most cases, it is observed that twice as many trucks as AGVs are required to transport containers between different terminals. Results also show that a lower number of MTS vehicles (compared to AGVs) is required for handling containers in certain scenarios; however, the loading/unloading time for MTS is much higher than that of AGVs.

    Conclusions: Using agent-based simulation experiments, we propose a model that can help in estimating the resources (vehicles) required to handle the ITT container volume and improve the utilization of different resources in a network of terminals. From the comparison of three road vehicle types, it was concluded that trucks are incapable of handling higher container volumes in ITT. It was also concluded that AGVs can be an appropriate choice if automated operations are supported in the terminals; otherwise, MTS is the best choice concerning the number of vehicles required to handle containers. Our simulation results may help ITT planners in better estimation and planning of ITT to meet current and future challenges of transporting high container volumes.

  • 171.
    Iqbal, Nayyar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    TECHNIQUES FOR APPLYING LEAN PRINCIPLES IN SERVICE DESIGN AND DEPLOYMENT IN FUTURE NETWORKS: DESIGN AND IMPLEMENTATION2017Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
  • 172. Isaksson, Ola
    et al.
    Bertoni, Marco
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för maskinteknik.
    Hallstedt, Sophie
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för strategisk hållbar utveckling.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Model Based Decision Support for Value and Sustainability in Product Development2015Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Decomposing and clarifying “sustainability” implications in the same way as concrete targets for product functionality is challenging, mainly due to the problem of showing numbers and ‘hard facts’ related to the value generated by sustainability-oriented decisions. The answer lies in methods and tools that are able, already in a preliminary design stage, to highlight how sustainable design choices can create value for customers and stakeholders, generating market success in the long term. The paper’s objective is to propose a framework where Sustainable Product Development (SPD) and Value Driven Design (VDD) can be integrated to realize a model-driven approach to support early-stage design decisions. The paper also discusses how methods and tools for Model-Based Decision Support (MBDS) (e.g., response surface methodology) can be used to increase the computational efficiency of sustainability- and value-based analysis of design concepts. The paper proposes a range of activities to guide a model-based evaluation of sustainability consequences in design, showing also that capabilities already exist today for combining research efforts into a multidisciplinary decision-making environment.

  • 173.
    Ivvala, Avinash Kiran
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Assessment of Snort Intrusion Prevention System in Virtual Environment Against DoS and DDoS Attacks: An empirical evaluation between source mode and destination mode2017Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. Cloud computing (CC) has developed as a human-centered computing model that facilitates its users' access to resources anywhere on the globe. The resources can be shared among any cloud users, which mainly brings the security of cloud computing into question. Denial of Service and Distributed Denial of Service attacks are generated by attackers to challenge the security of CC. Next-Generation Intrusion Prevention Systems (NGIPS), sometimes referred to as Non-Traditional Intrusion Prevention Systems, are used as a measure to protect users against these attacks. This research is concerned with the NGIPS techniques that are implemented in the cloud computing environment and their evaluation.

    Objectives. In this study, the main objective is to investigate the existing NGIPS techniques that can be deployed in the cloud environment and to provide an empirical comparison of the source mode and destination mode of the Snort IPS technique, based on the metrics used for evaluation of IPS systems.

    Methods. In this study, a systematic literature review is used to identify the existing NGIPS techniques. The library databases used to search the literature are Inspec, IEEE Xplore, ACM Digital Library, Wiley, Scopus, and Google Scholar. The articles are selected based on inclusion and exclusion criteria. An experiment is selected as the research method for the empirical comparison of the source mode and destination mode of the Snort NGIPS found through the literature review. The testbed is designed and implemented with the Snort filter techniques deployed in a virtual machine.

    Results. Snort is one of the most widely used NGIPS against DoS and DDoS attacks in the cloud environment. Some common metrics used for evaluating NGIPS techniques are CPU load, memory usage, bandwidth availability, throughput, true positive rate, false positive rate, true negative rate, false negative rate, and accuracy. From the experiment, it was found that destination mode performs better than source mode in Snort when compared on the CPU load, bandwidth, latency, memory utilization, and packet loss rate metrics.

    Conclusions. It was concluded that many NGIPS in the cloud computing model are related to each other and use similar techniques to prevent DoS and DDoS attacks. The author also concludes that using source-based and destination-based intrusion detection modes in Snort yields some difference in the performance measures.

  • 174.
    Jacobsson, Andreas
    et al.
    Malmo Univ, Dept Comp Sci, S-20505 Malmo, Sweden..
    Boldt, Martin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A risk analysis of a smart home automation system2016Inngår i: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 56, s. 719-733Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Enforcing security in Internet of Things environments has been identified as one of the top barriers to realizing the vision of smart, energy-efficient homes and buildings. In this context, understanding the risks related to the use and potential misuse of information about homes, partners, and end-users, as well as forming methods for integrating security-enhancing measures in the design, is not straightforward and thus requires substantial investigation. A risk analysis was applied to a smart home automation system developed in a research project involving leading industrial actors. Out of 32 examined risks, 9 were classified as low and 4 as high, i.e., most of the identified risks were deemed moderate. The risks classified as high were related either to the human factor or to the software components of the system. The results indicate that with the implementation of standard security features, new as well as current risks can be minimized to acceptable levels, albeit that the most serious risks, i.e., those derived from the human factor, need more careful consideration, as they are inherently complex to handle. A discussion of the implications of the risk analysis results points to the need for a more general model of security and privacy included in the design phase of smart homes. With such a model of security and privacy in design in place, it will contribute to enforcing system security and enhancing user privacy in smart homes, thus helping to further realize the potential of such IoT environments. (C) 2015 Elsevier B.V. All rights reserved.
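
    The low/moderate/high ranking described in this abstract is typically produced by a likelihood-times-impact matrix. The 1-5 scales, thresholds, and example risks below are assumptions for illustration, not the paper's actual method or data:

    ```python
    # Illustrative likelihood x impact risk matrix of the kind commonly
    # used in such risk analyses. Scales, thresholds, and example risks
    # are hypothetical, not taken from the paper.

    def classify_risk(likelihood, impact):
        """Both inputs on a 1 (lowest) to 5 (highest) scale."""
        score = likelihood * impact
        if score >= 15:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

    example_risks = {
        "weak user password": (4, 4),        # human factor
        "unsigned firmware update": (3, 5),  # software component
        "sensor battery drained": (3, 1),
    }
    for name, (likelihood, impact) in example_risks.items():
        print(name, "->", classify_risk(likelihood, impact))
    ```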

  • 175.
    Jacobsson, Bastian
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Cybercriminal Organizations: Utilization of Botnets2016Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Botnets, networks of hundreds to millions of computers controlled by one or more individuals, increasingly play a part in cybercrimes, with astonishing results. Access to a botnet gives the controller the capability to launch the large majority of all cyberattacks over the internet, and with the possibility of buying a complete botnet, this opens the market to non-technical criminals. The Darknet and the market it provides enable buyers to buy and trade everything from botnets and malware to complete schemes.
    The increase in cybercriminal activities and organizations has been alarmingly high in recent years; no wonder, when criminals need to invest only a small amount of money to potentially gain millions of dollars without any advanced knowledge of computer science, and with only a slight chance of getting caught due to the anonymity of the internet and botnets.
    Based on a literature review combined with a critically reflective analysis of selected information about botnets from other sources available on the internet, this paper identifies some of the main types of organizations used in cybercrime and their operations, as well as basic information about botnets, the players and stakeholders in this area, the theft and schemes carried out with botnets, and the online money laundering services involved.

  • 176.
    Jacobsson, Mattias
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bitwise relations between n and φ(n): A novel approach at prime number factorization2018Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Cryptography plays a crucial role in today’s society. Given this influence, cryptographic algorithms need to be trustworthy. Cryptographic algorithms such as RSA rely on the problem of prime number factorization to provide confidentiality. Hence, finding a way to make it computationally feasible to find the prime factors of any integer would break RSA’s confidentiality.

    The approach presented in this thesis explores the possibility of constructing φ(n) from n. This enables factorization of n into its two prime factors p and q through the method presented in the original RSA paper. The construction of φ(n) from n is achieved by analyzing bitwise relations between the two.

    While there are some limitations on p and q, this thesis can, under favorable circumstances, construct about half of the bits of φ(n) from n. Moreover, based on the research, a conjecture has been proposed which outlines further characteristics relating n and φ(n).
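
    The factorization step the abstract refers to is the well-known method from the original RSA paper: given n = pq and φ(n) = (p-1)(q-1), we have p + q = n - φ(n) + 1, so p and q are the roots of x² - (p+q)x + n = 0. A minimal sketch:

    ```python
    import math

    # Recovering p and q from n and φ(n), as in the original RSA paper:
    # p + q = n - φ(n) + 1, and p, q are the integer roots of
    # x^2 - (p + q)x + n = 0.

    def factor_from_phi(n, phi):
        s = n - phi + 1                # p + q
        d = math.isqrt(s * s - 4 * n)  # sqrt of the discriminant (p - q)^2
        p, q = (s - d) // 2, (s + d) // 2
        assert p * q == n              # sanity check: roots multiply back to n
        return p, q

    # Toy modulus from the RSA paper's parameter sizes: n = 61 * 53.
    print(factor_from_phi(3233, 3120))
    ```

    This is why constructing even part of φ(n) from n matters: full knowledge of φ(n) breaks the factorization problem immediately.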

  • 177.
    Jiangcheng, Qin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    User Behavior Trust Based Cloud Computing Access Control Model2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. With the development of computer software, hardware, and communication technologies, a new type of human-centered computing model, called Cloud Computing (CC), has been established as a commercial computer network service. However, the openness of CC brings huge security challenges to the identity-based access control system, as it is not able to effectively prevent access by malicious users; information security problems, system stability problems, and trust issues between cloud service users (CSUs) and cloud service providers (CSPs) arise therefrom. User behavior trust (UBT) evaluation is a valid method for solving the security dilemmas of identity-based access control systems, but current studies of UBT-based access control models are still not mature enough, with problems such as UBT evaluation complexity, trust dynamic update efficiency, and evaluation accuracy.

    Objective. The aim of the study is to design and develop a UBT-based CC access control model improved over the current state of the art. This includes an improved UBT evaluation method, able to reflect the user’s credibility according to the user’s interaction behavior and provide the access control model with valid evidence for making access control decisions; and a dynamic authorization control and re-allocation strategy, able to respond in a timely manner to a user’s malicious behavior during the entire interaction process through real-time behavior trust evaluation, updating CSUs’ trust values and re-allocating authority degrees.

    Methods. This study presents a systematic literature review (SLR) to identify the working structure of UBT-based access control models; summarize the CSUs’ behaviors that can be collected as UBT evaluation evidence; identify the attributes of trust that affect the accuracy of UBT evaluation; and evaluate the current state of the art of UBT-based access control models and their potential advantages, opportunities, and weaknesses. Using the acquired knowledge, a UBT-based access control model is designed, and a prototype is used to simulate the performance of the model, in order to verify its validity, improvements, and limitations.

    Results. Through the SLR, two types of working structures for UBT-based access control models are identified and illustrated, essential elements are summarized, and a dynamic trust and access update module is described; 23 items of CSU behavior evidence are identified and classified into three classes; four important trust attributes, their influences, and corresponding countermeasures are identified and summarized; and eight current state-of-the-art UBT-based access control models are identified and evaluated. A Triple Dynamic Window based Access Control model (TDW) was designed and implemented as a prototype; the simulation results indicate that the TDW model performs well on the trust fraud problem and the trust expiration problem.

    Conclusions. From the results obtained in this study, we have identified several basic elements of UBT evaluation methods and evaluated the current state-of-the-art UBT-based access control models. To address the weaknesses of trust fraud prevention and the trust expiration problem, this paper designed a TDW-based access control model. Compared to the current state-of-the-art UBT models, the TDW model has the following advantages: it effectively prevents the trust fraud problem with the “slow rise” principle; it responds in a timely manner to malicious behavior by constantly aggravating the punishment (the “rapid decrease” principle), effectively preventing malicious behavior and malicious users; it reflects the recent credibility of the accessing user through an expired-trust update strategy and most-recent trust calculation; and, finally, it has a simple and customizable data structure and a simple trust evaluation method, which give it good scalability.
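
    The "slow rise, rapid decrease" principle described above can be sketched as a trust-update rule where good behavior nudges trust up slowly while repeated malicious behavior is punished with growing penalties. The coefficients and update shape below are illustrative assumptions, not the thesis's actual TDW parameters:

    ```python
    # Toy illustration of the "slow rise, rapid decrease" trust-update
    # principle. All coefficients are hypothetical, chosen only to show
    # the asymmetry between reward and punishment.

    def update_trust(trust, behavior_score, penalty_count):
        """behavior_score > 0 is normal behavior, < 0 is malicious."""
        if behavior_score >= 0:
            # Slow rise: good behavior only nudges trust upward.
            trust += 0.01 * behavior_score * (1.0 - trust)
            penalty_count = 0
        else:
            # Rapid decrease: repeated malicious behavior is punished
            # with a constantly aggravated penalty.
            penalty_count += 1
            trust += 0.2 * behavior_score * penalty_count
        return max(0.0, min(1.0, trust)), penalty_count

    trust, penalties = 0.5, 0
    for score in [1, 1, 1, -1, -1]:    # three good actions, then two bad
        trust, penalties = update_trust(trust, score, penalties)
    print(round(trust, 3))             # trust collapses after repeated abuse
    ```

    The asymmetry makes trust expensive to accumulate fraudulently and cheap to lose, which is the point of the principle.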

  • 178.
    Jilkén, Oskar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Säkerhet och integritet i närfältskommunikation (Security and privacy in near-field communication)2014Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    Context. In today’s society we use smart cards in many areas. NFC is a smart card technology that allows contactless interaction between a reader and a tag, where the tag often takes the form of a card. NFC can be used for various payment methods or as an access card to a building, which makes life easier. In previous studies, the technology has proven to be vulnerable to attacks using an NFC reader connected to a computer. Among today’s smartphones, there are phones that have built-in read and write support for NFC tags.

    Objectives. In this study, I examine NFC tags that are frequently used in our society, entry cards and debit cards, to determine their security against the increasing use of smartphones as a potential attack tool. If a threat is found, I will try to remedy the discovered weakness.

    Methods. My approach was to select a number of test items and analyze the objects using only a smartphone with NFC support to determine the risk for each of the items. The tests conducted were modification, cloning, and unique copying.

    Results. Through this investigation, I concluded that four of the non-empty items were at risk of being threatened. All four are used in public transport, and the objects were vulnerable to unique copying.

    Conclusions. In order to remedy this vulnerability, the tag’s data should be managed in a different way, perhaps by storing the data in an internal system or by replacing the tags with a safer tag alternative.

  • 179.
    Johan, Eliasson
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Detecting Crime Series Based on Route Estimation and Behavioral Similarity2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
  • 180.
    Johansen, Valdemar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Object serialization vs relational data modelling in Apache Cassandra: a performance evaluation2015Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Context. In newer database solutions designed for large-scale, cloud-based services, database performance is of particular concern as these services face scalability challenges due to I/O bottlenecks. These issues can be alleviated through various data model optimizations that reduce I/O loads. Object serialization is one such approach.

    Objectives. This study investigates the performance of serialization using the Apache Avro library in the Cassandra database. Two different serialized data models are compared with a traditional relational database model.

    Methods. This study uses an experimental approach that compares read and write latency using Twitter data in JSON format.

    Results. Avro serialization is found to improve performance. However, the extent of the performance benefit is found to be highly dependent on the serialization granularity defined by the data model.

    Conclusions. The study concludes that developers seeking to improve database throughput in Cassandra through serialization should prioritize data model optimization as serialization by itself will not outperform relational modelling in all use cases. The study also recommends that further work is done to investigate additional use cases, as there are potential performance issues with serialization that are not covered in this study.
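
    The granularity point in these conclusions can be sketched with a coarse blob-per-object layout versus a fine blob-per-field layout. JSON stands in for Avro here purely to keep the example dependency-free, and a plain dict stands in for Cassandra; the tweet fields are made up:

    ```python
    import json

    # Sketch of the serialization-granularity trade-off: a tweet stored
    # as one serialized blob vs. one serialized value per field.
    # JSON replaces Avro and a dict replaces Cassandra for illustration.

    tweet = {"id": 1, "user": "alice", "text": "hello", "retweets": 3}

    def write_coarse(store, key, obj):
        store[key] = json.dumps(obj)             # one blob per object

    def write_fine(store, key, obj):
        for field, value in obj.items():         # one blob per field
            store[(key, field)] = json.dumps(value)

    def read_field_coarse(store, key, field):
        return json.loads(store[key])[field]     # must decode the whole blob

    def read_field_fine(store, key, field):
        return json.loads(store[(key, field)])   # decodes only that field

    coarse, fine = {}, {}
    write_coarse(coarse, 1, tweet)
    write_fine(fine, 1, tweet)
    print(read_field_coarse(coarse, 1, "user"), read_field_fine(fine, 1, "user"))
    ```

    Coarse blobs make whole-object reads cheap but single-field reads expensive; fine-grained blobs invert that trade-off, which is why the data model, not serialization alone, determines the performance benefit.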

  • 181.
    Johansson, Christian
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    On Intelligent District Heating2014Doktoravhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    Intelligent district heating is the combination of traditional district heating engineering and modern information and communication technology. A district heating system is a highly complex environment consisting of a large number of distributed entities, and this complexity and geographically dispersed layout suggest that such systems are suitable for distributed optimization and management. However, this would in practice imply a transition from the classical production-centric perspective normally found within district heating management to a more consumer-centric perspective. This thesis describes a multiagent-based system which combines production, consumption and distribution aspects into a single coherent operational management framework. The flexibility and robustness of the solution in industrial settings is thoroughly examined, and its performance is shown to lead to significant operational, financial and environmental benefits compared to current management schemes.

  • 182.
    Johansson, Christian
    et al.
    NODA, SWE.
    Bergkvist, Markus
    NODA, SWE.
    Geysen, Davy
    EnergyVille, BEL.
    De Somer, Oscar
    EnergyVille, BEL.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Vanhoudt, Dirk
    EnergyVille, BEL.
    Operational Demand Forecasting In District Heating Systems Using Ensembles Of Online Machine Learning Algorithms2017Inngår i: 15TH INTERNATIONAL SYMPOSIUM ON DISTRICT HEATING AND COOLING (DHC15-2016) / [ed] Ulseth, R, ELSEVIER SCIENCE BV , 2017, s. 208-216Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Heat demand forecasting is, in one form or another, an integral part of most optimisation solutions for district heating and cooling (DHC). Since DHC systems are demand-driven, the ability to forecast this behaviour becomes an important part of most overall energy efficiency efforts. This paper presents the current status and results from extensive work in the development, implementation and operational service of online machine learning algorithms for demand forecasting. Recent results and experiences are compared to results predicted by previous work done by the authors. The prior work, based mainly on certain decision tree based regression algorithms, is expanded to include other forms of decision tree solutions as well as neural network based approaches. These algorithms are analysed both individually and combined in an ensemble solution. Furthermore, the paper also describes the practical implementation and commissioning of the system in two different operational settings where the data streams are analysed online in real-time. It is shown that the results are in line with expectations based on prior work, and that the demand predictions have a robust behaviour within acceptable error margins. Applications of such predictions in relation to intelligent network controllers for district heating are explored and the initial results of such systems are discussed. (C) 2017 The Authors. Published by Elsevier Ltd.
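
    The online-ensemble idea in this abstract can be sketched as a weighted combination of base forecasters whose weights are updated from each observed error. The forecasters, weighting rule, and data below are illustrative assumptions, not the paper's actual algorithms:

    ```python
    import math

    # Minimal online forecasting ensemble: each cycle, every base
    # forecaster predicts the next heat demand, and forecasters with
    # large error are exponentially down-weighted. Hypothetical sketch.

    class OnlineEnsemble:
        def __init__(self, forecasters, eta=0.5):
            self.forecasters = forecasters
            self.weights = [1.0] * len(forecasters)
            self.eta = eta

        def predict(self, features):
            preds = [f(features) for f in self.forecasters]
            total = sum(self.weights)
            combined = sum(w * p for w, p in zip(self.weights, preds)) / total
            return combined, preds

        def update(self, preds, observed):
            # Exponentially down-weight forecasters with large error.
            for i, p in enumerate(preds):
                self.weights[i] *= math.exp(-self.eta * abs(p - observed))

    naive = lambda x: x["yesterday"]             # persistence forecaster
    linear = lambda x: 0.8 * x["yesterday"] + 5  # toy regression forecaster

    ens = OnlineEnsemble([naive, linear])
    for day in [{"yesterday": 100.0, "today": 101.0},
                {"yesterday": 101.0, "today": 99.0}]:
        forecast, preds = ens.predict(day)
        ens.update(preds, day["today"])
    print(ens.weights)  # the better forecaster retains the higher weight
    ```

    Because the weights adapt with every new observation, such an ensemble can keep tracking a demand pattern that drifts over the heating season.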

  • 183. Johansson, E.
    et al.
    Gahlin, C.
    Borg, Anton
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Crime Hotspots: An Evaluation of the KDE Spatial Mapping Technique2015Inngår i: Proceedings - 2015 European Intelligence and Security Informatics Conference, EISIC 2015 / [ed] Brynielsson J.,Yap M.H., IEEE Computer Society, 2015, s. 69-74Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Residential burglaries are increasing. By visualizing patterns as spatial hotspots, law-enforcement agents can get a better understanding of crime distributions and trends. Two aspects are investigated: first, measuring the accuracy and performance of the KDE algorithm using small data sets; secondly, investigating the amount of crime data needed to compute accurate and reliable hotspots. The Prediction Accuracy Index is used to effectively measure the accuracy of the algorithm. Data from three geographical areas in Sweden, including Stockholm, Gothenburg and Malmö, are analyzed and evaluated over one year. The results suggest that using the KDE algorithm to predict residential burglaries performs well overall when enough crimes are available, but that it also copes with small data sets.
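
    The Prediction Accuracy Index mentioned above is the share of (future) crimes captured by the hotspot divided by the share of the study area the hotspot covers. The example numbers below are made up for illustration:

    ```python
    # Prediction Accuracy Index (PAI) for evaluating hotspot maps:
    # PAI = hit rate / area percentage. Values above 1 mean the hotspot
    # captures more crime than a random area of the same size would.

    def pai(crimes_in_hotspot, total_crimes, hotspot_area, total_area):
        hit_rate = crimes_in_hotspot / total_crimes
        area_percentage = hotspot_area / total_area
        return hit_rate / area_percentage

    # A hotspot covering 5% of a city that captures 40% of its burglaries
    # concentrates crime eight times better than chance.
    print(pai(40, 100, 5.0, 100.0))
    ```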

  • 184.
    Johansson, Erik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Gåhlin, Christoffer
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Crime hotspots: An evaluation of the KDE spatial mapping technique: Spatial analysis2014Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Context

    Crime rates are increasing more and more, especially for residential burglaries. This thesis includes a study of the Kernel Density Estimation algorithm and of how to use this algorithm for mapping crime patterns based on geographical data. By visualizing patterns as spatial hotspots, law-enforcement agencies can get a better understanding of how criminals think and act. 

    Objectives

    The thesis focuses on two experiments: measuring the accuracy and performance of the KDE algorithm, and analyzing the amount of crime data needed to compute accurate and reliable results.

    Methods

    A Prediction Accuracy Index is used to measure the accuracy of the algorithm effectively. The development of a Python test program, which is used for extracting and evaluating the results, is also included in the study.

    Results

    The data from three geographical areas in Sweden, including Stockholm, Gothenburg and Malmö, are analyzed and evaluated over a time period of one year.

    Conclusions

    The study concludes that the KDE algorithm maps residential burglaries well overall when enough crimes are available. The minimum number of crimes needed to create a trustworthy hotspot is presented in the result and conclusion chapters. The results further show that KDE performs well in terms of execution time and scalability. Finally, the study concludes that the amount of data available for the study was not enough to produce highly reliable hotspots.

  • 185.
    Johansson, Filip
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Wikström, Jesper
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Result Prediction by Mining Replays in Dota 2. 2015. Student thesis
    Abstract [en]

    Context: Real-time games like Dota 2 lack the extensive mathematical modeling of turn-based games that can be used to make objective statements about how best to play them. Understanding a real-time computer game through the same kind of modeling as a turn-based game is practically impossible. Objectives: In this thesis, an attempt was made to create a model using machine learning that can predict the winning team of a Dota 2 game given partial data collected as the game progressed. A couple of different classifiers were tested; out of these, Random Forest was chosen to be studied in more depth. Methods: A method was devised for retrieving Dota 2 replays and parsing them into a format that can be used to train classifier models. An experiment was conducted comparing the accuracy of several machine learning algorithms with the Random Forest algorithm on predicting the outcome of Dota 2 games. A further experiment compared the average accuracy of 25 Random Forest models using different settings for the number of trees and attributes. Results: Random Forest had the highest accuracy of the different algorithms, with the best parameter setting having an average of 88.83% accuracy and an 82.23% accuracy at the five-minute point. Conclusions: Given the results, it was concluded that partial game-state data can be used to accurately predict the results of an ongoing game of Dota 2 in real time with the application of machine learning techniques.
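As a rough illustration of the prediction setup, the sketch below trains a bagged ensemble on synthetic partial game-state features (gold and experience differences are assumed stand-ins for the thesis's replay features). It uses hand-rolled depth-1 decision stumps in place of a full Random Forest library, so it shows the bagging-and-voting idea rather than the thesis's exact model.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_stump(X, y):
    """Pick the single feature/threshold/direction with best training accuracy."""
    best = (0, 0.0, 1, 0.0)   # (feature, threshold, sign, accuracy)
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            for sign in (1, -1):
                pred = (sign * (X[:, f] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[3]:
                    best = (f, t, sign, acc)
    return best[:3]

def predict_stump(stump, X):
    f, t, sign = stump
    return (sign * (X[:, f] - t) > 0).astype(int)

def fit_forest(X, y, n_trees=25):
    """Bagging: each stump is trained on a bootstrap sample of the games."""
    idxs = [rng.integers(0, len(X), len(X)) for _ in range(n_trees)]
    return [fit_stump(X[i], y[i]) for i in idxs]

def predict_forest(forest, X):
    votes = np.mean([predict_stump(s, X) for s in forest], axis=0)
    return (votes > 0.5).astype(int)   # majority vote over the ensemble

# Synthetic partial game states: [gold difference, experience difference].
n = 400
X = rng.normal(0, 1, size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)  # team 1 wins

forest = fit_forest(X[:300], y[:300])
acc = (predict_forest(forest, X[300:]) == y[300:]).mean()
print(f"hold-out accuracy: {acc:.2f}")
```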

  • 186.
    Joseph, Robin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Enhancing OpenStack clouds using P2P technologies. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    It has long been known that OpenStack has issues with scalability. Peer-to-peer (P2P) systems, on the other hand, have proven to scale well without significant reduction of performance. The objectives of this thesis are to study the challenges associated with P2P-enhanced clouds and present solutions for overcoming them. As a case study, we take the architecture of the P2P-enhanced OpenStack implemented at Ericsson, which uses the CYCLON P2P protocol. We study the OpenStack architecture and P2P technologies, and finally propose solutions and possibilities for addressing the challenges faced by P2P-enhanced OpenStack clouds. We focus mainly on a decentralized identity service and the management of virtual machine images. This work also investigates the characterization of P2P architectures for their use in P2P-enhanced OpenStack clouds. The results section shows that the proposed solution enables the existing P2P system to scale beyond what was originally possible. We also show that the P2P-enhanced system performs better than standard OpenStack.

  • 187.
    Josyula, Sai Prashanth
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    On the Applicability of a Cache Side-Channel Attack on ECDSA Signatures: The Flush+Reload attack on the point multiplication in ECDSA signature generation process. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Digital counterparts of handwritten signatures are known as Digital Signatures. The Elliptic Curve Digital Signature Algorithm (ECDSA) is an Elliptic Curve Cryptography (ECC) primitive, which is used for generating and verifying digital signatures. The attacks that target an implementation of a cryptosystem are known as side-channel attacks. The Flush+Reload attack is a cache side-channel attack that relies on cache hits/misses to recover secret information from the target program execution. In elliptic curve cryptosystems, side-channel attacks are particularly targeted towards the point multiplication step. The Gallant-Lambert-Vanstone (GLV) method for point multiplication is a special method that speeds up the computation for elliptic curves with certain properties.

    Objectives. In this study, we investigate the applicability of the Flush+Reload attack on ECDSA signatures that employ the GLV method to protect point multiplication.

    Methods. We demonstrate the attack through an experiment using the curve secp256k1. We perform a pair of experiments to estimate both the applicability and the detection rate of the attack in capturing side-channel information.

    Results. Through our attack, we capture side-channel information about the decomposed GLV scalars.

    Conclusions. Based on an analysis of the results, we conclude that for certain implementation choices, the Flush+Reload attack is applicable to the ECDSA signature generation process that employs the GLV method. Practitioners should be aware of the implementation choices that introduce vulnerabilities, and avoid using such ECDSA implementations.
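For reference, the scalar decomposition the attack observes has the standard GLV form (our restatement of the textbook construction, not taken from the thesis): for a curve with an efficiently computable endomorphism φ satisfying φ(P) = λP, the scalar k is split into two half-length scalars,

```latex
k \equiv k_1 + k_2 \lambda \pmod{n},
\qquad |k_1|,\, |k_2| \approx \sqrt{n},
\qquad kP = k_1 P + k_2\,\varphi(P),
```

so that kP can be computed with two half-length multiplications interleaved, which is the speed-up, and also the step whose memory accesses the cache attack targets.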

  • 188.
    Josyula, Sai Prashanth
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Törnquist Krasemann, Johanna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Passenger-oriented Railway Traffic Re-scheduling: A Review of Alternative Strategies utilizing Passenger Flow Data. 2017. Conference paper (Refereed)
    Abstract [en]

    Developing and operating seamless, attractive and efficient public transport services in a liberalized market requires significant coordination between the involved actors, which is both an organizational and a technical challenge. During a journey, passengers often transfer between different transport services. A delay of one train or bus service can cause a passenger to miss the transfer to the subsequent service. If those services are provided by different operators that are not coordinated, and information about the services is scattered, passengers suffer. In order to incorporate the passenger perspective in the re-scheduling of railway traffic and associated public transport services, the passenger flow needs to be assessed and quantified. We therefore survey previous research studies that propose and apply computational re-scheduling support for railway traffic disturbance management with a passenger-oriented objective. The analysis shows that there are many different ways to represent and quantify the effects of delays on passengers, i.e. "passenger inconvenience". In the majority of the studies, re-scheduling approaches rely on historic data on aggregated passenger flows, which are independent of how the public transport services are re-scheduled. Few studies incorporate a dynamic passenger flow model that reacts to how the transport services are re-scheduled. None of the reviewed studies use real-time passenger flow data in the decision-making process. Good estimations of passenger flows based on historic data are argued to be sufficient, since large amounts of passenger flow data and accurate prediction models are available today.

  • 189.
    Josyula, Sai Prashanth
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Törnquist Krasemann, Johanna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A parallel algorithm for train rescheduling. 2018. In: Transportation Research Part C: Emerging Technologies, ISSN 0968-090X, E-ISSN 1879-2359, Vol. 95, pp. 545-569. Journal article (Refereed)
    Abstract [en]

    One of the crucial factors in achieving high punctuality in railway traffic systems is the ability to effectively reschedule the trains when disturbances occur. The railway traffic rescheduling problem is a complex task to solve, both from a practical and a computational perspective. Problems of practically relevant sizes typically have a very large search space, making them time-consuming to solve even for state-of-the-art optimization solvers. Though competitive algorithmic approaches are a widespread topic of research, not much research has been done to explore the opportunities and challenges in parallelizing them. This paper presents a parallel algorithm to efficiently solve the real-time railway rescheduling problem on a multi-core parallel architecture. We devised (1) an effective way to represent the solution space as a binary tree and (2) a novel sequential heuristic algorithm based on a depth-first search (DFS) strategy that quickly traverses the tree. Based on that, we designed a parallel algorithm for a multi-core architecture, which proved to be 10.5 times faster than the sequential algorithm even when run on a single processing core. When executed on a parallel machine with 8 cores, the speed further increased by a factor of 4.68, and every disturbance scenario in the considered case study was solved within 6 s. We conclude that for the problem under consideration, though a sequential DFS approach is fast in several disturbance scenarios, it is notably slower in many others. The parallel DFS approach, which combines DFS with simultaneous breadth-wise tree exploration, is much faster on average and consistently fast across all scenarios.
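A toy version of the tree search (not the paper's algorithm): each level of a binary tree fixes the order of two trains at one conflict, a leaf is a complete schedule, and the parallel variant hands each worker the subtree under one depth-3 prefix. The cost function and depth are invented for illustration, and this sketch enumerates leaves exhaustively rather than using the paper's heuristic pruning.

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

DEPTH = 12   # number of binary train-ordering decisions (assumed, for illustration)

def leaf_cost(decisions):
    """Toy stand-in for a schedule's cost: each bit picks which train goes first
    at one conflict. A real cost would come from simulating the timetable."""
    return sum((i + 1) * b for i, b in enumerate(decisions)) ^ 7

def dfs_best(prefix):
    """Exhaustive depth-first sweep of the subtree below `prefix`."""
    tails = itertools.product((0, 1), repeat=DEPTH - len(prefix))
    return min((prefix + tail for tail in tails), key=leaf_cost)

# Sequential: one DFS from the root.
seq = dfs_best(())

# Parallel: one worker per depth-3 subtree, then keep the overall best leaf.
roots = list(itertools.product((0, 1), repeat=3))
with ThreadPoolExecutor(max_workers=8) as pool:
    par = min(pool.map(dfs_best, roots), key=leaf_cost)

assert leaf_cost(par) == leaf_cost(seq)   # both strategies reach an optimal leaf
print(leaf_cost(seq))   # prints 0
```

Because the eight subtrees partition the leaves, the parallel search finds the same optimum as the sequential sweep; the paper's speed-ups come from doing this with heuristics and real schedule costs.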

  • 190.
    Kalakuntla, Preetham
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Analysis of kNN Query Processing on large datasets using CUDA & Pthreads: comparing between CPU & GPU. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Telecom companies do a lot of analytics to provide consumers a better service and to stay competitive. These companies accumulate big data that has the potential to provide inputs for business. Query processing is one of the major tools for running analytics on their data.

    Traditional query processing techniques that rely on in-memory algorithms cannot cope with the large amounts of data held by telecom operators. The k-nearest neighbour (kNN) technique is well suited for classification and regression on large datasets. Our research focuses on implementing kNN as a query processing algorithm and evaluating its performance on large datasets on a single core, on multiple cores, and on a GPU.

    This thesis presents an experimental implementation of kNN query processing on a single-core CPU, a multi-core CPU and a GPU, using Python, Pthreads and CUDA respectively. We considered different dataset sizes, dimensionalities and values of k as inputs to evaluate the performance. The experiment shows that, across the different input levels, the GPU performs 1.4 to 3 times better than the single-core CPU and 5.8 to 16 times better than the multi-core CPU.
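The core of brute-force kNN query processing is the query-to-training-point distance matrix, which is exactly the part that maps naturally onto Pthreads worker threads or CUDA thread blocks. A minimal single-core NumPy version, with invented data, might look like:

```python
import numpy as np

def knn_classify(train_X, train_y, queries, k=5):
    """Brute-force kNN: the distance matrix below is the part that parallelizes
    naturally across CPU threads (Pthreads) or GPU blocks (CUDA)."""
    # (n_queries, n_train) squared Euclidean distances in one vectorized step.
    d2 = ((queries[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argpartition(d2, k, axis=1)[:, :k]   # k smallest per query row
    votes = train_y[nearest]                          # neighbour labels
    return np.array([np.bincount(v).argmax() for v in votes])  # majority vote

rng = np.random.default_rng(1)
# Two well-separated synthetic classes in a 3-D feature space.
X = np.vstack([rng.normal(0, 1, size=(200, 3)), rng.normal(4, 1, size=(200, 3))])
y = np.array([0] * 200 + [1] * 200)

queries = np.array([[0.0, 0.0, 0.0], [4.0, 4.0, 4.0]])
print(knn_classify(X, y, queries, k=7))   # [0 1]
```

Each query row of `d2` is independent, so splitting the query set across threads (or assigning one query per GPU block) requires no synchronization until the final vote.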

  • 191.
    Kalimullah, Khan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Sushmitha, Donthula
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Influence of Design Elements in Mobile Applications on User Experience of Elderly People. 2017. In: 8th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2017) / 7th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH-2017) / affiliated workshops / [ed] Shakshuki, E., Elsevier Science BV, 2017, pp. 352-359. Conference paper (Refereed)
    Abstract [en]

    Technology in the field of health care has taken a step forward in making daily health maintenance easy. With the gradual increase in the elderly population, it is important to provide them with facilities made accessible through technological innovations. However, the elderly are often reluctant to use new technology such as mobile applications. In this paper, an effort is made to overcome this barrier through a study of both elderly user experiences and the user interface design of an mHealth application, and an analysis of the relation between them.

  • 192.
    Kamana, Manaschandra
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Investigating usability issues of mHealth apps for elderly people: A case study approach. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 193.
    Kamma, Aditya
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    An Approach to Language Modelling for Intelligent Document Retrieval System. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 194.
    Kanumuri, Sai Srilakshmi
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    On Evaluating Machine Learning Approaches for Efficient Classification of Traffic Patterns. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. With the increased usage of mobile devices and the internet, cellular network traffic has increased tremendously. This increase in network traffic has led to more frequent communication failures among the network nodes. Each communication failure among the nodes is defined as a bad event, and the occurrence of one such bad event can act as the origin of several consecutive bad events. These bad events as a whole may eventually lead to node failures (a node not being able to respond to any data requests). Implementing workarounds for these node failures requires telecom companies to invest a lot of human effort and cost, so there is a need to prevent node failures from happening. This can be done by classifying the traffic patterns between nodes in the network, identifying bad events in them, and delivering the verdict immediately after detection.

    Objectives. Through this study, we aim to find the most suitable machine learning algorithm for efficiently classifying the traffic patterns of the SGSN-MME, a network management tool designed to support the functionalities of two nodes, the SGSN (Serving GPRS (General Packet Radio Service) Support Node) and the MME (Mobility Management Entity). We do this by evaluating the classification performance of four machine learning classification algorithms, namely Support Vector Machines (SVMs), Naïve Bayes, Decision Trees and Random Forests, on the traffic patterns of the SGSN and the MME. The selected classification algorithm will be developed in such a way that, whenever it detects a bad event, it notifies the user by prompting a message saying, "Something bad is happening".

    Methods. We conducted an experiment evaluating the classification performance of our four chosen classification algorithms on a dataset provided by Ericsson AB, Gothenburg. The experimental dataset is a combination of three logs, one of which represents traffic patterns in a real network, while the other two contain synthetic traffic patterns generated manually. The dataset is unlabeled, with 720 data instances and 4019 attributes. K-means clustering is performed to divide the data instances into groups, which are then labeled as good or bad events. Also, since the number of attributes in the experimental dataset is larger than the number of instances, feature selection is performed to select the subset of relevant attributes that best represents the whole data. All the chosen classification algorithms are trained and tested with ten-fold cross-validation using the selected subset of attributes, and the obtained performance measures, such as classification accuracy, F1 score and training time, are analyzed and compared to select the most suitable algorithm. Finally, the chosen algorithm is tested on unlabeled real data, and the performance measures are analyzed to check whether it is able to detect the bad events correctly.

    Results. The experimental results showed that, in terms of classification accuracy and F1 score, Random Forests outperformed Support Vector Machines, Naïve Bayes and Decision Trees, with an average classification accuracy of 99.72% and an average F1 score of 99.6. On the other hand, in terms of training time, Naïve Bayes outperformed Support Vector Machines, Decision Trees and Random Forests, with an average training time of 0.010 seconds. The classification accuracy and F1 score of Random Forests on unlabeled data were found to be 100% and 100, respectively.

    Conclusions. Since our study focuses on classifying the traffic patterns of the SGSN-MME as accurately as possible, classification accuracy and F1 score are of higher importance than training time. Therefore, based on the experimental results, we conclude that Random Forests is the most suitable machine learning algorithm for classifying the traffic patterns of the SGSN-MME. However, Naïve Bayes can also be used if classification has to be performed in the least time possible and moderate accuracy (around 70%) is acceptable.
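The labeling step described in the Methods can be sketched as plain Lloyd's k-means with k = 2, where the minority cluster is treated as the bad events. Everything below (feature dimensionality, cluster means, the minority rule) is an illustrative assumption, not the Ericsson data.

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then recompute centroids as cluster means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        centroids = np.array([
            X[labels == j].mean(axis=0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])
    return labels

rng = np.random.default_rng(3)
# Synthetic "traffic pattern" vectors: normal events near 0, bad events shifted.
good = rng.normal(0.0, 1.0, size=(60, 5))
bad = rng.normal(6.0, 1.0, size=(20, 5))
X = np.vstack([good, bad])

labels = kmeans(X, k=2)
# Treat the minority cluster as the bad events (failures should be rare).
bad_cluster = np.argmin(np.bincount(labels))
print("flagged as bad:", int((labels == bad_cluster).sum()))
```

Once the clusters are mapped to good/bad labels, the data can feed a supervised classifier exactly as in the thesis pipeline.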

  • 195.
    Kappelin, Frida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Rudvall, Jimmie
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Fraud Detection within Mobile Money: A mathematical statistics approach. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Today it is easy to perform banking transactions digitally, either on a computer or using a mobile phone. As banking services grow and are implemented across multiple platforms, it becomes easier for fraudsters to commit financial fraud. This thesis focuses on investigating log files from a Mobile Money system that makes it possible to perform banking transactions with a mobile phone.

    Objectives: The objective of this thesis is to evaluate whether two statistical methods, Benford's law and statistical quantiles, can be combined into a statistical way of finding fraudsters within a Mobile Money system.

    Methods: Rules were extracted from a case study focusing on a Mobile Money system, and limits were calculated using quantiles. A fraud detector was implemented that uses these rules together with the limits and Benford's law in order to detect fraud. The fraud detector applied the methods both independently and combined. The performance was then evaluated.

    Results: The results show that it is possible to use Benford's law and statistical quantiles within the studied Mobile Money system. They also show that there is only a very small difference, in both detection rate and accuracy, between using the two methods combined and using them separately.

    Conclusions: We conclude that by combining the chosen methods it is possible to achieve medium-high true positive rates and very low false positive rates. The most effective method for finding fraudsters is using quantiles alone; combining Benford's law with quantiles gives the second-best result.
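A compact sketch of the two ingredients: Benford's law gives expected first-digit frequencies to test transaction amounts against (here via a chi-square style score), and an empirical quantile gives a simple per-feature limit. The honest/fraud amounts below are fabricated for illustration; the thesis's actual rules and limits came from its case study.

```python
import math
from collections import Counter

# Benford's expected first-digit probabilities: P(d) = log10(1 + 1/d).
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(amount):
    digits = f"{abs(amount):.10f}".lstrip("0.")
    return int(digits[0])

def benford_score(amounts):
    """Chi-square style distance between observed first-digit counts and
    Benford's law; large values suggest manipulated amounts."""
    counts = Counter(first_digit(a) for a in amounts)
    n = len(amounts)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items())

def quantile_limit(values, q=0.99):
    """Empirical quantile used as a simple transaction limit."""
    ordered = sorted(values)
    return ordered[min(int(q * len(ordered)), len(ordered) - 1)]

# Log-uniform "honest" amounts roughly follow Benford; the "fraud" amounts sit
# just under a round limit, so their first digit is always 9.
honest = [10 ** (1 + (1.236 * i) % 2) for i in range(1, 300)]
fraud = [900 + (i % 90) for i in range(300)]

print(benford_score(honest) < benford_score(fraud))   # True
print("flag transactions above:", round(quantile_limit(honest), 1))
```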

  • 196.
    Karlsson, Robin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Cooperative Behaviors Between Two Teaming RTS Bots in StarCraft. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Video games are a big entertainment industry. Many video games let players play against or together with each other. Some also make it possible for players to play against or together with computer-controlled players, called bots. Artificial intelligence (AI) is used to create bots.

    Objectives. This thesis aims to implement cooperative behaviors between two bots and determine if the behaviors lead to an increase in win ratio. This means that the bots should be able to cooperate in certain situations, such as when they are attacked or when they are attacking.

    Methods. The bots' win ratio is tested with a series of quantitative experiments, where in each experiment two teaming bots with cooperative behavior play against two teaming bots without any cooperative behavior. The data are analyzed with a t-test to determine whether the differences are statistically significant.

    Results and Conclusions. The results show that cooperative behavior can increase the performance of two teaming real-time strategy bots against a non-cooperative team of two bots. However, performance could either increase or decrease depending on the situation. In three cases there was an increase in performance, and in one case performance decreased. In three cases there was no difference in performance. This suggests that more research is needed for these cases.
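The t statistic used in such a comparison can be computed directly; the win-rate samples below are invented, and the rule of thumb that |t| well above ~2 indicates significance stands in for a proper lookup in the t distribution.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

# Per-series win rates for the cooperative and baseline teams (invented numbers).
coop = [0.62, 0.58, 0.65, 0.60, 0.63, 0.57]
base = [0.48, 0.52, 0.50, 0.47, 0.51, 0.49]

t = welch_t(coop, base)
print(f"t = {t:.2f}")   # far above ~2: the win-ratio gap is unlikely to be chance
```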

  • 197.
    Khadka, Pawan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Analysis of Decode and Forward Relaying with Keyhole in Nakagami-m Fading Channels. 2012. Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Fading has always been a major obstacle to the effective transmission of signals from the transmit antenna to the receive antenna. The innovation of multiple-input multiple-output (MIMO) systems has mitigated the adverse effects of fading and thus provided higher data-rate communication while maintaining quality of service (QoS). A MIMO system is efficient and reliable but has hardware complexities. Thus, the concept of cooperative technology was put forth to deal with the drawbacks of MIMO systems. Cooperative communication comprises relay terminals placed between the source and the destination, which act as a virtual MIMO system, thus maintaining QoS and reliability. In this thesis, we consider a downlink MIMO decode-and-forward (DF) relay system with ns antennas at the source S, nr antennas at the relay R and nd antennas at the destination D. An orthogonal space-time block code (OSTBC) transmission is applied to the source-destination, source-relay and relay-destination links in order to exploit diversity gain. The transmitted signal from the source is assumed to pass through a keyhole Nakagami-m fading channel. We then obtain the moment generating function (MGF) of the overall system, which leads to the derivation of analytical expressions for the symbol error rate (SER) applying M-ary phase shift keying (MPSK) in a MIMO DF relay system. Finally, as per our objective, we obtain and analyse the outputs of the MIMO relay system for different antenna pairs and fading severity parameters, and analyse the effects of the MIMO DF relay system over the keyhole-affected Nakagami-m channel.
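The MGF-to-SER step mentioned above follows the standard single-integral form for MPSK (our restatement of the usual textbook expression, not the thesis's exact derivation): with end-to-end SNR γ and MGF M_γ(s),

```latex
P_s = \frac{1}{\pi} \int_{0}^{(M-1)\pi/M}
      M_{\gamma}\!\left( -\frac{g_{\mathrm{PSK}}}{\sin^{2}\theta} \right) d\theta,
\qquad g_{\mathrm{PSK}} = \sin^{2}\!\frac{\pi}{M}.
```

Once the system's MGF is known, the SER follows by evaluating this finite-range integral numerically for each antenna configuration and fading severity m.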

  • 198.
    Khambhammettu, Mahith
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Analyzing behavior and applicability of an optimization model: A simulation study for sequence dependent scheduling of surgeries. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. With the growth of the elderly population, there is an increasing need to provide quality health care. Operating room planning is one aspect considered to meet this requirement. Operating room planning concerns the efficient management of the available resources to perform surgeries. It deals with the allocation and assignment of surgeries to operating rooms in a sequential manner, using resource optimization strategies to manage the available operating rooms.

    Objectives. In this thesis, we investigate the behavior and applicability of an optimization model and measure the degree to which the model can efficiently utilize the available hospital resources.

    Methods. Simulations are conducted to test the impact of the implemented model on turnover time. The experiment is conducted on three different scenarios using real-world data collected from Blekinge hospital.

    Results. The impact on turnover time is measured for the three different scenarios and evaluated using a simulation experiment. The relationship between the scenarios is identified by comparing the results with a baseline scenario (the real-world schedule).

    Conclusions. Based on the analysis, we conclude that the new optimization model is capable of producing better schedules than the existing scheduling system used by the hospital. The observations show that the optimization model significantly reduces turnover time compared to the real schedule. Moreover, the scenario using an additional resource is found to perform better than the other scenarios. The thesis concludes by showcasing the performance and applicability of the optimization model.
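To see why the sequencing matters: with sequence-dependent turnover times, a room's total occupation depends on the order of surgeries, not just on their durations. The sketch below uses made-up durations and turnover times, and exhaustive search in place of the thesis's optimization model.

```python
from itertools import permutations

# Hypothetical sequence-dependent turnover times (minutes): turnover[(a, b)] is
# the changeover needed when a surgery of type b follows type a in the room.
turnover = {
    ("ortho", "ortho"): 15, ("ortho", "cardio"): 40, ("ortho", "general"): 25,
    ("cardio", "ortho"): 45, ("cardio", "cardio"): 20, ("cardio", "general"): 35,
    ("general", "ortho"): 20, ("general", "cardio"): 30, ("general", "general"): 10,
}
durations = {"ortho": 90, "cardio": 120, "general": 60}   # invented durations
surgeries = ["ortho", "cardio", "general", "ortho", "general"]

def makespan(order):
    """Total room occupation: surgery durations plus sequence-dependent turnovers."""
    total = durations[order[0]]
    for prev, cur in zip(order, order[1:]):
        total += turnover[(prev, cur)] + durations[cur]
    return total

# Exhaustive search stands in for the real solver (fine for 5 surgeries).
best = min(permutations(surgeries), key=makespan)
print(best, makespan(best))
```

Grouping surgeries with cheap changeovers shortens the day; the thesis's optimization model does the same at realistic problem sizes, where enumeration is infeasible.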

  • 199.
    Khambhammettu, Mahith
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Persson, Marie
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Analyzing a Decision Support System for Resource Planning and Surgery Scheduling. 2016. In: Procedia Computer Science / [ed] Martinho R., Rijo R., Cruz-Cunha M.M., Bjorn-Andersen N., Quintela Varajao J.E., Elsevier, 2016, Vol. 100, pp. 532-538. Conference paper (Refereed)
    Abstract [en]

    This study proposes a decision support system based on optimization modelling for operating room resource planning and sequence-dependent scheduling of surgery operations. We conduct a simulation experiment using real-world data collected from the local hospital to evaluate the proposed model. The obtained results are compared with real surgery schedules planned at the local hospital. The experiment shows that the efficiency of the schedules produced by the proposed model is significantly improved, in terms of less surgery turnover time, increased utilization of operating rooms and minimized makespan, compared to the real schedules. Moreover, the proposed optimization-based decision support system enables the analysis of surgery scheduling in relation to resource planning.

  • 200.
    Khan, Kalimullah
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Investigating motivational and usability issues of mHealth wellness apps for peoples to ensure satisfaction: Exploratory Study. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis