  • 301.
    Niyizamwiyitira, Christine
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Evaluation of Trajectory Queries on Multiprocessor and Cluster (2016). Conference paper (Refereed)
    Abstract [en]

    In this study, we evaluate the performance of trajectory queries handled by Cassandra, MongoDB, and PostgreSQL. The evaluation is conducted on a multiprocessor and a cluster. Telecommunication companies collect a lot of data from their mobile users. These data must be analysed in order to support business decisions, such as infrastructure planning. The optimal choice of hardware platform and database can differ from one query to another. We use data collected from Telenor Sverige, a telecommunication company that operates in Sweden. These data are collected every five minutes for an entire week in a medium-sized city. The execution time results show that Cassandra performs much better than MongoDB and PostgreSQL for queries that do not have spatial features. Stratio's Cassandra Lucene index incorporates a geospatial index into Cassandra, making Cassandra perform similarly to MongoDB in handling spatial queries. Of four use cases, namely distance query, k-nearest neighbor query, range query, and region query, Cassandra performs much better than MongoDB and PostgreSQL for two, namely range query and region query. The scalability is also good for these two use cases.

  • 302.
    Niyizamwiyitira, Christine
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Period assignment in real-time scheduling of multiple virtual machines (2015). In: Proceedings of the 7th International Conference on Management of computational and collective intElligence in Digital EcoSystems, Association for Computing Machinery (ACM), 2015, pp. 180-187. Conference paper (Refereed)
  • 303.
    Niyizamwiyitira, Christine
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Real-time scheduling of multiple virtual machines (2017). In: International Journal of Computers and Their Applications, Vol. 24, no. 3, pp. 91-109. Journal article (Refereed)
    Abstract [en]

        The use of virtualized systems is growing, and one would like to benefit from this kind of system also for real-time applications with hard deadlines. There are two levels of scheduling in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application inside a Virtual Machine (VM), and scheduling of different VMs on the hypervisor level. Traditional real-time scheduling uses methods based on periods, deadlines and worst-case execution times of the real-time tasks. In order to apply the existing theory also to virtualized environments, we must obtain periods and (worst-case) execution times for VMs containing real-time applications. In this paper, we describe a technique for calculating periods, execution times and utilization for VMs containing real-time applications with hard deadlines. We show that when we look at all VMs that share a physical processor, we are able to use longer (better) periods. Alternatively, if the periods are the same, we are able to use a smaller amount of the processor resource for the VMs, and more tasks become schedulable compared to when we look at each VM in isolation. We also introduce an overhead model that makes it possible to find VM periods that minimize the processor utilization.

  • 304.
    Nizampuram, Pranay
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Prediction of re-admissions for critical health conditions: A Machine Learning Approach (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Re-admission is a return hospitalization within 30 days from the date of original admission to or discharge from hospital. The costs of unplanned re-admissions were estimated at $25 billion per year in the U.S. alone. The re-admission rate also has a huge impact on the quality of care provided to patients, the cost of health care, the utilization of hospital resources, and the image of the care provider. Studies indicate a huge potential for savings that can be achieved with incremental performance improvements in detecting cases of preventable re-admissions.

    Objectives. In this study we identify the different features that help in predicting re-admissions, compare different machine learning techniques, and build a model to predict re-admissions using one technique. We also propose a framework for implementing this model in real-world situations.

    Methods. To reach the objective, data on patients over a period of time were studied to determine the factors that help in identifying re-admissions. Experiments were performed to identify the features that are most relevant for predicting re-admissions and to investigate the most suitable machine learning techniques for this purpose. The model was tested on predicting re-admission cases for Acute Myocardial Infarction and Pneumonia.

    Results. The features that help in predicting re-admissions were determined, and a model was developed using these features and the selected machine learning algorithm. The model showed good results in predicting re-admissions: it predicted the risk of re-admission for Acute Myocardial Infarction (c-statistic = 0.811) and Pneumonia (c = 0.76).

    Conclusions. We conclude that our model showed good results in predicting re-admissions. The developed model is discriminative for specific diseases such as Acute Myocardial Infarction and Pneumonia. It is also generalizable, as it incorporates features that are easily available for patient populations across the globe.

  • 305.
    Nordahl, Christian
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Persson, Marie
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Organizing, Visualizing and Understanding Households' Electricity Consumption Data through Clustering Analysis (2018). In: https://sites.google.com/view/arial2018/accepted-papersprogram, 2018. Conference paper (Refereed)
    Abstract [en]

    We propose a cluster analysis approach for organizing, visualizing and understanding households' electricity consumption data. We initially partition the consumption data into a number of clusters with similar daily electricity consumption profiles. The cluster centroids can be seen as representative signatures of a household's electricity consumption behaviors. We evaluate the proposed approach by conducting a number of experiments on the electricity consumption data of ten selected households. Our results show that the approach is suitable for analyzing and understanding the data and for creating electricity consumption behavior models.

  • 306.
    Nordahl, Christian
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lindström, Malin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    The BXT-Bitmap: An Efficient Searchable Symmetric Encryption Scheme (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
  • 307.
    Nordahl, Christian
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Persson, Marie
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Detection of Residents' Abnormal Behaviour by Analysing Energy Consumption of Individual Households (2017). In: Proceedings of the 17th IEEE International Conference on Data Mining Workshops (ICDMW) / [ed] Gottumukkala, R; Ning, X; Dong, G; Raghavan, V; Aluru, S; Karypis, G; Miele, L; Wu, X, IEEE, 2017, pp. 729-738. Conference paper (Refereed)
    Abstract [en]

    As average life expectancy continuously rises, assisting the elderly population with living independently is of great importance. Detecting abnormal behaviour of the elderly living at home is one way to help eldercare systems cope with the growing elderly population. In this study, we perform an initial investigation into identifying abnormal behaviour of household residents using energy consumption data. We conduct an experiment in two parts: the first to identify a suitable prediction algorithm for modelling energy consumption behaviour, and the second to detect abnormal behaviour. The approach provides an initial step for elderly care that is low-cost, easily deployable, and non-intrusive.

  • 308.
    Nordgren, Daniella
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Phishing attacks targeting hospitals: A study over phishing knowledge at Blekingesjukhuset (2018). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Phishing emails are a type of computer attack that targets users and tries to trick them into giving out personal information, following shady links, or downloading malicious attachments. Phishing is often closely linked to ransomware, a type of attack that locks a user's computer and demands a ransom to restore access. Ransomware viruses often infect a computer through a phishing email. Hospitals are a growing target for these types of attacks because of their need to be able to access their systems at all times.

    Objectives. This study investigates the phishing knowledge among employees at Blekingesjukhuset and whether Blekingesjukhuset is at risk of falling victim to a ransomware attack through a phishing email opened by an employee.

    Methods. This is researched by reviewing relevant literature and by a survey sent out to employees at Blekingesjukhuset regarding their phishing knowledge.

    Results. The results show that the participants of the survey were overall unsure about how to detect phishing emails and thought that knowledge about the subject is necessary.

    Conclusions. The conclusion was made that the employees did not know what to look for in order to determine whether an email is a phishing email or not. Based on this, there is a risk of Blekingesjukhuset falling victim to a ransomware attack through a phishing email unintentionally opened by an employee.

  • 309.
    Nordin, Henrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Jouper, Kevin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance analysis of multithreaded sorting algorithms (2015). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Almost all modern computers today have a CPU with multiple cores, providing extra computational power. In the new age of big data, parallel execution is essential to improve performance to an acceptable level. With parallelisation come new challenges that need to be considered.

    Objectives. In this work, parallel algorithms are compared and analysed in relation to their sequential counterparts, using the Java platform, in order to find the potential speedup of multithreading and the factors that affect performance. In addition, source code is provided for multithreaded algorithms with proven time complexities.

    Methods. A literature study was conducted to gain knowledge and a deeper understanding of sorting algorithms and the area of parallel computing. An experiment followed, implementing a set of algorithms from which data could be gathered through benchmarking and testing. The data gathered were studied and analysed together with the corresponding source code to assess the validity of the parallelisation.

    Results. Multithreading does improve performance, with two threads on average providing a speedup of up to 2x and four threads up to 3x. However, the potential speedup is bound by the available physical threads of the CPU and depends on balancing the workload.

    Conclusions. Workload balancing and using the correct number of threads in relation to the problem to be solved need to be carefully considered in order to utilize the extra resources available to their full potential.

  • 310.
    Novak, Gabriela
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Developing a usability method for assessment of M-Commerce systems: a case study at Ericsson (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. Usability work in software engineering is a measure of quality and contributes to the overall acceptability of systems. However, it is also the most neglected process in the software industry. While there are established guidelines and methods in usability, they are not used in the industry to their full extent.

    Objectives. In this case study I examine the level of usability, and usability issues within the context of use, in an M-Commerce solution. I also address the distance between development and the current and potential expert users of the wallet platform solution.

    Methods. In this exploratory research, a number of article sources such as IEEE Xplore, ACM Digital Library, and Springer Link were used. Studies were selected after reading titles, abstracts and keywords, then chosen if relevant to the subject. The methods used in this study were literature review, case study and experiment.

    Results. A usability test was performed on a specific user interface in order to detect potential usability issues that might have been overlooked in the development cycle. As an experiment, the test was performed with proxy users and verified with the actual users. A recommendation list based on the test results was produced for possible improvements to the interface.

    Conclusions. A modified usability testing method is proposed. I also conclude that performing usability testing based on the context of use and the ISO 9241-11 standard brings value to Ericsson's current and potential customers as well as to the solution itself. The results from the two groups used for testing were very similar, and the proxy user group is thus a good alternative to actual users. With the engagement of a team that works with customers dispersed over the world, the context of use can be brought to the development department.

  • 311.
    Novak, Gabriela
    et al.
    Ericsson AB, SWE.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Usability Evaluation of an M-Commerce System Using Proxy Users (2015). In: HCI International 2015 - Posters' Extended Abstracts, Part II, Springer-Verlag Berlin, 2015, pp. 164-169. Conference paper (Refereed)
    Abstract [en]

    We have done a usability evaluation of a mobile commerce system developed by Ericsson in Sweden. The main market for the system is in developing countries in Africa. Consequently, there is a geographical distance between the developers and the users, and it is difficult to involve actual users in usability tests. Because of this, a team of solution architects that work with the product was used as proxies for the actual users in the usability test. When the test was completed, a group of actual users came to Sweden to attend a course. In order to get additional input to the usability evaluation, the usability test was repeated with the actual users. The results from the two groups were very similar, and our conclusion is that the proxy user group was a good alternative to actual users.

  • 312.
    Spjuth, Ola
    et al.
    Karolinska Institutet, SWE.
    Karlsson, Andreas
    Karolinska Institutet, SWE.
    Clements, Mark
    Karolinska Institutet, SWE.
    Humphreys, Keith
    Karolinska Institutet, SWE.
    Ivansson, Emma
    Karolinska Institutet, SWE.
    Dowling, Jim
    Royal Institute of Technology, SWE.
    Eklund, Martin
    Karolinska Institutet, SWE.
    Jauhiainen, Alexandra
    AstraZeneca AB R&D, SWE.
    Czene, Kamila
    Karolinska Institutet, SWE.
    Grönberg, Henrik
    Karolinska Institutet, SWE.
    Sparén, Pär
    Karolinska Institutet, SWE.
    Wiklund, Fredrik
    Karolinska Institutet, SWE.
    Cheddad, Abbas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Pálsdóttir, Þorgerður
    Nordic Information for Action e-Science Center, SWE.
    Rantalainen, Mattias
    Karolinska Institutet, SWE.
    Abrahamsson, Linda
    Karolinska Institutet, SWE.
    Laure, Erwin
    Royal Institute of Technology, SWE.
    Litton, Jan-Eric
    European Research Infrastructure Consortium, AUT.
    Palmgren, Juni
    Helsinki University, FIN.
    E-Science technologies in a workflow for personalized medicine using cancer screening as a case study (2017). In: JAMIA Journal of the American Medical Informatics Association, ISSN 1067-5027, E-ISSN 1527-974X, Vol. 24, no. 5, pp. 950-957. Journal article (Refereed)
    Abstract [en]

    Objective: We provide an e-Science perspective on the workflow from risk factor discovery and classification of disease to evaluation of personalized intervention programs. As case studies, we use personalized prostate and breast cancer screenings.

    Materials and Methods: We describe an e-Science initiative in Sweden, e-Science for Cancer Prevention and Control (eCPC), which supports biomarker discovery and offers decision support for personalized intervention strategies. The generic eCPC contribution is a workflow with 4 nodes applied iteratively, and the concept of e-Science signifies systematic use of tools from the mathematical, statistical, data, and computer sciences.

    Results: The eCPC workflow is illustrated through 2 case studies. For prostate cancer, an in-house personalized screening tool, the Stockholm-3 model (S3M), is presented as an alternative to prostate-specific antigen testing alone. S3M is evaluated in a trial setting and plans for rollout in the population are discussed. For breast cancer, new biomarkers based on breast density and molecular profiles are developed and the US multicenter Women Informed to Screen Depending on Measures (WISDOM) trial is referred to for evaluation. While current eCPC data management uses a traditional data warehouse model, we discuss eCPC-developed features of a coherent data integration platform.

    Discussion and Conclusion: E-Science tools are a key part of an evidence-based process for personalized medicine. This paper provides a structured workflow from data and models to the evaluation of new personalized intervention strategies. The importance of multidisciplinary collaboration is emphasized. Importantly, the generic concepts of the suggested eCPC workflow are transferable to other disease domains, although each disease will require tailored solutions.

  • 313.
    Osekowska, Ewa
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Design and Implementation of a Maritime Traffic Modeling and Anomaly Detection Method (2014). Licentiate thesis, comprising articles (Other academic)
    Abstract [en]

    Nowadays ships are usually equipped with a system of marine instruments, one of which is an Automatic Identification System (AIS) transponder. The availability of global AIS ship tracking data has opened up possibilities for developing maritime security far beyond simple collision prevention. The research work summarized in this thesis explores this opportunity, with the aim of developing an intuitive and comprehensible method for traffic modeling and anomaly detection in the maritime domain. The novelty of the method lies in employing the technique of artificial potential fields. The general idea is for the potentials to represent typical patterns of vessels' behaviors. A conflict between potentials that have been observed in the past and the potential of a vessel currently in motion indicates an anomaly. The developed potential field based method has been examined using a web-based anomaly detection system, STRAND (Seafaring TRansport ANomaly Detection). Its applicability has been demonstrated in several publications examining its scalability, modeling capabilities and detection performance. The experimental investigations led to identifying optimal detection resolutions for different traffic areas (open sea, harbor and river) and to extracting traffic rules, e.g., with regard to speed limits and course, such as the right-hand sailing rule. The map-based display of modeled traffic patterns and detection cases has been analyzed as well, using several demonstrative cases. The massive AIS database created for this study, together with a dataset of real traffic incidents, provides an abundance of challenges for future studies.

  • 314.
    Osekowska, Ewa
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Axelsson, Stefan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Potential fields in maritime anomaly detection (2013). Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach for pattern extraction and anomaly detection in maritime vessel traffic, based on the theory of potential fields. Potential fields are used to represent and model normal, i.e. correct, behaviour in maritime transportation, observed in historical vessel tracks. The recorded paths of each maritime vessel generate potentials based on metrics such as geographical location, course, velocity, and type of vessel, resulting in a potential-based model of maritime traffic patterns. A prototype system, STRAND, developed for this study, computes and displays distinctive traffic patterns as potential fields on a geographic representation of the sea. The system builds a model of normal behaviour by collating and smoothing historical vessel tracks. The resulting visual presentation exposes distinct patterns of normal behaviour inherent in the recorded maritime traffic data. Based on the created model of normality, the system can then perform anomaly detection on current real-world maritime traffic data. Anomalies are detected as conflicts between a vessel's potential in live data and the local history-based potential field. The resulting detection performance is tested on AIS maritime tracking data from the Baltic region and varies depending on the type of potential. The potential field based approach contributes to maritime situational awareness and enables automatic detection. The results show that anomalous behaviours in maritime traffic can be detected using this method, with varying performance, necessitating further study.

  • 315.
    Osekowska, Ewa
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Axelsson, Stefan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Potential fields in modeling transport over water (2015). In: Operations Research/Computer Science Interfaces Series, ISSN 1387-666X, Vol. 58, pp. 259-280. Journal article (Refereed)
    Abstract [en]

    Without explicit road-like regulation, following the proper sailing routes and practices is still a challenge, mostly addressed using seamen's know-how and experience. This chapter focuses on the problem of modeling ship movements over water with the aim of extracting and representing this kind of knowledge. The purpose of the developed modeling method, inspired by the theory of potential fields, is to capture the process of navigation and piloting through the observation of ship behaviors in transport over water on narrow waterways. Once successfully modeled, that knowledge can be used for various purposes. Here, the models of typical ship movements and behaviors are used to provide visual insight into the actual normal traffic properties (maritime situational awareness) and to warn about potentially dangerous traffic behaviors (anomaly detection). A traffic modeling and anomaly detection prototype system, STRAND, implements the potential field based method for a collected set of AIS data. A quantitative case study is carried out to evaluate the applicability and performance of the implemented modeling method. The case study focuses on quantifying the detections for varying geographical resolutions of the detection process. The potential fields extract and visualize the actual behavior patterns, such as the right-hand sailing rule and speed limits, without any prior assumptions or information introduced in advance. The display of patterns of correct (normal) behavior aids the choice of an optimal path, in contrast to the anomaly detection, which notifies about possible traffic incidents. A tool visualizing the potential fields may aid traffic surveillance and incident response, help recognize traffic regulation and legislative issues, and facilitate the process of waterway development and maintenance. © Springer International Publishing Switzerland 2015.

  • 316.
    Osekowska, Ewa
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Hogskola, S-37179 Karlskrona, Sweden.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Hogskola, S-37179 Karlskrona, Sweden.
    Learning Maritime Traffic Rules Using Potential Fields (2015). In: Computational Logistics (ICCL 2015), 2015, pp. 298-312. Conference paper (Refereed)
    Abstract [en]

    The Automatic Identification System (AIS) is used to identify and locate active maritime vessels. Datasets of AIS messages recorded over time make it possible to model ship movements and analyze traffic events. Here, the maritime traffic is modeled using a potential fields method, enabling the extraction of traffic patterns and anomaly detection. A software tool named STRAND, implementing the modeling method, displays real-world ship behavior patterns and is shown to generate traffic rules spontaneously. STRAND aids maritime situational awareness by displaying patterns of common behaviors and highlighting suspicious events, i.e., abstracting informative content from the raw AIS data and presenting it to the user. In this way it can support decisions regarding, e.g., itinerary planning, routing, rescue operations, or even legislative traffic regulation. This study focuses in particular on the identification and analysis of traffic rules discovered from the computed traffic models. The case study demonstrates and compares results from three different areas and the corresponding traffic rules identified in the course of the analysis. The ability to capture distinctive, repetitive traffic behaviors in a quantitative, automated manner may enhance detection and provide additional information about sailing practices.

  • 317.
    Osekowska, Ewa
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grid size optimization for potential field based maritime anomaly detection (2014). In: 17th Meeting of the EURO Working Group on Transportation, EWGT 2014 / [ed] Benitez, FG; Rossi, R, Elsevier Science BV, 2014, pp. 720-729. Conference paper (Refereed)
    Abstract [en]

    This study focuses on improving the potential field based maritime data modeling method, developed to extract traffic patterns and detect anomalies, in a clear, understandable and informative way. The method's novelty lies in employing the concept of a potential field for AIS vessel tracking data abstraction and maritime traffic representation. Unlike the traditional maritime surveillance equipment, such as radar or GPS, the AIS system comprehensively represents the identity and properties of a vessel, as well as its behavior, thus preserving the effects of navigational decisions, based on the skills of experienced seamen. In the developed data modeling process, every vessel generates potential charges, which value represent the vessel's behavior, and drops the charges at locations it passes. Each AIS report is used to assign a potential charge at the reported vessel positions. The method derives three construction elements, which define, firstly, how charges are accumulated, secondly, how a charge decays over time, and thirdly, in what way the potential is distributed around the source charge. The collection of potential fields represents a model of normal behavior, and vessels not conforming to it are marked as anomalous. In the anomaly detection prototype system STRAND, the sensitivity of anomaly detection can be modified by setting a geographical coordinate grid precision to more dense or coarse. The objective of this study is to identify the optimal grid size for two different conditions an open sea and a port area case. A noticeable shift can be observed between the results for the open sea and the port area. The plotted detection rates converge towards an optimal ratio for smaller grid sizes in the port area (60-200 meters), than in the open sea case (300-1000 meters). 
The effective outcome of the potential field based anomaly detection is filtering out all vessels behaving normally and presenting a set of anomalies for subsequent incident analysis, using STRAND as an information visualization tool.
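The three construction elements named above can be sketched as follows. This is a minimal illustration under assumed forms (exponential time decay, Gaussian spatial spread), not the STRAND implementation:

```python
import math
from collections import defaultdict

def decay(value, age_hours, half_life_hours=24.0):
    """Decay a charge exponentially with the time since it was dropped."""
    return value * 0.5 ** (age_hours / half_life_hours)

def add_charge(field, cell, value):
    """Accumulate a potential charge dropped by a vessel at a grid cell."""
    field[cell] += value

def potential_at(field, cell, spread=1.0):
    """Total potential at a cell: every source charge is distributed
    around its location with a Gaussian kernel."""
    total = 0.0
    for (x, y), charge in field.items():
        d2 = (x - cell[0]) ** 2 + (y - cell[1]) ** 2
        total += charge * math.exp(-d2 / (2 * spread ** 2))
    return total

field = defaultdict(float)
add_charge(field, (0, 0), decay(1.0, age_hours=24.0))  # a day-old report
add_charge(field, (1, 0), decay(1.0, age_hours=0.0))   # a fresh report
```

A coarser grid maps more reports to the same cell (more accumulation); a denser grid spreads them out, which is the sensitivity trade-off the study tunes.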

  • 318.
    Osekowska, Ewa
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Maritime vessel traffic modeling in the context of concept drift2017Inngår i: Transportation Research Procedia, Elsevier, 2017, Vol. 25, s. 1457-1476Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Maritime traffic modeling serves the purpose of extracting human-readable information and discovering knowledge in the otherwise illegible mass of traffic data. The goal of this study is to examine the presence and character of fluctuations in maritime traffic patterns. The main objective is to identify such fluctuations and capture them in terms of a concept drift, i.e., unforeseen shifts in statistical properties of the modeled target occurring over time. The empirical study is based on a collection of AIS vessel tracking data spanning over a year. The scope of the study limits the AIS data area to the Baltic region (9-31°E, 53-66°N), which experiences some of the densest maritime traffic in the world. The investigations employ a novel maritime traffic modeling method based on the potential fields concept, adapted for this study to facilitate the examination of concept drift. The concept drift is made apparent in the course of the statistical and visual analysis of the experimental results. This study shows a number of particular cases in which the maritime traffic is affected by concept drifts of varying extent and character. The visual representations of the traffic models make shifts in the traffic patterns apparent and comprehensible to the human eye. Based on the experimental outcomes, the robustness of the modeling method against concept drift in traffic is discussed and improvements are proposed. The outcomes provide insights into regularly reoccurring drifts and irregularities within the traffic data itself that may serve to further optimize the modeling method, and - in turn - the performance of detection based on it. © 2017 The Authors. Published by Elsevier B. V.

  • 319.
    Outadi, Siavash
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Trchalikova, Jana
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance comparison of KVM and XEN for telecommunication services2013Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    High stability of telecommunication services has a positive effect on customer satisfaction and thus helps to maintain competitiveness of the product in the telecommunication market. Since live migration provides minimal downtime of virtual machines, it is deployed by telecommunication companies to ensure high availability of services and to prevent service interruptions. The main objective of this research is to assess the performance of various hypervisors in terms of live migration and determine which of them best meets the criteria given by a telecommunication company. Response time and CPU utilization of telecommunication services are measured in non-virtualized and virtualized environments to better understand the impacts of virtualization on the services. Two hypervisors, i.e. KVM and XEN, are used to grasp their characteristic behaviour in handling the services. Furthermore, performance of live migration is assessed for both hypervisors using miscellaneous test cases to identify which one has the best overall performance in terms of downtime and total migration time.

  • 320.
    Padala, Praneel Reddy
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Virtualization of Data Centers: study on Server Energy Consumption Performance2018Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Due to various reasons, data centers have become ubiquitous in our society. Energy costs are a significant portion of a data center's total lifetime costs, which also makes them financially important to operators. This raises serious concerns about the energy costs and environmental impact of data centers. Power costs and energy efficiency are the major challenges in front of us. Of the overall energy used, 15% is consumed by the networking portion of a data center. It is estimated that the energy used by network infrastructure in data centers worldwide is 15.6 billion kWh and is expected to increase to around 50%.

    Power costs and energy consumption play a major role throughout the lifetime of a data center, which leads to increased financial costs for data center operators and increased usage of power resources. Resource utilization has therefore become a major issue in data centers.

    The main aim of this thesis study is to find an efficient way of utilizing resources and to decrease the energy costs of data center operators using virtualization. Virtualization technology is used to deploy virtual servers on physical servers, sharing the same physical resources and thereby helping to decrease the energy consumption of a data center.

  • 321.
    Paleti, Apuroop
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Evaluation of Path Planning Techniques for Unmanned Aerial Vehicles: A comparative analysis of A-star algorithm and Mixed Integer Linear Programming2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: Unmanned Aerial Vehicles are widely used for various scientific and non-scientific purposes. This increases the need for effective and efficient path planning of Unmanned Aerial Vehicles. Two of the most commonly used methods are the A-star algorithm and Mixed Integer Linear Programming.

    Objectives: Conduct a simulation experiment to determine the performance of the A-star algorithm and Mixed Integer Linear Programming for path planning of an Unmanned Aerial Vehicle in a simulated environment. Further, evaluate the A-star algorithm and Mixed Integer Linear Programming based on computational time and computational space to find out their efficiency. Finally, perform a comparative analysis of the A-star algorithm and Mixed Integer Linear Programming and analyse the results.

    Methods: To achieve the objectives, both methods are studied extensively, and test scenarios were generated for simulation of these methods. The methods are then implemented on these test scenarios and the computational times for both scenarios were observed. A hypothesis is proposed to analyse the results. A performance evaluation of the methods is done and they are compared for better performance in the generated environment.

    Results: It is observed that the efficiency of the A-star algorithm and the MILP algorithm is 3.005 and 12.03 functions per second respectively when no obstacles are considered, and 1.56 and 10.59 functions per second when obstacles are encountered. The results are statistically tested using hypothesis testing, resulting in the inference that there is a significant difference between the computation times of the A-star algorithm and MILP. Performance evaluation is done using these results, and the efficiency of the algorithms in the generated environment is obtained.

    Conclusions: The experimental results are analysed, and the efficiencies of the A-star algorithm and Mixed Integer Linear Programming for a particular environment are measured. The performance analysis of the algorithms provides a clear view of which algorithm is better when used in a real-time scenario. It is observed that Mixed Integer Linear Programming is significantly better than the A-star algorithm.
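For reference, the A-star method compared above can be sketched on a 2D occupancy grid. This is a generic textbook implementation (4-connected grid and Manhattan heuristic assumed), not the thesis code:

```python
import heapq

def a_star(grid, start, goal):
    """A-star path planning on a 2D occupancy grid (0 = free, 1 = obstacle),
    using Manhattan distance as the admissible heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(
                    open_set,
                    (g + 1 + h((nx, ny)), g + 1, (nx, ny), path + [(nx, ny)]),
                )
    return None  # no path exists
```

With unit step costs and an admissible heuristic, the first path popped at the goal is optimal, which is what the obstacle test scenarios exercise.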

  • 322.
    Papisetty, Srinivas Divya
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Homomorphic Encryption: Working and Analytical Assessment: DGHV, HElib, Paillier, FHEW and HE in cloud security2017Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: Secrecy has kept researchers spanning over centuries engaged in the creation of data protection techniques. With the growing rate of data breaches and the intervention of adversaries in confidential data storage and communication, efficient data protection has proven to be a challenge. Homomorphic encryption is one such data protection technique in the cryptographic domain which can perform arbitrary computations on enciphered data without disclosing the original plaintext or message. The first working fully homomorphic encryption scheme was proposed in the year 2009, and since then there has been a tremendous increase in the development of homomorphic encryption schemes, such that they can be applied to a wide range of data services that demand security. All homomorphic encryption schemes can be categorized as partially homomorphic (PHE), somewhat homomorphic (SHE), leveled homomorphic (LHE), or fully homomorphic encryption (FHE). Each encryption algorithm has its own importance and usage in different realms of security. DGHV, Paillier, HElib, and FHEW are the algorithms chosen in this study considering their wide usage and scope for further advancement in this subject area. A public-key algorithm named RSA is also chosen for comparison of the impact of HE and PKE (public-key encryption) algorithms on the CPU and memory. The utilization of various homomorphic schemes and concepts in the trending cloud storage systems is a prevailing field of research and can be expanded further by knowing the current state of the art of homomorphic encryption. Hence, the necessity of comprehending the knowledge of homomorphic encryption schemes and their aspect in cloud security becomes vital.

    Objectives: The objective of this study is to analytically assess homomorphic encryption and various homomorphic encryption schemes. A comprehensive investigation on working and performance of the selected HE schemes is another objective of this research. Also, an experiment to run publicly available libraries of DGHV, Paillier, HElib, and FHEW is one of the main objectives. In addition to these, comprehending the impact of HE and PKE on CPU and Memory is also among the objectives of the study. The role and practice of homomorphic encryption in the cloud storage system are among the secondary objectives of this research in terms of securing confidential data. These objectives are set based on the research gap identified by conducting an exhaustive literature review.

    Methods: The objectives of this study are achieved by adopting two methods: an exhaustive literature review and an experiment. Scientific databases such as IEEE Xplore, ACM Digital Library, Inspec, Springer Link etc. are used, and literature is selected based on relevance to the research topic. An exhaustive literature review and extensive bibliographic research are conducted to accomplish the objective of comprehending the working, applications, and significance of homomorphic encryption. Apart from the literature review and bibliographic research, an experiment is also conducted to run the publicly available homomorphic encryption libraries in order to evaluate, compare, and analyze the performance of the DGHV, Paillier, HElib, and FHEW schemes. An experiment to run a publicly available PKE algorithm is also conducted. Finally, the conclusions and outcomes of adopting these research methods to accomplish the objectives are presented in detail.

    Results: By conducting an exhaustive literature review, the importance, working, and applications of homomorphic encryption and its schemes are discerned. By conducting an experiment, the impact of HE and PKE is also discerned. Apart from this, the limitations of HE and the selected HE schemes, along with the distinction between public and private key cryptography, are understood and mapped in connection with each other. From the experiment conducted, it is observed that despite the encryption libraries being publicly available for use, only a few of them could be run and employed successfully, indicating that much improvement is needed in this cryptographic discipline.

    Conclusions: From this research, it can be concluded that homomorphic encryption has wide scope for improvement in efficiency and application in various fields concerned with data protection. It can also be concluded that the state-of-the-art libraries of the few HE schemes that are available online are remarkably impractical for real-time practice. By analyzing the selected schemes, it can be concluded that few HE schemes support operations on encrypted data other than addition and multiplication, due to which the chance of increasing noise with each encryption is relatively high. From the experiment conducted for Paillier (HE) and RSA (PKE) encryption, it is concluded that the CPU and memory utilization of both schemes increases linearly with the input size. Apart from these conclusions, it can also be inferred that not all homomorphic encryption algorithms are IND-CCA1 and IND-CCA2 secure. From this study, it can be deduced that more empirical validation and analysis of HE algorithms is required in terms of their performance and security. To address these problems, much research and improvement is required, as the results of this research indicate that homomorphic encryption is still in its early stage of development and enormous utility can be anticipated when it is enhanced correctly.
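The additive homomorphism that Paillier provides (one of the schemes assessed above) can be illustrated with a toy implementation. The tiny fixed primes below are for illustration only; a real deployment needs large random primes and a vetted library:

```python
import math
import random

def L(x, n):
    """The L function from the Paillier scheme: L(x) = (x - 1) / n."""
    return (x - 1) // n

def keygen(p=17, q=19):
    """Toy Paillier key generation with tiny fixed primes (demo only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                   # standard simple choice of g
    mu = pow(L(pow(g, lam, n * n), n), -1, n)   # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    """Encrypt m < n with random blinding factor r coprime to n."""
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % (pub[0] ** 2)
```

Decrypting `c_sum` recovers 12 + 30 without either addend ever being exposed in the clear, which is the property the thesis measures for CPU and memory cost.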

  • 323.
    Paulauskas, Vytautas
    et al.
    Klaipeda Shipping Research Centre, LTU.
    Henesey, Lawrence
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Paulauskas, Donatas
    Klaipeda University, LTU.
    Ronkaitytė, Leva
    Klaipeda University, LTU.
    Gerlitz, Laima
    Wismar University of Applied Sciences, DEU.
    Jankowski, S.
    Akademia Morska w Szczecinie, POL.
    Canepa, M.
    World Maritime University, SWE.
    LNG bunkering stations location optimization on basis graph theory2018Inngår i: Transport Means - Proceedings of the International Conference, Kaunas University of Technology , 2018, s. 660-664Konferansepaper (Fagfellevurdert)
    Abstract [en]

    As an alternative to traditional fuel and energy sources, LNG (Liquefied Natural Gas) has many advantages, such as lower emissions, while providing a means of energy for trucks, trains and ships. Focusing on the maritime transport sector, the reasons for using LNG make a convincing business case, but lead to many discussions on LNG investments. The key issue has been: “Should investment be made first in LNG bunkering stations, waiting for the market to build ships that use the facilities, or should investment wait until there is a demand?” Obviously, this creates a “chicken-and-egg” situation on when and where to invest for LNG use to take place. The initial experiences in using LNG in maritime and road transport suggest that transport firms often take the risk themselves by investing not only in the transport units (ships, trucks) but also in the infrastructure, e.g., developing LNG bunkering facilities. At the same time, alongside these large initial investments for developing LNG bunkering networks, there are more and more requests for identifying optimal solutions, often based on real LNG fuel demand in ports and on the roads. This paper is oriented towards the study of optimal bunkering network creation, which is argued to help improve efficiency in the supply of LNG fuel to transport users. In addition, optimal investments for LNG bunkering networks can be realized. © 2018 Kaunas University of Technology. All rights reserved.
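As one illustration of the graph-theoretic framing, a bunkering station candidate can be scored by its worst-case sailing distance to any other port (the 1-center of the network). This is a minimal sketch, not the paper's model; the port names and distances below are invented:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra from one port over a weighted adjacency dict."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def best_station(graph):
    """Pick the node minimising the worst-case distance to any other node."""
    return min(graph, key=lambda n: max(shortest_paths(graph, n).values()))

# Hypothetical mini-network of ports with sailing distances (invented values):
ports = {
    "Klaipeda": {"Riga": 300, "Gdansk": 280},
    "Riga": {"Klaipeda": 300, "Tallinn": 360},
    "Gdansk": {"Klaipeda": 280},
    "Tallinn": {"Riga": 360},
}
```

Real demand data would replace the uniform worst-case criterion with demand-weighted costs, which is the direction the paper's optimization takes.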

  • 324.
    Paulauskas, Vytautas
    et al.
    Klaipeda University, LTU.
    Paulauskas, Donatas
    Klaipeda University, LTU.
    Placiene, Birutė
    Klaipeda University, LTU.
    Barzdziukas, Raimondas
    Klaipeda University, LTU.
    Maksimavicius, Ričardas
    Klaipeda University, LTU.
    Ronkaityte, I.
    Klaipeda University, LTU.
    Gerlitz, Laima
    Wismar University of Applied Sciences, DEU.
    Madjidian, J.
    World Maritime University, SWE.
    Jankowski, S.
    Akademia Morska w Szczecinie, POL.
    Henesey, Lawrence
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Optimization modelling of LNG supply chains for development: Case study of Lithuania and Latvia2017Inngår i: Transport Means - Proceedings of the International Conference, Kaunas University of Technology , 2017, s. 762-765Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The increasing demand for Liquid Natural Gas (LNG) is causing many challenges for users and suppliers worldwide. Though there is strong interest in using LNG, the research published in this paper indicates there are challenges in developing adequate delivery and distribution chains within the supply chain. Ideally, LNG distribution chains should be created on the basis of user demands and need. In this paper we have articulated an optimisation model that considers the various potential users and their characteristics in order to identify if possibilities and prospects exist in developing an adequate LNG supply chain. The case study of Lithuania and Latvia serves as a model from which we are able to use our tool to help identify the factors for success in creating such LNG supply chains. © 2017 Kaunas University of Technology. All rights reserved.

  • 325.
    Peng, Cong
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Hybrid Cloud Approach for Sharing Health Information in Chronic Disease Self-Management2013Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Context: Health information sharing improves the performance of patient self-management when dealing with challenging chronic disease care. Cloud computing has the potential to provide a more imaginative long-term solution compared with traditional systems. However, there is a need to identify a suitable way to share patient health information via the cloud. Objectives: This study aims to identify what health information is suitable and valuable to share from a type 2 diabetes patient when multiple stakeholders are involved for different purposes, and to find a promising and achievable cloud based solution that enables patients to share what health information they want, and where they want to share it. Methods: To get a clear and deep understanding of the subject area, and to identify available knowledge and information from relevant research, a literature review was performed. Then, a prototype for the case of type 2 diabetes was implemented to prove the feasibility of the proposed solution, after analyzing the knowledge acquired from the literature. Finally, professionals and a patient were interviewed to evaluate and improve the proposed solution. Results: A hybrid cloud solution is identified as a suitable way to enable patients to share health information for promoting the treatment of chronic disease. Conclusions: Based on the research with type 2 diabetes, it was concluded that most records in daily life, such as physiologic measurements, non-physiologic measurements and lifestyle, are valuable for the treatment of chronic diseases. It was also concluded that a hybrid cloud is suitable and achievable for sharing patient-recorded health information among trusted and semi-trusted stakeholders. Moreover, anonymous and patient opt-in consent models are suitable when sharing to semi-trusted stakeholders.

  • 326.
    Penumetsa, Swetha
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A comparison of energy efficient adaptation algorithms in cloud data centers2018Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: In recent years, Cloud computing has gained a wide range of attention in both industry and academia, as Cloud services offer a pay-per-use model and the need for reliability and fast computing results has grown immensely with the growth of Cloud-based companies and the continuous expansion of their scale. However, the rise in Cloud computing users can have a negative impact on energy consumption, as the Cloud data centers consume a huge amount of overall energy. In order to minimize the energy consumption in virtual datacenters, researchers have proposed various energy efficient resource management strategies. Virtual Machine dynamic consolidation is one of the prominent techniques and an active research area in recent time, used to improve resource utilization and minimize the electric power consumption of a data center. This technique monitors data center utilization, identifies overloaded and underloaded hosts, then migrates some or all Virtual Machines (VMs) to other suitable hosts using Virtual Machine selection and Virtual Machine placement, and switches underloaded hosts to sleep mode.

     

    Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms that save energy consumption in Cloud data centers, to identify the best-performing algorithm, and to compare the performance of the proposed heuristics with old heuristics.

     

    Methods: Initially, a literature review is conducted to identify and obtain knowledge about the adaptive heuristic algorithms proposed previously for energy-aware VM consolidation, and to find the metrics used to measure the performance of heuristic algorithms. Based on this knowledge, for our thesis we have proposed 32 combinations of novel adaptive heuristics for host overload detection (8) and VM selection (4), one host underload detection algorithm, and two adaptive heuristics for VM placement, which help in minimizing both the energy consumption and the overall Service Level Agreement (SLA) violation of a Cloud data center. Further, an experiment is conducted to measure the performance of all proposed heuristic algorithms. We have used the CloudSim simulation toolkit for the modeling, simulation, and implementation of the proposed heuristics. We have evaluated the proposed algorithms using PlanetLab VMs real workload traces.

     

    Results: The results were measured using the metrics: energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), Service Level Agreement violation Time per Active Host (SLATAH), Service Level Agreement Violation (SLAV = PDM × SLATAH), and Energy consumption and Service level agreement Violation (ESV). For all four categories of VM consolidation, we have compared the performance of the proposed heuristics with each other and presented the best heuristic algorithm proposed in each category. We have also compared the performance of the proposed heuristic algorithms with existing heuristics identified in the literature and presented the number of newly proposed algorithms that work more efficiently than the existing algorithms. This comparative analysis is done using a T-test and Cohen's d effect size.
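Read as formulas, the combined metrics reduce to simple products. The sketch below follows the definitions in the text (SLAV as the product of PDM and SLATAH, ESV as the product of energy and SLAV); the numeric values are invented for illustration:

```python
def slav(pdm, slatah):
    """Service Level Agreement Violation: product of the two SLA metrics."""
    return pdm * slatah

def esv(energy_kwh, slav_value):
    """Combined energy-and-SLA metric used to rank consolidation policies:
    lower is better on both axes simultaneously."""
    return energy_kwh * slav_value

# Illustrative (made-up) values for two candidate policies:
policy_a = esv(150.0, slav(0.001, 0.05))
policy_b = esv(170.0, slav(0.002, 0.04))
```

Because ESV multiplies the two costs, a policy can only win by being good on energy and SLA together, which is why it serves as the headline ranking metric.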

     

    From the comparison results of all proposed algorithms, we have concluded that the Mean Absolute Deviation around median (MADmedian) host overload detection algorithm equipped with Maximum requested RAM VM selection (MaxR) using Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm equipped with Maximum requested RAM VM selection (MaxR) using Modified Last Fit Decreasing VM placement (MLFD), respectively performed better than the other 31 combinations of proposed overload detection and VM selection heuristic algorithms with regard to Energy consumption and Service level agreement Violation (ESV). However, from the comparative study between existing and proposed algorithms, 23 and 21 combinations of proposed host overload detection and VM selection algorithms, using MFFD and MLFD VM placement respectively, performed efficiently compared to the existing (baseline) heuristic algorithms considered for this study.
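An overload check of the MAD-around-median kind named above can be sketched as follows. This is a minimal sketch of one common formulation, in which the upper utilization threshold is 1 − s·MAD of the host's recent CPU history; the safety parameter s and the history values are assumptions, not the thesis' tuned settings:

```python
from statistics import median

def mad_threshold(cpu_history, safety=2.5):
    """Upper CPU-utilization threshold from the Median Absolute Deviation
    of the host's recent utilization history (values in [0, 1]).
    A volatile host gets a lower threshold, triggering migration earlier."""
    med = median(cpu_history)
    mad = median(abs(u - med) for u in cpu_history)
    return 1.0 - safety * mad

def is_overloaded(cpu_history, current_util, safety=2.5):
    """Flag the host as overloaded when current utilization exceeds the
    adaptively computed threshold."""
    return current_util > mad_threshold(cpu_history, safety)
```

In a consolidation loop, hosts flagged here feed the VM selection step (e.g. MaxR), whose chosen VMs are handed to the placement heuristic (e.g. MFFD).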

     

    Conclusions:

    This thesis presents novel heuristic algorithms that are useful for minimizing both energy consumption and Service Level Agreement Violation in virtual datacenters. It presents 23 new combinations of proposed host overload detection and VM selection algorithms using MFFD VM placement, and 21 combinations using MLFD VM placement, which consume the minimum amount of energy with minimal SLA violation compared to the existing algorithms. It gives scope for future research on improving resource utilization and minimizing the electric power consumption of a data center. This study can be extended further by implementing the work on other Cloud software platforms and by developing more efficient algorithms for all four categories of VM consolidation.

  • 327.
    Persson, Andreas
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Landenstad, Lukas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Explaining change: Comparing network snapshots for vulnerability management2018Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Background. Vulnerability management makes it easier for companies to find, manage and patch vulnerabilities in a network. This is done by scanning the network for known vulnerabilities. The amount of information collected during the scans can be large and prolong the analysis of the findings. The found vulnerabilities are usually presented as a trend in the number of vulnerabilities over time. Such trends do not explain the cause of the change in found vulnerabilities.

    Objectives. The objective of this thesis is to investigate how to explain the cause of change in found vulnerabilities, by comparing vulnerability scanning reports from different points in time. Another objective of this thesis is to create an automated system that connects changes in vulnerabilities to specific events in the network.

    Methods. A case study was conducted where three reports, from vulnerability scans of Outpost24's internal test network, were examined in order to understand the structure of the reports and mapping them to events. To complement the case study, an additional simulated test network was set up in order to conduct self defined tests and obtain higher accuracy when identifying the cause of change in found vulnerabilities.

    Results. The observations made in the case study provided us with information on how to parse the data and how to identify the cause of change with a rule-based system. The data were interpreted and the changes grouped into three categories: added, removed or modified. After conducting the test cases, the results were interpreted to find signatures that identify the cause of change in vulnerabilities. These signatures were then turned into rules, implemented in a proof-of-concept tool. The proof-of-concept tool compared scan reports in pairs in order to find differences. These differences were then matched against the rules, and if a change did not match any rule, it was flagged as an "unexplained" change. The proof-of-concept tool was then used to investigate the cause of change between the reports from the case study. The framework was validated by evaluating the rules gathered from the simulated test network on the data from the case study. Furthermore, a domain expert verified that the identified causes were accurate by manually comparing the vulnerability reports from the case study.

    Conclusions. It is possible to identify the cause of change in found vulnerabilities from vulnerability scan reports by constructing signatures for events and using these signatures as rules. This can also be implemented automatically, as software, in order to identify the cause of change faster than manual labor.
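The added/removed/modified grouping and the rule matching described above can be sketched as follows. The report structure (a mapping from vulnerability ID to finding details), the rule predicates and the cause strings are all hypothetical, for illustration only:

```python
def diff_reports(old, new):
    """Compare two scan reports (vuln-id -> finding dict) and group every
    change as added, removed or modified."""
    changes = {"added": [], "removed": [], "modified": []}
    for vid in sorted(new.keys() - old.keys()):
        changes["added"].append(vid)
    for vid in sorted(old.keys() - new.keys()):
        changes["removed"].append(vid)
    for vid in sorted(old.keys() & new.keys()):
        if old[vid] != new[vid]:
            changes["modified"].append(vid)
    return changes

# Illustrative rules: (predicate over a change, human-readable cause).
RULES = [
    (lambda kind, vid, old, new: kind == "removed",
     "vulnerability patched or host offline"),
    (lambda kind, vid, old, new: kind == "added",
     "new service or regression introduced"),
]

def explain(kind, vid, old, new):
    """Return the first matching cause, or flag the change as unexplained."""
    for pred, cause in RULES:
        if pred(kind, vid, old, new):
            return cause
    return "unexplained"
```

A real rule set would inspect the finding details (ports, versions, host state) rather than just the change kind, but the match-or-flag-unexplained flow is the same.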

  • 328.
    Persson, Linnéa
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Experienced issues with tablet computer interfaces among older adults: An exploratory study using a human centred interaction design approach2015Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context: Older adults' everyday usage of tablet computers in a home environment is currently left out of today's research. Current research includes many specialized focuses regarding tablet computer interfaces, but a holistic view of how older adults experience the usage of tablet computers in their everyday life is still largely missing.

    Objectives: The aim of this study is to apply a use- and user-centred action research approach to explore how older adults experience the usage of tablet computers in their everyday life. Older adults are observed using tablet computers in their home environment in order to explore experienced issues and identify possible improvements for the older adults’ interaction and use of tablet computers.

    Methods: This study is a qualitative explorative case study using a grounded theory approach. The study consists of a two-week observation period, beginning with an introduction and tutorial to tablet computers. Semi-structured interviews were conducted directly before and after the observation period, as well as in a long-term follow-up interview a few months later. The main body of the research, with the first two interviews and the observation period, was conducted in grounded theory iterations, allowing for analysis between each participant. The study involved 10 older adults, took place in the older adults' natural everyday home environment, and targeted older adults born in 1960 or earlier.

    Results: The study used different ways to introduce tablet computers to the older adults, individually adjusted to their knowledge about information communication technology in general and tablet computers in particular, as well as to subjects familiar to them. Associations with familiar subjects were used to help the older adults remember the functionality of different icons. Some of the interaction issues encountered could be solved on the spot using accessibility options and accessories. The participants' interest in tablet computers increased during the main body of the study, but later decreased again between the last two interviews.

    Conclusions: Many interaction issues were identified during the study, the main ones being related to accuracy, typing, gestures and terminology. Suggestions are made concerning how to solve some of the issues encountered, such as a dynamic grid for icons and text on the tablet computer home screen and an ergonomic version of a touchscreen pen. Although some interaction issues are directly related to the interface, other important aspects also affect the experienced interaction issues when interacting with and using a tablet computer in one’s everyday home environment. Influencers affecting how older adults feel about tablet computers played a very important role. Having the observer function as a technical mentor during the two-week observation period played a bigger role than expected, but the short time the older adults had with the mentor was not enough to keep them interested on a long-term basis.

  • 329.
    Persson, Marie
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Hvitfeldt-Forsberg, Helena
    Medical Management Centre (MMC), SWE.
    Unbeck, Maria
    Danderyds sjukhus, SWE.
    Sköldenberg, Olof Gustaf
    Danderyds sjukhus, SWE.
    Stark, Andreas
    Danderyds sjukhus, SWE.
    Kelly-Pettersson, Paula
    Danderyds sjukhus, SWE.
    Mazzocato, Pamela
    Medical Management Centre (MMC), SWE.
    Operational strategies to manage non-elective orthopaedic surgical flows: A simulation modelling study (2017). In: BMJ Open, ISSN 2044-6055, E-ISSN 2044-6055, Vol. 7, no. 4, article id e013303. Journal article (Refereed)
    Abstract [en]

    Objectives To explore the value of simulation modelling in evaluating the effects of strategies to plan and schedule operating room (OR) resources aimed at reducing time to surgery for non-elective orthopaedic inpatients at a Swedish hospital. Methods We applied discrete-event simulation modelling. The model was populated with real-world data from a university hospital with a strong focus on reducing waiting time to surgery for patients with hip fracture. The system modelled concerned two patient groups that share the same OR resources: hip-fracture and other non-elective orthopaedic patients in need of surgical treatment. We simulated three scenarios based on the literature and interaction with staff and managers: (1) baseline; (2) turnover time between surgeries reduced by 20 min and (3) one extra OR during the day, Monday to Friday. The outcome variables were waiting time to surgery and the percentage of patients who waited longer than 24 hours for surgery. Results The mean waiting time was significantly reduced from 16.2 hours in scenario 1 (baseline) to 13.3 hours in scenario 2 and 13.6 hours in scenario 3 for hip-fracture surgery, and from 26.0 hours in baseline to 18.9 hours in scenario 2 and 18.5 hours in scenario 3 for other non-elective patients. The percentage of patients who were treated within 24 hours significantly increased from 86.4% (baseline) to 96.1% (scenario 2) and 95.1% (scenario 3) for hip-fracture patients, and from 60.2% (baseline) to 79.8% (scenario 2) and 79.8% (scenario 3) for other non-elective patients. Conclusions Healthcare managers who strive to improve the timeliness of non-elective orthopaedic surgeries may benefit from using simulation modelling to analyse different strategies to support their decisions. In this specific case, the simulation results showed that the reduction of surgery turnover times could yield the same results as an extra OR.
© 2017 Published by the BMJ Publishing Group Limited.
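The direction of the paper's scenario 2 can be illustrated with a minimal discrete-event queueing sketch. Everything below is illustrative and not the authors' model: a single OR, exponential interarrivals, and fixed surgery and turnover times are all assumptions.

```python
import random

def mean_wait(n_patients, mean_interarrival, surgery_time, turnover, seed=1):
    """Single-OR sketch: patients queue FIFO; each case occupies the
    room for surgery_time + turnover (all times in hours)."""
    rng = random.Random(seed)
    t, room_free, waits = 0.0, 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(t, room_free)                      # wait if room is busy
        waits.append(start - t)
        room_free = start + surgery_time + turnover
    return sum(waits) / len(waits)

baseline = mean_wait(1000, 5.0, 3.0, 1.5)
shorter_turnover = mean_wait(1000, 5.0, 3.0, 1.2)  # 20% shorter turnover
```

With the same arrival stream (fixed seed), the shorter turnover lowers every start time, so the mean wait drops, mirroring the qualitative effect reported for scenario 2.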

  • 330.
    Persson, Oskar
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Wermelin, Erik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Theoretical Proposal of Two-Factor Authentication in Smartphones (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. For a user to gain access to a protected resource on the web, the user needs to be authenticated. There are different forms of authentication; among the most common is the ordinary user name and password scheme. This scheme is very simple to implement, but it suffers from security vulnerabilities and requires the user to remember passwords for all accounts. Two-factor authentication could be one answer to increasing security where one-factor authentication is lacking. However, depending on the implementation, two-factor authentication could still be insecure and even less user-friendly.

    Objectives. In this study, we investigate whether our implementation of two-factor authentication has any advantages over existing ones. Our goal is to present a secure and user-friendly authentication scheme that uses both password and fingerprint.

    Methods. A literature study was performed in order to collect information on similar systems and subjects in order to build a comparable authentication model. The collected information and the proposed model were then used to analyze possible drawbacks and to answer the research questions.

    Results. The results derive from the comparison between our proposed model and two Google two-factor authentication solutions.

    Conclusions. The results yielded from the literature study and analysis show that our proposed model does not add any advantages concerning security. Our model does, however, provide better ease of use in comparison with similar two-factor authentication solutions from Google.

  • 331.
    Petersson, Anton
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Train Re-scheduling: A Massively Parallel Approach Using CUDA (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Train re-scheduling during disturbances is a time-consuming task. Modified schedules need to be found so that trains can meet in suitable locations and delays are minimized. Domino effects are difficult to manage. Commercial optimization software has been shown to find optimal solutions, but modified schedules need to be found quickly. Therefore, greedy depth-first algorithms have been designed to find solutions within a limited time-frame. Modern GPUs have a high computational capacity and have become easier to use for computations unrelated to computer graphics with the development of technologies such as CUDA and OpenCL.

    Objectives. We explore the feasibility of running a re-scheduling algorithm developed specifically for this problem on a GPU using the CUDA toolkit. The main objective is to find a way of exploiting the computational capacity of modern GPUs to find better re-scheduling solutions within a limited time-frame.

    Methods. We develop and adapt a sequential algorithm for use on a GPU and run multiple experiments using 16 disturbance scenarios on the single-tracked iron ore line in northern Sweden.

    Results. Our implementation succeeds in finding re-scheduling solutions without conflicts for all 16 scenarios. The algorithm visits on average 7 times more nodes per time unit than the sequential CPU algorithm when branching at depth 50, and 4 times more when branching at depth 200.

    Conclusions. The computational performance of our parallel algorithm is promising, but the approach is not complete. Our experiments only show that multiple solution branches can be explored quickly in parallel, not how to construct a high-level algorithm that systematically searches for better schedules within a certain time limit. Further research is needed for that. We also find that multiple threads explore redundant solutions in our approach.

  • 332.
    Petersson, Stefan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Improving image quality by SSIM based increase of run-length zeros in GPGPU JPEG encoding (2014). In: Conference Record of the Asilomar Conference on Signals Systems and Computers, IEEE Computer Society, 2014, pp. 1714-1718. Conference paper (Refereed)
    Abstract [en]

    JPEG encoding is a common technique to compress images. However, since JPEG is a lossy compression scheme, certain artifacts may occur in the compressed image. These artifacts typically occur in high-frequency or detailed areas of the image. This paper proposes an algorithm based on the SSIM metric to improve the experienced quality of JPEG-encoded images. The algorithm improves the quality in detailed areas by up to 1.29 dB while reducing the quality in less detailed areas of the image, thereby increasing the overall experienced quality without increasing the image data size. Further, the algorithm can also be used to decrease the file size (by up to 43%) while preserving the experienced image quality. Finally, an efficient GPU implementation is presented. © 2014 IEEE.
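For reference, the SSIM metric the paper builds on can be sketched in a simplified single-window form (global statistics rather than the sliding weighted windows of the full metric; the pixel values below are made up):

```python
def ssim(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM sketch over two equal-length pixel lists,
    with the standard constants for 8-bit data (C1=(0.01*255)^2,
    C2=(0.03*255)^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))

patch = [52, 55, 61, 59, 79, 61, 76, 61]          # made-up 8-bit samples
noisy = [p + d for p, d in zip(patch, [3, -2, 4, -1, 2, -3, 1, -2])]
```

Identical patches score exactly 1, and distortion pushes the score below 1, which is the property the paper exploits when redistributing quality between detailed and flat regions.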

  • 333.
    Petersson, Stefan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Color demosaicing using structural instability (2016). In: Proceedings - 2016 IEEE International Symposium on Multimedia, ISM 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 541-544. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a new metric for approximating structural instability in Bayer image data. We show that the metric can be used to identify and classify the validity of color correlation in local image regions. The metric is used to improve the interpolation performance of an existing state-of-the-art single-pass linear demosaicing algorithm, with virtually no impact on computational GPGPU complexity and performance. Using four different image sets, the modification is shown to outperform the original method in terms of visual quality, with an average increase in PSNR of 0.7 dB in the red, 1.5 dB in the green and 0.6 dB in the blue channel. Because of fewer high-frequency artifacts, the average output data size also decreases by 2.5%. © 2016 IEEE.

  • 334.
    Pham, Phuong
    et al.
    UC Davis, USA.
    Erlandsson, Fredrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Wu, Felix
    UC Davis, USA.
    Social Coordinates: A Scalable Embedding Framework for Online Social Networks (2017). In: Proceedings of the 2017 International Conference on Machine Learning and Soft Computing, ACM Digital Library, 2017, pp. 191-196. Conference paper (Refereed)
    Abstract [en]

    We present a scalable framework to embed the nodes of a large social network into a Euclidean space such that the proximity between embedded points reflects the similarity between the corresponding graph nodes. The axes of the embedded space are chosen to maximize data variance, so that the dimension of the embedded space becomes a parameter to regulate noise in the data. Using a recommender system as a benchmark, empirical results show that similarity derived from the embedded coordinates outperforms similarity obtained from the original graph-based measures.
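The paper's actual embedding procedure is not reproduced here, but the core idea of choosing variance-maximizing axes can be sketched with power iteration on a double-centered similarity matrix; the toy matrix below is entirely made up:

```python
def center(S):
    """Double-center a similarity matrix so the embedding is mean-free."""
    n = len(S)
    row = [sum(r) / n for r in S]
    grand = sum(row) / n
    return [[S[i][j] - row[i] - row[j] + grand for j in range(n)]
            for i in range(n)]

def top_axis(B, iters=200):
    """Power-iteration sketch: the dominant eigenvector of symmetric B
    is the single embedding axis that captures the most variance."""
    n = len(B)
    v = [1.0] + [0.0] * (n - 1)            # asymmetric start vector
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# toy similarity matrix: two tight pairs of similar nodes (made up)
S = [[1.0, 0.9, 0.1, 0.1],
     [0.9, 1.0, 0.1, 0.1],
     [0.1, 0.1, 1.0, 0.9],
     [0.1, 0.1, 0.9, 1.0]]
axis = top_axis(center(S))
```

On this toy input the dominant axis places the two similar pairs at opposite ends, which is the "proximity reflects similarity" property the abstract describes.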

  • 335.
    Podapati, Sasidahr
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Sköld, L.
    Telenor, SWE.
    Rosander, Oliver
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Sidorova, Yulia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Fuzzy recommendations in marketing campaigns (2017). In: Communications in Computer and Information Science / [ed] Darmont J., Kirikova M., Norvag K., Wrembel R., Papadopoulos G.A., Gamper J., Rizzi S., 2017, Vol. 767, pp. 246-256. Conference paper (Refereed)
    Abstract [en]

    The population in Sweden is growing rapidly due to immigration. In this light, the issue of infrastructure upgrades to provide telecommunication services is of importance. New antennas can be installed at hot spots of user demand, which will require an investment, and/or the clientele expansion can be carried out in a planned manner to promote the exploitation of the infrastructure in the less loaded geographical zones. In this paper, we explore the second alternative. Informally speaking, the term Infrastructure-Stressing describes a user who stays in the zones of high demand, which are prone to produce service failures, if further loaded. We have studied the Infrastructure-Stressing population in the light of their correlation with geo-demographic segments. This is motivated by the fact that specific geo-demographic segments can be targeted via marketing campaigns. Fuzzy logic is applied to create an interface between big data, numeric methods for its processing, and a manager who wants a comprehensible summary. © 2017, Springer International Publishing AG.
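The authors' specific membership functions are not given here; as an illustration of the fuzzy-logic interface idea, a triangular membership function turning a raw usage number into graded set membership (all thresholds hypothetical) might look like:

```python
def triangular(x, a, b, c):
    """Triangular membership sketch: degree (0..1) to which x belongs
    to a fuzzy set that peaks at b and has support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# hypothetical linguistic summary of how "Infrastructure-Stressing" a
# user is, from hours per day spent in high-demand zones (made up)
def low(hours):
    return triangular(hours, -0.1, 0.0, 4.0)

def high(hours):
    return triangular(hours, 2.0, 8.0, 24.1)
```

Graded memberships like these are what let a numeric pipeline produce the kind of comprehensible summary ("this segment is highly infrastructure-stressing") that the abstract aims at for managers.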

  • 336.
    Popescu, Adrian
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Greening Video Distribution Networks: Energy-Efficient Internet Video Delivery (2018). Collection (editor) (Refereed)
    Abstract [en]

    This insightful text presents a guide to video distribution networks (VDNs), providing illuminating perspectives on reducing power consumption in IP-based video networks from an authoritative selection of experts in the field. A particular focus is provided on aspects of architectures, models, Internet protocol television (IPTV), over-the-top (OTT) video content, video on demand (VoD) encoding and decoding, mobile terminals, wireless multimedia sensor networks (WMSNs), software defined networking (SDN), and techno-economic issues.

    Topics and features: reviews the fundamentals of video over IP distribution systems, and the trade-offs between network/service performance and energy efficiency in VDNs; describes the characterization of the main elements in a video distribution chain, and techniques to decrease energy consumption in software-based VoD encoding; introduces an approach to reduce power consumption in mobile terminals during video playback, and in data center networks using the SDN paradigm; discusses the strengths and limitations of different methods for measuring the energy consumption of mobile devices; proposes optimization methods to improve the energy efficiency of WMSNs, and a routing algorithm that reduces energy consumption while maintaining the bandwidth; presents an economic analysis of the savings yielded by approaches to minimize energy consumption of IPTV and OTT video content services.

    The broad coverage and practical insights offered in this timely volume will be of great value to all researchers, practitioners and students involved with computer and telecommunication systems.​

  • 337.
    Popescu, Adrian
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Chaudhry, Mohammad Asad Rehman
    Soptimizer, CAN.
    Managing the OTT Traffic in NFV Environment (2018). In: 2018 12th International Conference on Communications, COMM 2018 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018, pp. 3-10, article id 8430182. Conference paper (Refereed)
    Abstract [en]

    The emerging concepts Software Defined Networking (SDN) and Network Functions Virtualisation (NFV) lay the ground for the system softwarization of future networks and services, under the umbrella of 5G. In this regard, a special focus is given to Video Distribution Networks (VDN), particularly in relation to existing difficulties in handling the demands laid on the network, as well as in relation to challenges in providing the expected Quality of Experience (QoE) and other performance demands like minimum energy consumption. The paper focuses on some of the most important elements associated with the virtualisation of Over-The-Top (OTT) video distribution networks and also advances a system for managing the OTT traffic in a Network Function Virtualisation (NFV) environment. The paper provides motivation, a problem description, the virtualisation target, standardisation efforts, the NFV framework, as well as a short presentation of the main technical issues associated with managing the OTT traffic in NFV scenarios. © 2018 IEEE.

  • 338.
    Popescu, Adrian
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Yao, Yong
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Fiedler, Markus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för teknik och estetik.
    Ducloux, Xavier
    Harmonic, FRA.
    Video Distribution Networks: Models and Performance (2018). In: Greening Video Distribution Networks: Energy-Efficient Internet Video Delivery / [ed] Adrian Popescu, Springer, 2018, pp. 227-. Book chapter (Refereed)
    Abstract [en]

    The creation, distribution and delivery of video content is a sophisticated process with elements like video acquisition, preprocessing and encoding, content production and packaging as well as distribution to customers. IP networks are usually used for the transfer of video signals. The treatment of video content is also very complex, and we have a multidimensional process with elements like content acquisition, content exchange and content distribution. The focus of the chapter is on the presentation of models that can be used to characterize the main elements in a video distribution chain. These models are about video coding and compression, video streaming, video traffic models, energy consumption models, system performance, concepts of performance optimization and QoE- and energy-optimal streaming.

  • 339.
    Popescu, Adrian
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Yao, Yong
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ilie, Dragos
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Video Distribution Networks: Architectures and System Requirements (2018). In: Greening Video Distribution Networks: Energy-Efficient Internet Video Delivery / [ed] Adrian Popescu, Springer, 2018. Book chapter (Refereed)
    Abstract [en]

    The creation of video content and its distribution over the Internet Protocol (IP) are sophisticated processes that follow a chain model from the acquisition of the video source, production and packaging, transport, and finally distribution to viewers. Video distribution networks refer to several parts, namely content contribution, primary distribution, secondary distribution, and video consumers. The focus of the chapter is on the presentation of video distribution systems over IP, categories of architectural solutions as well as a short presentation of several important applications associated with video distribution networks.

  • 340.
    Popov, Oleksii Yu
    et al.
    Taras Shevchenko National University of Kyiv, UKR.
    Kuzminykh, Ievgeniia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Analysis of methods for reducing topology in wireless sensor networks (2018). In: 14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering, TCSET 2018 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018, pp. 529-532. Conference paper (Refereed)
    Abstract [en]

    This paper describes the phases of deployment of wireless sensor networks, with particular attention to the topology-reduction phase and the methods by which it can be achieved. A comparison of hierarchy-based topology construction algorithms is presented. The algorithms use a simple, distributed and energy-efficient topology mechanism that discovers an optimal connected dominating set (CDS) to disable unnecessary nodes, while preserving important wireless sensor network characteristics such as full coverage and connectivity. © 2018 IEEE.
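The surveyed algorithms are not reproduced here, but the core CDS idea — keep a small connected set of active nodes that dominates every other node — can be sketched greedily (the toy topology is made up):

```python
def greedy_cds(adj):
    """Greedy connected-dominating-set sketch: start from the highest-
    degree node, then repeatedly activate the neighbour of the current
    set that newly dominates the most uncovered nodes. The active set
    stays connected because every addition neighbours the set."""
    nodes = set(adj)
    start = max(nodes, key=lambda n: len(adj[n]))
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != nodes:
        frontier = {n for c in cds for n in adj[c]} - cds
        best = max(frontier,
                   key=lambda n: len((set(adj[n]) | {n}) - covered))
        cds.add(best)
        covered |= set(adj[best]) | {best}
    return cds

# made-up toy sensor topology: a chain of five nodes
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
active = greedy_cds(adj)
```

Nodes outside the returned set (the two chain endpoints here) can sleep, since each still has an active neighbour — the coverage-plus-connectivity property the paper's comparison is about.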

  • 341.
    Posse, Oliver
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tomanović, Ognjen
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Evaluation of Data Integrity Methods in Storage: Oracle Database (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. It is very common today that e-commerce systems store sensitive client information. The database administrators of these types of systems have access to this sensitive client information and are able to manipulate it. Therefore, data integrity is of core importance in these systems, and methods to detect fraudulent behavior need to be implemented.

    Objectives. The objective of this thesis is to implement and evaluate the features and performance impact of different methods for achieving data integrity in a database, Oracle to be more exact.

    Methods. Five methods for achieving data integrity were tested. The methods were tested in a controlled environment. Three of them were tested and performance-evaluated by a tool emulating a real-life e-commerce scenario. The focus of this thesis is to evaluate the performance impact and the fraud-detection ability of the implemented methods.

    Results. This paper evaluates traditional digital signatures, linked timestamping applied to a Merkle hash tree, and auditing, with respect to both performance impact and features. Two more methods were implemented and tested in a controlled environment: Merkle hash tree and digital watermarking. We show results from the empirical analysis, data verification and transaction performance. In our evaluation we confirmed our hypothesis that traditional digital signatures are faster than linked timestamping.

    Conclusions. In this thesis we conclude that when choosing a data integrity method to implement, it is of great importance to know which type of operation is used more frequently. Our experiments show that the digital signature method performed better than linked timestamping and auditing. Our experiments also showed that the application of digital signatures, linked timestamping and auditing decreased performance by 4%, 12% and 27% respectively, which is a relatively small price to pay for data integrity.
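Two of the evaluated methods rest on hash trees; the Merkle-tree idea — a single root hash that commits to every record, so any tampering changes the root — can be sketched as follows (the table rows are made up):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records):
    """Merkle hash tree sketch: hash every record, then repeatedly hash
    adjacent pairs upward; the root commits to all records, so changing
    any record changes the root."""
    level = [h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

rows = [b"alice;100", b"bob;250", b"carol;75"]   # made-up table rows
root = merkle_root(rows)
tampered_root = merkle_root([b"alice;100", b"bob;999", b"carol;75"])
```

Storing only the root (or timestamping it, as in linked timestamping) is enough to later detect that an administrator modified any row.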

  • 342.
    Printzell, Dan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Testing scalability of cloud gaming for multiplayer game (2018). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. The rendering of games takes a lot of processing power and requires expensive hardware to perform this task in real time with an acceptable frame rate. Games often also require an anti-cheat system, which takes extra power to continuously verify that the game has not been modified. With the help of game streaming these disadvantages could be removed from the clients.

    Objectives. The objective of this thesis is to create a game streaming server and client to see if a game streaming server can scale with the number of cores it has access to.

    Methods. The research question is answered using the implementation methodology, and an experiment is conducted using that implementation. Two programs were implemented: the server program and the client program. The server implements the management of clients, the game logic, the rendering and the compression. Each client can only be connected to one server, and a server together with its clients forms a game instance; everyone connected to the same server plays in the same instance. The implementation is written in the D programming language and uses the ZLib and SDL2 libraries as building blocks. With all of this in place, an experiment is designed in which as many clients as possible connect to the server. From this data a plot is created in the results section.

    Results. The output data show that the implementation scales, and a formula was constructed to match the scalability. The formula is .

    Conclusions. The experiment was successful and showed that the game server scaled based on the number of cores that were allocated. It does not scale as well as expected, but it is still a success. The test results are limited, as the implementation was only tested on one setup. More research is needed to test it on more hardware and to find more optimized implementations.

  • 343.
    Provatas, Spyridon
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    An Online Machine Learning Algorithm for Heat Load Forecasting in District Heating Systems (2014). Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    Context. Heat load forecasting is an important part of district heating optimization. In particular, energy companies aim at minimizing peak boiler usage, optimizing combined heat and power generation and planning base production. To achieve resource efficiency, the energy companies need to estimate how much energy is required to satisfy the market demand.

    Objectives. We suggest an online machine learning algorithm for heat load forecasting. Online algorithms are increasingly used due to their computational efficiency and their ability to handle changes of the predictive target variable over time. We extend the implementation of online bagging to make it compatible with regression problems and we use the Fast Incremental Model Trees with Drift Detection (FIMT-DD) as the base model. Finally, we implement and incorporate into the algorithm a mechanism that handles missing values, measurement errors and outliers.

    Methods. To conduct our experiments, we use two machine learning software applications, namely Waikato Environment for Knowledge Analysis (WEKA) and Massive Online Analysis (MOA). The predictive ability of the suggested algorithm is evaluated on operational data from a part of the Karlshamn District Heating network. We investigate two approaches for aggregating the data from the nodes of the network. The algorithm is evaluated on 100 runs using the repeated-measures experimental design. A paired t-test is run to test the hypothesis that the choice of approach does not have a significant effect on the predictive error of the algorithm.

    Results. The presented algorithm forecasts the heat load with a mean absolute percentage error of 4.77%. This means that there is a sufficiently accurate estimation of the actual values of the heat load, which can enable heat suppliers to plan and manage the heat production more effectively.

    Conclusions. Experimental results show that the presented algorithm can be a viable alternative to state-of-the-art algorithms that are used for heat load forecasting. In addition to its predictive ability, it is memory-efficient and can process data in real time. Robust heat load forecasting is an important part of increased system efficiency within district heating, and the presented algorithm provides a concrete foundation for operational usage of online machine learning algorithms within the domain.
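The thesis extends online bagging to regression; the Oza-Russell weighting it builds on — each incoming example is replayed Poisson(1) times for every ensemble member, approximating bootstrap sampling on a stream — can be sketched with a trivial base model standing in for FIMT-DD (the readings are made up):

```python
import math
import random

class RunningMean:
    """Trivial incremental base learner (a stand-in for FIMT-DD)."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def learn(self, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n
    def predict(self):
        return self.mean

def poisson1(rng):
    """Sample Poisson(1) by Knuth's inversion method."""
    limit, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

class OnlineBaggingRegressor:
    """Online bagging sketch: each example is replayed Poisson(1) times
    for every ensemble member; predictions are averaged."""
    def __init__(self, n_models=10, seed=7):
        self.models = [RunningMean() for _ in range(n_models)]
        self.rng = random.Random(seed)
    def learn(self, y):
        for m in self.models:
            for _ in range(poisson1(self.rng)):
                m.learn(y)
    def predict(self):
        return sum(m.predict() for m in self.models) / len(self.models)

bag = OnlineBaggingRegressor()
for y in [10.0, 12.0, 11.0, 13.0, 12.0]:   # made-up heat load readings
    bag.learn(y)
```

Each example is seen once and then discarded, which is what makes the scheme memory-efficient and suitable for real-time streams, as the abstract notes.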

  • 344.
    Pulagam, Sai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    QoE analysis of different DASH players in adverse network conditions (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Streaming multimedia over the Internet is omnipresent but still in its infancy, specifically when it comes to adaptation based on bandwidth/throughput measurements, clients competing for limited/shared bandwidth, and the presence of a caching infrastructure. Nowadays the streaming infrastructure exists over-the-top (OTT). Interestingly, these services are all delivered over the top of the existing networking infrastructure using the Hypertext Transfer Protocol (HTTP), which resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (DASH). Video traffic makes up over 80% of the total Internet traffic. Since Dynamic Adaptive Streaming over HTTP emerged, it has become a popular technique for video streaming on the Internet. DASH allows the video player to adapt the bitrate according to the network conditions. A DASH streaming client receives a manifest file, downloads the referenced video segments over HTTP, and plays them back seamlessly to emulate video streaming. This introduces a latency of at least one segment duration, which decreases the quality of the user experience. In order to improve the user's overall Quality of Experience (QoE), a large number of adaptation schemes have been introduced to DASH. With such an importance of QoE in DASH, this thesis work investigates the user acceptability of different DASH players. The players are exposed to different network conditions, and a user analysis is given for each streamed DASH video for each player. The results of this thesis work include the analysis of the user ratings for three different players (Dash.js, GPAC and Shaka). The conclusion is drawn from the quality of experience reported by the users, which shows that the Shaka player is preferred over the other two players.
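The three players' actual adaptation logics differ and are not reproduced here; a minimal rate-based heuristic of the kind DASH clients commonly use (the bitrate ladder is hypothetical) can be sketched as:

```python
def select_bitrate(ladder_kbps, throughput_kbps, safety=0.8):
    """Rate-based ABR sketch: choose the highest rendition whose bitrate
    fits under a safety fraction of the measured throughput; fall back
    to the lowest rendition when nothing fits."""
    usable = safety * throughput_kbps
    fitting = [b for b in ladder_kbps if b <= usable]
    return max(fitting) if fitting else min(ladder_kbps)

ladder = [300, 750, 1500, 3000, 6000]   # hypothetical rendition ladder, kbps
```

Differences in exactly this kind of decision rule (safety margins, buffer awareness, switching aggressiveness) are what make players behave differently under the adverse network conditions the thesis evaluates.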

  • 345.
    Putta, Advaith
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Implementation of Augmented Reality applications to recognize Automotive Vehicle using Microsoft HoloLens: Performance comparison of Vuforia 3-D recognition and QR-code recognition Microsoft HoloLens applications (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Volvo Construction Equipment is planning to use the Microsoft HoloLens as a tool for the on-site manager to keep track of the automotive machines and obtain their corresponding work information. For that, a miniature site has been built at PDRL BTH consisting of three different automotive vehicles. We developed Augmented Reality applications for the Microsoft HoloLens to recognize these automotive vehicles. There is a need to identify the most feasible recognition method that can be implemented using the Microsoft HoloLens.

    Objectives. In this study, we investigate which of the Vuforia 3-D recognition method and the QR-code recognition method is better suited for the Microsoft HoloLens, and we also find out the maximum distance at which an automotive vehicle can be recognized by the Microsoft HoloLens.

    Methods. We conducted a literature review in which articles from IEEE Xplore, the ACM Digital Library, Google Scholar and Scopus were reviewed. Seventeen articles were selected for review after reading the titles and abstracts of the articles obtained from the search. Two experiments were performed to find the best recognition method for the Microsoft HoloLens and the maximum distance at which an automotive vehicle can be recognized.

    Results. The QR-code recognition method is the best recognition method for the Microsoft HoloLens for recognizing automotive vehicles in the range of one to two feet, and the Vuforia 3-D recognition method is recommended for distances over two feet.

    Conclusions. We conclude that the QR-code recognition method is suitable for recognizing vehicles at close range (1-2 feet) and that Vuforia 3-D object recognition is suitable for distances over two feet. The two methods differ from each other: one uses a 3-D scan of the vehicle to recognize it, while the other uses image recognition of unique QR-codes. We covered the effect of distance on the recognition capability of the application, but much work remains on how the QR-code size affects the maximum distance at which an automotive vehicle can be recognized. We conclude that there is a need for further experimentation to find out the impact of QR-code size on the maximum recognition distance.

  • 346.
    Ramstedt, Linda
    et al.
    Sweco AB, SWE.
    Törnquist Krasemann, Johanna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Davidsson, Paul
    Malmö Högskola, SWE.
    Movement of people and goods2013Inngår i: Understanding Complex Systems / [ed] Edmonds B., Meyer R. (eds), Springer Verlag , 2013, nr 9783319669472, s. 705-720Kapittel i bok, del av antologi (Fagfellevurdert)
    Abstract [en]

    Due to the continuous growth of traffic and transportation, and the resulting urgency to analyze resource usage and system behavior, the use of computer simulation in this area has become more frequent and accepted. This chapter presents an overview of modeling and simulation of traffic and transport systems, focusing in particular on the imitation of social behavior and individual decision-making in these systems. We distinguish between transport and traffic: transport is an activity in which goods or people are moved between points A and B, while traffic refers to the collection of several transports in a common network, such as a road network. We investigate to what extent and how the social characteristics of the users of these different traffic and transport systems are reflected in the simulation models and software. Moreover, we highlight some trends and current issues within this field and provide further reading advice. © 2017, Springer International Publishing AG.

  • 347.
    Ramya Sravanam, Ramya
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Empirical Study on Quantitative Measurement Methods for Big Image Data: An Experiment using five quantitative methods2016Independent thesis Advanced level (degree of Master (One Year)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. With the increasing demand for image processing in multimedia applications, research on image quality assessment has received great interest. The goal of Image Quality Assessment is to find efficient Image Quality Metrics that correlate closely with human visual perception, and over the last three decades much effort has been put in by researchers, producing numerous papers and an extensive literature on emerging Image Quality Assessment techniques. In this regard, emphasis is given to Full-Reference Image Quality Assessment research, where quality measurement algorithms are analyzed against the original reference image, as this is much closer to perceptual visual quality. Objectives. In this thesis we investigate five commonly used Image Quality Metrics (Peak Signal to Noise Ratio (PSNR), Structural SIMilarity Index (SSIM), Feature SIMilarity Index (FSIM), Visual Saliency Index (VSI), and Universal Quality Index (UQI)) by performing an experiment on a chosen image dataset (images with different types of distortions arising from different image processing applications) and identify the most efficient metric with respect to the dataset used. This analysis could be helpful to researchers working on big image data projects, where the selection of an appropriate Image Quality Metric is of major significance. Our study details the dataset used and the experimental results, which are highly influenced by the image set. Methods. The goal of this study is achieved by conducting a literature review to investigate existing Image Quality Assessment research and Image Quality Metrics, and by performing an experiment. The image dataset used in the experiment was prepared from the LIVE Image Quality Assessment database. Matlab was used to run the image processing experiment.
Descriptive analysis (including statistical analysis) was employed to analyze the results obtained from the experiment. Results. For the distortion types involved (JPEG 2000, JPEG compression, White Gaussian Noise, Gaussian Blur), SSIM was the most efficient metric for measuring the quality of JPEG 2000 compressed and white Gaussian noise images relative to the original, while PSNR was the most efficient for JPEG compressed and Gaussian blurred images. Conclusions. From this study it is evident that SSIM and PSNR are efficient for Image Quality Assessment on the dataset used. The level of distortion in the image dataset also highly influences the results; in our case SSIM and PSNR perform efficiently for the database used.
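Of the metrics compared in this record, PSNR has a simple closed form. As an illustration (a minimal pure-Python sketch over flattened 8-bit images, not the thesis's Matlab code):

```python
import math

def psnr(reference, distorted, max_value=255.0):
    """Peak Signal to Noise Ratio in decibels between a reference image and a
    distorted version, both given as flat lists of pixel intensities.
    Higher values mean the distorted image is closer to the reference."""
    # Mean squared error over all pixels.
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

ref = [128] * 16           # constant 4x4 reference image, flattened
dist = [129] * 16          # uniform error of 1 -> MSE = 1
print(round(psnr(ref, dist), 2))   # -> 48.13
```

With MSE = 1 the result reduces to 20·log10(255) ≈ 48.13 dB, which is why PSNR saturates quickly for near-identical images.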

  • 348.
    Rangavajjula, Santosh Bharadwaj
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Design of information tree for support related queries: Axis Communications AB: An exploratory research study in debug suggestions with machine learning at Axis Communications, Lund2017Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context:

    In today's world, we have access to more data than at any time in the past, with ever more data coming from smartphones, sensor networks, and business processes. Most of this data is meaningless, however, if it is not properly formatted and utilized. Traditionally, in service support teams, issues raised by customers are processed locally, written up as reports, and sent up the support line for resolution. Resolution then depends on the expertise of the technicians or developers and their experience in handling similar issues, which limits the size, speed, and scale of the problems that can be resolved. One solution to this problem is to make relevant information, tailored to the issue under investigation, easily available.

    Objectives:

    The focus of the thesis is to improve the turnaround time of customer queries using recommendations, evaluated with metrics defined against the existing workflow. As Artificial Intelligence applications span a broad spectrum, we confine the scope to software service and Issue Tracking Systems. Software support is a complicated process, as it involves various stakeholders with conflicting interests. In this work, we are primarily interested in evaluating, customizing, and comparing different AI solutions specifically in the customer support space.

    Methods:

    The thesis work has been carried out through controlled experiments using different datasets and machine learning models.

    Results:

    We classified Axis data and Bugzilla (Eclipse) data using Decision Trees, K Nearest Neighbors, Neural Networks, and Naive Bayes, and evaluated them using precision, recall rate, and F-score. K Nearest Neighbors had a precision of 0.11 and a recall rate of 0.11, Decision Trees had a precision of 0.11 and a recall rate of 0.11, Neural Networks had a precision of 0.13 and a recall rate of 0.11, and Naive Bayes had a precision of 0.05 and a recall rate of 0.11. The results show too many false positives and false negatives for the system to be able to recommend.

    Conclusions:

    In this thesis work, we have reviewed and synthesized 33 research articles. Existing systems in place and the current state of the art are described. A debug suggestion tool was developed in Python with scikit-learn. Experiments with different machine learning models were run on the tool; the highest scores observed were 0.13 precision, 0.10 F-score, and 0.11 recall, with the MLP neural network classifier.
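As a reminder of the evaluation metrics reported in this record, a minimal sketch of precision, recall, and F-score for one class (illustrative only, not the thesis's tool):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one class from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 3 true positives, 1 false positive, 2 false negatives.
y_true = [1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f, 2))   # -> 0.75 0.6 0.67
```

Scores of roughly 0.11 on both axes, as reported above, indicate that nearly nine out of ten suggestions are wrong, which explains the conclusion that the models cannot yet be used for recommendation.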

  • 349.
    Ranjitkar, Hari Sagar
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Karki, Sudip
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Comparison of A*, Euclidean and Manhattan distance using Influence map in MS. Pac-Man2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Context. Influence maps and potential fields are used for path finding in the domains of robotics and game AI. Various distance measures can be used to build influence maps and potential fields. However, these distance measures have not previously been compared.

    Objectives. In this paper, we propose a new algorithm for finding an optimal point in a parameter space from random parameter samples. Finally, comparisons are made among three popular distance measures to find the most efficient one.

    Methodology. For RQ1 and RQ2, we used a mixed qualitative and quantitative approach, and for RQ3 a quantitative approach. Results. The A* distance measure in influence maps is more efficient than the Euclidean and Manhattan distances in potential fields.

    Conclusions. Our proposed algorithm is suitable for finding an optimal point and explores a huge parameter space. The A* distance in influence maps is highly efficient compared to the Euclidean and Manhattan distances in potential fields. Euclidean and Manhattan distance performed relatively similarly, whereas A* distance performed better than both in terms of score in Ms. Pac-Man (see Appendix A).
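The two analytic distance measures compared in this record have simple closed forms on a grid; a minimal sketch (illustrative, not the thesis's implementation):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two grid points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    """Four-directional (taxicab) distance, matching axis-aligned movement in a maze."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

start, goal = (0, 0), (3, 4)
print(euclidean(start, goal))   # -> 5.0
print(manhattan(start, goal))   # -> 7
```

A* distance, by contrast, is the length of the shortest walkable path, so it accounts for maze walls that both analytic measures ignore; this is consistent with the finding above that A*-based influence maps score better in Ms. Pac-Man.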

  • 350.
    Rekanar, Kaavya
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Text Classification of Legitimate and Rogue online Privacy Policies: Manual Analysis and a Machine Learning Experimental Approach2016Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave