201 - 250 of 435
  • 201.
    Khoshniyat, Fahimeh
    et al.
    Linköpings universitet, SWE.
    Törnquist Krasemann, Johanna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Analysis of strengths & weaknesses of a MILP model for revising railway traffic timetables (2017). In: OpenAccess Series in Informatics, Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2017, Vol. 59, article id 131022. Conference paper (Refereed)
    Abstract [en]

    A railway timetable is typically planned one year in advance, but may be revised several times prior to the time of operation in order to accommodate on-demand slot requests for inserting additional trains and network maintenance. Revising timetables is a computationally demanding task, given the many dependencies and details to consider. In this paper, we focus on the potential of using an optimization-based scheduling approach for revising train timetables during short term planning, from one week to a few hours before the actual operation. The approach relies on a MILP (Mixed Integer Linear Program) model which is solved using the commercial solver Gurobi. In a previous experimental study, the MILP approach was used to revise a significant part of the annual timetable for a sub-network in Southern Sweden to insert additional trains and allocate time slots for urgent maintenance. The results showed that the proposed MILP approach in many cases generates feasible, good solutions rather fast. However, proving optimality was in several cases time-consuming, especially for larger problems. Thus, there is a need to investigate and develop strategies to improve the computational performance. In this paper, we present results from a study in which a number of valid inequalities have been selected and applied to the MILP model with the aim of reducing the computation time. The experimental evaluation of the selected valid inequalities showed that although they can provide a slight improvement with respect to computation time, they also weaken the LP relaxation of the model.
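
    The MILP building blocks are not spelled out in this listing, so the following is only an illustrative sketch, not the paper's model: a big-M disjunctive ordering constraint for two trains sharing one track segment, written with the PuLP package (assumed installed, with its bundled CBC solver). Tightening the big-M constant is one simple way to strengthen the LP relaxation, which is the kind of effect the paper's valid inequalities aim for.

    ```python
    # Illustrative sketch only (not the paper's model): a big-M disjunctive
    # ordering constraint for two trains sharing one track segment.
    # Assumes PuLP and its bundled CBC solver (pip install pulp).
    import pulp

    prob = pulp.LpProblem("train_ordering", pulp.LpMinimize)

    tA = pulp.LpVariable("tA", lowBound=0)   # entry time of train A (minutes)
    tB = pulp.LpVariable("tB", lowBound=0)   # entry time of train B (minutes)
    y = pulp.LpVariable("y", cat="Binary")   # 1 if A enters before B

    H = 3      # minimum headway (minutes), hypothetical
    M = 1000   # big-M; a tighter M strengthens the LP relaxation

    dA = pulp.LpVariable("dA", lowBound=0)   # delay of A vs preferred time 10
    dB = pulp.LpVariable("dB", lowBound=0)   # delay of B vs preferred time 12

    prob += dA + dB                          # objective: total delay
    prob += tB >= tA + H - M * (1 - y)       # if y = 1: A precedes B
    prob += tA >= tB + H - M * y             # if y = 0: B precedes A
    prob += dA >= tA - 10
    prob += dB >= tB - 12

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.LpStatus[prob.status], tA.value(), tB.value(), y.value())
    ```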

  • 202.
    Knudson, Dean
    et al.
    North Dakota State University, USA.
    Kalafatis, Stavros
    Texas A and M University, USA.
    Kleiner, Carsten
    Hochschule Hannover, DEU.
    Zahos, Stephen
    University of Illinois at Urbana-Champaign, USA.
    Seegebarth, Barbara
    Technische Universitat Braunschweig, DEU.
    Detterfelt, Jonas
    Linköpings universitet, SWE.
    Avazpour, Iman
    Deakin University, AUS.
    Sandahl, Kristian
    Linköpings universitet, SWE.
    Gorder, Peter
    University of Colorado at Colorado Springs, USA.
    Ginige, Jeewani Anupama
    Western Sydney University, AUS.
    Radermacher, Alex
    North Dakota State University, USA.
    Caballero, Hugo
    Universidad del Norte, COL.
    Gomez, Humberto
    Universidad del Norte, COL.
    Roos, Mikael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Global software engineering experience through international capstone project exchanges (2018). In: Proceedings 2018 ACM/IEEE 13th International Conference on Global Software Engineering, ICGSE 2018, IEEE Computer Society, 2018, pp. 54-58. Conference paper (Refereed)
    Abstract [en]

    Today it is very common for software systems to be built by teams located in more than one country. For example, a project team may be located in the US while the team lead resides in Sweden. How then should students be trained for this kind of work? Senior design or capstone projects offer students real-world hands-on experience, but rarely while working internationally. One reason is that most instructors do not have international business contacts that allow them to find project sponsors in other countries. Another reason is the fear of having to invest a huge amount of time managing an international project. In this paper we present the general concepts related to "International Capstone Project Exchanges", the basic model behind the exchanges (student teams are led by an industry sponsor residing in a different country) and several alternate models that have been used in practice. We give examples from projects in the US, Germany, Sweden, Australia, and Colombia. We have extended the model beyond software projects to include engineering projects as well as marketing and journalism. We conclude with a description of an International Capstone Project Exchange website that we have developed to aid any university in establishing its own international project exchange. © 2018 Authors.

  • 203.
    Kolli, Venkata Sai Siva Reddy
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Reducing the Effort on Backward Compatibility in Cloud Servers (2017). Independent thesis, advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Big enterprises all over the world are setting up their services in the cloud, as this is cheaper and offers many other benefits. These services have to be updated from time to time, and for this the enterprises have to upgrade their systems in the cloud. During these upgrades, the enterprises face many problems, known as compatibility issues, and they invest heavily to avoid them. The investment could be in money, time, labor etc., together referred to as effort. Therefore, it is in our interest to attempt to reduce this effort. In this study, our main objectives are to calculate the effort required to maintain backward compatibility during the upgrade process in the cloud and to find ways to reduce this effort. Reducing the effort will help companies cut down their investment. A hypothesis was introduced stating that the network usage depends on the upgrade method chosen.

    We have chosen experimentation as the suitable research method. To run our experiments, we created a virtual environment similar to that of Ericsson and recorded values such as code complexity, total time for the upgrade process and network usage during the experiment. Using these values, we tried to estimate the effort and scale it to real scenarios. Using ANOVA, we showed that our null hypothesis held. The results are then discussed in detail and RQ1 is answered; RQ2 is answered based on the answer to RQ1. Through our analysis we were able to get a rough estimate of the effort (labor, time, cost) required to maintain backward compatibility. We propose that the existence of the tool mentioned would reduce the effort considerably; the features of the tool are explained in detail, and the actual development of the tool is suggested as future work.
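
    As a hypothetical illustration of the statistical test named above, a one-way ANOVA over network usage measured for three upgrade methods could look as follows (all numbers invented; only the procedure is the point):

    ```python
    # Hypothetical one-way ANOVA: does mean network usage (MB) differ
    # between three upgrade methods? Data below is made up.
    from scipy import stats

    usage_method_a = [512, 498, 530, 505, 521]
    usage_method_b = [508, 515, 495, 510, 502]
    usage_method_c = [520, 509, 517, 499, 512]

    f_stat, p_value = stats.f_oneway(usage_method_a, usage_method_b, usage_method_c)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
    # A large p-value means we cannot reject the null hypothesis that mean
    # network usage is the same for all upgrade methods.
    ```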

  • 204.
    Kondaveeti, Divya
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Correlation of Traffic Patterns and Performance Problems in Remote Virtual Desktop Environments (2018). Independent thesis, advanced level (degree of Master (One Year)), 20 points / 30 HE credits. Student thesis (Degree project)
  • 205.
    Kota, Sai Mohan Harsha
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Analysis of Organizational Structure of a Company by Evaluation of Email Communications of Employees: A Case Study (2018). Independent thesis, advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    There are many aspects that govern the performance of an organization, and one of the most important is its organizational structure. A well-planned organizational structure facilitates good internal communication among the employees, which in turn contributes to the success of the organization. Today, company re-structuring is very common in industry. When key employees are re-organized (moved to different hierarchical positions), the company might experience incidents that can be damaging or beneficial. To leverage the potential gain, an efficient organizational structure is very important for a company. The primary objective of this study is to analyze the existing organizational structure of a company through the evaluation of email communications between the employees and, if required, suggest the need for re-organization. In this case study, we have applied various cluster validation techniques to evaluate the email communications between the employees. The data (email logs) are provided by the company and have been recorded at different time periods. We have analyzed the organizational structure through the analysis of these email logs and then simulated various re-organization scenarios. By applying various cluster validation metrics, we have examined the quality of the existing organizational structure, and we have recorded how re-organization (moving employees from one organizational unit to another) affects the overall quality of the existing organizational structure of the company.

    In this study, we have presented how different cluster validation metrics can be helpful in assessing the quality of the organizational structure by reflecting its different aspects. We have shown that our approach makes it possible to evaluate the effects of different re-organization scenarios on the internal communication patterns of employees in an organization. All these metrics can be used by the company to improve its existing organizational structure.
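
    A hedged sketch of one such cluster validation metric, the silhouette score, applied to an invented six-employee email matrix with organizational units as cluster labels (the thesis's actual metrics and data are not given in this listing):

    ```python
    # Each employee is represented by how often they email every other
    # employee; organizational units serve as cluster labels. Made-up data.
    import numpy as np
    from sklearn.metrics import silhouette_score

    # email_counts[i, j] = emails employee i sent to employee j
    email_counts = np.array([
        [0, 9, 7, 1, 0, 1],
        [8, 0, 6, 0, 1, 0],
        [7, 5, 0, 1, 0, 2],
        [1, 0, 1, 0, 8, 9],
        [0, 1, 0, 7, 0, 8],
        [1, 0, 2, 9, 7, 0],
    ], dtype=float)

    org_units = [0, 0, 0, 1, 1, 1]   # current unit of each employee
    print(f"current structure: {silhouette_score(email_counts, org_units):.2f}")

    # A re-organization scenario is just a different label vector; comparing
    # scores indicates whether moving an employee improves cohesion.
    reorg = [0, 0, 1, 1, 1, 1]
    print(f"after moving employee 2: {silhouette_score(email_counts, reorg):.2f}")
    ```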

  • 206.
    Kotikalapudi, Sai Venkat Naresh
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Comparing Live Migration between Linux Containers and Kernel Virtual Machine: Investigation study in terms of parameters (2017). Independent thesis, advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Context. Virtualization technologies have been extensively used in various cloud platforms. Hardware replacements and maintenance are occasionally required, which leads to business downtime. Live migration is performed to ensure high availability of services, which is a major concern. The performance of live migration in virtualization technologies directly impacts the performance of cloud platforms. Hence, a comparison is performed between two mainstream virtualization technologies: container based and hypervisor based virtualization.

    Objectives. In the present study, the objective is to perform live migration with hypervisor and container based virtualization technologies, Kernel Virtual Machine (KVM) and Linux Containers (LXC) respectively, and to measure and compare the downtime, total migration time, CPU utilization and disk utilization of KVM and LXC during live migration.

    Methods. An initial literature review is conducted to gain in-depth knowledge about live migration in virtualization technologies. An experiment is then conducted to perform live migration in KVM and LXC. The live migration process is performed while 100% and 66% workloads are being generated by Cassandra running in the virtual machine and in the container. The performance of live migration in KVM and LXC is measured in terms of CPU utilization, disk utilization, total migration time and downtime.

    Results. Based on the results obtained from the experiment, graphs are plotted for the performance of KVM and LXC during live migration. The results indicate that KVM has better CPU utilization compared to LXC, whereas the downtime, total migration time and disk utilization of LXC are relatively better than those of KVM. From the obtained results, the mean and standard deviation are calculated. Box plots of downtime and total migration time are used to illustrate the difference between KVM and LXC, and the measurable difference between KVM and LXC is calculated using Cohen's d effect size for downtime, total migration time, CPU and disk utilization.

    Conclusions. The present study concludes that neither virtualization technology performs better across all performance metrics: LXC performs better with respect to downtime, total migration time and disk utilization, whereas KVM performs better when CPU usage is considered.
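
    For reference, Cohen's d, the effect size used above, is the mean difference divided by the pooled standard deviation; a small sketch on made-up downtime samples:

    ```python
    # Cohen's d with the standard pooled-standard-deviation formula,
    # shown on hypothetical downtime samples (seconds).
    import numpy as np

    def cohens_d(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        return (x.mean() - y.mean()) / np.sqrt(pooled_var)

    kvm_downtime = [4.1, 3.8, 4.5, 4.0, 4.2]   # invented
    lxc_downtime = [2.9, 3.1, 2.7, 3.0, 2.8]   # invented
    print(f"Cohen's d = {cohens_d(kvm_downtime, lxc_downtime):.2f}")
    ```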

  • 207.
    Kottuppari Srinivas, Susheel Sagar
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Clustering Users Based on Mobility Patterns for Effective Utilization of Cellular Network Infrastructure (2016). Independent thesis, advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Context. With the rapidly growing demand for cellular networks' capacity and coverage, effective planning of Network Infrastructure (NI) has been a major challenge for the telecom operators. The mobility patterns of different subscriber groups in the networks have been found to be a crucial aspect in the planning of NI. For a telecom operator, it is important to have an estimate of the efficiency (in terms of the Network Capacity, the number of subscribers that the network can handle) of the existing NI. For this purpose, Lundberg et al. have developed an optimization based strategy called the Tetris Strategy (TS), based on the standard subscriber grouping approach called MOSAIC. The objective of TS is to calculate the upper bound estimate of the efficiency of the NI.

    Objectives. The major objective of this thesis is to compare the efficiency value of the NI when the subscribers are grouped (clustered) based on their mobility patterns (characterized by a mobile trajectory) with the efficiency value obtained when the subscribers are grouped based on the standard subscriber grouping approach, MOSAIC.

    Methods. A Literature Review (LR) has been conducted to identify the state-of-the-art similarity/distance measures and algorithms for clustering trajectory data. Among the identified ones, Longest Common Subsequences was chosen as a similarity/distance measure, and the Spectral and Agglomerative clustering algorithms were chosen for the experiments. All the experiments have been conducted on subscriber trajectory data provided by the telecom operator Telenor. The clusters obtained from the experiments have been plugged into TS to calculate the upper bound estimate of the efficiency of the NI.

    Results. For the highest radio cell capacity, the network capacity values for Spectral clustering, Agglomerative clustering and the MOSAIC grouping system are 207234, 148056 and 87584 respectively. For every radio cell capacity value, the mobility based clusters resulted in higher network efficiency values than MOSAIC. However, both the spectral and agglomerative algorithms generated very low quality clusters, with silhouette scores of 0.0717 and 0.0543 respectively.

    Conclusions. Based on the analysis of the results, it can be concluded that mobility based grouping of subscribers in the cellular network provides higher network efficiency values compared to standard subscriber grouping systems such as MOSAIC.
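
    A minimal sketch of the pipeline described above, with an LCS based distance between invented cell-id trajectories fed into agglomerative clustering (using SciPy; the thesis's exact formulation may differ):

    ```python
    # LCS-based trajectory distance + agglomerative clustering.
    # Trajectories (sequences of visited cell ids) are invented.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def lcs_len(a, b):
        # classic dynamic-programming LCS length
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
        return dp[-1][-1]

    def lcs_dist(a, b):
        return 1.0 - lcs_len(a, b) / max(len(a), len(b))

    trajs = [list("ABCD"), list("ABCE"), list("XYZW"), list("XYZA")]
    n = len(trajs)
    D = np.array([[lcs_dist(trajs[i], trajs[j]) for j in range(n)] for i in range(n)])

    labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
    print(labels)   # e.g. [1 1 2 2]: two mobility groups
    ```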

  • 208.
    Krantz, Amandus
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lindblom, Petrus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Generating Topic-Based Chatbot Responses (2017). Independent thesis, basic level (Bachelor's degree), 10 points / 15 HE credits. Student thesis (Degree project)
    Abstract [en]

    With the rising popularity of chatbots, not just in entertainment but in e-commerce and online chat support, it has become increasingly important to be able to quickly set up chatbots that can respond to simple questions. This study examines which of two algorithms for automatic generation of chatbot knowledge bases, First Word Search or Most Significant Word Search, is able to generate the responses that are most relevant to the topic of a question. It also examines how text corpora might be used as a source from which to generate chatbot knowledge bases. Two chatbots were developed for this project, one for each of the two algorithms examined. The chatbots are evaluated through a survey where the participants are asked to choose which of the algorithms they thought chose the response most relevant to a question. Based on the survey we conclude that Most Significant Word Search is the algorithm that picks the most relevant responses; it has a significantly higher chance of generating a response that is relevant to the topic. However, how well a text corpus works as a source for knowledge bases depends entirely on the quality and nature of the corpus. A corpus consisting of written dialogue is likely more suitable for conversion into a knowledge base.
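
    Since the listing does not define the two algorithms, the following sketch reflects only our reading of their names: First Word Search keys on the first word of the question, Most Significant Word Search on the rarest corpus word in the question. All corpus lines are invented.

    ```python
    # Hedged interpretation of the two compared algorithms (not the
    # authors' code). Corpus lines are invented.
    from collections import Counter

    corpus = [
        "what time does the store open",
        "the store opens at nine",
        "trains run every ten minutes",
        "what platform does the train leave from",
    ]
    word_freq = Counter(w for line in corpus for w in line.split())

    def first_word_search(question):
        key = question.lower().split()[0]
        return next((l for l in corpus if key in l.split()), None)

    def most_significant_word_search(question):
        known = [w for w in question.lower().split() if w in word_freq]
        if not known:
            return None
        key = min(known, key=lambda w: word_freq[w])   # rarest = most significant
        return next((l for l in corpus if key in l.split()), None)

    q = "when does the store open"
    print(first_word_search(q))             # keys on "when" -> no match (None)
    print(most_significant_word_search(q))  # keys on "open" -> a store response
    ```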

  • 209.
    Krasemann, Johanna Törnquist
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Design of an effective algorithm for fast response to the re-scheduling of railway traffic during disturbances (2012). In: Transportation Research Part C: Emerging Technologies, ISSN 0968-090X, E-ISSN 1879-2359, Vol. 20, no. 1, pp. 62-78. Journal article (Refereed)
    Abstract [en]

    An attractive and sustainable railway traffic system is characterized by high security, high accessibility, high energy performance, and reliable services with sufficient punctuality. At the same time, the network is to be utilized to a large extent in a cost-effective way. This requires a continuous balance between maintaining high utilization and sufficiently high robustness to minimize the sensitivity to disturbances. The occurrence of some disturbances can be prevented to some extent, but unpredictable events are unavoidable, and their consequences then need to be analyzed, minimized and communicated to the affected users. Valuable information necessary to perform a complete consequence analysis of a disturbance and the re-scheduling is, however, not always available to the traffic managers. Under current conditions, it is also not always possible for the traffic managers to take this information into account, since they need to act fast without any decision support assisting in computing an effective re-scheduling solution. In previous research we have designed an optimization-based approach for re-scheduling which seems promising, but for certain scenarios it is difficult to find good solutions within seconds. Therefore, we have developed a greedy algorithm which effectively delivers good solutions within the permitted time, as a complement to the previous approach. To quickly retrieve a feasible solution, the algorithm performs a depth-first search using an evaluation function to prioritise when conflicts arise, and then branches according to a set of criteria.

  • 210.
    Kuna, Vignesh
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Analysis of end-to-end DTLS and IPsec based communication in IoT systems: Security and Privacy ~ Distributed Systems Security (2017). Independent thesis, advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis (Degree project)
  • 211.
    Kusetogullari, Huseyin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Yavariabdi, Amir
    KTO Karatay Univ, TUR.
    Self-Adaptive Hybrid PSO-GA Method for Change Detection Under Varying Contrast Conditions in Satellite Images (2016). In: Proceedings of the 2016 SAI Computing Conference (SAI), IEEE, 2016, pp. 361-368. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a new unsupervised satellite change detection method, which is robust to illumination changes. To achieve this, firstly, a preprocessing strategy is used to remove illumination artifacts, resulting in fewer false detections than traditional threshold-based algorithms. Then, we use the corrected input data to define a new fitness function based on the difference image. The purpose of using the Self-Adaptive Hybrid Particle Swarm Optimization-Genetic Algorithm (SAPSOGA) is to combine two meta-heuristic optimization algorithms to search for and find a feasible solution to the NP-hard change detection problem rapidly and efficiently. The hybrid algorithm is employed by letting the GA and PSO run simultaneously, and the similarities of GA and PSO, i.e. the population, have been exploited in the implementation. In the SAPSOGA employed, in each iteration/generation the two population based algorithms share different amounts of information or individuals between themselves. Thus, each algorithm informs the other about its best optimum results (fitness values and solution representations) obtained in its own population. The fitness function is minimized using a binary based SAPSOGA approach to produce binary change detection masks in each iteration, to obtain the optimal change detection mask between two multitemporal multispectral Landsat images. The proposed approach effectively optimizes the change detection problem and finds the final change detection mask.

  • 212.
    Kusetogullari, Huseyin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Yavariabdi, Amir
    KTO Karatay University, TUR.
    Unsupervised Change Detection in Landsat Images with Atmospheric Artifacts: A Fuzzy Multiobjective Approach (2018). In: Mathematical Problems in Engineering (Print), ISSN 1024-123X, E-ISSN 1563-5147, Vol. 2018, pp. 1-16, article id 7274141. Journal article (Refereed)
    Abstract [en]

    A new unsupervised approach based on a hybrid wavelet transform and the Fuzzy Clustering Method (FCM) with Multiobjective Particle Swarm Optimization (MO-PSO) is proposed to obtain a binary change mask in Landsat images acquired under different atmospheric conditions. The proposed method uses the following steps: (1) preprocessing, (2) classification of the preprocessed image, and (3) binary mask fusion. Firstly, a photometric invariant technique is used to transform the Landsat images from RGB to HSV colour space. A hybrid wavelet transform based on the Stationary (SWT) and Discrete Wavelet (DWT) Transforms is applied to the hue channel of the two Landsat satellite images to create subbands. After that, the mean shift clustering method is applied to the subband difference images, computed using the absolute-valued difference technique, to smooth the difference images. Then, the proposed method iteratively optimizes two different fuzzy based objective functions using MO-PSO to evaluate the changed and unchanged regions of the smoothed difference images separately. Finally, a fusion approach based on connected components with a union technique is proposed to fuse the two binary masks to estimate the final solution. Experimental results show the robustness of the proposed method to the existence of haze and thin clouds as well as Gaussian noise in Landsat images.

  • 213.
    Kusetogullari, Hüseyin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Inst Technol, Dept Comp Sci & Engn, S-37141 Karlskrona, Sweden.
    Unsupervised Text Binarization in Handwritten Historical Documents Using k-Means Clustering (2018). In: Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016, Vol. 2 / [ed] Bi, Y., Kapoor, S., Bhatia, R., Springer International Publishing AG, 2018, pp. 23-32. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a novel technique for unsupervised text binarization in handwritten historical documents using k-means clustering. In the text binarization problem, there are many challenges such as noise, faint characters and bleed-through, and it is necessary to overcome these to increase the correct detection rate. To overcome these problems, a preprocessing strategy is first used to enhance the contrast to improve faint characters, and a Gaussian Mixture Model (GMM) is used to ignore the noise and other artifacts in the handwritten historical documents. After that, the enhanced image is normalized, to be used in the postprocessing part of the proposed method. The binarized handwriting image is achieved by partitioning the normalized pixel values of the handwriting image into two clusters using k-means clustering with k = 2, and then assigning each normalized pixel to one of the two clusters using the minimum Euclidean distance between the normalized pixel intensity and the mean normalized pixel value of each cluster. Experimental results verify the effectiveness of the proposed approach.
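
    The core clustering step described above is straightforward to sketch with scikit-learn (a random image stands in for a normalized scan; this is not the authors' code):

    ```python
    # k-means with k = 2 on normalized pixel intensities; each pixel is
    # assigned to the nearer cluster mean (text vs background).
    import numpy as np
    from sklearn.cluster import KMeans

    image = np.random.rand(64, 64)            # stand-in for a normalized scan
    pixels = image.reshape(-1, 1)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    # Treat the darker cluster as the text (foreground) class.
    text_cluster = int(np.argmin(km.cluster_centers_))
    binary = (km.labels_ == text_cluster).astype(np.uint8).reshape(image.shape)
    print(binary.sum(), "pixels classified as text")
    ```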

  • 214.
    Kusetogullari, Hüseyin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Handwriting image enhancement using local learning windowing, Gaussian Mixture Model and k-means clustering (2016). In: 2016 IEEE International Symposium on Signal Processing and Information Technology, ISSPIT 2016, Institute of Electrical and Electronics Engineers Inc., 2016, pp. 305-310, article id 7886054. Conference paper (Refereed)
    Abstract [en]

    In this paper, a new approach is proposed to enhance handwriting images using learning-based windowing contrast enhancement and a Gaussian Mixture Model (GMM). A fixed-size window moves over the handwriting image, and two quantitative measures, discrete entropy (DE) and edge-based contrast measure (EBCM), are used to estimate the quality of each patch. The obtained results are used in an unsupervised learning method, k-means clustering, to label the quality of the handwriting as bad (low contrast) or good (high contrast). After that, if the corresponding patch is estimated as low contrast, a contrast enhancement method is applied to the window to enhance the handwriting. GMM is used as a final step to smoothly exchange information between the original and enhanced images to discard artifacts in the final image. The proposed method has been compared with other contrast enhancement methods on different datasets: Swedish historical documents, DIBCO2010, DIBCO2012 and DIBCO2013. Results illustrate that the proposed method performs well in enhancing handwriting compared to the existing contrast enhancement methods. © 2016 IEEE.
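
    A small sketch of the discrete entropy (DE) window measure mentioned above, computed over sliding patches of a stand-in image (the EBCM measure and the learning step are omitted; window size is an assumption):

    ```python
    # Discrete entropy of the gray-level histogram inside each fixed-size
    # window; low-entropy (low-contrast) windows would be the candidates
    # for contrast enhancement.
    import numpy as np

    def discrete_entropy(patch, bins=256):
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    image = np.random.rand(128, 128)   # stand-in for a handwriting image
    w = 32                             # window size, hypothetical
    scores = [
        discrete_entropy(image[r:r + w, c:c + w])
        for r in range(0, image.shape[0], w)
        for c in range(0, image.shape[1], w)
    ]
    print(f"patch entropy range: {min(scores):.2f} .. {max(scores):.2f}")
    ```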

  • 215.
    Kusetogullari, Hüseyin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Yavariabdi, Amir
    Karatay University, TUR.
    Change Detection in Multispectral Landsat Images Using Multiobjective Evolutionary Algorithm (2017). In: IEEE Geoscience and Remote Sensing Letters, ISSN 1545-598X, E-ISSN 1558-0571, Vol. 14, no. 3, pp. 414-418, article id 10.1109/LGRS.2016.2645742. Journal article (Refereed)
    Abstract [en]

    In this letter, we propose a novel method for unsupervised change detection in multitemporal multispectral Landsat images using a multiobjective evolutionary algorithm (MOEA). The proposed method minimizes two different objective functions using the MOEA to provide a tradeoff between them. The objective functions are used for evaluating the changed and unchanged regions of the difference image separately. The difference image is obtained using the structural similarity index measure method, which combines comparisons of luminance, contrast, and structure between two images. By evolving a population of solutions in the MOEA, a set of Pareto optimal solutions is estimated in a single run. To find the best solution, a Markov random field fusion approach is used. Experiments on semisynthetic and real-world data sets show the efficiency and effectiveness of the proposed method.
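
    The SSIM based difference image described above can be sketched with scikit-image's per-pixel SSIM map (random arrays stand in for the co-registered Landsat images):

    ```python
    # Per-pixel SSIM map via scikit-image; high (1 - SSIM) values mark
    # pixels that are likely changed between the two acquisitions.
    import numpy as np
    from skimage.metrics import structural_similarity

    img_t1 = np.random.rand(128, 128)   # stand-in for acquisition at t1
    img_t2 = np.random.rand(128, 128)   # stand-in for acquisition at t2

    score, ssim_map = structural_similarity(img_t1, img_t2, data_range=1.0, full=True)
    diff_image = 1.0 - ssim_map         # high values = likely changed pixels
    print(f"global SSIM: {score:.3f}")
    ```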

  • 216.
    Kusetogullari, Hüseyin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Yavariabdi, Amir
    Karatay University, TUR.
    Evolutionary multiobjective multiple description wavelet based image coding in the presence of mixed noise in images (2018). In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 73, pp. 1039-1052. Journal article (Refereed)
    Abstract [en]

    In this paper, a novel method for the generation of multiple description (MD) wavelet based image coding is proposed using Multi-Objective Evolutionary Algorithms (MOEAs). The complexity of the multimedia transmission problem increases for MD coders if an input image is affected by any type of noise. In this case, it is necessary to solve two different problems: designing the optimal side quantizers and estimating the optimal parameters of the denoising filter. Existing MD coding (MDC) generation methods are capable of solving only one problem, designing side quantizers from a given noise-free image, but they can fail to reduce noise on the descriptions if applied to a noisy image, which causes poor quality multimedia transmission in networks. The proposed method overcomes these difficulties to provide effective multimedia transmission in lossy networks. To achieve this, the Dual Tree-Complex Wavelet Transform (DT-CWT) is first applied to the noisy image to obtain the subbands or sets of coefficients, which are used as the search space in the optimization problem. After that, two different objective functions are simultaneously employed in the MOEA to find Pareto optimal solutions with minimum costs by evolving the initial individuals through generations. Thus, optimal quantizers are created for MDC generation, and the obtained optimum parameters are used in the image filter to effectively remove mixed Gaussian impulse noise on the descriptions. The results demonstrate that the proposed method is robust to mixed Gaussian impulse noise and offers a significant improvement of optimal side quantizers for balanced MDC generation at different bitrates. © 2018 Elsevier B.V.

    The publication is available in full text from 2019-11-19 13:54
  • 217.
    Kuzminykh, Ievgeniia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Avatar Conception for "Thing" Representation in Internet of Things (2018). In: Proceedings of SNCNW 2018, Karlskrona, May 31 - June 1, 2018, 2018, pp. 46-49. Conference paper (Refereed)
    Abstract [en]

    The complexity of ensuring IoT security lies in the fact that the system is heterogeneous and consists of many assets on each architectural layer. Many experts in IoT security focus on threat analysis and risk assessments to estimate the impact if a security incident or a breach occurs.

    In order to derive the general security requirements for an IoT system using threat risk modelling, the first thing to do is to identify the main security stakeholders, security assets, possible attacks and, finally, threats for the IoT system. Using this general IoT threat model as a basis, a specific set of security objectives can be created for a specific IoT application domain.

    In this paper we try to highlight the assets that are necessary for further analysis of the threat model for the Internet of Things. We also specify the stakeholders who are the connecting link between IoT devices, services and customers, as well as the link between transferring client commands and displaying them on smart things.

    For describing the model of component interaction in an IoT system we use the avatar-oriented approach, since it allows us to merge objects into a system of objects. An IoT service has a more complex structure than a single entity: an application can use several services to display all information to the end user and can aggregate data from several devices.

    To manipulate data objects, the avatar representation approach is the most appropriate, since microservices, data from things, and visual representations of data can then easily be connected or disconnected.

  • 218.
    Kuzminykh, Ievgeniia
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Analysis of Assets for Threat Risk Model in Avatar-Oriented IoT Architecture (2018). In: Internet of Things, Smart Spaces, and Next Generation Networks and Systems. NEW2AN 2018, ruSMART 2018. Lecture Notes in Computer Science, vol 11118 / [ed] Galinina, O., Andreev, S., Balandin, S., Koucheryavy, Y., Springer, 2018, Vol. 11118, pp. 52-63. Conference paper (Refereed)
    Abstract [en]

    This paper presents a new functional architecture for Internet of Things systems that uses an avatar concept to display the interaction between components of the architecture. The object-oriented representation of a "thing" in the avatar concept simplifies the building and deployment of IoT systems over the web and binds "things" to application protocols such as HTTP, CoAP, and the WebSockets mechanism. The assets and stakeholders for ensuring security in IoT are specified; these assets are needed to isolate the risks associated with each asset of the IoT system. An example of a Thing Instance's description and its functionality in JSON format is also shown in the paper.

  • 219.
    Kuzminykh, Ievgeniia
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Franksson, Robin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Liljegren, Alexander
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Measuring a LoRa Network: Performance, Possibilities and Limitations (2018). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Galinina, O., Andreev, S., Balandin, S., Koucheryavy, Y., Springer, 2018, Vol. 11118, pp. 116-128. Conference paper (Refereed)
    Abstract [en]

    Low power wide area (LPWA) technologies have become popular for IoT use cases because they enable long-range communications and allow small amounts of information to be transmitted over long distances. LPWA technologies include LTE-M, SigFox, LoRa, Symphony Link, Ingenu RPMA, Weightless, and NB-IoT. Currently, all these technologies suffer from a lack of documented deployment recommendations and have uninvestigated limitations that can affect implementations and products using them. This paper focuses on testing the LPWAN LoRa technology to learn how a LoRa network is affected by different environmental attributes such as distance, height and surrounding area, by measuring the signal strength, signal to noise ratio and any resulting packet loss. A series of experiments for various use cases is conducted using a fully deployed LoRa network made up of a gateway and a sensor available through the public network. The results show the LoRa network limitations for use cases such as forest, city and open space. These results make it possible to give recommendations to companies during the early analysis and design stages of the network life cycle, and help to choose the proper technology for deploying an IoT application.

  • 220.
    Kuzminykh, Ievgeniia
    et al.
    Harkivskij Nacionalnij Universitet Radioelectroniki, UKR.
    Snihurov, Arkadii
    Harkivskij Nacionalnij Universitet Radioelectroniki, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Testing of communication range in ZigBee technology (2017). In: 14th International Conference The Experience of Designing and Application of CAD Systems in Microelectronics, CADSM 2017 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 133-136, article id 7916102. Conference paper (Refereed)
    Abstract [en]

    In the rapidly growing Internet of Things (IoT), applications from personal electronics to industrial machines and sensors are getting wirelessly connected to the Internet. Many well-known communication technologies such as WiFi, ZigBee, Bluetooth and cellular are used to transfer data in IoT. The choice of the corresponding technology or combination of technologies depends on the application or other factors such as data requirements, communication range, security and power demands, and battery life. In this paper we focus on the ZigBee wireless technology and test ZigBee end devices in order to see how transmission range impacts quality parameters. © 2017 IEEE.

  • 221.
    Kuzminykh, Ievgeniia
    et al.
    Kharkiv National University, UKR.
    Snihurov, Arkadii
    Kharkiv National University, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Testing of communication range in ZigBee technology (2017). In: 14th International Conference The Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), Institute of Electrical and Electronics Engineers (IEEE), 2017. Conference paper (Refereed)
    Abstract [en]

    In the rapidly growing Internet of Things (IoT), applications from personal electronics to industrial machines and sensors are getting wirelessly connected to the Internet. Many well-known communication technologies such as WiFi, ZigBee, Bluetooth and cellular are used to transfer data in IoT. The choice of the corresponding technology or combination of technologies depends on the application or other factors such as data requirements, communication range, security and power demands, and battery life. In this paper we focus on the ZigBee wireless technology and test ZigBee end devices in order to see how transmission range impacts quality parameters.

  • 222.
    Kuzminykh, Ievgeniia
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Sokolov, Vladimir
    State University of Telecommunication, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Scheme for dynamic channel allocation with interference reduction in wireless sensor network (2017). In: 4th International Scientific-Practical Conference Problems of Infocommunications. Science and Technology (PIC S&T) / [ed] IEEE, IEEE, 2017, pp. 564-568. Conference paper (Refereed)
    Abstract [en]

    The paper introduces a new scheme for dynamic interference-free channel allocation. The scheme is based on additional spectral analyzers in IEEE 802.11 wireless networks. The design and implementation are presented.

  • 223.
    Lamorgese, Leonardo
    et al.
    Optrail, ITA.
    Mannino, Carlo
    Universitetet i Oslo, NOR.
    Pacciarelli, Dario
    Universita degli Studi Roma Tre, ITA.
    Törnquist Krasemann, Johanna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Train Dispatching (2018). In: Handbook of Optimization in the Railway Industry / [ed] Borndörfer, R. (et al.), Springer New York LLC, 2018, 268, pp. 265-283. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    Train rescheduling problems have received significant attention in the operations research community during the past 20-30 years. These are complex problems with many aspects and constraints to consider. This chapter defines the problem and summarizes the variety of model types and solution approaches developed over the years to address and solve the train dispatching problem from the infrastructure manager perspective. Despite all the research efforts, it is, however, only very recently that the railway industry has made significant attempts to explore the large potential in using optimization-based decision support to facilitate railway traffic disturbance management. This chapter reviews the state of practice and provides a discussion of the observed slow progress in the application of optimization-based methods in practice. A few successful implementations have been identified, but their performance, as well as the lessons learned from the development and implementation of those systems, is unfortunately only partly available to the research community or potential industry users. © 2018, Springer International Publishing AG.

  • 224.
    Li, Xin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    An Agent-based Coordination Strategy for Information Propagation in Connected Vehicle Systems (2014). Independent thesis, advanced level (degree of Master (One Year)). Student thesis (Degree project)
    Abstract [en]

    Context. Connected vehicles use sensors such as cameras or radars to collect data about surrounding environments automatically and share these data with each other or with roadside infrastructure using short-range wireless communication. Due to the large amount of information generated, strategies are required to minimize information redundancy when important information is propagated among connected vehicles. Objectives. This research aims to develop an information propagation strategy for connected vehicle systems using software agent-based coordination to reduce unnecessary message broadcasts and message propagation delay. Methods. A review of related work is used to acquire deep insight into the state-of-the-art and the state-of-practice in the subject area. Based on this review, we propose an agent-based coordination strategy for information propagation in connected vehicle systems, in which connected vehicles coordinate their message broadcast activities using auctions. After that, a simulation experiment is conducted to evaluate the proposed strategy by comparing it with existing representative strategies. Results. Results of simulation experiments and statistical tests show that the proposed agent-based coordination strategy manifests some improvements in reducing unnecessary message broadcasts and message propagation delay compared to the other strategies involved in the simulation experiments. Conclusions. In this research, we suggest a new strategy to manage the propagation of information in connected vehicle systems. According to the small scale simulation analysis, the use of auctions to select message transmitters enables our proposed strategy to achieve some improvements in reducing unnecessary message broadcasts and propagation delay compared to existing strategies. Thus, with the help of our proposed strategy, unnecessary message broadcasts can be minimized and the communication resources of connected vehicle systems can be utilized effectively. Also, important safety messages can be propagated to drivers faster, and negative traffic events could be averted.
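
    The thesis's actual bid function is not given in this listing; the following is only a hedged sketch of the auction idea, where vehicles bid, for instance, their distance from the previous sender and the highest bidder rebroadcasts:

    ```python
    # Hedged sketch of auction-based rebroadcast selection: each receiving
    # vehicle bids its (squared) distance to the sender; farther vehicles
    # extend coverage more, so the highest bidder wins and rebroadcasts,
    # suppressing redundant transmissions. All data is invented.
    def select_transmitter(candidates, sender_pos):
        def bid(v):
            dx, dy = v["pos"][0] - sender_pos[0], v["pos"][1] - sender_pos[1]
            return dx * dx + dy * dy
        return max(candidates, key=bid)

    vehicles = [
        {"id": "car1", "pos": (10, 0)},
        {"id": "car2", "pos": (40, 5)},
        {"id": "car3", "pos": (25, -3)},
    ]
    winner = select_transmitter(vehicles, sender_pos=(0, 0))
    print(winner["id"], "rebroadcasts the safety message")   # car2
    ```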

  • 225.
    Li, Zheng
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Wang, Hua
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Mobile Game for Encouraging Active Listening among Deaf and Hard of Hearing People: Comparing the usage between mobile and desktop game (2015). Independent thesis, advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Context. Daily active listening is important for deaf and hard of hearing (DHH) people for their hearing rehabilitation, but the related hearing activities are usually not enough for them for various reasons. Although some traditional desktop computer-assisted tools have been created for encouraging active listening, their usage rate is not high. Nowadays, mobile smart devices are becoming widely used and easily accessible all around the world, and game applications on these devices are good tools for training related activities. However, there are few games on the market designed for the DHH, especially ones aiming to engage them in active listening. Therefore, such a game on a mobile platform is the inspiration for increasing their everyday active listening.

    Objective. In this study, an audio-based mobile game application called the Music Puzzle was created for the Android operating system to encourage the DHH in their active listening. With the aim of making the game usable and engaging for real use, we evaluated the game and conducted experiments on its usage, to see whether it would be used more than another traditional hearing game on a desktop platform and bring a greater amount of active listening for the DHH.

    Methods. Overall, the methods of literature review, game development, preliminary and evaluation experiments, as well as a tracking study were used. In the development phase, interaction design theories and techniques were applied to assist the design work; Android and Pure Data were employed for the software implementation. In the evaluation phase, the System Usability Scale (SUS) and the Intrinsic Motivation Inventory (IMI) questionnaire were used to test the game's usability and engagement, respectively. Then a four-week tracking study was conducted to acquire usage data for the mobile game among the target group. Afterwards, the data were collected and compared with the usage data of the desktop game using the statistical method of paired sample t-tests.

    Results. In the preliminary experiments, most of the participants reported enjoying playing the Music Puzzle and being willing to use it. The subsequent experiment gave good results on the game's usability and engagement. The final tracking study shows that most participants activated and played the Music Puzzle during the given time period. Compared with the desktop game, the DHH spent a significantly greater amount of time playing the mobile game.

    Conclusion. The study indicates that the Music Puzzle has good usability and is engaging. Compared with the desktop game, the Music Puzzle mobile game is a more effective tool for encouraging and increasing the amount of active listening time among DHH people in their everyday life.

  • 226.
    Liang, Xusheng
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Comparative study of table layout analysis: Layout analysis solutions study for Swedish historical hand-written document (2019). Independent thesis, advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Background. Nowadays, information retrieval systems have become more and more popular; they help people retrieve information more efficiently and accelerate daily tasks. Within this context, image processing technology plays an important role in transcribing the content of printed or handwritten documents into digital data for an information retrieval system. This transcription procedure is called document digitization. In this procedure, image processing techniques such as layout analysis and word recognition are employed to segment the document content and transcribe the image content into words. At this point, a Swedish company (ArkivDigital® AB) has a demand to transcribe their document data into digital data.

    Objectives. In this study, the aim is to find effective solutions to extract the document layout of Swedish handwritten historical documents, which are characterized by their tabular forms containing handwritten content. The outcomes of applying OCRopus, OCRfeeder, traditional image processing techniques and machine learning techniques to Swedish historical handwritten documents are compared and studied.

    Methods. Implementation and experimentation are used to develop three comparative solutions in this study: one is Hessian filtering with a mask operation; another is Gabor filtering with a morphological open operation; the last is Gabor filtering with machine learning classification. In the last solution, different alternatives were explored to build the document layout extraction pipeline: first, the Hessian filter and the Gabor filter are evaluated; second, images are filtered with the better of the two filters and the filtered image is refined with the Hough line transform method; third, transfer learning features and custom features are extracted; fourth, a classifier is fed with the previously extracted features and the result is analyzed. After implementing all the solutions, a sample set of the Swedish historical handwritten documents is processed with each solution and their performance is compared through a survey.

    Results. Both the open source OCR systems OCRopus and OCRfeeder fail to deliver the outcome, because these systems are designed to handle general document layouts instead of table layouts. The traditional image processing solutions work in more than half of the cases, but do not work well. Combining traditional image processing techniques with machine learning techniques gives the best result, but at a great time cost.

    Conclusions. The results show that the existing OCR systems cannot carry out the layout analysis task on our Swedish historical handwritten documents. Traditional image processing techniques are capable of extracting the general table layout in these documents. By introducing machine learning techniques, a better and more accurate table layout can be extracted, but at a bigger time cost.
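
    One building block named in the methods, Gabor filtering to emphasise ruled table lines before Hough refinement, can be sketched as follows (a random image stands in for a scanned page; the frequency and orientation parameters are assumptions):

    ```python
    # Gabor filtering to emphasise horizontal and vertical table ruling.
    import numpy as np
    from skimage.filters import gabor

    page = np.random.rand(128, 128)   # stand-in for a scanned document page

    # theta = 0 responds to vertical structure, theta = pi/2 to horizontal;
    # the frequency would be tuned to the stroke width of the ruling lines.
    resp_v, _ = gabor(page, frequency=0.2, theta=0)
    resp_h, _ = gabor(page, frequency=0.2, theta=np.pi / 2)
    line_map = np.abs(resp_v) + np.abs(resp_h)   # input to Hough refinement
    print(line_map.shape)
    ```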

  • 227.
    Linder, Magnus
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Palm, Emil
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Asynchronous Shading in Object Space Lighting Compared to Forward Rendering (2017). Independent thesis, basic level (Bachelor's degree), 10 points / 15 HE credits. Student thesis (Degree project)
    Abstract [en]

    Context: Rendering 3D scenes in real-time applications is becoming increasingly computationally heavy. Applications are expected to render high quality graphics without dropping below a satisfactory frame rate. A huge part of the graphics computation goes toward complex lighting of 3D models that has to be recomputed every frame. Object Space Lighting (OSL) is a recent technique that is able to store lighting data between frames. This thesis researches how storing all shading data can impact the performance of an application. Objective: An OSL application is tested against a standard Forward rendering application in terms of performance, image quality and perceptual deviations. Experiments are conducted using a scene that can have either still or moving lights, producing the results for the research. Results: Analysing the images from the results indicates that OSL is capable of rendering almost identical images to Forward rendering; the images are not perceptually different either. In terms of performance, the hardware used for the experiments determines which application performs better when rendering a scene with non-moving lights. Our OSL application shows clear weaknesses when rendering a scene with moving lights, however. Conclusion: Saving all lighting data with OSL is an interesting technique that with further research in the field could prove to be useful in a real-time application under certain conditions.

  • 228.
    Liu, F.
    et al.
    Shandong Normal Univ, Sch Management Sci & Engn, Jinan 250014, Peoples R China.
    Wang, L.
    Shandong Normal Univ, Sch Management Sci & Engn, Jinan 250014, Peoples R China.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Zhao, H.
    Univ Calif Davis, Dept Comp Sci, Davis, CA 95616, USA.
    Analysis of network trust dynamics based on the evolutionary game (2015). In: Scientia Iranica, ISSN 1026-3098, Vol. 22, no. 6, pp. 2548-2557. Journal article (Refereed)
    Abstract [en]

    Trust, as a multi-disciplinary research domain, is of high importance in the area of network security, and it has increasingly become an important mechanism to solve the issues of distributed network security. Trust is also an effective mechanism to simplify a complex society, and is a source for promoting personal or social cooperation. From the perspective of network ecological evolution, we propose a model of the P2P Social Ecological Network. Based on game theory, we also put forward network trust dynamics and network eco-evolution through analysis of network trust and the development of a dynamics model. In this article, we further analyze the dynamic equation and the evolutionary trend of the trust relationship between nodes using the replicator dynamics principle. Finally, we reveal the law of trust evolution dynamics, and the simulation results clearly show that the dynamics of trust can be effective in promoting the stability and evolution of networks. © 2015 Sharif University of Technology. All rights reserved.
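
    The replicator dynamics underlying the trust analysis can be illustrated numerically; this generic two-strategy (trust/defect) integration uses invented payoffs and is not the paper's exact model:

    ```python
    # Replicator dynamics x' = x (f_C(x) - f_avg(x)) for a two-strategy
    # trust game, integrated with simple Euler steps. Payoffs are made up.
    import numpy as np

    # payoff[i][j]: row strategy i against column strategy j (0 = trust)
    payoff = np.array([[3.0, 0.0],
                       [4.0, 1.0]])

    x = 0.9            # initial share of trusting (cooperating) nodes
    dt, steps = 0.01, 2000
    for _ in range(steps):
        mix = np.array([x, 1 - x])
        f_c = payoff[0] @ mix          # fitness of trusting nodes
        f_d = payoff[1] @ mix          # fitness of defecting nodes
        f_avg = x * f_c + (1 - x) * f_d
        x += dt * x * (f_c - f_avg)
    print(f"share of trusting nodes after integration: {x:.3f}")
    ```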

  • 229.
    Liu, Fengming
    et al.
    Shandong Normal University, CHI.
    Zhu, Xiaoqian
    Shandong Normal University, CHI.
    Hu, Yuxi
    UC Davis, USA.
    Ren, Lehua
    Shandong Normal University, CHI.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A cloud theory-based trust computing model in social networks (2017). In: Entropy, ISSN 1099-4300, Vol. 19, no. 1, article id 11. Journal article (Refereed)
    Abstract [en]

    How to develop a trust management model and then efficiently control and manage nodes is an important issue in the scope of social network security. In this paper, a trust management model based on a cloud model is proposed. The cloud model uses a specific computation operator to achieve the transformation from qualitative concepts to quantitative computation. It can also effectively express the fuzziness and randomness of subjective trust, and the relationship between them. Node trust is divided into reputation trust and transaction trust, and evaluation methods are designed for each. Firstly, a two-dimensional trust cloud evaluation model is designed based on a node's comprehensive and trading experience to determine the reputation trust. The expected value reflects the average trust status of nodes, while entropy and hyper-entropy are used to describe the uncertainty of trust. Secondly, the calculation methods for the proposed direct transaction trust and recommendation transaction trust comprehensively compute the transaction trust of each node. Then, choosing strategies were designed for nodes to trade based on the trust cloud. Finally, the results of a simulation experiment in P2P network file sharing on an experimental platform directly reflect the objectivity, accuracy and robustness of the proposed model, which can also effectively identify malicious or unreliable service nodes in the system. In addition, it can be used to promote the service reliability of nodes with high credibility, by which the stability of the whole network is improved. © 2016 by the authors.
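
    The cloud model's basic operator, the normal cloud generator, turns the qualitative triple (Ex, En, He) of expected value, entropy and hyper-entropy into quantitative cloud drops with membership degrees; here is a minimal sketch (the paper's trust-specific formulas are not reproduced, and all parameter values are invented):

    ```python
    # Normal cloud generator: En' ~ N(En, He^2), x ~ N(Ex, En'^2),
    # membership mu = exp(-(x - Ex)^2 / (2 En'^2)).
    import numpy as np

    def normal_cloud(Ex, En, He, n=1000, seed=0):
        rng = np.random.default_rng(seed)
        En_prime = rng.normal(En, He, n)          # uncertain entropy per drop
        x = rng.normal(Ex, np.abs(En_prime))      # drop positions
        mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))   # membership
        return x, mu

    drops, membership = normal_cloud(Ex=0.8, En=0.05, He=0.01)
    print(f"mean drop: {drops.mean():.3f}, mean membership: {membership.mean():.3f}")
    ```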

  • 230.
    Ljung, Alexander
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Knutsson, Hannes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Cost-Effective Positioning based on WiFi-Probes: A Quantitative Study: Deriving the Position of a Smartphone using the Signal Strength of WiFi-Probes2018Självständigt arbete på grundnivå (kandidatexamen), 10 poäng / 15 hpStudentuppsats (Examensarbete)
    Abstract [en]

In modern society, almost everyone has a smartphone, and these devices almost always use WiFi networking. To identify nearby WiFi access points, a device sends out WiFi probing broadcasts; nearby access points respond to let the device know that they are within reach. This technique is called active scanning. This paper aims to answer whether it is possible to use the signal strength of these broadcasts to localize the transmitting device. We are interested in the feasibility of creating this kind of system and the accuracy it would be able to provide.

This is a quantitative study where we produce our results based on experiments, measurements and observations. The experiments were set in a large, square-shaped area, with a sensor placed at each corner of the area within which the smartphone is tracked. The smartphone sends WiFi probing broadcasts that are monitored and measured by the sensors. The strength of the broadcast signal is converted into the relative distance between the device's position and each sensor, and the four distances, one from each sensor, are then converted into a position within the area using trilateration. To measure the accuracy of the system, the true position of the device is compared against the position calculated by the system from signal strength alone, and the deviation in distance between the two locations is calculated.

The experiments resulted in a positioning system that was able to estimate positions within an 80 x 80 m area. Fourteen position readings were taken, resulting in a mean deviation of 16.6 meters from the true location and a root mean squared error of 19.5 meters. We concluded that taking more readings at the same position gave a significant increase in accuracy, at the expense of time. Using single measurements would be more practical, but would not produce reliable positions.

    Keywords: WiFi, Probe Broadcast, Local Positioning System, Trilateration, RSSI.
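
    A minimal sketch of the positioning pipeline the thesis describes, under assumed constants: RSSI is mapped to distance with a log-distance path-loss model (the reference RSSI and path-loss exponent below are hypothetical, not measured values from the thesis), and the four distances are combined by least-squares trilateration.

        import numpy as np

        def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.5):
            """Log-distance path-loss model; both constants are assumptions."""
            return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

        def trilaterate(sensors, distances):
            """Least-squares 2D position from >= 3 sensors via the linearized system."""
            (x0, y0), d0 = sensors[0], distances[0]
            A, b = [], []
            for (xi, yi), di in zip(sensors[1:], distances[1:]):
                A.append([2 * (xi - x0), 2 * (yi - y0)])
                b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
            pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return pos

        sensors = [(0, 0), (80, 0), (80, 80), (0, 80)]   # one sensor per corner
        rssi = [-62, -70, -75, -68]                      # example probe readings
        dists = [rssi_to_distance(r) for r in rssi]
        print("estimated position:", trilaterate(sensors, dists))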

  • 231.
    Llewellyn, Tim
    et al.
    nVISO SA, CHE.
    Milagro Fernández Carrobles, María del
    University of Castilla-La Mancha, ESP.
    Deniz, Oscar
    University of Castilla-La Mancha, ESP.
    Fricker, Samuel
    i4Ds Centre for Requirements Engineering, CHE.
    Storkey, Amos
    University of Edinburgh, GBR.
    Pazos, Nuria
    Haute Ecole Specialisee de Suisse, CHE.
    Velikic, Gordana
    RT-RK, SRB.
    Leufgen, Kirsten
    SCIPROM SARL, CHE.
    Dahyot, Rozenn
    Trinity College Dublin, IRL.
    Koller, Sebastian
    Technical University Munich, DEU.
    Goumas, Georgios
    Technical University of Athens, GRC.
    Leitner, Peter
    SYNYO GmbH, AUT.
    Dasika, Ganesh
    ARM Ltd., GBR.
    Wang, Lei
    ZF Friedrichshafen AG, DEU.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    BONSEYES: Platform for Open Development of Systems of Artificial Intelligence2017Konferensbidrag (Övrigt vetenskapligt)
    Abstract [en]

The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project focuses on using artificial intelligence in low-power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It will bring about orders-of-magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will solve a causality problem for organizations that lack access to data and models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate the effectiveness and technical feasibility, and to quantify the real-world improvements in efficiency, security, performance, effort and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. The Bonseyes platform's capabilities are intended to align with the European FI-PPP activities and to take advantage of its flagship project, FIWARE. This paper provides a description of the project motivation, goals and preliminary work.

  • 232.
    Lokan, Chris
    et al.
    UNSW Canberra, Australia.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Investigating the use of moving windows to improve software effort prediction: a replicated study2017Ingår i: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, nr 2, s. 716-767Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    To date most research in software effort estimation has not taken chronology into account when selecting projects for training and validation sets. A chronological split represents the use of a project’s starting and completion dates, such that any model that estimates effort for a new project p only uses as its training set projects that have been completed prior to p’s starting date. A study in 2009 (“S3”) investigated the use of chronological split taking into account a project’s age. The research question investigated was whether the use of a training set containing only the most recent past projects (a “moving window” of recent projects) would lead to more accurate estimates when compared to using the entire history of past projects completed prior to the starting date of a new project. S3 found that moving windows could improve the accuracy of estimates. The study described herein replicates S3 using three different and independent data sets. Estimation models were built using regression, and accuracy was measured using absolute residuals. The results contradict S3, as they do not show any gain in estimation accuracy when using windows for effort estimation. This is a surprising result: the intuition that recent data should be more helpful than old data for effort estimation is not supported. Several factors, which are discussed in this paper, might have contributed to such contradicting results. Some of our future work entails replicating this work using other datasets, to understand better when using windows is a suitable choice for software companies.
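
    A minimal sketch of the chronological-split idea under comparison, assuming a hypothetical project table with size, effort and finish dates; window=None gives the growing window (entire history), window=N the moving window of the N most recently finished projects.

        import pandas as pd
        from sklearn.linear_model import LinearRegression

        def estimate(projects, new_project, window=None):
            """Train only on projects finished before the new project starts."""
            train = projects[projects["finished"] < new_project["started"]]
            train = train.sort_values("finished")
            if window is not None:
                train = train.tail(window)          # keep the N most recent projects
            model = LinearRegression().fit(train[["size"]], train["effort"])
            x_new = pd.DataFrame({"size": [new_project["size"]]})
            return float(model.predict(x_new)[0])

        # Hypothetical history: size in function points, effort in hours.
        projects = pd.DataFrame({
            "size":     [100, 150, 120, 200, 180, 160],
            "effort":   [800, 1100, 950, 1600, 1400, 1250],
            "finished": pd.to_datetime(["2015-01-01", "2015-06-01", "2016-01-01",
                                        "2016-06-01", "2017-01-01", "2017-06-01"]),
        })
        new_project = {"size": 170, "started": pd.Timestamp("2017-08-01")}
        print("growing window:", estimate(projects, new_project))
        print("moving window :", estimate(projects, new_project, window=3))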

  • 233.
    Lokby, Patrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Jönsson, Manfred
    Preventing SQL Injections by Hashing the Query Parameter Data2017Självständigt arbete på grundnivå (kandidatexamen), 10 poäng / 15 hpStudentuppsats (Examensarbete)
    Abstract [en]

Context. Many applications today use databases to store user information or other data for their applications. This information can be accessed through various languages depending on the type of database. Databases that use SQL can be maliciously exploited with SQL injection attacks. This type of attack involves inserting SQL code in the query parameter. The injected code sent from the client will then be executed on the database. This can lead to unauthorized access to data or other modifications within the database.

Objectives. In this study we investigate whether a system can be built which prevents SQL injection attacks from succeeding on web applications connected to a MySQL database. In the intended model, a proxy is placed between the web server and the database. The purpose of the proxy is to hash the SQL query parameter data and remove any characters that the database would interpret as comment syntax. By processing each query before it reaches its destination, we believe we can prevent vulnerable SQL injection points from being exploited.

Methods. A literature study is conducted to gain the knowledge needed to accomplish the objectives of this thesis. A proxy is developed and tested within a system containing a web server and a database. The tests are analyzed to arrive at a conclusion that answers our research questions.

Results. Six tests are conducted, covering the detection of vulnerable SQL injection points and the difference in delay on the system with and without the proxy. The results are presented and analyzed in the thesis.

Conclusions. We conclude that the proxy prevents SQL injection points from being vulnerable on the web application. Vulnerable SQL injection points are still reported even with the proxy deployed in the system. The web server is able to process more HTTP requests that require a database query when the proxy is not used within the system. More studies are required, since there are still vulnerable SQL injection points.
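
    A minimal sketch of the idea the thesis describes (hashing the query parameter data and stripping comment syntax). This is illustrative only, not the thesis's proxy implementation, and production systems would normally also rely on parameterized queries.

        import hashlib
        import re

        COMMENT_SYNTAX = re.compile(r"(--|#|/\*|\*/)")   # MySQL comment markers

        def sanitize_param(value: str) -> str:
            """Strip comment syntax and hash the raw parameter, so whatever the
            client injected can no longer be interpreted as SQL."""
            cleaned = COMMENT_SYNTAX.sub("", value)
            return hashlib.sha256(cleaned.encode("utf-8")).hexdigest()

        # The application stores and looks up the hash instead of the raw value:
        user_input = "alice' OR '1'='1' --"
        query = ("SELECT * FROM users WHERE username_hash = '%s'"
                 % sanitize_param(user_input))
        print(query)   # injected quotes/keywords are gone from the query string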

  • 234.
    Lopez-Rojas, Edgar Alonso
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Applying Simulation to the Problem of Detecting Financial Fraud2016Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    This thesis introduces a financial simulation model covering two related financial domains: Mobile Payments and Retail Stores systems.

     

The problem we address in these domains is different types of fraud. We limit ourselves to isolated cases of relatively straightforward fraud. However, the ultimate aim of this thesis is to introduce our approach to the use of computer simulation for fraud detection and its applications in financial domains. Fraud is an important problem that impacts the whole economy. Currently, there is a lack of public research into the detection of fraud. One important reason is the lack of transaction data, which is often sensitive. To address this problem we present a mobile money Payment Simulator (PaySim) and a Retail Store Simulator (RetSim), which allow us to generate synthetic transactional data containing both normal customer behaviour and fraudulent behaviour.

     

These simulations are Multi-Agent-Based Simulations (MABS) and were calibrated using real data from financial transactions. We developed agents that represent the clients and merchants in PaySim and the customers and salesmen in RetSim. The normal behaviour was based on behaviour observed in data from the field, and is codified in the agents as rules of transactions and interaction between clients and merchants, or customers and salesmen. Some of these agents were intentionally designed to act fraudulently, based on observed patterns of real fraud. We introduced known signatures of fraud in our model and simulations to test and evaluate our fraud detection methods. The resulting behaviour of the agents generates a synthetic log of all transactions as a result of the simulation. This synthetic data can be used to further advance fraud detection research, without leaking sensitive information about the underlying data or breaking any non-disclosure agreements.

     

Using statistics and social network analysis (SNA) on real data, we calibrated the relations between our agents and generated realistic synthetic data sets that were verified against the domain and validated statistically against the original source.

     

We then used the simulation tools to model common fraud scenarios to ascertain exactly how effective fraud detection techniques such as the simplest form of statistical threshold detection, which is perhaps the most common in use, really are. The preliminary results show that threshold detection is effective enough at keeping fraud losses at a set level. This means that there seems to be little economic room for improved fraud detection techniques.
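
    A minimal sketch of the simplest form of statistical threshold detection referred to here, on hypothetical transaction amounts: flag anything more than k standard deviations above the mean.

        import statistics

        def threshold_flag(amounts, k=3.0):
            """Flag transactions more than k standard deviations above the mean."""
            mu = statistics.mean(amounts)
            sd = statistics.stdev(amounts)
            limit = mu + k * sd
            return [(i, a) for i, a in enumerate(amounts) if a > limit], limit

        # Hypothetical daily refund amounts with one injected fraud signature:
        amounts = [120, 95, 110, 130, 105, 98, 940, 115, 102]
        flagged, limit = threshold_flag(amounts, k=2.0)
        print("threshold:", round(limit, 1), "flagged:", flagged)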

     

We also implemented other applications for the simulator tools, such as the set-up of a triage model and the measurement of the cost of fraud. This proved to be an important aid for managers who aim to prioritise fraud detection and want to know how much they should invest in fraud detection to keep the losses below a desired limit under different experimented and expected fraud scenarios.

  • 235.
    Lopez-Rojas, Edgar Alonso
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Extending the RetSim Simulator for Estimating the Cost of fraud in the Retail Store Domain2015Ingår i: Proceedings of the European Modeling and Simulation Symposium, 2015, 2015Konferensbidrag (Refereegranskat)
    Abstract [en]

    RetSim is a multi-agent based simulator (MABS) calibrated with real transaction data from one of the largest shoe retailers in Scandinavia. RetSim allows us to generate synthetic transactional data that can be publicly shared and studied without leaking business sensitive information, and still preserve the important characteristics of the data.

    In this paper we extended the fraud model of RetSim to cover more cases of internal fraud perpetrated by the staff and allow inventory control to flag even more suspicious activity. We also generated sufficient number of runs using a range of fraud parameters to cover a vast number of fraud scenarios that can be studied. We then use RetSim to simulate some of the more common retail fraud scenarios to ascertain exactly the cost of fraud using different fraud parameters for each case.

  • 236.
    Lopez-Rojas, Edgar Alonso
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    On the Simulation of Financial Transactions for Fraud Detection Research2014Licentiatavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

This thesis introduces a financial simulation model covering two related financial domains: Mobile Payments and Retail Stores systems. The problem we address in these domains is different types of fraud. We limit ourselves to isolated cases of relatively straightforward fraud. However, the ultimate aim of this thesis is to cover more complex types of fraud, such as money laundering, that comprise multiple organisations and domains. Fraud is an important problem that impacts the whole economy. Currently, there is a general lack of public research into the detection of fraud. One important reason is the lack of transaction data, which is often sensitive. To address this problem we present a Mobile Money Simulator (PaySim) and a Retail Store Simulator (RetSim), which allow us to generate synthetic transactional data. These are multi-agent-based simulations calibrated on real transaction data. We developed agents that represent the clients in PaySim and the customers and salesmen in RetSim. The normal behaviour was based on behaviour observed in data from the field, and is codified in the agents as rules of transactions and interaction between clients, or customers and salesmen. Some of these agents were intentionally designed to act fraudulently, based on observed patterns of real fraud. We introduced known signatures of fraud in our model and simulations to test and evaluate our fraud detection results. The resulting behaviour of the agents generates a synthetic log of all transactions as a result of the simulation. This synthetic data can be used to further advance fraud detection research, without leaking sensitive information about the underlying data. Using statistics and social network analysis (SNA) on real data we could calibrate the relations between staff and customers and generate realistic synthetic data sets that were validated statistically against the original. We then used RetSim to model two common retail fraud scenarios to ascertain exactly how effective the simplest form of statistical threshold detection commonly in use could be. The preliminary results show that threshold detection is effective enough at keeping fraud losses at a set level, so there seems to be little economic room for improved fraud detection techniques.

  • 237.
    Lopez-Rojas, Edgar Alonso
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Using the RetSim simulator for fraud detection research2015Ingår i: International Journal of Simulation and Process Modelling, ISSN 1740-2123, E-ISSN 1740-2131, Vol. 10, nr 2Artikel i tidskrift (Refereegranskat)
    Abstract [en]

Managing fraud is important for business, retail and financial alike. One method to manage fraud is by detection, where transactions etc. are monitored and suspicious behaviour is flagged for further investigation. There is currently a lack of public research in this area. The main reason is the sensitive nature of the data. Publishing real financial transaction data would seriously compromise the privacy of both customers and companies alike. We propose to address this problem by building RetSim, a multi-agent based simulator (MABS) calibrated with real transaction data from one of the largest shoe retailers in Scandinavia. RetSim allows us to generate synthetic transactional data that can be publicly shared and studied without leaking business sensitive information, and still preserve the important characteristics of the data.

We then use RetSim to model two common retail fraud scenarios to ascertain exactly how effective the simplest form of statistical threshold detection could be. The preliminary results of our tested fraud detection method show that threshold detection is effective enough at keeping fraud losses at a set level, so that there is little economic room for improved techniques.

  • 238.
    Lopez-Rojas, Edgar Alonso
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Axelsson, Stefan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Social Simulation of Commercial and Financial Behaviour for Fraud Detection Research2014Ingår i: Advances in Computational Social Science and Social Simulation / [ed] Miguel, Amblard, Barceló & Madella, Barcelona, 2014Konferensbidrag (Refereegranskat)
    Abstract [en]

We present a social simulation model that covers three main financial services: Banks, Retail Stores, and Payment systems. Our aim is to address the problem of a lack of public data sets for fraud detection research in each of these domains, and to provide a variety of fraud scenarios such as money laundering, sales fraud (based on refunds and discounts), and credit card fraud. Currently, there is a general lack of public research concerning fraud detection in the financial domains in general and these three in particular. One reason for this is the secrecy and sensitivity of the customer data that is needed to perform research. We present PaySim, RetSim, and BankSim as three case studies of social simulations for financial transactions using agent-based modelling. These simulators enable us to generate synthetic transaction data of normal behaviour of customers, as well as known fraudulent behaviour. This synthetic data can be used to further advance fraud detection research, without leaking sensitive information about the underlying data. Using statistics and social network analysis (SNA) on real data we can calibrate the relations between staff and customers, and generate realistic synthetic data sets. The generated data represents real-world scenarios that are found in the original data, with the added benefit that this data can be shared with other researchers for testing similar detection methods without concerns for privacy and other restrictions present when using the original data.

  • 239.
    Lopez-Rojas, Edgar Alonso
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Axelsson, Stefan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Gjovik University College.
    Using the RetSim Fraud Simulation Tool to set Thresholds for Triage of Retail Fraud2015Ingår i: SECURE IT SYSTEMS, NORDSEC 2015 / [ed] Sonja Buchegger, Mads Dam, Springer, 2015, Vol. 9417, s. 156-171Konferensbidrag (Refereegranskat)
    Abstract [en]

The investigation of fraud in business has been a staple for the digital forensics practitioner since the introduction of computers in business. Much of this fraud takes place in the retail industry. When trying to stop losses from insider retail fraud, triage, i.e. the quick identification of behaviour suspicious enough to warrant further investigation, is crucial, given the amount of normal, or insignificant, behaviour. It has previously been demonstrated that simple statistical threshold classification is a very successful way to detect fraud (Lopez-Rojas 2015). However, in order to do triage successfully the thresholds have to be set correctly. Therefore, we present a simulation-based method to aid the user in accomplishing this, by simulating relevant fraud scenarios that are foreseen as possible and expected, in order to calculate optimal threshold limits. This method has the advantage over arbitrary thresholds that it reduces the amount of labour spent on false positives and gives additional information, such as the total cost of a specific modelled fraud behaviour, for setting up a proper triage process. With our method we argue that we contribute to the allocation of resources for further investigations by optimizing the thresholds for triage and estimating the possible total cost of fraud. Using this method we manage to keep the losses below a desired percentage of sales, which the manager considers acceptable for keeping the business properly running.
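
    A minimal sketch of the threshold-setting idea: sweep candidate thresholds over simulated (amount, is_fraud) records and pick the one that minimizes total cost. The cost figures and transactions below are hypothetical, not values from the paper.

        def total_cost(txns, threshold, investigation_cost=50.0):
            """Cost of a threshold policy: investigate everything above it
            (labour cost), absorb every fraud below it (fraud loss)."""
            cost = 0.0
            for amount, is_fraud in txns:
                if amount > threshold:
                    cost += investigation_cost      # triaged for investigation
                elif is_fraud:
                    cost += amount                  # undetected fraud loss
            return cost

        # Hypothetical simulated transactions, as a RetSim run would log them:
        txns = [(100, False), (250, False), (900, True), (120, False),
                (400, True), (80, False), (1500, True), (300, False)]
        best = min(range(0, 1600, 50), key=lambda t: total_cost(txns, t))
        print("cost-minimizing threshold:", best)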

  • 240.
    Lopez-Rojas, Edgar Alonso
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Gorton, Dan
    Axelsson, Stefan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    RetSim: A ShoeStore Agent-Based Simulation for Fraud Detection2013Ingår i: 25th European Modeling and Simulation Symposium, EMSS 2013, 2013, s. 25-34Konferensbidrag (Refereegranskat)
    Abstract [en]

RetSim is an agent-based simulator of a shoe store based on the transactional data of one of the largest retail shoe sellers in Sweden. The aim of RetSim is the generation of synthetic data that can be used for fraud detection research. Statistical analysis and a Social Network Analysis (SNA) of relations between staff and customers were used to develop and calibrate the model. Our ultimate goal is for RetSim to be usable to model relevant scenarios to generate realistic data sets that can be used by academia, and others, to develop and reason about fraud detection methods without leaking any sensitive information about the underlying data. Synthetic data has the added benefit of being easier to acquire, faster and at less cost, for experimentation, even for those that have access to their own data. We argue that RetSim generates data that usefully approximates the relevant aspects of the real data.
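
    A minimal sketch of the kind of agent-based loop such a simulator builds on, with hypothetical purchase and fraud parameters; RetSim itself is calibrated on real data and far richer than this.

        import random

        random.seed(1)

        def simulate(days=30, n_customers=50, fraud_rate=0.02):
            """Toy MABS loop: each day every customer may buy; a small fraction
            of transactions carries an injected fraud signature (inflated amount)."""
            log = []
            for day in range(days):
                for customer in range(n_customers):
                    if random.random() < 0.4:           # purchase probability
                        amount = round(random.gauss(60, 15), 2)
                        fraud = random.random() < fraud_rate
                        if fraud:
                            amount = round(amount * random.uniform(5, 10), 2)
                        log.append({"day": day, "customer": customer,
                                    "amount": amount, "fraud": fraud})
            return log

        log = simulate()
        print(len(log), "synthetic transactions,",
              sum(1 for t in log if t["fraud"]), "fraudulent")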

  • 241.
    Lopez-Rojas, Edgar
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Axelsson, Stefan
    Norwegian University of Science and Technology, NOR.
    A review of computer simulation for fraud detection research in financial datasets2016Ingår i: FTC 2016 - Proceedings of Future Technologies Conference, IEEE, 2016, s. 932-935, artikel-id 7821715Konferensbidrag (Refereegranskat)
    Abstract [en]

The investigation of fraud in the financial domain has been restricted to those who have access to relevant data. However, customer financial records are protected by law and internal policies, and are therefore not available to most researchers in the area of fraud detection. This paper presents the work of researchers who have had access to data and who take an interesting approach to fraud detection research: the generation of a synthetic data set to perform fraud detection research on. Some of the domains covered in this review include mobile money payments, e-payments, retail stores, online bank services and credit card payments. We also cover some of the most relevant surveys in the field and point out the impossibility of comparing this work due to the lack of a common public data set on which to test different results.

  • 242.
    Lopez-Rojas, Edgar
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Elmir, Ahmad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Axelsson, Stefan
    Norges Teknisk-Naturvitenskapelige Universitet, NOR.
    Paysim: A financial mobile money simulator for fraud detection2016Ingår i: 28th European Modeling and Simulation Symposium, EMSS 2016 / [ed] Bruzzone A.G.,Jimenez E.,Louca L.S.,Zhang L.,Longo F., Dime University of Genoa , 2016, s. 249-255Konferensbidrag (Refereegranskat)
    Abstract [en]

The lack of legitimate datasets on mobile money transactions for performing fraud detection research is a big problem today in the scientific community. Part of the problem is the intrinsically private nature of financial transactions, which leads to no publicly available data sets. This leaves researchers with the burden of first harnessing a dataset before performing the actual research on it. This paper proposes an approach to this problem that we have named the PaySim simulator. PaySim is a financial simulator that simulates mobile money transactions based on an original dataset. In this paper, we present a solution that ultimately yields the possibility to simulate mobile money transactions in such a way that they become similar to the original dataset. Using technology frameworks such as agent-based simulation techniques, and the application of mathematical statistics, we show that the simulated data can be as prudent as the original dataset for research.

  • 243.
    Lundberg, Lars
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ilie, Dragos
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kommunikationssystem.
    Melander, Christian
    Compuverde AB.
    Cache Support in a High Performance Fault-Tolerant Distributed Storage System for Cloud and Big Data2015Ingår i: 2015 IEEE 29TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS, IEEE Computer Society, 2015, s. 537-546Konferensbidrag (Refereegranskat)
    Abstract [en]

Due to the trends towards Big Data and Cloud Computing, one would like to provide large storage systems that are accessible by many servers. A shared storage can, however, become a performance bottleneck and a single point of failure. Distributed storage systems provide a shared storage to the outside world, but internally they consist of a network of servers and disks, thus avoiding the performance bottleneck and single-point-of-failure problems. We introduce a cache in a distributed storage system. The cache system must be fault tolerant so that no data is lost in case of a hardware failure. This requirement excludes the use of the common write-invalidate cache consistency protocols. The cache is implemented and evaluated in two steps. The first step focuses on design decisions that improve performance when only one server uses the same file. In the second step we extend the cache with features for the case when more than one server accesses the same file. The cache improves throughput significantly compared to having no cache. The two-step evaluation approach makes it possible to quantify how different design decisions affect the performance of different use cases.
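
    A minimal sketch of why write-invalidate protocols are excluded here: a toy write-through cache that acknowledges a write only after all replica stores have it, so a cache-node crash loses no data. This is illustrative only, not the paper's implementation.

        class ReplicatedWriteThroughCache:
            """Toy fault-tolerant cache: writes reach every replica store before
            the cache is populated, so cached data is never the only copy."""

            def __init__(self, replicas):
                self.replicas = replicas      # list of dict-like backing stores
                self.cache = {}

            def write(self, key, value):
                for store in self.replicas:   # write-through to all replicas first
                    store[key] = value
                self.cache[key] = value       # only then populate the cache

            def read(self, key):
                if key in self.cache:         # fast path
                    return self.cache[key]
                value = self.replicas[0][key]
                self.cache[key] = value
                return value

        cache = ReplicatedWriteThroughCache([{}, {}])
        cache.write("blk-7", b"data")
        print(cache.read("blk-7"))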

  • 244.
    MADHUKAR, ENUGURTHI
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    GENERATE TEST SELECTION STATISTICS WITH AUTOMATED MUTATION TESTING2018Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

Context: The goal of this research is to form a correlation between code packages and test cases, which is done by using automated weak mutation. The correlations formed are used as statistical test data for selecting relevant tests from the test suite, which decreases the size of the test suite and speeds up the process.

Objectives: In this study, we investigate existing methods for reducing the computational cost of automatic mutation testing. After the investigation, we build an open-source automatic mutation tool that mutates the source code, runs the test cases against the mutated code, and maps each failed test to the part of the code that was changed. The failed test cases give the correlation between the tests and the source code, which is collected as data for future use in test selection.

Methods: Literature review and experimentation were chosen for this research. A controlled experiment was done at a Swedish ICT company to mutate camera code and test it using the regression test suite. The camera code provided comes from the continuous integration of historical data. We chose experimentation because this research method is focused on analyzing data and implementing a tool using historical data. A literature review was done to learn which kinds of mutation testing reduce the computational cost of the testing process; the implementation was then evaluated through experimentation.

Results: Comparing the source code mutated with regular mutants and with weak mutants in terms of correlation accuracy, we found 62.1% correlation accuracy for regular mutation operators and 85% for weak mutation operators.

Conclusions: This research used experimentation to form the correlations for generating test selection statistics with automated mutation testing in a continuous integration environment, in order to improve test case selection in regression testing.
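
    A minimal toy example of the correlation idea: apply one arithmetic-operator mutant, rerun the tests, and record which tests the mutant kills. The function, test and package names are hypothetical.

        # Toy mutation experiment: '+' mutated to '-' in a single function.

        def add(a, b):
            return a + b

        def mutated_add(a, b):          # arithmetic-operator mutant of add()
            return a - b

        tests = {
            "test_add_positive": lambda f: f(2, 3) == 5,
            "test_add_zero":     lambda f: f(7, 0) == 7,
        }

        correlation = {}
        for name, test in tests.items():
            if test(add) and not test(mutated_add):   # this test kills the mutant
                correlation.setdefault("math_pkg.add", []).append(name)

        print(correlation)   # maps the changed code unit to the tests covering it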

  • 245.
    Magda, Mateusz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    EMG onset detection – development and comparison of algorithms2015Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

Context. An EMG (electromyographic) signal is the response of a neuromuscular system to an electrical stimulus generated either by the brain or by the spinal cord. This thesis concerns onset detection in the context of muscle activity. Estimation is based on an EMG signal observed during a muscle activity.

Objectives. The aim of this research is to propose new onset estimation algorithms and compare them with solutions currently existing in academia. Two benchmarks are considered to evaluate the algorithms' results: a muscle torque signal synchronized with an EMG signal, and a specialist's assessment. Bias, the absolute value of the mean error, and the standard deviation are the criteria taken into account.

Methods. The research is based on EMG data collected in the physiological laboratory at Wroclaw University of Physical Education. Empty samples were cut from the dataset. The proposed estimation algorithms were constructed based on EMG signal analysis and a review of state-of-the-art solutions. In order to collate them with existing solutions, a simple comparison has been conducted.

Results. Two new onset detection methods are proposed. They are compared to two estimators taken from the literature review (sAGLR & Komi). One of the presented solutions gives promising results.

Conclusions. One of the presented solutions, the Sign Changes algorithm, can be widely applied in the area of EMG signal processing. It is more accurate and less parameter-sensitive than the three other methods. This estimator can be recommended as part of an ensemble solution in further development.
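
    For context, a minimal sketch of a classic threshold-style onset detector (not the thesis's Sign Changes algorithm, whose details are not given in this abstract): onset is the first sample where the smoothed, rectified EMG exceeds the baseline mean plus h standard deviations.

        import numpy as np

        def onset_index(emg, baseline_len=200, h=3.0, win=25):
            """First sample where smoothed rectified EMG exceeds mu + h*sd of
            a quiet baseline segment; returns None if no onset is found."""
            rect = np.abs(emg - np.mean(emg[:baseline_len]))
            smooth = np.convolve(rect, np.ones(win) / win, mode="same")
            mu, sd = smooth[:baseline_len].mean(), smooth[:baseline_len].std()
            above = np.where(smooth > mu + h * sd)[0]
            return int(above[0]) if above.size else None

        # Synthetic signal: noise, then a burst of muscle activity from sample 500.
        rng = np.random.default_rng(0)
        emg = rng.normal(0, 0.05, 1000)
        emg[500:] += rng.normal(0, 0.5, 500)
        print("estimated onset sample:", onset_index(emg))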

  • 246.
    Mahadevamangalam, Srivasthav
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Energy-aware adaptation in Cloud datacenters2018Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

Context: Cloud computing provides services and resources to customers on a pay-per-use basis. As the services increase, Cloud computing uses a vast number of data centers, on the order of thousands, which consume large amounts of energy; the power consumption for cooling the data centers is very high. Recent research therefore aims at the best model to reduce the energy consumed by data centers. Energy consumption can be minimized using dynamic Virtual Machine Consolidation (VM Consolidation), in which VMs are migrated from one host to another so that energy can be saved: 70% of energy consumption is saved when an idle host is switched to sleep mode, which migrating VMs away from the host makes possible. There are many energy-adaptive heuristic algorithms for VM Consolidation. Host overload detection, host underload detection and VM selection together with VM placement are the heuristic steps of VM Consolidation, resulting in lower energy consumption in the data centers while meeting Quality of Service (QoS) requirements. In this thesis, we propose new heuristic algorithms to reduce energy consumption.

Objectives: The objective of this research is to provide an energy-efficient model that reduces energy consumption, by proposing new heuristic algorithms for the VM Consolidation technique in such a way that it consumes less energy. Presenting the advantages and disadvantages of the proposed heuristic algorithms is also among the objectives of our experiment.

Methods: A literature review was performed to gain knowledge about the workings and performance of existing algorithms using the VM Consolidation technique. We then proposed new host overload detection, host underload detection, VM selection, and VM placement heuristic algorithms. In our work, we obtained 32 combinations of host overload detection and VM selection, and two VM placement heuristic algorithms. We proposed a dynamic host underload detection algorithm which is used for all 32 combinations. The other research method chosen is experimentation, to analyze the performance of both the proposed and existing algorithms using workload traces from PlanetLab. The simulation is done using CloudSim.

Results: To compare the algorithms and obtain results, the following parameters were considered: energy consumption; number of migrations; Performance Degradation due to VM Migrations (PDM); Service Level Agreement violation Time per Active Host (SLATAH); SLA Violation (SLAV); and the combined metric of energy consumption and SLA violation (ESV), derived from PDM, SLATAH and energy consumption. We conducted a T-test and computed Cohen's d effect size to measure the significant difference and the effect size between algorithms, respectively. For analyzing performance, the results obtained from the proposed algorithms were compared with the existing algorithm. From the 32 combinations of host overload detection and VM selection heuristic algorithms, MADmedian_MaxR (Mean Absolute Deviation around the median (MADmedian) and Maximum Requested RAM (MaxR)) using the Modified Worst Fit Decreasing (MWFD) VM placement algorithm, and MADmean_MaxR (Mean Absolute Deviation around the mean (MADmean) and Maximum Requested RAM (MaxR)) using the Modified Second Worst Fit Decreasing (MSWFD) VM placement algorithm, give the best results, consuming the least energy with minimum SLA violation.

Conclusion: From the comparisons, it is concluded that the proposed algorithms perform better than the existing algorithm. Our aim was to propose a better energy-efficient model using VM Consolidation techniques, minimizing power consumption while meeting the SLAs. We proposed energy-efficient algorithms for the VM Consolidation technique, compared them with the existing algorithm, and showed that our proposed algorithms perform better. We proposed 32 combinations of heuristic algorithms (host overload detection and VM selection) with two adaptive heuristic VM placement algorithms, plus a dynamic host underload detection algorithm used for all 32 combinations. Compared with the existing algorithm, 22 combinations of host overload detection and VM selection heuristics with the MWFD (Modified Worst Fit Decreasing) VM placement algorithm, and 20 combinations with the MSWFD (Modified Second Worst Fit Decreasing) VM placement algorithm, show better performance than the existing algorithm. Thus, our proposed heuristic algorithms give better results, with minimum energy consumption and less SLA violation.
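
    A minimal sketch of the MAD-based overload test and MaxR selection named above, with a hypothetical utilization trace; the safety parameter and the exact thresholding used in the thesis may differ.

        import statistics

        def mad(values):
            """Median absolute deviation around the median."""
            med = statistics.median(values)
            return statistics.median(abs(v - med) for v in values)

        def host_overloaded(cpu_history, safety=2.5):
            """MADmedian-style test: overloaded when current utilization
            exceeds 1 - safety * MAD of the recent history."""
            return cpu_history[-1] > 1.0 - safety * mad(cpu_history)

        def select_vm_max_ram(vms):
            """MaxR selection: migrate the VM with the largest requested RAM."""
            return max(vms, key=lambda vm: vm["ram"])

        history = [0.50, 0.65, 0.55, 0.70, 0.95]   # hypothetical utilization trace
        vms = [{"id": "vm1", "ram": 2048}, {"id": "vm2", "ram": 8192}]
        if host_overloaded(history):
            print("migrate:", select_vm_max_ram(vms)["id"])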

  • 247.
    Maksimov, Yuliyan
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap. FHNW University of Applied Sciences and Arts Northwestern Switzerland, CHE.
    Fricker, Samuel
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Artifact Compatibility for Enabling Collaboration in the Artificial Intelligence Ecosystem2018Ingår i: Lecture Notes in Business Information Processing, Springer, 2018, Vol. 336, s. 56-71Konferensbidrag (Refereegranskat)
    Abstract [en]

    Different types of software components and data have to be combined to solve an artificial intelligence challenge. An emerging marketplace for these components will allow for their exchange and distribution. To facilitate and boost the collaboration on the marketplace a solution for finding compatible artifacts is needed. We propose a concept to define compatibility on such a marketplace and suggest appropriate scenarios on how users can interact with it to support the different types of required compatibility. We also propose an initial architecture that derives from and implements the compatibility principles and makes the scenarios feasible. We matured our concept in focus group workshops and interviews with potential marketplace users from industry and academia. The results demonstrate the applicability of the concept in a real-world scenario.
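
    A minimal sketch of one possible compatibility rule for such artifacts (purely illustrative; the paper's compatibility concept and architecture are richer): a producer is compatible with a consumer if it offers every field the consumer requires, with matching types.

        def compatible(producer, consumer):
            """Toy rule: every required input of the consumer must be offered
            by the producer with the same declared type."""
            return all(producer["outputs"].get(name) == typ
                       for name, typ in consumer["inputs"].items())

        # Hypothetical marketplace artifacts:
        face_detector = {"outputs": {"image": "tensor", "boxes": "list[rect]"}}
        emotion_model = {"inputs":  {"image": "tensor", "boxes": "list[rect]"}}
        print(compatible(face_detector, emotion_model))   # True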

  • 248.
    Mallavajjala, Rahul
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Micro-Interactions on Smartphones: An email notification redesign2018Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

Context. The mental effort required of a user to identify an important email from the email message alone is considered significant. A user must enter the email application to determine what type of email they have received. To let the user identify the type of email via the smartphone notification, micro-interactions are utilized. The augmentation in UX of the email notifications is then tested with the modifications made to the micro-interactions. Objectives. This study explores the micro-interactions in an email notification on a smartphone that require modification. These modifications have been implemented based on the design principles and usability heuristics of the existing literature. With this implementation, the augmentation in user experience of the modified design is determined. Methods. After a systematic literature review, the experimental design of the micro-interactions is prototyped. These prototypes are subjected to user interaction through interviews that include performing certain tasks on the prototype and obtaining the users' perspectives on usability with the SUS scale. The collected data has been analyzed to measure the increase in user experience, and tested for statistical significance so that the theory and the evidence reinforce each other. Results. In the user interviews, 100% of the users were able to perform the tasks successfully while interacting with the prototypes. The modified design of the email notifications achieves task completion times much lower than those of the existing design. Meanwhile, the SUS score for the modified design of the micro-interactions, compared with the existing design, reached the best imaginable band, reflecting the rise in user experience of the email notifications on a smartphone. On analysis, the achieved scores were statistically significant, supporting the claim of an increase in UX. Conclusions. This study concludes that the cognitive load of a smartphone's email notifications is reduced by the effective application of design principles in micro-interactions. Regardless of the increased number of email arrivals on a device, a smartphone user can now identify important emails directly from the email notifications. Following Google's Material Design principles resulted in an increased user experience of email notifications on a smartphone.
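
    For reference, the standard System Usability Scale scoring used in such studies (the answer vector below is hypothetical):

        def sus_score(answers):
            """Standard SUS scoring: ten 1-5 Likert answers; odd items
            contribute (x - 1), even items (5 - x); total scaled by 2.5."""
            assert len(answers) == 10
            contrib = [(x - 1) if i % 2 == 0 else (5 - x)   # i=0 is item 1 (odd)
                       for i, x in enumerate(answers)]
            return sum(contrib) * 2.5

        print(sus_score([5, 1, 5, 1, 5, 1, 4, 2, 5, 1]))   # 95.0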

  • 249.
    Mara, Nikhil
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Generic Deployment Tools for Telecom Apps In Cloud: terraforming2018Självständigt arbete på avancerad nivå (magisterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

Network function virtualization is gaining acceptance as a modern approach that enables telecom equipment to run as software modules, known as Virtual Network Functions (VNFs), on IT hardware on top of a cloud. To host these modules, virtual infrastructure is needed within the cloud, and for this purpose cloud orchestrators are used. These are cloud-specific, and usually one cloud orchestrator is not compatible with other clouds. Investigating generic orchestrators that are compatible with any cloud platform reduces complexity and provides a single approach for creating virtual infrastructure. Our goal is to investigate how generic orchestrators can be used to deploy VNFs on a cloud. A detailed analysis of cloud-agnostic orchestrators versus cloud-native orchestrators is done. The resources needed for a VNF are described in a template supported by generic orchestrators and compared with the template of a cloud-native orchestrator. The results are analyzed by verifying whether the orchestration engines Cloudify and Terraform can use those templates to create various resources in a cloud environment. We conclude that both orchestrators can be used for deploying VNFs on a cloud. The VNF description for Cloudify is based on TOSCA, which is slightly more complex than Terraform's, although Cloudify's TOSCA-related syntax is becoming a standard. Terraform uses the HCL syntax, similar to JSON, which makes the VNF description simpler. The same study can be done on other cloud platforms such as VMware.

Keywords: Terraform, Cloudify, Virtual Network Function
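
    A minimal sketch of driving the standard Terraform CLI workflow from Python, assuming a directory of *.tf files describing the VNF infrastructure (the directory name is hypothetical; `init` and `apply -auto-approve` are standard Terraform commands):

        import subprocess

        def terraform_apply(workdir):
            """Initialize the working directory and apply its configuration
            non-interactively; raises CalledProcessError on failure."""
            subprocess.run(["terraform", "init"], cwd=workdir, check=True)
            subprocess.run(["terraform", "apply", "-auto-approve"],
                           cwd=workdir, check=True)

        # terraform_apply("./vnf-infra")   # hypothetical directory of *.tf files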

  • 250.
    Marakani, Sindhusha
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Design and Implementation of a Tool for Automating Cluster Configuration: For a Software Defined Storage System2015Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

Context. Traditional storage systems are proving inefficient at handling the growing storage needs of a modern IT organization. The need for a cost-effective and scalable storage framework has led to the development of Software Defined Storage (SDS) solutions. SDS can be defined as an enterprise-class distributed storage solution that uses standard hardware, with all the important storage and management functions performed by intelligent software. Configuring and maintaining these storage clusters requires converting an SDS from any unknown state to a predefined, known state. This configuration of the SDS is best done with minimal human intervention, to minimize errors and save the man-hours spent in the configuration process.

Objectives. A tool for automatic configuration of an SDS storage cluster has been designed and implemented. The tool has then been used to study the man-hours saved in the configuration of the SDS cluster. The study also involves a cost-benefit analysis to estimate the break-even point for such a tool, to motivate the automation of the SDS cluster configuration process.

Methodology. In this study, experts from the field of Software Defined Storage have been interviewed to identify interesting and common states of an SDS cluster. A tool was then built that communicates with the underlying SDS storage cluster to configure it into one of the identified final states. The tool was later used to conduct experiments in which the man-hours saved by automating the cluster configuration process were calculated.

Results. The tool was validated through the results of the experiments, which show that the work time involved in the cluster configuration process is reduced by 90%-96% (depending on the complexity of the cluster configuration). The lead times of the configuration process are similar when configuring simple states, but are greatly reduced by automation when performing complex configurations.

Conclusions. As with other software automation, automating the configuration of a distributed storage cluster has proven beneficial: it saves time, reduces the human errors induced in the configuration process, and improves the repeatability of the configuration process. Through a cost-benefit analysis of the complete process, use of the tool beyond 20 days is deemed profitable for the organization.
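
    A minimal sketch of the cost-benefit arithmetic behind such a break-even estimate; all figures below are hypothetical, chosen only to land near the reported time savings and roughly 20-day break-even:

        # Hypothetical break-even estimate for automating cluster configuration:
        manual_hours_per_config = 4.0        # before automation
        automated_hours_per_config = 0.3     # ~92% reduction, within the 90-96% range
        configs_per_day = 2
        tool_development_hours = 160         # assumed one-off investment

        saved_per_day = configs_per_day * (manual_hours_per_config
                                           - automated_hours_per_config)
        break_even_days = tool_development_hours / saved_per_day
        print("break-even after", round(break_even_days, 1), "days")   # ~21.6 days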
