  • 1.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    García Martín, Eva
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Trend analysis to automatically identify heat program changes2017In: Energy Procedia, Elsevier, 2017, Vol. 116, p. 407-415Conference paper (Refereed)
    Abstract [en]

    The aim of this study is to improve the monitoring and controlling of heating systems located at customer buildings through the use of a decision support system. To achieve this, the proposed system applies a two-step classifier to detect manual changes of the temperature of the heating system. We apply data from the Swedish company NODA, active in energy optimization and services for energy efficiency, to train and test the suggested system. The decision support system is evaluated through an experiment and the results are validated by experts at NODA. The results show that the decision support system can detect changes within three days after their occurrence and only by considering daily average measurements.

  • 2.
    Adamov, Alexander
    et al.
    Harkivskij Nacionalnij Universitet Radioelectroniki, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cloud incident response model2016In: Proceedings of 2016 IEEE East-West Design and Test Symposium, EWDTS 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of incident response in clouds. A conventional incident response model is formulated to be used as a basis for the cloud incident response model. Minimization of incident handling time is considered a key criterion of the proposed cloud incident response model; this can be done at the expense of embedding infrastructure redundancy into the cloud infrastructure, represented by Network and Security Controllers, and introducing a Security Domain for threat analysis and cloud forensics. These architectural changes are discussed and applied within the cloud incident response model. © 2016 IEEE.

  • 3.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radioelectronics, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    The state of ransomware: Trends and mitigation techniques2017In: Proceedings of 2017 IEEE East-West Design and Test Symposium, EWDTS 2017, Institute of Electrical and Electronics Engineers Inc. , 2017, article id 8110056Conference paper (Refereed)
    Abstract [en]

    This paper contains an analysis of the payload of popular ransomware for the Windows, Android, Linux, and Mac OS X platforms. Namely, VaultCrypt (CrypVault), TeslaCrypt, NanoLocker, Trojan-Ransom.Linux.Cryptor, Android Simplelocker, OSX/KeRanger-A, WannaCry, Petya, NotPetya, Cerber, Spora, and Serpent ransomware were put under the microscope. A set of characteristics was proposed to be used for the analysis. The purpose of the analysis is the generalization of the collected data that describes the behavior and design trends of modern ransomware. The objective is to suggest ransomware threat mitigation techniques based on the obtained information. The novelty of the paper is the analysis methodology based on the chosen set of 13 key characteristics that helps to determine similarities and differences throughout the list of ransomware put under analysis. Most of the ransomware samples presented were manually analyzed by the authors, eliminating contradictions in descriptions of ransomware behavior published by different malware research laboratories through verification of the payload of the latest versions of the ransomware. © 2017 IEEE.

  • 4. Arlebrink, Ludvig
    et al.
    Linde, Fredrik
    Image Quality-Driven Level of Detail Selection on a Triangle Budget2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Level of detail is an optimization technique used by several modern games. Level of detail systems use simplified triangular meshes to determine the optimal combination of 3D models to use in order to meet a user-defined criterion for fast performance. Prior work has also pre-computed level of detail settings so that only the optimal settings are applied for any given view in a 3D scene.

    Objectives. The aim of this thesis is to determine the difference in image quality between the custom level of detail pre-processing approach proposed in this paper and the level of detail system built into the game engine Unity. This is investigated by implementing a framework in Unity for the proposed level of detail pre-processing approach and designing representative test scenes to collect all data samples. Once the data is collected, the image quality produced by the proposed level of detail pre-processing approach is compared to Unity's existing level of detail approach using perceptual-based metrics.

    Methods. The method used is an experiment. Unity's method was chosen because of the popularity of the engine, and it was decided to implement the proposed level of detail pre-processing approach in Unity as well, to allow the fairest possible comparison with Unity's implementation. The two approaches only differ in how the level of detail is selected; the rest of the rendering pipeline is exactly the same.

    Results. The pre-processing time ranged from 13 to 30 hours. The results showed only a small difference in image quality between the two approaches; Unity's built-in system provides a better overall image quality in two out of three test scenes.

    Conclusions. Due to the long pre-processing time and the lack of overall improvement, it was concluded that the proposed level of detail pre-processing approach is not feasible.
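
    The thesis does not publish its selection algorithm; as a rough illustration of choosing level-of-detail variants under a triangle budget, the Python sketch below greedily refines whichever model gains the most error reduction per extra triangle. The model names, triangle counts and error scores are invented placeholders.

        # Minimal, hypothetical sketch of level-of-detail selection under a global
        # triangle budget. All models, triangle counts and "error" scores below are
        # invented for illustration; the thesis' actual selection criteria are not
        # reproduced here.

        # Each model offers several LODs: (triangle_count, screen_space_error).
        models = {
            "rock":  [(5000, 0.0), (1200, 0.8), (300, 2.5)],
            "tree":  [(8000, 0.0), (2500, 1.1), (600, 3.0)],
            "house": [(12000, 0.0), (4000, 0.9), (1000, 2.2)],
        }
        TRIANGLE_BUDGET = 9000

        def select_lods(models, budget):
            # Start from the coarsest LOD of every model, then greedily refine the
            # model whose next-finer LOD buys the largest error reduction per
            # additional triangle, as long as the budget allows it.
            choice = {name: len(lods) - 1 for name, lods in models.items()}
            used = sum(models[n][i][0] for n, i in choice.items())
            while True:
                best = None
                for name, idx in choice.items():
                    if idx == 0:
                        continue  # already at the finest LOD
                    tris_now, err_now = models[name][idx]
                    tris_finer, err_finer = models[name][idx - 1]
                    extra = tris_finer - tris_now
                    if used + extra > budget:
                        continue
                    gain = (err_now - err_finer) / extra
                    if best is None or gain > best[0]:
                        best = (gain, name, extra)
                if best is None:
                    break
                _, name, extra = best
                choice[name] -= 1
                used += extra
            return choice, used

        selection, triangles = select_lods(models, TRIANGLE_BUDGET)
        print(selection, triangles)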

  • 5.
    Avdic, Adnan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ekholm, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models: An unsupervised learning approach in time-series data2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: Detecting anomalies in time-series data is a task that can be done with the help of data driven machine learning models. This thesis will investigate if, and how well, different machine learning models, with an unsupervised approach, can detect anomalies in the e-Transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system.

    Objectives: The objective of this thesis work is to compare four different machine learning models in order to find the most relevant one. The best performing models are decided by the evaluation metric F1-score. An intersection of the best models is also evaluated in order to decrease the number of false positives and make the model more precise.

    Methods: A relevant time-series data sample with data points at 10-minute intervals from the Ericsson Wallet Platform was used. A number of steps were taken, such as data handling, pre-processing, normalization, training and evaluation. Two relevant features were trained separately as one-dimensional data sets. The two features used in this thesis, which are relevant when finding delays in the system, are the Mean wait (ms) and the feature Mean * N, where N is equal to the number of calls to the system. The evaluation metrics used are true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score and the Jaccard index. The Jaccard index is a metric that reveals how similar the algorithms' detections are. Since the detection is binary, it classifies each data point in the time-series data.

    Results: The results reveal the two best performing models with regard to the F1-score. The intersection evaluation reveals if and how well a combination of the two best performing models can reduce the number of false positives.

    Conclusions: The conclusion to this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.
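
    As a rough illustration of the evaluation metrics mentioned above (F1-score, Jaccard index, and the intersection of two detectors), the following Python sketch computes them for two hypothetical anomaly detectors over invented binary labels; none of the data relates to the Ericsson Wallet Platform.

        # Minimal sketch of the evaluation described above, using invented binary
        # labels. "pred_a" and "pred_b" stand for the per-data-point detections of
        # two anomaly detectors; "truth" is the labelled ground truth.

        def confusion(truth, pred):
            tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
            fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
            fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
            tn = sum(t == 0 and p == 0 for t, p in zip(truth, pred))
            return tp, fp, fn, tn

        def f1(truth, pred):
            tp, fp, fn, _ = confusion(truth, pred)
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

        def jaccard(pred_a, pred_b):
            # Similarity of two detectors: |A intersect B| / |A union B| over flagged points.
            a = {i for i, p in enumerate(pred_a) if p}
            b = {i for i, p in enumerate(pred_b) if p}
            return len(a & b) / len(a | b) if a | b else 1.0

        truth  = [0, 0, 1, 1, 0, 1, 0, 0]
        pred_a = [0, 1, 1, 1, 0, 0, 0, 0]
        pred_b = [0, 0, 1, 1, 0, 1, 1, 0]
        both   = [x and y for x, y in zip(pred_a, pred_b)]  # intersection: fewer false positives

        print(f1(truth, pred_a), f1(truth, pred_b), f1(truth, both), jaccard(pred_a, pred_b))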

  • 6.
    Bakhtyar, Shoaib
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Designing Electronic Waybill Solutions for Road Freight Transport2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In freight transportation, a waybill is an important document that contains essential information about a consignment. The focus of this thesis is on a multi-purpose electronic waybill (e-Waybill) service, which can provide the functions of a paper waybill and is capable of storing, at least, the information present in a paper waybill. In addition, the service can be used to support other existing Intelligent Transportation System (ITS) services by utilizing synergies with the existing services. Additionally, information entities from the e-Waybill service are investigated for the purpose of knowledge-building concerning freight flows.

    A systematic review of the state of the art of the e-Waybill service reveals several limitations, such as a limited focus on supporting ITS services. Five different conceptual e-Waybill solutions (which can be seen as abstract system designs for implementing the e-Waybill service) are proposed. The solutions are investigated for functional and technical requirements (non-functional requirements), which can potentially impose constraints on a potential system for implementing the e-Waybill service. Further, the service is investigated for information and functional synergies with other ITS services. For the information synergy analysis, the required input information entities for different ITS services are identified; if at least one information entity can be provided by an e-Waybill at the right location, we regard it as a synergy. Additionally, a service design method has been proposed for supporting the process of designing new ITS services, which primarily utilizes functional synergies between the e-Waybill and different existing ITS services. The suggested method is applied for designing a new ITS service, i.e., the Liability Intelligent Transport System (LITS) service. The purpose of the LITS service is to support the process of identifying when and where a consignment has been damaged and who was responsible when the damage occurred. Furthermore, information entities from e-Waybills are utilized for building improved knowledge concerning freight flows. A freight and route estimation method has been proposed for building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.

    The results from this thesis can be used to support the choice of a practical e-Waybill service implementation, which has the possibility to provide high synergy with ITS services. This may lead to a higher utilization of ITS services and more sustainable transport, e.g., in terms of reduced congestion and emissions. Furthermore, the implemented e-Waybill service can be an enabler for collecting consignment and traffic data and converting the data into useful traffic information. In particular, the service can lead to increasing amounts of digitally stored data about consignments, which can lead to improved knowledge on the movement of freight and trucks. The knowledge may be helpful when making decisions concerning road taxes, fees, and infrastructure investments.

  • 7.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Electronic Waybill Solutions: A Systematic ReviewIn: Journal of Special Topics in Information Technology and Management, ISSN 1385-951X, E-ISSN 1573-7667Article in journal (Other academic)
    Abstract [en]

    A critical component in freight transportation is the waybill, which is a transport document that contains essential information about a consignment. Actors within the supply chain handle not only the freight but also vast amounts of information, which is often unclear due to various errors. An electronic waybill (e-Waybill) solution replaces the paper waybill in a better way, e.g., by ensuring error-free storage and flow of information. In this paper, a systematic review using the snowball method is conducted to investigate the state of the art of e-Waybill solutions. After performing three iterations of the snowball process, we identified eleven studies for further evaluation and analysis due to their strong relevancy. The studies are mapped in relation to each other and a classification of the e-Waybill solutions is constructed. Most of the studies identified in our review support the benefits of electronic documents, including e-Waybills. Typically, most research papers reviewed support EDI (Electronic Data Interchange) for implementing e-Waybills. However, limitations exist due to high costs that make it less affordable for small organizations. Recent studies point to alternative technologies that we have listed in this paper. Additionally, we show from our research that most studies focus on the administrative benefits, but few studies investigate the potential of e-Waybill information for achieving services such as estimated time of arrival and real-time tracking and tracing.

  • 8.
    Bjäreholt, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    RISC-V Compiler Performance: A Comparison between GCC and LLVM/clang2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    RISC-V is a new open-source instruction set architecture (ISA) that in December 2016 saw its first mass-produced processors. It focuses on both efficiency and performance and differs from other open-source architectures by not having a copyleft license, permitting vendors to freely design, manufacture and sell RISC-V chips without any fees and without having to share their modifications to the reference implementations of the architecture.

    The goal of this thesis is to evaluate the performance of the GCC and LLVM/clang compilers' support for the RISC-V target and their ability to optimize for the architecture. The performance is evaluated by executing the CoreMark and Dhrystone benchmarks, both popular industry-standard programs for evaluating performance on embedded processors. They are run with both the GCC and LLVM/clang compilers at different optimization levels and compared in performance per clock to the ARM architecture, which is mature yet rather similar to RISC-V. The compiler support for the RISC-V target is still in development, and the focus of this thesis is the current performance differences between the GCC and LLVM compilers on this architecture. The platforms on which we execute the benchmarks are the Freedom E310 processor on the SiFive HiFive1 board for RISC-V and an ARM Cortex-M4 processor by Freescale on the Teensy 3.6 board. The Freedom E310 is almost identical to the reference Berkeley Rocket RISC-V design, and the ARM Cortex-M4 processor has a similar clock speed and is aimed at a similar target audience.

    The results show that the -O2 and -O3 optimization levels on GCC for RISC-V performed very well in comparison to our ARM reference. On the lower optimization levels -O1, -O0 (no optimizations) and -Os (described in the thesis as -O0 with optimizations for generating a smaller executable code size), GCC performs much worse than ARM: 46% of the ARM performance at -O1, 8.2% at -Os and 9.3% at -O0 on the CoreMark benchmark, with similar results in Dhrystone except at -O1, where it performed as well as ARM. When turning off optimizations (-O0), GCC for RISC-V reached 9.2% of the ARM performance in CoreMark and 11% in Dhrystone, which was unexpected and needs further investigation. LLVM/clang, on the other hand, crashed when trying to compile our CoreMark benchmark, and on Dhrystone the optimization options made a very minor impact on performance, making it 6.0% of the performance of GCC at -O3 and 5.6% of the performance of ARM at -O3, so even with optimizations it was still slower than GCC without optimizations.

    In conclusion, the performance of RISC-V with the GCC compiler at the higher optimization levels is very good considering how young the RISC-V architecture is. There does, however, seem to be room for improvement at the lower optimization levels, which in turn could possibly also increase the performance of the higher optimization levels. With the LLVM/clang compiler, on the other hand, a lot of work remains to make it competitive with the GCC compiler and other architectures in both performance and stability. Why the -O0 optimization level is so considerably slower on RISC-V than on ARM was also very unexpected and needs further investigation.
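
    As a rough illustration of the "performance per clock" comparison described above, the Python sketch below normalizes benchmark scores by the core clock before computing RISC-V's share of the ARM result. The clock rates and scores are invented placeholders, not the thesis' measurements.

        # Hypothetical sketch of a "performance per clock" comparison: benchmark
        # scores are divided by the core clock before computing RISC-V's share of
        # the ARM result. All numbers below are placeholders, not measurements
        # from the thesis.

        boards = {
            # board: (clock_hz, {optimization level: CoreMark iterations/s})
            "HiFive1 (RISC-V, GCC)": (320e6, {"-O0": 15.0, "-O1": 80.0, "-O3": 190.0}),
            "Teensy 3.6 (ARM)":      (180e6, {"-O0": 90.0, "-O1": 95.0, "-O3": 105.0}),
        }

        def per_clock(board, level):
            clock, scores = boards[board]
            return scores[level] / clock  # iterations per second per Hz

        for level in ("-O0", "-O1", "-O3"):
            riscv = per_clock("HiFive1 (RISC-V, GCC)", level)
            arm = per_clock("Teensy 3.6 (ARM)", level)
            print(f"{level}: RISC-V at {100 * riscv / arm:.1f}% of ARM per clock")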

  • 9.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Knowledge Classification for Supporting Effort Estimation in Global Software Engineering Projects2015Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Global Software Engineering (GSE) has become a widely applied operational model for the development of software systems; it can increase profits and decrease time-to-market. However, there are many challenges associated with development of software in a globally distributed fashion. There is evidence that these challenges affect many processes related to software development, such as effort estimation. To the best of our knowledge, there are no empirical studies gathering evidence on effort estimation in the GSE context. In addition, there is no common terminology for classifying GSE scenarios focusing on effort estimation.

    Objective: The main objective of this thesis is to support effort estimation in the GSE context by providing a taxonomy to classify the existing knowledge in this field.

    Method: Systematic literature review (to identify and analyze the state of the art), survey (to identify and analyze the state of the practice), systematic mapping (to identify practices to design software engineering taxonomies), and literature survey (to complement the states of the art and practice) were the methods employed in this thesis.

    Results: The results on the states of the art and practice show that the effort estimation techniques employed in the GSE context are the same techniques used in the collocated context. It was also identified that global aspects, e.g. time, geographical and social-cultural distances, are accounted for as cost drivers, although it is not clear how they are measured. As a result of the conducted mapping study, we reported a method that can be used to design new SE taxonomies. The aforementioned results were combined to extend and specialize an existing GSE taxonomy, for suitability for effort estimation. The usage of the specialized GSE effort estimation taxonomy was illustrated by classifying 8 finished GSE projects. The results show that the specialized taxonomy proposed in this thesis is comprehensive enough to classify GSE projects focusing on effort estimation.

    Conclusions: The taxonomy presented in this thesis will help researchers and practitioners to report new research on effort estimation in the GSE context; researchers and practitioners will be able to gather evidence, compare new studies and find new gaps in an easier way. The findings from this thesis show that more research must be conducted on effort estimation in the GSE context. For example, the way the cost drivers are measured should be further investigated. It is also necessary to conduct further research to clarify the role and impact of sourcing strategies on the effort estimates’ accuracies. Finally, we believe that it is possible to design an instrument based on the specialized GSE effort estimation taxonomy that helps practitioners to perform the effort estimation process in a way tailored for the specific needs of the GSE context.

  • 10.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    Spindox S.p.A, ITA.
    Auto-scaling of Containers: The Impact of Relative and Absolute Metrics2017In: 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems, FAS*W 2017 / [ed] IEEE, IEEE, 2017, p. 207-214, article id 8064125Conference paper (Refereed)
    Abstract [en]

    Today, the cloud industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of resource management at run-time. This paper addresses the problem of selecting the most appropriate performance metrics to activate auto-scaling actions. Specifically, we investigate the use of relative and absolute metrics. Results demonstrate that, for CPU-intensive workloads, the use of absolute metrics enables more accurate scaling decisions. We propose and evaluate the performance of a new auto-scaling algorithm that could reduce the response time by a factor of between 0.66 and 0.5 compared to the current Kubernetes horizontal auto-scaling algorithm.
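
    As a rough illustration of the relative-versus-absolute metric question discussed above, the Python sketch below applies a proportional scaling rule of the kind used by the Kubernetes horizontal pod autoscaler (desired = ceil(current x observed / target)) to one relative and one absolute CPU metric. The targets and observed loads are invented placeholders, not values from the paper.

        import math

        # Minimal sketch of a horizontal scaling rule in the spirit of the paper's
        # comparison. The target values and observed loads below are invented.
        # desired = ceil(current_replicas * observed / target), which is roughly how
        # Kubernetes' horizontal pod autoscaler computes the replica count.

        def desired_replicas(current, observed, target):
            return max(1, math.ceil(current * observed / target))

        current = 4

        # Relative metric: CPU utilisation as a percentage of the container's request.
        print(desired_replicas(current, observed=85.0, target=60.0))      # -> 6

        # Absolute metric: CPU used in millicores, independent of the request size.
        print(desired_replicas(current, observed=1700.0, target=1000.0))  # -> 7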

  • 11.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    University of Rome, ITA.
    Measuring Docker Performance: What a Mess!!!2017In: ICPE 2017 - Companion of the 2017 ACM/SPEC International Conference on Performance Engineering, ACM , 2017, p. 11-16Conference paper (Refereed)
    Abstract [en]

    Today, a new technology is going to change the way platforms for the internet of services are designed and managed. This technology is the container (e.g., Docker and LXC). The internet of services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of resource management at run-time, for example: auto-scaling, optimal deployment and monitoring. Specifically, monitoring of container-based systems is at the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.

  • 12.
    Cavallin, Fritjof
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Pettersson, Timmie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time View-dependent Triangulation of Infinite Ray Cast Terrain2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and does not generate a mesh representation, which may be useful in certain use cases.

    Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time without making use of any preprocessed data, and compare the performance and visual quality of the implementation with that of a ray marched solution.

    Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm.

    Results. In all tests performed, the proposed method performs better in terms of frame rate than a ray marched version. The visual similarity between the two methods depends highly on the quality setting of the triangulation.

    Conclusions. The proposed method can perform better than a ray marched version, but is more reliant on CPU processing, and can suffer from visual popping artifacts as the terrain is refined.
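
    As a rough illustration of the ray-marching baseline referred to above, the Python sketch below steps along a ray until it falls below a procedural height field. The height function, step size and camera values are invented for illustration; the thesis' triangulation method is not reproduced here.

        import math

        # Minimal sketch of ray marching over an infinite, procedural height field:
        # step along a ray and stop when it drops below the terrain. The height
        # function and all parameters below are toy values for illustration.

        def height(x, z):
            # Toy infinite terrain: a couple of sine waves.
            return 2.0 * math.sin(0.1 * x) * math.cos(0.1 * z)

        def ray_march(origin, direction, max_dist=500.0, step=0.5):
            ox, oy, oz = origin
            dx, dy, dz = direction
            t = 0.0
            while t < max_dist:
                x, y, z = ox + dx * t, oy + dy * t, oz + dz * t
                if y <= height(x, z):
                    return t  # approximate distance to the terrain surface
                t += step
            return None  # ray escaped without hitting the terrain

        hit = ray_march(origin=(0.0, 10.0, 0.0), direction=(0.8, -0.2, 0.57))
        print(hit)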

  • 13.
    Cheddad, Abbas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Object recognition using shape growth pattern2017In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, ISPA, IEEE Computer Society Digital Library, 2017, p. 47-52, article id 8073567Conference paper (Refereed)
    Abstract [en]

    This paper proposes a preprocessing stage to augment the bank of features that one can retrieve from binary images to help increase the accuracy of pattern recognition algorithms. To this end, by applying successive dilations to a given shape, we can capture a new dimension of its vital characteristics, which we term hereafter the shape growth pattern (SGP). This work investigates the feasibility of such a notion and also builds upon our prior work on structure-preserving dilation using Delaunay triangulation. Experiments on two public data sets are conducted, including comparisons to existing algorithms. We deployed two renowned machine learning methods in the classification process (i.e., convolutional neural networks (CNN) and random forests (RF)), since they perform well in pattern recognition tasks. The results show a clear improvement in the proposed approach's classification accuracy (especially for data sets with limited training samples) as well as robustness against noise when compared to existing methods.
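
    As a rough illustration of the shape growth pattern idea, the Python sketch below applies successive binary dilations to a shape and records the area growth per step as a feature vector. The plain 3x3 dilation is an assumption; the paper's structure-preserving dilation based on Delaunay triangulation is not reproduced here.

        import numpy as np
        from scipy.ndimage import binary_dilation

        # Minimal sketch of a "shape growth pattern": apply successive dilations to
        # a binary shape and record how its area grows. The structuring element and
        # the number of dilation steps are assumptions for illustration only.

        def shape_growth_pattern(mask, steps=5):
            areas = [int(mask.sum())]
            current = mask
            for _ in range(steps):
                current = binary_dilation(current)  # default cross-shaped structuring element
                areas.append(int(current.sum()))
            # Use the per-step growth as an extra feature vector for a classifier.
            return np.diff(areas)

        shape = np.zeros((32, 32), dtype=bool)
        shape[12:20, 14:18] = True  # a simple 8x4 rectangle as the input shape
        print(shape_growth_pattern(shape))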

  • 14.
    Danielsson, Max
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sievert, Thomas
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Comparing Two Generations of Embedded GPUs Running a Feature Detection AlgorithmManuscript (preprint) (Other academic)
    Abstract [en]

    Graphics processing units (GPUs) in embedded mobile platforms are reaching performance levels where they may be useful for computer vision applications. We compare two generations of embedded GPUs for mobile devices when running a state-of-the-art feature detection algorithm, i.e., Harris-Hessian/FREAK. We compare architectural differences, execution time, temperature, and frequency on Sony Xperia Z3 and Sony Xperia XZ mobile devices. Our results indicate that the performance soon is sufficient for real-time feature detection, the GPUs have no temperature problems, and support for large work-groups is important.

  • 15.
    Fricker, Samuel A.
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Schneider, Kurt
    Leibniz University of Hannover, Germany.
    Proceedings of the 21st International Working Conference on Requirements Engineering: Foundation for Software Quality2015Conference proceedings (editor) (Refereed)
  • 16.
    Hörnlund, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Schenström, Rasmus
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Indoor Location Surveillance: Utilizing Wi-Fi and Bluetooth Signals2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Personal information has nowadays become valuable to many stakeholders. We want to find out how much information someone can gather from our daily devices, such as a smartphone, using budget devices together with some programming knowledge. Can we gather enough information to be able to determine the location of a target device? The main objectives of our bachelor thesis are to determine the accuracy of positioning for nearby personal devices using trilateration of short-distance communications (Wi-Fi vs. Bluetooth), but also how much and what information our devices leak without us knowing, with respect to personal integrity. We collected Wi-Fi and Bluetooth data from four target devices in total. Two different experiments were executed: a calibration experiment and a visualization experiment. The data were collected by capturing the Wi-Fi and Bluetooth Received Signal Strength Indication (RSSI) transmitted wirelessly from the target devices. We then apply a method called trilateration to be able to pinpoint a target to a location. In theory, Bluetooth signals are twice as accurate as Wi-Fi signals. In practice, we were able to locate a target device with an accuracy of 5 - 10 meters. Bluetooth signals are stable but have a long response time, while Wi-Fi signals have a short response time but high fluctuation in the RSSI values. The idea itself, being able to determine a handheld device's position, is not impossible, as can be seen from our results. It may, though, require more powerful hardware to secure an acceptable accuracy. On the other hand, achieving this kind of result with hardware as cheap as Raspberry Pis is truly amazing.
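
    As a rough illustration of the positioning approach described above, the Python sketch below converts RSSI to distance with a log-distance path-loss model and trilaterates from three receivers by least squares. The calibration constants (RSSI at one metre, path-loss exponent) and the measurements are invented placeholders, not values from the thesis.

        import numpy as np

        # Minimal sketch: convert RSSI to a distance with a log-distance path-loss
        # model, then trilaterate from three receivers by least squares. All
        # constants and measurements below are invented placeholders.

        def rssi_to_distance(rssi, rssi_at_1m=-45.0, exponent=2.5):
            return 10 ** ((rssi_at_1m - rssi) / (10 * exponent))

        def trilaterate(anchors, distances):
            # Linearise the circle equations against the last anchor and solve Ax = b.
            (x1, y1), (x2, y2), (x3, y3) = anchors
            d1, d2, d3 = distances
            A = np.array([[2 * (x1 - x3), 2 * (y1 - y3)],
                          [2 * (x2 - x3), 2 * (y2 - y3)]])
            b = np.array([x1**2 - x3**2 + y1**2 - y3**2 + d3**2 - d1**2,
                          x2**2 - x3**2 + y2**2 - y3**2 + d3**2 - d2**2])
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # receiver positions in metres
        rssi = [-62.0, -70.0, -68.0]                       # RSSI seen at each receiver
        distances = [rssi_to_distance(r) for r in rssi]
        print(trilaterate(anchors, distances))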

  • 17.
    Jerčić, Petar
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    The Effects of Emotions and Their Regulation on Decision-making Performance in Affective Serious Games2019Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion-regulation can help to mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a new angle, introducing technological methods for practicing emotion-regulation, where meaningful biofeedback information communicates a player's affective states to inform a series of gameplay choices. These findings motivate the notion that, in the decision context of serious games, one would benefit from awareness and regulation of such emerging emotions.

    This thesis explores the design and evaluation methods for creating serious games where emotion-regulation can be practiced using physiological biofeedback measures. Furthermore, it investigates emotions and the effect of emotion-regulation on decision performance in serious games. Using psychophysiological methods in the design of such games, emotions and their underlying neural mechanisms have been explored.

    The results showed the benefits of practicing emotion-regulation in serious games, where decision-making performance increased for the individuals who down-regulated high levels of arousal while having an experience of positive valence. Moreover, it also increased for the individuals who received the necessary biofeedback information. The results also suggested that emotion-regulation strategies (i.e., cognitive reappraisal) are highly dependent on the serious game context. Therefore, the reappraisal strategy was shown to benefit the decision-making tasks investigated in this thesis. The results further suggested that, using psychophysiological methods in emotionally arousing serious games, the interplay between sympathetic and parasympathetic pathways can be mapped through the underlying emotions which activate those two pathways. Following this conjecture, the results identified the optimal arousal level for increased performance of an individual on a decision-making task, achieved by carefully balancing the activation of those two pathways. The investigations also validated these findings in the collaborative serious game context, where the robot collaborators were found to elicit diverse affect in their human partners, influencing performance on a decision-making task. Furthermore, the evidence suggested that arousal is equally or more important than valence for decision-making performance, but once optimal arousal has been reached, a further increase in performance may be achieved by regulating valence. The results also showed that the serious games designed in this thesis elicited high physiological arousal and positive valence. This makes them suitable as research platforms for the investigation of how these emotions influence the activation of sympathetic and parasympathetic pathways and influence performance on a decision-making task.

    Taking these findings into consideration, the serious games designed in this thesis allowed for the training of cognitive reappraisal emotion-regulation strategy on the decision-making tasks. This thesis suggests that using evaluated design and development methods, it is possible to design and develop serious games that provide a helpful environment where individuals could practice emotion-regulation through raising awareness of emotions, and subsequently improve their decision-making performance.

  • 18.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, School of Computing.
    Astor, Philipp J
    FZI Forschungszentrum Informatik, DEU.
    Adam, Marc
    Karlsruhe Institute of Technology, DEU.
    Hilborn, Olle
    Blekinge Institute of Technology, School of Computing.
    Schaff, Kristina
    FZI Forschungszentrum Informatik, DEU.
    Lindley, Craig
    Blekinge Institute of Technology, School of Computing.
    Sennersten, Charlotte
    Blekinge Institute of Technology, School of Computing.
    Eriksson, Jeanette
    Blekinge Institute of Technology, School of Computing.
    A Serious Game using Physiological Interfaces for Emotion Regulation Training in the context of Financial Decision-Making2012In: ECIS 2012 - Proceedings of the 20th European Conference on Information Systems, AIS Electronic Library (AISeL) , 2012, p. 1-14Conference paper (Refereed)
    Abstract [en]

    Research on financial decision-making shows that traders and investors with high emotion regulation capabilities perform better in trading. But how can the others learn to regulate their emotions? ‘Learning by doing’ sounds like a straightforward approach. But how can one perform ‘learning by doing’ when there is no feedback? This problem particularly applies to learning emotion regulation, because learners can get practically no feedback on their level of emotion regulation. Our research aims at providing a learning environment that can help decision-makers to improve their emotion regulation. The approach is based on a serious game with real-time biofeedback. The game is set in a financial context and the decision scenario is directly linked to the individual biofeedback of the learner’s heart rate data. More specifically, depending on the learner’s ability to regulate emotions, the decision scenario of the game continuously adjusts and thereby becomes more (or less) difficult. The learner wears an electrocardiogram sensor that transfers the data via Bluetooth to the game. The game itself is evaluated at several levels.

  • 19.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, School of Computing.
    Cederholm, Henrik
    Blekinge Institute of Technology, School of Computing.
    The Future of Brain-Computer Interface for Games and Interaction Design2010Report (Other academic)
  • 20.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Practicing Emotion-Regulation Through Biofeedback on the Decision-Making Performance in the Context of Serious Games: a Systematic Review2019In: Entertainment Computing, ISSN 1875-9521, E-ISSN 1875-953X, Vol. 29, p. 75-86Article in journal (Refereed)
    Abstract [en]

    Evidence shows that emotions critically influence human decision-making. Therefore, emotion-regulation using biofeedback has been extensively investigated. Nevertheless, serious games have emerged as a valuable tool for such investigations set in the decision-making context. This review sets out to investigate the scientific evidence regarding the effects of practicing emotion-regulation through biofeedback on the decision-making performance in the context of serious games. A systematic search of five electronic databases (Scopus, Web of Science, IEEE, PubMed Central, Science Direct), followed by the author and snowballing investigation, was conducted from a publication's year of inception to October 2018. The search identified 16 randomized controlled experiment/quasi-experiment studies that quantitatively assessed the performance on decision-making tasks in serious games, involving students, military, and brain-injured participants. It was found that the participants who raised awareness of emotions and increased the skill of emotion-regulation were able to successfully regulate their arousal, which resulted in better decision performance, reaction time, and attention scores on the decision-making tasks. It is suggested that serious games provide an effective platform validated through the evaluative and playtesting studies, that supports the acquisition of the emotion-regulation skill through the direct (visual) and indirect (gameplay) biofeedback presentation on decision-making tasks.

  • 21.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wen, Wei
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics. Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Hagelbäck, Johan
    Linnéuniversitetet, SWE.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    The Effect of Emotions and Social Behavior on Performance in a Collaborative Serious Game Between Humans and Autonomous Robots2018In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 10, no 1, p. 115-129Article in journal (Refereed)
    Abstract [en]

    The aim of this paper is to investigate performance in a collaborative human–robot interaction on a shared serious game task. Furthermore, the effect of elicited emotions and perceived social behavior categories on players’ performance will be investigated. The participants collaboratively played a turn-taking version of the Tower of Hanoi serious game, together with the human and robot collaborators. The elicited emotions were analyzed in regards to the arousal and valence variables, computed from the Geneva Emotion Wheel questionnaire. Moreover, the perceived social behavior categories were obtained from analyzing and grouping replies to the Interactive Experiences and Trust and Respect questionnaires. It was found that the results did not show a statistically significant difference in participants’ performance between the human or robot collaborators. Moreover, all of the collaborators elicited similar emotions, where the human collaborator was perceived as more credible and socially present than the robot one. It is suggested that using robot collaborators might be as efficient as using human ones, in the context of serious game collaborative tasks.

  • 22. Johansson, E.
    et al.
    Gahlin, C.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Crime Hotspots: An Evaluation of the KDE Spatial Mapping Technique2015In: Proceedings - 2015 European Intelligence and Security Informatics Conference, EISIC 2015 / [ed] Brynielsson J.,Yap M.H., IEEE Computer Society, 2015, p. 69-74Conference paper (Refereed)
    Abstract [en]

    Residential burglaries are increasing. By visualizing patterns as spatial hotspots, law-enforcement agents can get a better understanding of crime distributions and trends. Two aspects are investigated: first, measuring the accuracy and performance of the KDE algorithm using small data sets; secondly, investigating the amount of crime data needed to compute accurate and reliable hotspots. The Prediction Accuracy Index is used to effectively measure the accuracy of the algorithm. Data from three geographical areas in Sweden, including Stockholm, Gothenburg and Malmö, are analyzed and evaluated over one year. The results suggest that using the KDE algorithm to predict residential burglaries performs well overall when enough crimes are available, but that it also copes with small data sets.
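
    As a rough illustration of KDE hotspot mapping and the Prediction Accuracy Index, the Python sketch below fits a kernel density estimate to past crime coordinates, flags the densest 5% of the study area as hotspots, and computes PAI as the hit rate divided by the flagged area share. The coordinates are random placeholders, not real crime data.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Minimal sketch of KDE hotspot mapping and the Prediction Accuracy Index
        # (PAI). The coordinates are random placeholders, and the grid resolution
        # and hotspot share are arbitrary choices.

        rng = np.random.default_rng(0)
        train = rng.normal(loc=[5.0, 5.0], scale=1.0, size=(200, 2))  # past burglaries
        test = rng.normal(loc=[5.0, 5.0], scale=1.0, size=(50, 2))    # later burglaries

        kde = gaussian_kde(train.T)

        # Evaluate the density on a grid and call the top 5% of cells "hotspots".
        xs, ys = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
        grid = np.vstack([xs.ravel(), ys.ravel()])
        density = kde(grid)
        threshold = np.quantile(density, 0.95)

        # PAI = (share of future crimes inside hotspots) / (share of area flagged).
        hit = kde(test.T) >= threshold
        hit_rate = hit.mean()
        area_share = (density >= threshold).mean()
        print("PAI:", hit_rate / area_share)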

  • 23. Johansson, Fredrik
    Attacking the Manufacturing Execution System: Leveraging a Programmable Logic Controller on the Shop Floor2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Automation in production has become a necessity for producing companies to keep up with the demand created by their customers. One way to automate a process is to use a piece of hardware called a programmable logic controller (PLC). A PLC is a small computer capable of being programmed to process a set of inputs, from e.g. sensors, and create outputs, to e.g. actuators, from that. This eliminates the risk of human errors while at the same time speeding up the production rate of the now near-identical products. To improve the automation process on the shop floor and the production process in general, a special software system is used. This system is known as the manufacturing execution system (MES), and it is connected to the PLCs and other devices on the shop floor. The MES has different functionalities, one of which is that it can manage instructions. These instructions can be aimed at both employees and devices such as the PLCs. Should the MES suffer from an error, e.g. in the instructions sent to the shop floor, the company could suffer a negative impact, both economically and in reputation. Since the PLC is a computer and it is connected to the MES, it might be possible to attack the system using the PLC as leverage.

    Objectives. Examine whether it is possible to attack the MES using a PLC as the attack origin.

    Methods. A literature study was performed to see what types of attacks and vulnerabilities related to PLCs have been disclosed between 2010 and 2018. Secondly, a practical experiment was done, trying to perform attacks targeting the MES.

    Results. There are many different types of attacks and vulnerabilities that have been found related to PLCs, and the attacks done in the practical experiment failed to induce negative effects in the MES used.

    Conclusions. Two identified PLC attack techniques seem likely to be usable for attacking the MES layer. The methodology that was used to attack the MES layer in the practical experiment failed to affect the MES in a negative way. However, it was possible to affect the log file of the MES in one of the test cases. So, this does not rule out that other MES types are vulnerable or that the two identified PLC attacks could affect the MES.

  • 24.
    Josyula, Sai Prashanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On the Applicability of a Cache Side-Channel Attack on ECDSA Signatures: The Flush+Reload attack on the point multiplication in ECDSA signature generation process2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Digital counterparts of handwritten signatures are known as Digital Signatures. The Elliptic Curve Digital Signature Algorithm (ECDSA) is an Elliptic Curve Cryptography (ECC) primitive, which is used for generating and verifying digital signatures. The attacks that target an implementation of a cryptosystem are known as side-channel attacks. The Flush+Reload attack is a cache side-channel attack that relies on cache hits/misses to recover secret information from the target program execution. In elliptic curve cryptosystems, side-channel attacks are particularly targeted towards the point multiplication step. The Gallant-Lambert-Vanstone (GLV) method for point multiplication is a special method that speeds up the computation for elliptic curves with certain properties.

    Objectives. In this study, we investigate the applicability of the Flush+Reload attack on ECDSA signatures that employ the GLV method to protect point multiplication.

    Methods. We demonstrate the attack through an experiment using the curve secp256k1. We perform a pair of experiments to estimate both the applicability and the detection rate of the attack in capturing side-channel information.

    Results. Through our attack, we capture side-channel information about the decomposed GLV scalars.

    Conclusions. Based on an analysis of the results, we conclude that for certain implementation choices, the Flush+Reload attack is applicable on ECDSA signature generation process that employs the GLV method. The practitioner should be aware of the implementation choices which introduce vulnerabilities, and avoid the usage of such ECDSA implementations.

  • 25.
    Kamma, Aditya
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Approach to Language Modelling for Intelligent Document Retrieval System2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 26.
    Karlsson, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cooperative Behaviors Between Two Teaming RTS Bots in StarCraft2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Video games are a big entertainment industry. Many video games let players play against or together with each other. Some video games also make it possible for players to play against or together with computer-controlled players, called bots. Artificial Intelligence (AI) is used to create bots.

    Objectives. This thesis aims to implement cooperative behaviors between two bots and determine if the behaviors lead to an increase in win ratio. This means that the bots should be able to cooperate in certain situations, such as when they are attacked or when they are attacking.

    Methods. The bots' win ratio will be tested with a series of quantitative experiments where, in each experiment, two teaming bots with cooperative behavior play against two teaming bots without any cooperative behavior. The data will be analyzed with a t-test to determine if the results are statistically significant.

    Results and Conclusions. The results show that cooperative behavior can increase the performance of two teaming Real Time Strategy bots against a non-cooperative team of two bots. However, the performance could either increase or decrease depending on the situation. In three cases there was an increase in performance and in one case the performance decreased. In three cases there was no difference in performance. This suggests that more research is needed for these cases.
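
    As a rough illustration of the statistical test mentioned above, the Python sketch below compares the match outcomes of a cooperative and a non-cooperative team with an independent two-sample t-test. The outcome lists are invented placeholders, not the thesis' experimental data.

        from scipy.stats import ttest_ind

        # Minimal sketch: compare match outcomes (1 = win, 0 = loss) of the
        # cooperative team against the baseline team with an independent
        # two-sample t-test. The lists below are invented placeholders.

        cooperative = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
        baseline    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]

        stat, p_value = ttest_ind(cooperative, baseline)
        print(f"t = {stat:.2f}, p = {p_value:.3f}")
        if p_value < 0.05:
            print("Difference in win ratio is statistically significant at the 5% level")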

  • 27.
    Kelkkanen, Viktor
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Implementation and Evaluation of Positional Voice Chat in a Massively Multiplayer Online Role Playing Game2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Computer games, especially Massively Multiplayer Online Role Playing Games, have elements where communication between players is greatly needed. This communication is generally conducted through in-game text chats, in-game voice chats or external voice programs. In-game voice chats can be constructed to work in a similar way to talking in real life. When someone talks, anyone close enough to that person can hear what is said, with a volume depending on distance. This is called positional or spatial voice chat in games. This differs from the commonly implemented voice chat where participants in conversations are statically defined by a team or group belonging. Positional voice chat has been around for quite some time in games and it seems to be of interest to a lot of users; despite this, it is still not very common.

    This thesis investigates the impact of implementing positional voice chat in the existing MMORPG Mortal Online by Star Vault. How is it built, what are the costs, how many users can it support and what do the users think of it? These are some of the questions answered within this project.

    The design science research method has been selected as the scientific method. A product in the form of a positional voice chat library has been constructed. This library has been integrated into the existing game engine and its usage has been evaluated by the game’s end users.

    Results show that a positional voice system that in theory supports up to 12500 simultaneous users can be built from scratch and patched into an existing game in less than 600 man-hours. The system needs third-party libraries for threading, audio input/output, audio compression, network communication and mathematics. All libraries used in the project are free for use in commercial products and do not demand that code using them become open source.

    Based on a survey taken by more than 200 users, the product received good ratings on Quality of Experience and most users think having a positional voice chat in a game like Mortal Online is important. Results show a trend of young and less experienced users giving the highest average ratings on quality, usefulness and importance of the positional voice chat, suggesting it may be a good tool to attract new players to a game.
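
    As a rough illustration of the "volume depending on distance" behaviour described above, the Python sketch below attenuates a speaker's gain linearly between a full-volume radius and a maximum hearing range. The ranges and the linear falloff are assumptions, not Mortal Online's actual tuning.

        import math

        # Minimal sketch of distance-based attenuation for positional voice chat.
        # The radii and the linear falloff are assumptions for illustration only.

        def positional_gain(speaker, listener, full_volume_radius=2.0, max_range=30.0):
            distance = math.dist(speaker, listener)
            if distance <= full_volume_radius:
                return 1.0
            if distance >= max_range:
                return 0.0  # out of earshot: no need to mix or even send this stream
            return 1.0 - (distance - full_volume_radius) / (max_range - full_volume_radius)

        print(positional_gain((0, 0, 0), (10, 0, 0)))   # ~0.71
        print(positional_gain((0, 0, 0), (40, 0, 0)))   # 0.0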

  • 28.
    Kuruganti, NSR Sankaran
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Distributed databases for Multi Mediation: Scalability, Availability & Performance2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Multi Mediation is a process of collecting data from network(s) and network elements, pre-processing this data and distributing it to various systems like Big Data analysis, Billing Systems, Network Monitoring Systems, Service Assurance, etc. With the growing demand for networks and the emergence of new services, the data collected from networks is growing. There is a need for efficiently organizing this data, and this can be done using databases. Although RDBMSs offer scale-up solutions to handle voluminous data and concurrent requests, this approach is expensive. So, alternatives like distributed databases are an attractive solution, and a suitable distributed database for Multi Mediation needs to be investigated.

    Objectives: In this research we analyze two distributed databases in terms of performance, scalability and availability. The inter-relations between performance, scalability and availability of distributed databases are also analyzed. The distributed databases that are analyzed are MySQL Cluster 7.4.4 and Apache Cassandra 2.0.13. Performance, scalability and availability are quantified, and measurements are made in the context of a Multi Mediation system.

    Methods: The methods used to carry out this research are both qualitative and quantitative. A qualitative study is made for the selection of databases for evaluation. A benchmarking harness application is designed to quantitatively evaluate the performance of the distributed databases in the context of Multi Mediation. Several experiments are designed and performed using the benchmarking harness on the database cluster.

    Results: The results collected include the average response time and average throughput of the distributed databases in various scenarios. The average throughput and average INSERT response time results favor Apache Cassandra's low availability configuration. MySQL Cluster's average SELECT response time is better than Apache Cassandra's for greater numbers of client threads, in both the high availability and low availability configurations.

    Conclusions: Although Apache Cassandra outperforms MySQL Cluster, transaction support and ACID compliance are not to be forgotten when selecting a database. Apart from the contextual benchmarks, organizational choices, development costs, resource utilization, etc. are more influential parameters for the selection of a database within an organization. There is still a need for further evaluation of distributed databases.
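
    As a rough illustration of a benchmarking harness of the kind described above, the Python sketch below runs N client threads against a stand-in query function and reports average response time and throughput. The run_query placeholder and all timing values are assumptions; a real harness would issue INSERT/SELECT operations against MySQL Cluster or Cassandra.

        import time
        from concurrent.futures import ThreadPoolExecutor

        # Minimal sketch of a benchmarking harness: N client threads issue
        # operations and the harness reports average response time and throughput.
        # "run_query" is a stand-in for a real database round trip.

        def run_query():
            time.sleep(0.005)  # placeholder for a real INSERT/SELECT

        def client(n_ops):
            latencies = []
            for _ in range(n_ops):
                start = time.perf_counter()
                run_query()
                latencies.append(time.perf_counter() - start)
            return latencies

        def benchmark(n_threads=8, ops_per_thread=50):
            wall_start = time.perf_counter()
            with ThreadPoolExecutor(max_workers=n_threads) as pool:
                results = list(pool.map(client, [ops_per_thread] * n_threads))
            wall = time.perf_counter() - wall_start
            latencies = [l for per_client in results for l in per_client]
            avg_ms = 1000 * sum(latencies) / len(latencies)
            throughput = len(latencies) / wall
            return avg_ms, throughput

        avg_ms, throughput = benchmark()
        print(f"avg response time: {avg_ms:.2f} ms, throughput: {throughput:.0f} ops/s")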

  • 29.
    Llewellyn, Tim
    et al.
    nVISO SA, CHE.
    Milagro Fernández Carrobles, María del
    University of Castilla-La Mancha, ESP.
    Deniz, Oscar
    University of Castilla-La Mancha, ESP.
    Fricker, Samuel
    i4Ds Centre for Requirements Engineering, CHE.
    Storkey, Amos
    University of Edinburgh, GBR.
    Pazos, Nuria
    Haute Ecole Specialisee de Suisse, CHE.
    Velikic, Gordana
    RT-RK, SRB.
    Leufgen, Kirsten
    SCIPROM SARL, CHE.
    Dahyot, Rozenn
    Trinity College Dublin, IRL.
    Koller, Sebastian
    Technical University Munich, DEU.
    Goumas, Georgios
    Technical University of Athens, GRC.
    Leitner, Peter
    SYNYO GmbH, AUT.
    Dasika, Ganesh
    ARM Ltd., GBR.
    Wang, Lei
    ZF Friedrichshafen AG, DEU.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    BONSEYES: Platform for Open Development of Systems of Artificial Intelligence2017Conference paper (Other academic)
    Abstract [en]

    The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project will be focused on using artificial intelligence in low power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It will bring about orders of magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will solve a causality problem for organizations who lack access to Data and Models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate the effectiveness, technical feasibility, and to quantify the real-world improvements in efficiency, security, performance, effort and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. Bonseyes platform capabilities are aimed at being aligned with the European FI-PPP activities and take advantage of its flagship project FIWARE. This paper provides a description of the project motivation, goals and preliminary work.

  • 30.
    Lopez-Rojas, Edgar Alonso
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Applying Simulation to the Problem of Detecting Financial Fraud2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis introduces a financial simulation model covering two related financial domains: Mobile Payments and Retail Stores systems.

     

    The problem we address in these domains is different types of fraud. We limit ourselves to isolated cases of relatively straightforward fraud; the ultimate aim of this thesis, however, is to introduce our approach to using computer simulation for fraud detection and its applications in financial domains. Fraud is an important problem that impacts the whole economy. Currently, there is a lack of public research into the detection of fraud, and one important reason is the lack of transaction data, which is often sensitive. To address this problem we present a mobile money Payment Simulator (PaySim) and a Retail Store Simulator (RetSim), which allow us to generate synthetic transactional data that contains both normal customer behaviour and fraudulent behaviour.

     

    These simulations are Multi Agent-Based Simulations (MABS) and were calibrated using real data from financial transactions. We developed agents that represent the clients and merchants in PaySim and customers and salesmen in RetSim. The normal behaviour was based on behaviour observed in data from the field, and is codified in the agents as rules of transactions and interaction between clients and merchants, or customers and salesmen. Some of these agents were intentionally designed to act fraudulently, based on observed patterns of real fraud. We introduced known signatures of fraud in our model and simulations to test and evaluate our fraud detection methods. The resulting behaviour of the agents generates a synthetic log of all transactions in the simulation. This synthetic data can be used to further advance fraud detection research, without leaking sensitive information about the underlying data or breaking any non-disclosure agreements.

     

    Using statistics and social network analysis (SNA) on real data we calibrated the relations between our agents and generated realistic synthetic data sets that were verified against the domain and validated statistically against the original source.

     

    We then used the simulation tools to model common fraud scenarios and to ascertain exactly how effective fraud detection techniques such as the simplest form of statistical threshold detection, perhaps the most commonly used, really are. The preliminary results show that threshold detection is effective enough at keeping fraud losses at a set level, which means that there seems to be little economic room for improved fraud detection techniques.

     

    We also implemented other applications of the simulator tools, such as setting up a triage model and measuring the cost of fraud. This proved to be an important aid for managers who aim to prioritise fraud detection and want to know how much they should invest in it to keep losses below a desired limit under different simulated and expected fraud scenarios.
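
    The thesis above evaluates simple statistical threshold detection against simulated fraud and measures the resulting cost of fraud. The snippet below is only a minimal sketch of that idea on a synthetic transaction log: the record layout, the 95th-percentile rule and the amount distributions are illustrative assumptions, not the PaySim or RetSim data model.

        # Flag transactions above a statistical threshold and measure how much
        # fraud value slips through (a simple 'cost of fraud' figure).
        import random
        import statistics

        random.seed(1)
        transactions = (
            [{"amount": random.gauss(50, 15), "fraud": False} for _ in range(980)] +
            [{"amount": random.gauss(400, 80), "fraud": True} for _ in range(20)]
        )

        amounts = [t["amount"] for t in transactions]
        threshold = statistics.quantiles(amounts, n=100)[94]   # ~95th percentile

        flagged = [t for t in transactions if t["amount"] > threshold]
        missed_fraud = [t for t in transactions if t["fraud"] and t["amount"] <= threshold]
        false_positives = [t for t in flagged if not t["fraud"]]

        print(f"threshold = {threshold:.2f}")
        print(f"flagged: {len(flagged)}, false positives: {len(false_positives)}")
        print(f"undetected fraud cost: {sum(t['amount'] for t in missed_fraud):.2f}")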

  • 31.
    Lopez-Rojas, Edgar Alonso
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Extending the RetSim Simulator for Estimating the Cost of fraud in the Retail Store Domain2015In: Proceedings of the European Modeling and Simulation Symposium, 2015, 2015Conference paper (Refereed)
    Abstract [en]

    RetSim is a multi-agent based simulator (MABS) calibrated with real transaction data from one of the largest shoe retailers in Scandinavia. RetSim allows us to generate synthetic transactional data that can be publicly shared and studied without leaking business sensitive information, and still preserve the important characteristics of the data.

    In this paper we extend the fraud model of RetSim to cover more cases of internal fraud perpetrated by staff and to allow inventory control to flag even more suspicious activity. We also generate a sufficient number of runs over a range of fraud parameters to cover a vast number of fraud scenarios that can be studied. We then use RetSim to simulate some of the more common retail fraud scenarios to ascertain the exact cost of fraud under the different fraud parameters of each case.

  • 32.
    Lopez-Rojas, Edgar Alonso
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Using the RetSim simulator for fraud detection research2015In: International Journal of Simulation and Process Modelling, ISSN 1740-2123, E-ISSN 1740-2131, Vol. 10, no 2Article in journal (Refereed)
    Abstract [en]

    Managing fraud is important for business, retail and financial alike. One method to manage fraud is by detection, where transactions etc. are monitored and suspicious behaviour is flagged for further investigation. There is currently a lack of public research in this area. The main reason is the sensitive nature of the data. Publishing real financial transaction data would seriously compromise the privacy of both customers and companies alike. We propose to address this problem by building RetSim, a multi-agent based simulator (MABS) calibrated with real transaction data from one of the largest shoe retailers in Scandinavia. RetSim allows us to generate synthetic transactional data that can be publicly shared and studied without leaking business sensitive information, and still preserve the important characteristics of the data.

    We then use RetSim to model two common retail fraud scenarios to ascertain exactly how effective the simplest form of statistical threshold detection could be. The preliminary results of our tested fraud detection method show that threshold detection is effective enough at keeping fraud losses at a set level, and that there is little economic room for improved techniques.

  • 33.
    Lopez-Rojas, Edgar Alonso
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Axelsson, Stefan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Social Simulation of Commercial and Financial Behaviour for Fraud Detection Research2014In: Advances in Computational Social Science and Social Simulation / [ed] Miguel, Amblard, Barceló & Madella, Barcelona, 2014Conference paper (Refereed)
    Abstract [en]

    We present a social simulation model that covers three main financial services: Banks, Retail Stores, and Payments systems. Our aim is to address the problem of a lack of public data sets for fraud detection research in each of these domains, and provide a variety of fraud scenarios such as money laundering, sales fraud (based on refunds and discounts), and credit card fraud. Currently, there is a general lack of public research concerning fraud detection in the financial domains in general and these three in particular. One reason for this is the secrecy and sensitivity of the customers' data that is needed to perform research. We present PaySim, RetSim, and BankSim as three case studies of social simulations for financial transactions using agent-based modelling. These simulators enable us to generate synthetic transaction data of normal behaviour of customers, and also known fraudulent behaviour. This synthetic data can be used to further advance fraud detection research, without leaking sensitive information about the underlying data. Using statistics and social network analysis (SNA) on real data we can calibrate the relations between staff and customers, and generate realistic synthetic data sets. The generated data represents real world scenarios that are found in the original data, with the added benefit that this data can be shared with other researchers for testing similar detection methods without concerns for privacy and other restrictions present when using the original data.

  • 34.
    Lopez-Rojas, Edgar Alonso
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Axelsson, Stefan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Gjovik University College.
    Using the RetSim Fraud Simulation Tool to set Thresholds for Triage of Retail Fraud2015In: SECURE IT SYSTEMS, NORDSEC 2015 / [ed] Sonja Buchegger, Mads Dam, Springer, 2015, Vol. 9417, p. 156-171Conference paper (Refereed)
    Abstract [en]

    The investigation of fraud in business has been a staple for the digital forensics practitioner since the introduction of computers in business. Much of this fraud takes place in the retail industry. When trying to stop losses from insider retail fraud, triage, i.e. the quick identification of sufficiently suspicious behaviour to warrant further investigation, is crucial, given the amount of normal, or insignificant behaviour. It has previously been demonstrated that simple statistical threshold classification is a very successful way to detect fraud [Lopez-Rojas 2015]. However, in order to do triage successfully the thresholds have to be set correctly. Therefore, we present a simulation-based method to aid the user in accomplishing this, by simulating relevant fraud scenarios that are foreseen as possible and expected, in order to calculate optimal threshold limits. Compared with arbitrary thresholds, this method reduces the amount of labour spent on false positives and gives additional information, such as the total cost of a specific modelled fraud behaviour, for setting up a proper triage process. With our method we argue that we contribute to the allocation of resources for further investigations by optimizing the thresholds for triage and estimating the possible total cost of fraud. Using this method we manage to keep the losses below a desired percentage of sales, which the manager considers acceptable for keeping the business running properly.
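
    The paper above proposes simulating expected fraud scenarios to choose threshold limits that keep losses below a percentage of sales while limiting the triage workload. The following is a compressed sketch of that selection step only; the simulated data, the 1% loss target, the sweep range and the cost model are assumptions made for illustration and are not taken from the paper.

        # Pick the highest triage threshold that keeps simulated fraud losses
        # below a target share of sales, so as few alarms as possible need
        # manual follow-up. Data, target and sweep range are illustrative.
        import random

        random.seed(7)
        normal = [{"amount": random.uniform(10, 200), "fraud": False} for _ in range(4900)]
        fraud  = [{"amount": random.uniform(100, 400), "fraud": True} for _ in range(100)]
        sales = normal + fraud

        total_sales = sum(s["amount"] for s in sales)
        loss_target = 0.01 * total_sales          # keep losses below 1% of sales

        def undetected_loss(threshold):
            """Value of fraudulent sales that a threshold rule would not flag."""
            return sum(s["amount"] for s in sales if s["fraud"] and s["amount"] <= threshold)

        def alarms(threshold):
            """Number of transactions an investigator would have to triage."""
            return sum(1 for s in sales if s["amount"] > threshold)

        # Sweep from high to low and stop at the first (i.e. highest) threshold
        # whose undetected losses stay within the target.
        for threshold in range(400, 0, -5):
            if undetected_loss(threshold) <= loss_target:
                print(f"chosen threshold: {threshold}")
                print(f"undetected losses: {undetected_loss(threshold):.0f} (target {loss_target:.0f})")
                print(f"transactions flagged for triage: {alarms(threshold)}")
                break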

  • 35.
    Lopez-Rojas, Edgar Alonso
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Gorton, Dan
    Axelsson, Stefan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    RetSim: A ShoeStore Agent-Based Simulation for Fraud Detection2013In: 25th European Modeling and Simulation Symposium, EMSS 2013, 2013, p. 25-34Conference paper (Refereed)
    Abstract [en]

    RetSim is an agent-based simulator of a shoe store based on the transactional data of one of the largest retail shoe sellers in Sweden. The aim of RetSim is the generation of synthetic data that can be used for fraud detection research. Statistical and social network analysis (SNA) of the relations between staff and customers was used to develop and calibrate the model. Our ultimate goal is for RetSim to be usable to model relevant scenarios and generate realistic data sets that can be used by academia, and others, to develop and reason about fraud detection methods without leaking any sensitive information about the underlying data. Synthetic data has the added benefit of being easier, faster and cheaper to acquire for experimentation, even for those that have access to their own data. We argue that RetSim generates data that usefully approximates the relevant aspects of the real data.
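
    RetSim itself is calibrated against real retail data. The following is only a toy agent-based sketch of the general pattern described in the abstract above: agents for customers and salesmen, simple behaviour rules, a deliberately fraudulent agent, and a synthetic transaction log as output. All class names, parameters and distributions are invented for illustration and are not RetSim's.

        # Toy agent-based sketch (not RetSim): customers buy from salesmen
        # according to simple rules, one salesman skims sales, and every step
        # is written to a synthetic transaction log.
        import random

        random.seed(3)

        class Salesman:
            def __init__(self, sid, fraudulent=False):
                self.sid, self.fraudulent = sid, fraudulent

            def record_sale(self, amount):
                # A fraudulent salesman reports only part of the sale.
                return amount * 0.7 if self.fraudulent else amount

        class Customer:
            def __init__(self, cid):
                self.cid = cid

            def purchase_amount(self):
                return round(random.lognormvariate(3.5, 0.6), 2)

        def simulate(steps=1000, n_customers=50, n_salesmen=5):
            salesmen = [Salesman(i, fraudulent=(i == 0)) for i in range(n_salesmen)]
            customers = [Customer(i) for i in range(n_customers)]
            log = []
            for step in range(steps):
                c, s = random.choice(customers), random.choice(salesmen)
                amount = c.purchase_amount()
                log.append({"step": step, "customer": c.cid, "salesman": s.sid,
                            "amount": amount, "reported": s.record_sale(amount)})
            return log

        if __name__ == "__main__":
            log = simulate()
            skimmed = sum(t["amount"] - t["reported"] for t in log)
            print(f"{len(log)} transactions generated, {skimmed:.2f} skimmed by fraud")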

  • 36.
    Marculescu, Bogdan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Interactive Search-Based Software Testing: Development, Evaluation, and Deployment2017Doctoral thesis, comprehensive summary (Other academic)
  • 37.
    Marculescu, Bogdan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Practitioner-Oriented Visualization in an Interactive Search-Based Software Test Creation Tool2013Conference paper (Refereed)
    Abstract [en]

    Search-based software testing uses meta-heuristic search techniques to automate or partially automate testing tasks, such as test case generation or test data generation. It uses a fitness function to encode the quality characteristics that are relevant for a given problem, and guides the search to acceptable solutions in a potentially vast search space. From an industrial perspective, this opens up the possibility of generating and evaluating large numbers of test cases without raising costs to unacceptable levels. First, however, the applicability of search-based software engineering in an industrial setting must be evaluated. In practice, it is difficult to develop a priori a fitness function that covers all practical aspects of a problem. Interaction with human experts offers access to experience that is otherwise unavailable and allows the creation of a more informed and accurate fitness function. Moreover, our industrial partner has already expressed a view that the knowledge and experience of domain specialists are more important to the overall quality of the systems they develop than software engineering expertise. In this paper we describe our application of Interactive Search Based Software Testing (ISBST) in an industrial setting. We used SBST to search for test cases for an industrial software module, basing the search, in part, on interaction with a human domain specialist. Our evaluation showed that such an approach is feasible, though it also identified potential difficulties relating to the interaction between the domain specialist and the system.
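
    The core idea in the abstract above is a fitness function whose weighting is informed by interaction with a domain specialist rather than fixed a priori. The sketch below shows one simple way such an interactively weighted fitness could be composed; the objectives, weights and candidate encoding are placeholders invented here, not the ISBST tool's actual implementation.

        # Sketch of an interactively weighted fitness: objective scores are
        # fixed, but the weights come from a domain specialist and can be
        # revised between search iterations.
        from typing import Callable, Dict, List

        def make_fitness(objectives: Dict[str, Callable[[dict], float]],
                         weights: Dict[str, float]) -> Callable[[dict], float]:
            """Combine named objective scores into one weighted fitness value."""
            def fitness(candidate: dict) -> float:
                return sum(weights[name] * fn(candidate) for name, fn in objectives.items())
            return fitness

        # Example objectives for a hypothetical test-case candidate.
        objectives = {
            "boundary_closeness": lambda c: 1.0 / (1.0 + abs(c["input"] - c["boundary"])),
            "execution_cost":     lambda c: -0.01 * c["expected_runtime_ms"],
        }

        candidates: List[dict] = [
            {"input": 98, "boundary": 100, "expected_runtime_ms": 40},
            {"input": 100, "boundary": 100, "expected_runtime_ms": 400},
        ]

        # Round 1: weights elicited from the specialist; the slow boundary hit wins.
        fitness = make_fitness(objectives, {"boundary_closeness": 1.0, "execution_cost": 0.05})
        print(sorted(candidates, key=fitness, reverse=True)[0])

        # Round 2: the specialist decides runtime matters more; the cheap
        # near-boundary case now ranks first and the search continues.
        fitness = make_fitness(objectives, {"boundary_closeness": 1.0, "execution_cost": 1.0})
        print(sorted(candidates, key=fitness, reverse=True)[0])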

  • 38.
    Marculescu, Bogdan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Torkar, Richard
    Chalmers, SWE.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Transferring Interactive Search-Based Software Testing to Industry2018In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 142, p. 156-170Article in journal (Refereed)
    Abstract [en]

    Context: Search-Based Software Testing (SBST), and the wider area of Search-Based Software Engineering (SBSE), is the application of optimization algorithms to problems in software testing, and software engineering, respectively. New algorithms, methods, and tools are being developed and validated on benchmark problems. In previous work, we have also implemented and evaluated Interactive Search-Based Software Testing (ISBST) tool prototypes, with a goal to successfully transfer the technique to industry. Objective: While SBST and SBSE solutions are often validated on benchmark problems, there is a need to validate them in an operational setting, and to assess their performance in practice. The present paper discusses the development and deployment of SBST tools for use in industry, and reflects on the transfer of these techniques to industry. Method: In addition to previous work discussing the development and validation of an ISBST prototype, a new version of the prototype ISBST system was evaluated in the laboratory and in industry. This evaluation is based on an industrial System under Test (SUT) and was carried out with industrial practitioners. The Technology Transfer Model is used as a framework to describe the progression of the development and evaluation of the ISBST system, as it progresses through the first five of its seven steps. Results: The paper presents a synthesis of previous work developing and evaluating the ISBST prototype, as well as presenting an evaluation, in both academia and industry, of that prototype's latest version. In addition to the evaluation, the paper also discusses the lessons learned from this transfer. Conclusions: This paper presents an overview of the development and deployment of the ISBST system in an industrial setting, using the framework of the Technology Transfer Model. We conclude that the ISBST system is capable of evolving useful test cases for that setting, though improvements in the means the system uses to communicate that information to the user are still required. In addition, a set of lessons learned from the project are listed and discussed. Our objective is to help other researchers that wish to validate search-based systems in industry, and provide more information about the benefits and drawbacks of these systems.

  • 39.
    Mbiydzenyuy, Gideon
    Blekinge Institute of Technology, School of Computing.
    Strategic Service Selection Problem for Transport Telematic Services- An Optimization Approach2014In: 2014 IEEE INTERNATIONAL CONFERENCE ON SERVICES COMPUTING (SCC 2014), IEEE Computer Society, 2014, p. 520-527Conference paper (Refereed)
    Abstract [en]

    The selection, composition and integration of Transport Telematic Services (TTSs) is crucial for achieving cooperative Intelligent Transport Systems (ITS). To enable future adaptation, models for selecting and composing TTSs need to take into account possible future modifications, upgrades or downgrades of different TTSs without eroding the net benefits. To achieve this, a Strategic Service Selection Problem (SSSP) for TTSs is presented in this article. The problem involves selecting a set of TTSs that maximizes net societal benefits over a strategic time period, e.g., 10 years. The formulation of the problem offers possibilities to study design alternatives taking into account future modifications, extensions, upgrades or downgrades of different TTSs. Two decisive factors affecting the choices and modifications of TTSs are studied: 1) the effect of using governmental policies to mandate the introduction of TTSs, e.g., Road User Charging and eCall, and 2) the effects of allowing market forces to drive the choices of TTSs. Case study results indicate that, when determining combinations of TTSs that can be deployed in a period of 10 years, enforcing too many TTSs can retard the ability of the market to generate net benefits, even though more TTSs may end up being deployed.

  • 40.
    Minhas, Nasir Mehmood
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Masood, Sohaib
    UIIT PMAS Arid Agriculture University, PAK.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nadeem, Aamer
    Capital University of Science and Technology, PAK.
    A Systematic Mapping of Test Case Generation Techniques Using UML Interaction Diagram2018In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481Article in journal (Refereed)
    Abstract [en]

    Testing plays a vital role in assuring software quality. Among the activities performed during the testing process, test case generation is a challenging and labor-intensive task. Test case generation techniques based on UML models are getting the attention of researchers and practitioners. This study provides a systematic mapping of test case generation techniques based on interaction diagrams. The study compares the test case generation techniques regarding their capabilities and limitations, and it also assesses the reporting quality of the primary studies. It reveals that techniques based on UML interaction diagrams are mainly used for integration testing. The majority of the techniques use sequence diagrams as input models, while some use collaboration diagrams. A notable number of techniques use an interaction diagram together with some other UML diagram for test case generation. These techniques mainly focus on interaction, scenario, operational, concurrency, synchronization and deadlock-related faults.

    From the results of this study, we conclude that the studies presenting test case generation techniques using UML interaction diagrams fail to demonstrate the use of a rigorous methodology, and that these techniques have not been empirically evaluated in an industrial context. Our study revealed the need for tool support to facilitate the transfer of solutions to industry.
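
    Most of the techniques surveyed in the entry above take a sequence diagram as input and derive message sequences to exercise in integration tests. As a deliberately simplified illustration of that general idea, and not of any specific surveyed technique, the sketch below represents a sequence diagram as an ordered list of messages with an alternative fragment and enumerates the scenario paths a generator would have to cover; the diagram contents are invented.

        # Simplified illustration: a sequence diagram as an ordered list of
        # messages, where an ("alt", [branch1, branch2]) entry forks the
        # scenario. Enumerating paths yields the sequences to test.
        def scenario_paths(diagram):
            paths = [[]]
            for step in diagram:
                if isinstance(step, tuple) and step[0] == "alt":
                    paths = [p + branch for p in paths for branch in step[1]]
                else:
                    paths = [p + [step] for p in paths]
            return paths

        login_diagram = [
            "User->UI: submitCredentials",
            "UI->AuthService: validate",
            ("alt", [
                ["AuthService->UI: ok", "UI->User: showDashboard"],
                ["AuthService->UI: rejected", "UI->User: showError"],
            ]),
        ]

        for i, path in enumerate(scenario_paths(login_diagram), 1):
            print(f"test scenario {i}: " + " ; ".join(path))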

  • 41.
    Minhas, Nasir Mehmood
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Regression testing goals: View of practitioners and researchers2017In: 24th Asia-Pacific Software Engineering Conference Workshops (APSECW), IEEE, 2017, p. 25-32Conference paper (Refereed)
    Abstract [en]

    Context: Regression testing is a well-researched area. However, the majority of regression testing techniques proposed by researchers are not getting the attention of practitioners. Communication gaps between industry and academia and disparities in regression testing goals are the main reasons. Close collaboration can help bridge the communication gaps and resolve the disparities.

    Objective: The study aims at exploring the views of academics and practitioners on the goals of regression testing. The purpose is to investigate the commonalities and differences in their viewpoints and to define some common goals for the success of regression testing.

    Method: We conducted a focus group study with 7 testing experts from industry and academia: 4 testing practitioners from 2 companies and 3 researchers from 2 universities. We followed the GQM approach to elicit regression testing goals, information needs, and measures.

    Results: 43 regression testing goals were identified by the participants, which were reduced to 10 on the basis of similarity. Later, during the priority assignment process, 5 goals were discarded because the priority assigned to them was very low. Participants identified 47 information needs/questions required to evaluate the success of regression testing with reference to goal G5 (confidence), which were then reduced to 10 on the basis of similarity. Finally, we identified measures to gauge the information needs/questions corresponding to goal G5.

    Conclusions: We observed that the participation level of practitioners and researchers during the elicitation of goals and questions was the same. We found a certain level of agreement between the participants regarding the regression testing definitions and goals, but some disagreement regarding the priorities of the goals. We also identified the need to implement a regression testing evaluation framework in the participating companies.

  • 42.
    Mohammadi Kho'i, Felix
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Jahid, Jawed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparing Native and Hybrid Applications with focus on Features2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Nowadays smartphones and smartphone applications are a part of our daily life. There is a variety of dissimilar operating systems on the market, which is an obstacle for developers when it comes to developing a single application for different operating systems. Hybrid application development has therefore emerged as a potential substitute, and the evolution of the hybrid approach has made companies consider it a viable alternative when producing mobile applications. This research paper aims to compare native and hybrid application development on a feature level, to provide scientific evidence for researchers and companies choosing an application development approach, as well as vital information about both native and hybrid applications.

    This study is based on both a literature study and an empirical study. The sources used are Summon@BTH, Google Scholar and IEEE Xplore. To select relevant articles, the snowballing approach was used together with inclusion and exclusion criteria.

    The authors conclude that native development is a better way to develop more advanced applications that use more of the device hardware, while hybrid development is a perfectly viable choice when developing content-centric applications.

  • 43.
    Moraes, Ana Luiza Dallora
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Supplementary Material of: “Prognosis of dementia with machine learning and microsimulation techniques: a systematic literature review”.2016Other (Other academic)
    Abstract [en]

    This document contains the supplementary material regarding the systematic literature review entitled: “Prognosis of dementia with machine learning and microsimulation techniques: a systematic literature review”.

  • 44. Morscheuser, Tobias
    et al.
    Kozma, Felix
    Cloud Service Environment PostgreSQL vs. Cassandra2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
  • 45.
    Motyka, Mikael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Impact of Usability for Particle Accelerator Software Tools Analyzing Availability and Reliability2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The importance of considering usability when developing software is widely recognized in literature. This non-functional system aspect focuses on the ease, effectiveness and efficiency of handling a system. However, usability cannot be defined as a single fixed system aspect, since it depends on the field of application. In this work, the impact of usability for accelerator tools targeting availability and reliability analysis is investigated by further developing the already existing software tool Availsim. The tool, although proven to be unique by accounting for special accelerator complexities not possible to model with commercial software, is not used across facilities due to constraints caused by previous modifications. The study was conducted in collaboration with the European Spallation Source ERIC, a multidisciplinary research center based on the world’s most powerful neutron source, currently being built in Lund, Sweden. The work was conducted in the safety group within the accelerator division, where the availability and reliability studies were performed. Design Science Research was used as the research methodology to answer how the proposed tool can help improve usability for the analysis domain, and to identify existing usability issues in the field. To obtain an overview of the current field, three questionnaires were sent out and one interview was conducted, listing important properties to consider for the tool to be developed along with how usability is perceived in the accelerator field of analysis. The developed software tool was evaluated with the After-Scenario Questionnaire and the System Usability Scale, two standardized ways of measuring usability, along with custom-made statements explicitly targeting important attributes found when questioning the researchers. The results highlighted issues in the current field, listing multiple tools used for the analysis along with their positive and negative aspects, and indicating a lengthy and tedious process for obtaining the required analysis results. It was also found that the adapted Availsim version improves on the usability of the previous versions, listing specific attributes that could be identified as correlating with the improved usability, fulfilling the purpose of the study. However, the results indicate that existing commercial tools obtained higher scores on the standardized usability tests than the new Availsim version, pointing towards room for improvement.
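
    The evaluation in the entry above uses the System Usability Scale (SUS), whose scoring rule is standard: each of the ten items is answered on a 1 to 5 scale, odd items contribute (score - 1), even items contribute (5 - score), and the summed contributions are multiplied by 2.5 to give a 0 to 100 score. A small sketch of that computation, with made-up responses, follows.

        # Standard SUS scoring: odd items contribute (score - 1), even items
        # contribute (5 - score); the sum is scaled by 2.5 to a 0-100 range.
        def sus_score(responses):
            """responses: list of ten answers, each an integer from 1 to 5."""
            if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
                raise ValueError("SUS needs ten answers in the range 1-5")
            contributions = [
                (r - 1) if i % 2 == 0 else (5 - r)     # index 0 is item 1 (odd)
                for i, r in enumerate(responses)
            ]
            return 2.5 * sum(contributions)

        # Made-up responses from one participant.
        print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))   # -> 85.0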

  • 46.
    Musatoiu, Mihai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An approach to choosing the right distributed file system: Microsoft DFS vs. Hadoop DFS2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. An important goal of most IT groups is to manage server resources in such a way that their users are provided with fast, reliable and secure access to files. The modern needs of organizations imply that resources are often distributed geographically, asking for new design solutions for the file systems to remain highly available and efficient. This is where distributed file systems (DFSs) come into the picture. A distributed file system (DFS), as opposed to a "classical", local, file system, is accessible across some kind of network and allows clients to access files remotely as if they were stored locally.

    Objectives. This paper has the goal of comparatively analyzing two distributed file systems, Microsoft DFS (MSDFS) and Hadoop DFS (HDFS). The two systems come from different "worlds" (proprietary - Microsoft DFS - vs. open-source - Hadoop DFS); the abundance of solutions and the variety of choices that exist today make such a comparison more relevant.

    Methods. The comparative analysis is done on a cluster of 4 computers running dual-installations of Microsoft Windows Server 2012 R2 (the MSDFS environment) and Linux Ubuntu 14.04 (the HDFS environment). The comparison is done on read and write operations on files and sets of files of increasing sizes, as well as on a set of key usage scenarios.

    Results. Comparative results are produced for reading and writing operations of files of increasing size - 1 MB, 2 MB, 4 MB and so on up to 4096 MB - and of sets of small files (64 KB each) amounting to totals of 128 MB, 256 MB and so on up to 4096 MB. The results expose the behavior of the two DFSs under different types of stressful activities (when the size of the transferred file increases, as well as when the quantity of data is divided into (tens of) thousands of small files). The behavior in the key usage scenarios is observed and analyzed.

    Conclusions. HDFS performs better at writing large files, while MSDFS is better at writing many small files. At read operations, the two show similar performance, with a slight advantage for MSDFS. In the key usage scenarios, HDFS shows more flexibility, but MSDFS could be the better choice depending on the needs of the users (for example, most of the common functions can be configured through the graphical user interface).
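
    The comparison in the entry above rests on timing read and write operations for files of doubling sizes. A minimal local sketch of that timing loop is given below; it writes to and reads from an ordinary temporary directory, whereas the thesis runs the equivalent operations against MSDFS and HDFS, so the path, the fsync step and the size range here are stand-ins.

        # Time writing and reading files of doubling size; a temp directory
        # stands in for the distributed file system mounts used in the thesis.
        import os
        import tempfile
        import time

        def time_write_read(directory, size_mb):
            data = os.urandom(size_mb * 1024 * 1024)
            path = os.path.join(directory, f"bench_{size_mb}MB.bin")

            start = time.perf_counter()
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())           # make sure the write reaches storage
            write_s = time.perf_counter() - start

            start = time.perf_counter()
            with open(path, "rb") as f:
                f.read()
            read_s = time.perf_counter() - start

            os.remove(path)
            return write_s, read_s

        if __name__ == "__main__":
            with tempfile.TemporaryDirectory() as tmp:
                size = 1
                while size <= 64:              # 1, 2, 4, ... MB; the thesis goes to 4096
                    w, r = time_write_read(tmp, size)
                    print(f"{size:>4} MB  write {w:.3f} s  read {r:.3f} s")
                    size *= 2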

  • 47.
    Navarro, Diego
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Simplifying Game Mechanics: Gaze as an Implicit Interaction Method2017In: SIGGRAPH Asia 2017 Technical Briefs, SA 2017, ACM Digital Library, 2017, article id 132534Conference paper (Refereed)
  • 48.
    Nilsson, Adam
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Assessment of the Microsoft Kinect v1 RGB-D Sensor and 3D Object Recognition as a Means of Drift Correction in Head mounted Virtual Reality Systems2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The release of the Oculus Rift Development Kits and other similar hardware has led to something of a resurrection of interest in virtual reality hardware, but such hardware has problems in the form of, for example, drift errors, where the estimated forward direction veers off in an arbitrary direction. These drift errors accumulate over time and can lead to reduced user immersion and a need for the user to continuously calibrate the hardware. Objective. This study aims to investigate the possibility of drift error correction through the use of the Microsoft Kinect v1 RGB-D sensor and 3D object tracking technology. Method. Through the creation of a prototype application that utilizes object recognition to find the Oculus Rift DK1 on the user’s head and calculates its estimated six degrees of freedom, a "real world" forward vector can be produced. This vector’s authenticity can then be evaluated through an experiment that compares it to the forward direction reported by the Oculus Rift DK1. Result. The result is an application that can successfully recognize the Oculus Rift DK1 in a scene and deduce what its forward direction is. Due to what is suspected to be hardware malfunctions in the Oculus Rift DK1, the test results are not satisfactory, although observations still indicate that the application would perform within the set boundaries should the malfunctions not be present. Conclusion. Even though the experiment’s results are not ideal, the application shows promise and should be studied further with more accurate measuring equipment.
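
    The evaluation in the entry above compares a "real world" forward vector estimated from object recognition with the forward direction reported by the headset. The drift between the two can be expressed as the angle between the vectors, as the short sketch below illustrates; the example vectors are made up.

        # Drift expressed as the angle between the headset-reported forward
        # vector and the externally estimated one. Example vectors are made up.
        import math

        def angle_deg(u, v):
            """Angle in degrees between two 3D vectors."""
            dot = sum(a * b for a, b in zip(u, v))
            norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
            cos = max(-1.0, min(1.0, dot / norm))   # clamp against rounding error
            return math.degrees(math.acos(cos))

        rift_forward   = (0.035, 0.0, 0.999)    # reported by the headset's tracking
        kinect_forward = (0.0,   0.0, 1.0)      # estimated from 3D object recognition

        print(f"drift: {angle_deg(rift_forward, kinect_forward):.2f} degrees")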

  • 49.
    Nilsson, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Att hindra en Notpetya- och WannaCry-attack: Redogörelse med förebyggande metoder och tekniker2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

    WannaCry and NotPetya are two ransomware programs that use EternalBlue, the penetration tool leaked from the National Security Agency (NSA), to gain operating-system privileges on a Windows system that allows communication with its Server Message Block (SMB) server. WannaCry and NotPetya exploit this by searching all of the system's storage media for user files and then encrypting them with both symmetric and asymmetric encryption algorithms.

    To gain access to the key used to decrypt the files, the victim must pay the perpetrator a specific sum, usually in Bitcoin. There is no guarantee that the files will be recovered after payment, only the perpetrator's word, expressed in a ransom message that first appears after all files have been encrypted.

    There are several methods and techniques that can be used to build a defence against ransomware infecting a system or encrypting data. One method to prevent NotPetya and WannaCry from infecting a system is to block all communication with the Windows system's SMB server. Since this prevents all programs from communicating with the system through the SMB protocol, the method is only an option if the system does not depend on functions such as file and printer sharing.

    One method to prevent data from being lost in the event of an infection is to continuously back up files to external storage media such as CDs, USB drives and hard disks. This makes it possible to recover data after an infection, and the victim therefore does not need to rely on the perpetrator to get their files back.
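
    The blocking approach described in the abstract above can be verified from another machine by checking whether the SMB port (TCP 445) is still reachable. The sketch below does such a check with Python's standard socket module; the host address and timeout are illustrative assumptions, and the check should only be run against systems you administer.

        # Check whether SMB (TCP port 445) on a host is reachable, e.g. to
        # verify that a rule blocking SMB traffic is actually in effect.
        import socket

        def smb_port_open(host, timeout=2.0):
            try:
                with socket.create_connection((host, 445), timeout=timeout):
                    return True
            except OSError:
                return False

        if __name__ == "__main__":
            host = "192.168.1.10"    # hypothetical Windows machine on the local network
            state = "reachable (blocking NOT in effect)" if smb_port_open(host) else "blocked or closed"
            print(f"SMB on {host}: {state}")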

  • 50.
    Nilsson, Jim
    et al.
    Blekinge Institute of Technology.
    Valtersson, Peter
    Blekinge Institute of Technology.
    Machine Vision Inspection of the Lapping Process in the Production of Mass Impregnated High Voltage Cables2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Mass impregnated high voltage cables are used in, for example, submarine electric power transmission. One of the production steps of such cables is the lapping process in which several hundred layers of special purpose paper are wrapped around the conductor of the cable. It is important for the mechanical and electrical properties of the finished cable that the paper is applied correctly, however there currently exists no reliable way of continuously ensuring that the paper is applied correctly.

    Objective. The objective of this thesis is to develop a prototype of a cost-effective machine vision system which monitors the lapping process and detects and records any errors that may occur during the process; with an accuracy of at least one tenth of a millimetre.

    Methods. The requirements of the system are specified and suitable hardware is identified. The errors are measured using a method in which the images are projected down to one axis, together with other signal processing methods. Experiments are performed in which the accuracy and performance of the system are tested in a controlled environment.

    Results. The results show that the system is able to detect and measure errors accurately down to one tenth of a millimetre while operating at a frame rate of 40 frames per second. The hardware cost of the system is less than €200.

    Conclusions. A cost-effective machine vision system capable of performing measurements accurate down to one tenth of a millimetre can be implemented using the inexpensive Raspberry Pi 3 and Raspberry Pi Camera Module V2. Th
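
    The measurement idea described in the entry above, projecting the image down to one axis and applying signal processing to locate the paper edges, can be illustrated with a small numpy sketch. The synthetic image, the intensity threshold and the millimetre-per-pixel scale below are invented for illustration; they are not the thesis's actual calibration values or algorithm.

        # Collapse a grayscale frame onto one axis, find dark runs (gaps
        # between paper layers) and convert their widths to millimetres.
        import numpy as np

        MM_PER_PIXEL = 0.05                      # assumed calibration, not the thesis value

        def gap_widths_mm(image, dark_threshold=0.3):
            """Project a grayscale image (rows x cols, values 0..1) onto the
            column axis and return widths, in mm, of contiguous dark runs."""
            profile = image.mean(axis=0)         # projection down to one axis
            dark = profile < dark_threshold
            widths, run = [], 0
            for is_dark in np.append(dark, False):   # sentinel flushes the last run
                if is_dark:
                    run += 1
                elif run:
                    widths.append(round(run * MM_PER_PIXEL, 3))
                    run = 0
            return widths

        # Synthetic frame: bright paper with two dark gaps, 2 and 7 pixels wide.
        frame = np.ones((100, 400))
        frame[:, 120:122] = 0.1
        frame[:, 300:307] = 0.1

        print(gap_widths_mm(frame))              # -> [0.1, 0.35]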
