1 - 50 of 102
• 1.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. NODA Intelligent Systems AB, SWE.
Trend analysis to automatically identify heat program changes (2017). In: Energy Procedia, Elsevier, 2017, Vol. 116, pp. 407-415. Conference paper (Refereed)

The aim of this study is to improve the monitoring and control of heating systems located at customer buildings through the use of a decision support system. To achieve this, the proposed system applies a two-step classifier to detect manual changes to the temperature of the heating system. We use data from the Swedish company NODA, active in energy optimization and services for energy efficiency, to train and test the suggested system. The decision support system is evaluated through an experiment and the results are validated by experts at NODA. The results show that the decision support system can detect changes within three days of their occurrence, using only daily average measurements.
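
The two-step detection idea described above — flag a deviation from a recent baseline, then confirm it persists — can be sketched as follows. This is an illustrative toy, not NODA's or the paper's actual classifier; the window, threshold and persistence values are hypothetical:

```python
from statistics import mean

def detect_setpoint_change(daily_avgs, window=7, threshold=2.0, persist=3):
    """Return the index of the first day where the daily average deviates
    from the trailing-window mean by more than `threshold`, and the
    deviation persists for `persist` consecutive days; None otherwise."""
    for i in range(window, len(daily_avgs) - persist + 1):
        baseline = mean(daily_avgs[i - window:i])          # step 1: recent baseline
        run = daily_avgs[i:i + persist]
        if all(abs(v - baseline) > threshold for v in run):  # step 2: sustained shift
            return i
    return None

# A simulated heating curve: stable around 40 degrees, then manually lowered.
series = [40.1, 39.8, 40.0, 40.2, 39.9, 40.0, 40.1, 35.2, 35.0, 34.9, 35.1]
print(detect_setpoint_change(series))
```

Requiring the shift to persist is what keeps single-day weather spikes from being reported as manual heat program changes.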

• 2.
Harkivskij Nacionalnij Universitet Radioelectroniki, UKR.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Cloud incident response model (2016). In: Proceedings of 2016 IEEE East-West Design and Test Symposium, EWDTS 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016. Conference paper (Refereed)

This paper addresses the problem of incident response in clouds. A conventional incident response model is formulated to serve as a foundation for the cloud incident response model. Minimization of incident handling time is considered the key criterion of the proposed cloud incident response model; it is achieved at the expense of embedding redundancy into the cloud infrastructure, represented by Network and Security Controllers, and introducing a Security Domain for threat analysis and cloud forensics. These architectural changes are discussed and applied within the cloud incident response model. © 2016 IEEE.

• 3.
Kharkiv National University of Radioelectronics, UKR.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
The state of ransomware: Trends and mitigation techniques (2017). In: Proceedings of 2017 IEEE East-West Design and Test Symposium, EWDTS 2017, Institute of Electrical and Electronics Engineers Inc., 2017, article id 8110056. Conference paper (Refereed)

This paper contains an analysis of the payload of popular ransomware for the Windows, Android, Linux, and Mac OS X platforms. Namely, VaultCrypt (CrypVault), TeslaCrypt, NanoLocker, Trojan-Ransom.Linux.Cryptor, Android Simplelocker, OSX/KeRanger-A, WannaCry, Petya, NotPetya, Cerber, Spora, and Serpent ransomware were put under the microscope. A set of characteristics was proposed to be used for the analysis. The purpose of the analysis is generalization of the collected data that describes the behavior and design trends of modern ransomware. The objective is to suggest ransomware threat mitigation techniques based on the obtained information. The novelty of the paper is the analysis methodology, based on a chosen set of 13 key characteristics that helps to determine similarities and differences throughout the list of ransomware put under analysis. Most of the ransomware samples presented were manually analyzed by the authors, eliminating contradictions in descriptions of ransomware behavior published by different malware research laboratories through verification of the payload of the latest versions of ransomware. © 2017 IEEE.

• 4.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey (2019). In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367. Journal article (Refereed)

Modern software development relies on a combination of development and re-use of technical assets, e.g. software components, libraries and APIs. In the past, re-use was mostly conducted with internal assets, but today external assets are also common: open source, commercial off-the-shelf (COTS) components, and assets developed through outsourcing. This access to more asset alternatives presents new challenges regarding which assets to choose and how to make this decision. To support decision-makers, decision theory has been used to develop decision models for asset selection. However, very little industrial data has been presented in the literature about the usefulness, or even perceived usefulness, of these models. Additionally, only limited information has been presented about which model characteristics determine practitioner preference for one model over another.

Objective: The objective of this work is to evaluate which characteristics of decision models for asset selection determine industrial practitioners' preference for a model, when given the choice between a decision model of high precision and a model of high speed.

Method: An industrial questionnaire survey is performed in which a total of 33 practitioners, of varying roles, from 18 companies are tasked to compare two decision models for asset selection. Textual analysis and formal and descriptive statistics are then applied to the survey responses to answer the study's research questions.

Results: The study shows that the practitioners had a clear preference for the decision model that emphasised speed over the one that emphasised decision precision. This preference was traced to seven characteristics of the favoured model: it was perceived as faster, had lower complexity, was more flexible in use for different decisions, was more agile in how it could be used in operation, emphasised people, emphasised "good enough" precision, and made it possible to fail fast if a decision was a failure. These are hence seven characteristics that the practitioners considered important for their acceptance of the model.

Conclusion: Industrial practitioner preference, which relates to acceptance, of decision models for asset selection depends on multiple characteristics that must be considered when developing a model for different types of decisions, such as operational day-to-day decisions as well as more critical tactical or strategic decisions. The main contribution of this work is seven identified characteristics that can serve as industrial requirements for future research on decision models for asset selection.

• 5.
Tecnológico de Monterrey, MEX.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap. Tecnológico de Monterrey, MEX.
Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model (2019). Conference paper (Refereed)
• 6. Arlebrink, Ludvig
Image Quality-Driven Level of Detail Selection on a Triangle Budget (2018). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

Background. Level of detail is an optimization technique used by several modern games. A level of detail system uses simplified triangular meshes to determine the optimal combination of 3D models to use in order to meet a user-defined criterion for achieving fast performance. Prior work has also pre-computed level of detail settings in order to apply only the most optimal settings for any given view in a 3D scene.
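
Level of detail selection under a fixed triangle budget can be sketched as a greedy refinement: start every model at its coarsest variant, then repeatedly spend triangles where they reduce visual error the most. This is a simplified illustration under stated assumptions — the model names, triangle counts and error values are hypothetical, and it is neither the thesis's nor Unity's actual algorithm:

```python
def select_lods(models, budget):
    """models: name -> list of (triangle_count, visual_error),
    ordered finest (index 0) to coarsest. Returns the chosen
    LOD index per model, keeping total triangles within budget."""
    choice = {name: len(lods) - 1 for name, lods in models.items()}  # coarsest
    used = sum(models[n][i][0] for n, i in choice.items())
    while True:
        best, best_gain = None, 0.0
        for name, lods in models.items():
            i = choice[name]
            if i == 0:
                continue  # already at the finest LOD
            extra = lods[i - 1][0] - lods[i][0]          # added triangles
            gain = (lods[i][1] - lods[i - 1][1]) / extra  # error drop per triangle
            if used + extra <= budget and gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            return choice  # no refinement fits the budget any more
        used += models[best][choice[best] - 1][0] - models[best][choice[best]][0]
        choice[best] -= 1

rock = [(1000, 0.0), (400, 2.0), (100, 8.0)]
tree = [(3000, 0.0), (1200, 1.0), (300, 6.0)]
print(select_lods({"rock": rock, "tree": tree}, budget=2000))
```

The per-triangle gain criterion is what makes the budget go to whichever model is currently the worst offender on screen, rather than refining all models uniformly.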

Objectives. The aim of this thesis is to determine the difference in image quality between the custom level of detail pre-processing approach proposed in this paper and the level of detail system built into the game engine Unity. This is investigated by implementing a framework in Unity for the proposed pre-processing approach and designing representative test scenes to collect all data samples. Once the data is collected, the image quality produced by the proposed approach is compared to Unity's existing level of detail approach using perceptual-based metrics.

Methods. The method used is an experiment. Unity's method was chosen because of the popularity of the engine, and it was decided to implement the proposed level of detail pre-processing approach in Unity as well, to allow the fairest possible comparison with Unity's implementation. The two approaches differ only in how the level of detail is selected; the rest of the rendering pipeline is exactly the same.

Results. The pre-processing time ranged from 13 to 30 hours. The results showed only a small difference in image quality between the two approaches; Unity's built-in system provided a better overall image quality in two out of three test scenes.

Conclusions. Due to the pre-processing time and the lack of an overall improvement, it was concluded that the proposed level of detail pre-processing approach is not feasible.

• 7.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models: An unsupervised learning approach in time-series data (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

Background: Detecting anomalies in time-series data is a task that can be done with the help of data-driven machine learning models. This thesis investigates if, and how well, different machine learning models with an unsupervised approach can detect anomalies in the e-Transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system.

Objectives: The objective of this thesis work is to compare four different machine learning models in order to find the most relevant one. The best performing models are determined by the evaluation metric F1-score. The intersection of the best models is also evaluated in order to decrease the number of false positives and thereby make the model more precise.

Methods: A relevant time-series data sample with 10-minute interval data points from the Ericsson Wallet Platform was used. A number of steps were taken, such as data handling, pre-processing, normalization, training and evaluation. Two relevant features were trained separately as one-dimensional data sets. The two features relevant for finding delays in the system, and used in this thesis, are Mean wait (ms) and Mean * N, where N is the number of calls to the system. The evaluation metrics used were true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score and the Jaccard index. The Jaccard index is a metric that reveals how similar the algorithms' detections are. Since the detection is binary, each data point in the time-series data is classified.
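
The metrics named above can be computed directly from binary per-point labels. A minimal sketch (not the thesis code) of the F1-score over ground-truth labels and the Jaccard index between two detectors' flagged sets:

```python
def confusion_counts(y_true, y_pred):
    """Count (TP, FP, FN, TN) for binary anomaly labels (1 = anomaly)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def jaccard(pred_a, pred_b):
    """Jaccard similarity between two detectors' flagged point sets:
    |A intersect B| / |A union B|."""
    a = {i for i, p in enumerate(pred_a) if p == 1}
    b = {i for i, p in enumerate(pred_b) if p == 1}
    return len(a & b) / len(a | b) if a | b else 1.0

def intersection(pred_a, pred_b):
    """Combined detector: flag only points both models flag,
    which can only reduce false positives."""
    return [int(a == 1 and b == 1) for a, b in zip(pred_a, pred_b)]
```

The `intersection` helper mirrors the thesis's idea of intersecting the two best models: a point must be flagged by both, so every false positive unique to one model is dropped.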

Results: The results reveal the two best performing models with regard to the F1-score. The intersection evaluation reveals if, and how well, a combination of the two best performing models can reduce the number of false positives.

Conclusions: The conclusion of this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.

• 8.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Designing Electronic Waybill Solutions for Road Freight Transport (2016). Doctoral thesis, comprising papers (Other academic)

In freight transportation, a waybill is an important document that contains essential information about a consignment. The focus of this thesis is on a multi-purpose electronic waybill (e-Waybill) service, which can provide the functions of a paper waybill and is capable of storing, at least, the information present in a paper waybill. In addition, the service can be used to support other existing Intelligent Transportation System (ITS) services by utilizing synergies with them. Additionally, information entities from the e-Waybill service are investigated for the purpose of knowledge-building concerning freight flows.

A systematic review of the state of the art of the e-Waybill service reveals several limitations, such as a limited focus on supporting ITS services. Five different conceptual e-Waybill solutions (which can be seen as abstract system designs for implementing the e-Waybill service) are proposed. The solutions are investigated for functional and technical (non-functional) requirements, which can potentially impose constraints on a potential system for implementing the e-Waybill service. Further, the service is investigated for information and functional synergies with other ITS services. For the information synergy analysis, the required input information entities for different ITS services are identified; if at least one information entity can be provided by an e-Waybill at the right location, we regard it as a synergy. Additionally, a service design method has been proposed for supporting the process of designing new ITS services, which primarily utilizes functional synergies between the e-Waybill and different existing ITS services. The suggested method is applied to design a new ITS service, the Liability Intelligent Transport System (LITS) service. The purpose of the LITS service is to support the process of identifying when and where a consignment has been damaged and who was responsible when the damage occurred. Furthermore, information entities from e-Waybills are utilized for building improved knowledge concerning freight flows. A freight and route estimation method has been proposed for building improved knowledge, e.g., in national road administrations, of the movement of trucks and freight.

The results from this thesis can be used to support the choice of a practical e-Waybill service implementation that has the possibility to provide high synergy with ITS services. This may lead to a higher utilization of ITS services and more sustainable transport, e.g., in terms of reduced congestion and emissions. Furthermore, the implemented e-Waybill service can be an enabler for collecting consignment and traffic data and converting the data into useful traffic information. In particular, the service can lead to increasing amounts of digitally stored data about consignments, which can lead to improved knowledge of the movement of freight and trucks. This knowledge may be helpful when making decisions concerning road taxes, fees, and infrastructure investments.

• 9.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Electronic Waybill Solutions: A Systematic Review. In: Journal of Special Topics in Information Technology and Management, ISSN 1385-951X, E-ISSN 1573-7667. Journal article (Other academic)

A critical component in freight transportation is the waybill, a transport document that contains essential information about a consignment. Actors within the supply chain handle not only the freight but also vast amounts of information, which are often unclear due to various errors. An electronic waybill (e-Waybill) solution is an electronic replacement for the paper waybill that can improve on it, e.g., by ensuring error-free storage and flow of information. In this paper, a systematic review using the snowball method is conducted to investigate the state of the art of e-Waybill solutions. After performing three iterations of the snowball process, we identified eleven studies for further evaluation and analysis due to their strong relevance. The studies are mapped in relation to each other and a classification of the e-Waybill solutions is constructed. Most of the studies identified in our review support the benefits of electronic documents, including e-Waybills. Typically, most of the research papers reviewed support EDI (Electronic Data Interchange) for implementing e-Waybills. However, limitations exist due to high costs that make it less affordable for small organizations. Recent studies point to alternative technologies, which we list in this paper. Additionally, we show that most studies focus on the administrative benefits, while few studies investigate the potential of e-Waybill information for achieving services such as estimated time of arrival and real-time tracking and tracing.

• 10.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
Finding a needle in a haystack - A comparative study of IPv6 scanning methods (2019). In: Proceedings of The 6th International Symposium on Networks, Computers and Communications (ISNCC 2019), IEEE, 2019. Conference paper (Refereed)

It has previously been assumed that the size of an IPv6 network would make it impossible to scan the network for vulnerable hosts. Recent work has shown this to be false, and several methods for scanning IPv6 networks have been suggested. However, most of these are based on external information like DNS, or on pattern inference, which requires large amounts of known IP addresses. In this paper, DeHCP, a novel approach based on delimiting IP ranges with closely clustered hosts, is presented and compared to three previously known scanning methods. The method is shown to work in an experimental setting, with results comparable to those of the previously suggested methods, and is also shown to have the advantage of not being limited to a specific protocol or probing method. Finally, we show that the scan can be executed across multiple VLANs.

• 11.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
RISC-V Compiler Performance: A Comparison between GCC and LLVM/clang (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

RISC-V is a new open-source instruction set architecture (ISA) whose first mass-produced processors appeared in December 2016. It focuses on both efficiency and performance, and differs from other open-source architectures by not having a copyleft license, permitting vendors to freely design, manufacture and sell RISC-V chips without any fees and without having to share their modifications to the reference implementations of the architecture. The goal of this thesis is to evaluate the GCC and LLVM/clang compilers' support for the RISC-V target and their ability to optimize for the architecture. The performance is evaluated by executing the CoreMark and Dhrystone benchmarks, both popular industry-standard programs for evaluating performance on embedded processors. They are compiled with both GCC and LLVM/clang at different optimization levels and compared in performance per clock to the ARM architecture, which is mature yet rather similar to RISC-V. Compiler support for the RISC-V target is still in development, and the focus of this thesis is the current performance differences between the GCC and LLVM compilers on this architecture. The platforms the benchmarks are executed on are the Freedom E310 processor on the SiFive HiFive1 board for RISC-V and an ARM Cortex-M4 processor by Freescale on the Teensy 3.6 board. The Freedom E310 is almost identical to the reference Berkeley Rocket RISC-V design, and the ARM Cortex-M4 processor has a similar clock speed and is aimed at a similar target audience.

The results showed that the -O2 and -O3 optimization levels on GCC for RISC-V performed very well in comparison to our ARM reference. On the lower -O1 optimization level, at -O0 (no optimizations), and at -Os (which optimizes for a smaller executable code size), GCC performs much worse than ARM: 46% of the performance at -O1, 8.2% at -Os and 9.3% at -O0 on the CoreMark benchmark, with similar results in Dhrystone except at -O1, where it performed as well as ARM. When turning off optimizations (-O0), GCC for RISC-V reached 9.2% of the ARM performance in CoreMark and 11% in Dhrystone, which was unexpected and needs further investigation. LLVM/clang, on the other hand, crashed when trying to compile our CoreMark benchmark, and on Dhrystone the optimization options made a very minor impact on performance, leaving it at 6.0% of the performance of GCC at -O3 and 5.6% of the performance of ARM at -O3; so even with optimizations it was still slower than GCC without optimizations.

In conclusion, the performance of RISC-V with the GCC compiler at the higher optimization levels is very good considering how young the RISC-V architecture is. There does seem to be room for improvement at the lower optimization levels, which in turn could also possibly increase the performance of the higher optimization levels. With the LLVM/clang compiler, on the other hand, a lot of work needs to be done to make it competitive in both performance and stability with the GCC compiler and other architectures. Why the -O0 optimization level is so considerably slower on RISC-V than on ARM was also very unexpected and needs further investigation.
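
The comparison above is expressed in performance per clock: since the two boards run at different clock rates, raw benchmark scores are normalized by frequency before being compared. A small sketch of that normalization; the scores and frequencies below are placeholders, not measurements from the thesis:

```python
def perf_per_clock(iterations_per_sec, clock_hz):
    """Benchmark score normalized by clock rate: iterations per clock cycle."""
    return iterations_per_sec / clock_hz

def relative_perf(score_a, clock_a, score_b, clock_b):
    """Performance of platform A as a fraction of platform B, per clock cycle."""
    return perf_per_clock(score_a, clock_a) / perf_per_clock(score_b, clock_b)

# Placeholder numbers: a RISC-V board at 320 MHz scoring 2.0M iterations/s
# and an ARM board at 180 MHz scoring 1.5M iterations/s.
ratio = relative_perf(2.0e6, 320e6, 1.5e6, 180e6)
print(f"{ratio:.2f}")
```

Normalizing this way isolates what the compiler and microarchitecture achieve per cycle, instead of rewarding whichever board simply clocks higher.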

• 12.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Knowledge Classification for Supporting Effort Estimation in Global Software Engineering Projects (2015). Licentiate thesis, comprising papers (Other academic)

Background: Global Software Engineering (GSE) has become a widely applied operational model for the development of software systems; it can increase profits and decrease time-to-market. However, there are many challenges associated with development of software in a globally distributed fashion. There is evidence that these challenges affect many processes related to software development, such as effort estimation. To the best of our knowledge, there are no empirical studies gathering evidence on effort estimation in the GSE context. In addition, there is no common terminology for classifying GSE scenarios with a focus on effort estimation.

Objective: The main objective of this thesis is to support effort estimation in the GSE context by providing a taxonomy to classify the existing knowledge in this field.

Method: Systematic literature review (to identify and analyze the state of the art), survey (to identify and analyze the state of the practice), systematic mapping (to identify practices to design software engineering taxonomies), and literature survey (to complement the states of the art and practice) were the methods employed in this thesis.

Results: The results on the states of the art and practice show that the effort estimation techniques employed in the GSE context are the same techniques used in the collocated context. It was also identified that global aspects, e.g. temporal, geographical and socio-cultural distances, are accounted for as cost drivers, although it is not clear how they are measured. As a result of the conducted mapping study, we reported a method that can be used to design new SE taxonomies. The aforementioned results were combined to extend and specialize an existing GSE taxonomy to make it suitable for effort estimation. The usage of the specialized GSE effort estimation taxonomy was illustrated by classifying 8 finished GSE projects. The results show that the specialized taxonomy proposed in this thesis is comprehensive enough to classify GSE projects with a focus on effort estimation.

Conclusions: The taxonomy presented in this thesis will help researchers and practitioners to report new research on effort estimation in the GSE context; researchers and practitioners will be able to gather evidence, compare new studies and find new gaps more easily. The findings from this thesis show that more research must be conducted on effort estimation in the GSE context. For example, the way the cost drivers are measured should be further investigated. It is also necessary to conduct further research to clarify the role and impact of sourcing strategies on the accuracy of effort estimates. Finally, we believe that it is possible to design an instrument, based on the specialized GSE effort estimation taxonomy, that helps practitioners to perform the effort estimation process in a way tailored to the specific needs of the GSE context.

• 13.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Spindox S.p.A, ITA.
Auto-scaling of Containers: The Impact of Relative and Absolute Metrics (2017). In: 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems, FAS*W 2017 / [ed] IEEE, IEEE, 2017, pp. 207-214, article id 8064125. Conference paper (Refereed)

Today, the cloud industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of resource management at run-time. This paper addresses the problem of selecting the most appropriate performance metrics to activate auto-scaling actions. Specifically, we investigate the use of relative and absolute metrics. Results demonstrate that, for CPU-intensive workloads, the use of absolute metrics enables more accurate scaling decisions. We propose and evaluate the performance of a new auto-scaling algorithm that can reduce the response time by a factor between 0.66 and 0.5 compared to the current Kubernetes horizontal auto-scaling algorithm.
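
Kubernetes' horizontal auto-scaling is built around a proportional rule, desiredReplicas = ceil(currentReplicas × observed / target). The sketch below shows how feeding that same rule a relative metric (utilization as a fraction of the container limit) versus an absolute one (cores consumed) can produce different decisions for the same load; all numbers are hypothetical:

```python
import math

def desired_replicas(current, observed, target):
    """Proportional scaling rule: ceil(current * observed / target)."""
    return max(1, math.ceil(current * observed / target))

# Hypothetical scenario: 4 replicas, each limited to 2.0 CPU cores,
# each currently consuming 1.5 cores.
replicas, limit_cores, used_cores = 4, 2.0, 1.5

# Relative metric: utilization as a fraction of the limit, target 75%.
rel = desired_replicas(replicas, used_cores / limit_cores, 0.75)

# Absolute metric: cores consumed per replica, target 1.0 core.
abs_ = desired_replicas(replicas, used_cores, 1.0)

print(rel, abs_)
```

With the relative metric the utilization sits exactly on target, so no scaling happens; the absolute metric sees 1.5 cores against a 1.0-core target and scales out. The relative decision also silently shifts whenever the container limit is changed, which is one way a relative metric can mislead the autoscaler.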

• 14.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
University of Rome, ITA.
Measuring Docker Performance: What a Mess!!! (2017). In: ICPE 2017 - Companion of the 2017 ACM/SPEC International Conference on Performance Engineering, ACM, 2017, pp. 11-16. Conference paper (Refereed)

Today, a new technology is changing the way platforms for the internet of services are designed and managed. This technology is the container (e.g. Docker and LXC). The internet of services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of resource management at run-time, for example: auto-scaling, optimal deployment and monitoring. Specifically, the monitoring of container-based systems is the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.

• 15.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
Real-time View-dependent Triangulation of Infinite Ray Cast Terrain (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and does not generate a mesh representation, which may be useful in certain use cases.
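
The ray-marching loop described above — sampling consecutive points along a ray until the terrain surface is intersected — can be sketched as follows, with a toy procedural height field and a fixed step size (an illustration, not the thesis implementation):

```python
import math

def terrain_height(x, z):
    """A simple procedural height field standing in for real terrain."""
    return math.sin(x * 0.5) * math.cos(z * 0.5)

def ray_march(origin, direction, step=0.05, max_dist=50.0):
    """March along the ray in fixed steps until the sample point falls
    below the terrain surface; return the hit point, or None on a miss."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = 0.0
    while t < max_dist:
        x, y, z = ox + dx * t, oy + dy * t, oz + dz * t
        if y <= terrain_height(x, z):   # sample is under the surface: hit
            return (x, y, z)
        t += step
    return None  # ray left the scene without hitting the terrain

# A downward-slanting ray from above the terrain.
hit = ray_march(origin=(0.0, 2.0, 0.0), direction=(0.0, -0.5, 1.0))
```

The fixed step is what makes the technique expensive: a hit near `max_dist` costs on the order of `max_dist / step` height-field evaluations per ray, and the loop yields only a point, not the mesh representation the thesis sets out to build.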

Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time without making use of any preprocessed data, and compare the performance and visual quality of the implementation with that of a ray marched solution.

Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm.

Results. In all tests performed, the proposed method performs better in terms of frame rate than the ray marched version. The visual similarity between the two methods highly depends on the quality setting of the triangulation.

Conclusions. The proposed method can perform better than a ray marched version, but is more reliant on CPU processing, and can suffer from visual popping artifacts as the terrain is refined.

• 16.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Object recognition using shape growth pattern (2017). In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, ISPA, IEEE Computer Society Digital Library, 2017, pp. 47-52, article id 8073567. Conference paper (Refereed)

This paper proposes a preprocessing stage to augment the bank of features that one can retrieve from binary images, to help increase the accuracy of pattern recognition algorithms. To this end, by applying successive dilations to a given shape, we can capture a new dimension of its vital characteristics, which we term hereafter the shape growth pattern (SGP). This work investigates the feasibility of such a notion and also builds upon our prior work on structure-preserving dilation using Delaunay triangulation. Experiments on two public data sets are conducted, including comparisons to existing algorithms. We deployed two renowned machine learning methods in the classification process (i.e., convolutional neural networks (CNN) and random forests (RF)), since they perform well in pattern recognition tasks. The results show a clear improvement in the proposed approach's classification accuracy (especially for data sets with limited training samples), as well as robustness against noise, when compared to existing methods.
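
The idea of successive dilations can be illustrated with plain morphological dilation: the pixel count after each step forms a growth signature that can separate shapes of equal area. Note this sketch uses a simple 3x3 structuring element, not the paper's structure-preserving dilation via Delaunay triangulation:

```python
def dilate(shape):
    """One binary dilation with a 3x3 (8-connected) structuring element,
    applied to a shape given as a set of (row, col) pixels."""
    grown = set(shape)
    for r, c in shape:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                grown.add((r + dr, c + dc))
    return grown

def shape_growth_pattern(shape, steps):
    """Feature vector of pixel counts after each successive dilation."""
    areas = []
    for _ in range(steps):
        shape = dilate(shape)
        areas.append(len(shape))
    return areas

# A 2x2 square and a 1x4 line have the same area (4 pixels)
# but grow at different rates under dilation.
square = {(0, 0), (0, 1), (1, 0), (1, 1)}
line = {(0, 0), (0, 1), (0, 2), (0, 3)}
print(shape_growth_pattern(square, 3), shape_growth_pattern(line, 3))
```

Because the two shapes start with identical area yet produce different growth vectors, the SGP adds discriminative information that a single area measurement cannot provide.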

• 17.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för teknik och estetik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för matematik och naturvetenskap. Sony Mobile Communications AB, SWE.
Comparing Two Generations of Embedded GPUs Running a Feature Detection Algorithm. Manuscript (preprint) (Other academic)

Graphics processing units (GPUs) in embedded mobile platforms are reaching performance levels where they may be useful for computer vision applications. We compare two generations of embedded GPUs for mobile devices when running a state-of-the-art feature detection algorithm, i.e., Harris-Hessian/FREAK. We compare architectural differences, execution time, temperature, and frequency on Sony Xperia Z3 and Sony Xperia XZ mobile devices. Our results indicate that performance will soon be sufficient for real-time feature detection, that the GPUs have no temperature problems, and that support for large work-groups is important.

• 18.
Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för hälsa.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för teknik och estetik. Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för hälsa. Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för hälsa. Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för maskinteknik.
Design of a Semi-Automated and Continuous Evaluation System: Customized for Application in e-Health. Manuscript (preprint) (Other academic)

Background and Objectives

Survey-based evaluation of a system, such as measuring users' satisfaction or patient-reported outcomes, entails a set of burdens that limit the feasibility, frequency, extendability, and continuity of the evaluation. Automating the evaluation process, that is, reducing the evaluators' burden in questionnaire curation and minimizing the need for explicit user attention when collecting attitudes, can make the evaluation more feasible, repeatable, extendible, continuous, and even flexible for improvement. An automated evaluation process can also be enhanced with features such as the ability to handle heterogeneity in evaluation cases. Here, we present the design of a semi-automated evaluation system. The design is presented, and partially implemented, in the context of health information systems, but it can be applied to other information-system contexts as well.

Method

The system was divided into four components and designed following a design-research methodology, with each component brought to a certain level of maturity. Methods already implemented and validated in previous studies were embedded within the components and extended with improved automation proposals or new features.

Results

A system was designed, comprising four major components: Evaluation Aspects Elicitation, User Survey, Benchmark Path Model, and Alternative Metrics Replacement. All components have reached the essential maturity levels of problem identification, identification of solution objectives, and overall design. In the overall design, the primary flow, process entities, data entities, and events for each component are identified and illustrated. Parts of some components have already been verified and demonstrated in real-world cases.

Conclusion

A system can be developed to minimize the human burden, both for evaluators and respondents, in survey-based evaluation. This system automates finding items to evaluate, creating questionnaires based on those items, surveying users' attitudes about those items, modeling the relations between the evaluation items, and incrementally changing the model to rely on automatically collected metrics, usually implicit indicators collected from the users, instead of requiring explicit expressions of their attitudes. The system offers minimal human burden, frequent repetition, continuity and real-time reporting, incremental upgrades in response to environmental changes, proper handling of heterogeneity, and a higher degree of objectivity.

• 19.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
Extracting Customer Sentiments from Email Support Tickets: A case for email support ticket prioritisation (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Background

Companies daily generate enormous numbers of customer support tickets, which are grouped and placed in specialised queues based on certain characteristics, from where they are resolved by customer support personnel (CSP) on a first-in-first-out basis. Given that these tickets require different levels of urgency, a logical next step to improving the effectiveness of the CSPs is to prioritise the tickets based on business policies. Among the several heuristics that can be used in prioritising tickets is sentiment polarity.

Objectives

This study investigates how machine learning methods and natural language processing techniques can be leveraged to automatically predict the sentiment polarity of customer support tickets.

Methods

Using a formal experiment, the study examines how well sentiment polarity prediction models based on Support Vector Machines (SVM), Naive Bayes (NB), and Logistic Regression (LR), built on product and movie reviews, can be used to make sentiment predictions on email support tickets. Due to the limited number of annotated email support tickets, Valence Aware Dictionary and sEntiment Reasoner (VADER) and a cluster ensemble (using k-means, affinity propagation, and spectral clustering) are investigated for making sentiment polarity predictions.

Results

Compared to NB and LR, SVM performs better, scoring an average f1-score of .71, whereas NB scores lowest with a .62 f1-score. SVM combined with the presence vector outperformed the frequency and TF-IDF vectors with an f1-score of .73, while NB records an f1-score of .63. Given an average f1-score of .23, the models transferred from the movie and product reviews performed inadequately, even when compared with a dummy classifier averaging an f1-score of .55. Finally, the cluster ensemble method outperformed VADER, with f1-scores of .61 and .53, respectively.

Conclusions

Given the results, SVM combined with a presence vector of bigrams and trigrams is a candidate solution for extracting sentiments from email support tickets. Additionally, transferring sentiment models from the movie and product reviews domain to email support tickets is not feasible. Finally, given the limited datasets available for sentiment analysis studies in the Swedish and customer-support contexts, a cluster ensemble is recommended as a sample selection method for generating annotated data.
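The recommended feature extraction, a binary presence vector over bigrams and trigrams, can be sketched as follows. The tiny vocabulary here is hypothetical; in practice it would be learned from the training tickets and the resulting vectors fed to an SVM classifier (e.g. scikit-learn's LinearSVC):

```python
from itertools import chain

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return zip(*(tokens[i:] for i in range(n)))

def presence_vector(text, vocab):
    """Binary (presence) features over bigrams and trigrams: 1 if the
    n-gram occurs in the text at all, regardless of frequency."""
    toks = text.lower().split()
    grams = set(chain(ngrams(toks, 2), ngrams(toks, 3)))
    return [1 if g in grams else 0 for g in vocab]

# Hypothetical tiny vocabulary of sentiment-bearing n-grams.
vocab = [("not", "working"), ("thank", "you"), ("still", "not", "working")]
print(presence_vector("The device is still not working", vocab))  # [1, 0, 1]
```

Presence (rather than count or TF-IDF) weighting is what the results above favored for this task.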

• 20.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Schneider, Kurt, Leibniz University of Hannover, Germany.
Proceedings of the 21st International Working Conference on Requirements Engineering: Foundation for Software Quality (2015). Conference proceedings (peer-reviewed)
• 21.
Fraunhofer ISI, GER.
Önen, Melek, EURECOM, FRA. Lievens, Eva, Ghent University, BEL. Krenn, Stephan, Austrian Institute of Technology, AUT. Fricker, Samuel, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik; FHNW, CHE.
Privacy and Identity Management: Data for Better Living: AI and Privacy (2019). Collection/Anthology (peer-reviewed)

This book contains selected papers presented at the 14th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School on Privacy and Identity Management, held in Windisch, Switzerland, in August 2019.

The 22 full papers included in this volume were carefully reviewed and selected from 31 submissions. Also included are reviewed papers summarizing the results of workshops and tutorials that were held at the Summer School as well as papers contributed by several of the invited speakers. The papers combine interdisciplinary approaches to bring together a host of perspectives, which are reflected in the topical sections: language and privacy; law, ethics and AI; biometrics and privacy; tools supporting data protection compliance; privacy classification and security assessment; privacy enhancing technologies in specific contexts.

• 22.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap. Blekinge institute of Technology.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datavetenskap.
On Performance of Prioritized Appointment Scheduling for Healthcare (2019). In: Journal of Service Science and Management, ISSN 1940-9893, E-ISSN 1940-9907, Vol. 12, pp. 589-604. Journal article (peer-reviewed)

Designing appointment scheduling is a challenging task in the development of healthcare systems. An efficient solution approach can provide high-quality healthcare service between care providers (CPs) and care receivers (CRs). In this paper, we consider a healthcare system with heterogeneous CRs, i.e., urgent and routine CRs. Our model assumes that the system gives service priority to urgent CRs by allowing them to interrupt ongoing routine appointments. An appointment handoff scheme is suggested for the interrupted routine appointments, so that routine CRs can attempt to re-establish appointment scheduling with other available CPs. With these considerations, we study the scheduling performance of the system using a Markov-chain-based modeling approach. A numerical analysis is reported, and a simulation experiment is conducted to validate the numerical results.
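As a hedged illustration of a Markov-chain-based analysis, a toy birth-death chain for care-provider occupancy (not the paper's full model, which distinguishes urgent and routine CRs and includes handoffs) can be solved for its stationary distribution from the generator matrix; the rates below are made-up:

```python
import numpy as np

# Toy birth-death CTMC for the number of busy care providers (0..c).
# lam: assumed arrival rate of CRs; mu: assumed per-CP service rate.
lam, mu, c = 2.0, 1.0, 3
Q = np.zeros((c + 1, c + 1))
for n in range(c + 1):
    if n < c:
        Q[n, n + 1] = lam        # a new appointment starts
    if n > 0:
        Q[n, n - 1] = n * mu     # one of n busy CPs finishes
    Q[n, n] = -Q[n].sum()        # diagonal: rows of a generator sum to 0

# Stationary distribution: solve pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(c + 1)])
b = np.zeros(c + 2); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 3))  # pi[-1] is the probability all CPs are busy
```

Performance measures such as blocking or interruption probabilities are then read off (or summed) from the stationary vector.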

• 23.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Indoor Location Surveillance: Utilizing Wi-Fi and Bluetooth Signals (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

Personal information has nowadays become valuable to many stakeholders. We want to find out how much information someone can gather from our everyday devices, such as a smartphone, using budget hardware together with some programming knowledge. Can we gather enough information to determine the location of a target device? The main objectives of our bachelor thesis are to determine the accuracy of positioning nearby personal devices using trilateration of short-distance communications (Wi-Fi vs. Bluetooth), and to examine how much and what information our devices leak without our knowledge, with respect to personal integrity. We collected Wi-Fi and Bluetooth data from four target devices in two experiments: a calibration experiment and a visualization experiment. The data were collected by capturing the Wi-Fi and Bluetooth Received Signal Strength Indication (RSSI) transmitted wirelessly from the target devices. We then applied a method called trilateration to pinpoint a target's location. In theory, Bluetooth signals are twice as accurate as Wi-Fi signals. In practice, we were able to locate a target device with an accuracy of 5-10 meters. Bluetooth signals are stable but have long response times, while Wi-Fi signals have short response times but high fluctuation in RSSI values. As our results show, determining a handheld device's position is not impossible, although more powerful hardware may be required to secure acceptable accuracy. On the other hand, achieving these results with hardware as cheap as Raspberry Pis is remarkable.
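The two steps the thesis relies on, converting an RSSI reading to a distance with a log-distance path-loss model and then trilaterating by linearised least squares, can be sketched as follows. The transmit power at 1 m and the path-loss exponent are assumed calibration values, not the thesis's measurements:

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model: distance in metres from an RSSI
    reading. tx_power is the assumed RSSI at 1 m; n the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Linearised least-squares 2D position fix from >= 3 anchors:
    subtracting the last circle equation from the others removes the
    quadratic terms, leaving a linear system A @ [x, y] = b."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    xm, ym = anchors[-1]
    A, b = [], []
    for (xi, yi), di in zip(anchors[:-1], d[:-1]):
        A.append([2 * (xm - xi), 2 * (ym - yi)])
        b.append(di**2 - d[-1]**2 - xi**2 + xm**2 - yi**2 + ym**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0, 0), (4, 0), (0, 4)]           # sniffer positions, metres
true_dists = [np.sqrt(2), np.sqrt(10), np.sqrt(10)]
print(trilaterate(anchors, true_dists))      # recovers roughly (1, 1)
```

With noisy RSSI-derived distances the least-squares fix degrades gracefully, which matches the 5-10 m accuracy reported above.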

• 24.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
The Effects of Emotions and Their Regulation on Decision-making Performance in Affective Serious Games (2019). Doctoral thesis, comprising articles (Other academic)

Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion-regulation can help mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a new angle, introducing technological methods for practicing emotion-regulation, in which meaningful biofeedback information communicates the player's affective states to inform a series of gameplay choices. These findings motivate the notion that in the decision context of serious games, one would benefit from awareness and regulation of such emerging emotions.

This thesis explores the design and evaluation methods for creating serious games where emotion-regulation can be practiced using physiological biofeedback measures. Furthermore, it investigates emotions and the effect of emotion-regulation on decision performance in serious games. Using the psychophysiological methods in the design of such games, emotions and their underlying neural mechanism have been explored.

The results showed the benefits of practicing emotion-regulation in serious games: decision-making performance increased for individuals who down-regulated high levels of arousal while experiencing positive valence. It also increased for individuals who received the necessary biofeedback information. The results further suggested that emotion-regulation strategies (i.e., cognitive reappraisal) are highly dependent on the serious game context; the reappraisal strategy was shown to benefit the decision-making tasks investigated in this thesis. The results also suggested that, using psychophysiological methods in emotionally arousing serious games, the interplay between the sympathetic and parasympathetic pathways can be mapped through the underlying emotions that activate those two pathways. Following this conjecture, the results identified the optimal arousal level for increased performance of an individual on a decision-making task, by carefully balancing the activation of those two pathways. The investigations also validated these findings in a collaborative serious game context, where robot collaborators were found to elicit diverse affect in their human partners, influencing performance on a decision-making task. The evidence further suggested that arousal is equally or more important than valence for decision-making performance, but once optimal arousal has been reached, a further increase in performance may be achieved by regulating valence. Finally, the results showed that the serious games designed in this thesis elicited high physiological arousal and positive valence, making them suitable as research platforms for investigating how these emotions influence the activation of the sympathetic and parasympathetic pathways and performance on a decision-making task.

Taking these findings into consideration, the serious games designed in this thesis allowed for the training of cognitive reappraisal emotion-regulation strategy on the decision-making tasks. This thesis suggests that using evaluated design and development methods, it is possible to design and develop serious games that provide a helpful environment where individuals could practice emotion-regulation through raising awareness of emotions, and subsequently improve their decision-making performance.

• 25.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
FZI Forschungszentrum Informatik, DEU. Karlsruhe Institute of Technology, DEU. Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation. FZI Forschungszentrum Informatik, DEU. Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation. Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation. Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
A Serious Game using Physiological Interfaces for Emotion Regulation Training in the context of Financial Decision-Making (2012). In: ECIS 2012 - Proceedings of the 20th European Conference on Information Systems, AIS Electronic Library (AISeL), 2012, pp. 1-14. Conference paper (peer-reviewed)

Research on financial decision-making shows that traders and investors with high emotion regulation capabilities perform better in trading. But how can the others learn to regulate their emotions? 'Learning by doing' sounds like a straightforward approach. But how can one perform 'learning by doing' when there is no feedback? This problem particularly applies to learning emotion regulation, because learners get practically no feedback on their level of emotion regulation. Our research aims at providing a learning environment that can help decision-makers improve their emotion regulation. The approach is based on a serious game with real-time biofeedback. The game is set in a financial context, and the decision scenario is directly linked to the individual biofeedback of the learner's heart rate data. More specifically, depending on the learner's ability to regulate emotions, the decision scenario of the game continuously adjusts and thereby becomes more (or less) difficult. The learner wears an electrocardiogram sensor that transfers the data via Bluetooth to the game. The game itself is evaluated at several levels.
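The biofeedback loop described above can be sketched minimally: the scenario eases off when heart rate rises well above a personal baseline (poor regulation) and tightens otherwise. All thresholds and rates here are hypothetical, not the paper's calibration:

```python
def update_difficulty(difficulty, heart_rate, baseline_hr,
                      band=5.0, step=0.1):
    """One feedback step: difficulty in [0, 1] adjusted from a
    heart-rate reading relative to the learner's baseline (bpm)."""
    if heart_rate - baseline_hr > band:       # elevated arousal: ease off
        return max(0.0, difficulty - step)
    return min(1.0, difficulty + step)        # regulated: raise difficulty

# Simulated stream of beats-per-minute readings from the ECG sensor.
d = 0.5
for hr in [70, 72, 88, 71]:
    d = update_difficulty(d, hr, baseline_hr=72)
print(round(d, 2))  # 0.7
```

In the real game this update would run continuously against the Bluetooth ECG stream rather than a fixed list.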

• 26.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
The Future of Brain-Computer Interface for Games and Interaction Design (2010). Report (Other academic)
• 27.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
Practicing Emotion-Regulation Through Biofeedback on the Decision-Making Performance in the Context of Serious Games: a Systematic Review (2019). In: Entertainment Computing, ISSN 1875-9521, E-ISSN 1875-953X, Vol. 29, pp. 75-86. Journal article (peer-reviewed)

Evidence shows that emotions critically influence human decision-making, and emotion-regulation using biofeedback has therefore been extensively investigated. Serious games have emerged as a valuable tool for such investigations set in the decision-making context. This review sets out to investigate the scientific evidence regarding the effects of practicing emotion-regulation through biofeedback on decision-making performance in the context of serious games. A systematic search of five electronic databases (Scopus, Web of Science, IEEE, PubMed Central, Science Direct), followed by author searches and snowballing, was conducted from each venue's year of inception to October 2018. The search identified 16 randomized controlled experiment/quasi-experiment studies that quantitatively assessed performance on decision-making tasks in serious games, involving student, military, and brain-injured participants. It was found that participants who raised their awareness of emotions and increased their emotion-regulation skill were able to successfully regulate their arousal, which resulted in better decision performance, reaction time, and attention scores on the decision-making tasks. It is suggested that serious games provide an effective platform, validated through evaluative and playtesting studies, that supports the acquisition of the emotion-regulation skill through direct (visual) and indirect (gameplay) biofeedback presentation on decision-making tasks.

• 28.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för teknik och estetik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier. Linnéuniversitetet, SWE. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
The Effect of Emotions and Social Behavior on Performance in a Collaborative Serious Game Between Humans and Autonomous Robots (2018). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 10, no. 1, pp. 115-129. Journal article (peer-reviewed)

The aim of this paper is to investigate performance in a collaborative human-robot interaction on a shared serious game task, and the effect of elicited emotions and perceived social behavior categories on players' performance. The participants played a turn-taking version of the Tower of Hanoi serious game collaboratively with human and robot partners. The elicited emotions were analyzed with respect to the arousal and valence variables, computed from the Geneva Emotion Wheel questionnaire, and the perceived social behavior categories were obtained by analyzing and grouping replies to the Interactive Experiences and Trust and Respect questionnaires. The results did not show a statistically significant difference in participants' performance between the human and robot collaborators. Moreover, all of the collaborators elicited similar emotions, although the human collaborator was perceived as more credible and socially present than the robot. It is suggested that robot collaborators might be as efficient as human ones in the context of serious game collaborative tasks.

• 29. Johansson, E.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Crime Hotspots: An Evaluation of the KDE Spatial Mapping Technique (2015). In: Proceedings - 2015 European Intelligence and Security Informatics Conference, EISIC 2015 / [ed] Brynielsson J., Yap M.H., IEEE Computer Society, 2015, pp. 69-74. Conference paper (peer-reviewed)

Residential burglaries are increasing. By visualizing patterns as spatial hotspots, law-enforcement agents can get a better understanding of crime distributions and trends. Two aspects are investigated: first, the accuracy and performance of the KDE algorithm when using small data sets; second, the amount of crime data needed to compute accurate and reliable hotspots. The Prediction Accuracy Index is used to measure the accuracy of the algorithm effectively. Data from three geographical areas in Sweden (Stockholm, Gothenburg, and Malmö) are analyzed and evaluated over a one-year period. The results suggest that the KDE algorithm predicts residential burglaries well overall when enough crime data are available, but that it also copes with small data sets.
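As an illustration of the technique evaluated here (not the study's code), a kernel density surface over past crime locations can be thresholded into hotspots and scored with the Prediction Accuracy Index, i.e. the hit rate on future crimes divided by the hotspot share of the total area. All coordinates, the bandwidth, and the 10% hotspot threshold below are made-up choices:

```python
import numpy as np

def kde_grid(points, grid_x, grid_y, bandwidth):
    """Gaussian kernel density surface over a regular grid."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    dens = np.zeros_like(xx, dtype=float)
    for px, py in points:
        dens += np.exp(-((xx - px) ** 2 + (yy - py) ** 2)
                       / (2 * bandwidth ** 2))
    return dens

def pai(hotspot_mask, future_points, grid_x, grid_y):
    """Prediction Accuracy Index: hit rate / hotspot area share."""
    hits = 0
    for px, py in future_points:               # nearest grid cell lookup
        i = np.argmin(np.abs(grid_y - py))
        j = np.argmin(np.abs(grid_x - px))
        hits += bool(hotspot_mask[i, j])
    hit_rate = hits / len(future_points)
    area_share = hotspot_mask.mean()
    return hit_rate / area_share

gx = gy = np.linspace(0, 10, 21)
past = [(2, 2), (2.5, 2.2), (8, 8)]            # historical burglaries
dens = kde_grid(past, gx, gy, bandwidth=1.0)
hotspots = dens >= np.quantile(dens, 0.9)      # top 10% of cells
future = [(2.1, 2.1), (9, 1)]                  # next period's burglaries
print(round(pai(hotspots, future, gx, gy), 2))
```

A PAI above 1 means the hotspots capture more crime than their area share would by chance.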

• 30. Johansson, Fredrik
Attacking the Manufacturing Execution System: Leveraging a Programmable Logic Controller on the Shop Floor (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Background. Automation in production has become a necessity for producing companies to keep up with the demand created by their customers. One way to automate a process is to use a piece of hardware called a programmable logic controller (PLC). A PLC is a small computer that can be programmed to process a set of inputs, e.g. from sensors, and create outputs from them, e.g. to actuators. This eliminates the risk of human error while at the same time speeding up the production rate of the now near-identical products. To improve the automation process on the shop floor, and the production process in general, a special software system is used. This system is known as the manufacturing execution system (MES), and it is connected to the PLCs and other devices on the shop floor. The MES has different functionalities, one of which is managing instructions. These instructions can be aimed at both employees and devices such as the PLCs. Should the MES suffer from an error, e.g. in the instructions sent to the shop floor, the company could suffer a negative impact both economically and in reputation. Since the PLC is a computer and is connected to the MES, it might be possible to attack the system using the PLC as leverage. Objectives. Examine whether it is possible to attack the MES using a PLC as the attack origin. Methods. A literature study was performed to see what types of attacks and vulnerabilities related to PLCs were disclosed between 2010 and 2018. Secondly, a practical experiment was done, trying to perform attacks targeting the MES. Results. Many different types of attacks and vulnerabilities related to PLCs have been found, and the attacks performed in the practical experiment failed to induce negative effects in the MES used. Conclusions. Two identified PLC attack techniques seem likely to be usable to attack the MES layer.
The methodology used to attack the MES layer in the practical experiment failed to affect the MES in a negative way, although it was possible to affect the MES log file in one of the test cases. This does not rule out that other MES types are vulnerable, or that the two identified PLC attacks could affect an MES.

• 31.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
On the Applicability of a Cache Side-Channel Attack on ECDSA Signatures: The Flush+Reload attack on the point multiplication in ECDSA signature generation process (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Context. Digital counterparts of handwritten signatures are known as Digital Signatures. The Elliptic Curve Digital Signature Algorithm (ECDSA) is an Elliptic Curve Cryptography (ECC) primitive, which is used for generating and verifying digital signatures. The attacks that target an implementation of a cryptosystem are known as side-channel attacks. The Flush+Reload attack is a cache side-channel attack that relies on cache hits/misses to recover secret information from the target program execution. In elliptic curve cryptosystems, side-channel attacks are particularly targeted towards the point multiplication step. The Gallant-Lambert-Vanstone (GLV) method for point multiplication is a special method that speeds up the computation for elliptic curves with certain properties.

Objectives. In this study, we investigate the applicability of the Flush+Reload attack on ECDSA signatures that employ the GLV method for point multiplication.

Methods. We demonstrate the attack through an experiment using the curve secp256k1. We perform a pair of experiments to estimate both the applicability and the detection rate of the attack in capturing side-channel information.

Results. Through our attack, we capture side-channel information about the decomposed GLV scalars.

Conclusions. Based on an analysis of the results, we conclude that for certain implementation choices, the Flush+Reload attack is applicable to the ECDSA signature generation process when it employs the GLV method. Practitioners should be aware of the implementation choices that introduce vulnerabilities and avoid using such ECDSA implementations.

• 32.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
An Approach to Language Modelling for Intelligent Document Retrieval System (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
• 33.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Cooperative Behaviors Between Two Teaming RTS Bots in StarCraft (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Context. Video games are a big entertainment industry. Many video games let players play against or together with each other. Some video games also make it possible for players to play against or together with computer-controlled players, called bots. Artificial intelligence (AI) is used to create bots.

Objectives. This thesis aims to implement cooperative behaviors between two bots and determine if the behaviors lead to an increase in win ratio. This means that the bots should be able to cooperate in certain situations, such as when they are attacked or when they are attacking.

Methods. The bots' win ratio will be tested in a series of quantitative experiments, where in each experiment two teaming bots with cooperative behavior play against two teaming bots without any cooperative behavior. The data will be analyzed with a t-test to determine whether the differences are statistically significant.

Results and Conclusions. The results show that cooperative behavior can increase the performance of a team of two Real-Time Strategy bots against a non-cooperative team of two bots. However, performance could either increase or decrease depending on the situation: in three cases performance increased, in one case it decreased, and in three cases there was no difference. This suggests that more research is needed for these cases.
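The t-test on win ratios described in the methods can be sketched as follows; the per-block win counts are made-up numbers, not the thesis data:

```python
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Student's t statistic for two independent samples, pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) \
          / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical wins per 10-game block for the cooperative team vs the
# baseline team (illustrative values only).
coop = [6, 7, 5, 8, 6, 7, 6, 7, 8, 6]
base = [4, 5, 5, 4, 6, 4, 5, 5, 4, 5]
t = two_sample_t(coop, base)
print(round(t, 2))  # 5.1, well above t_crit ≈ 2.101 (df = 18, alpha = .05)
```

Exceeding the two-tailed critical value for the pooled degrees of freedom is what lets the thesis call a performance difference statistically significant.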

• 34.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Implementation and Evaluation of Positional Voice Chat in a Massively Multiplayer Online Role Playing Game (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Computer games, especially Massively Multiplayer Online Role Playing Games, have elements where communication between players is essential. This communication is generally conducted through in-game text chats, in-game voice chats, or external voice programs. In-game voice chats can be constructed to work similarly to talking in real life: when someone talks, anyone close enough to that person can hear what is said, with a volume depending on distance. This is called positional or spatial voice chat. It differs from the commonly implemented voice chat, where participants in conversations are statically defined by team or group membership. Positional voice chat has been around for quite some time in games and seems to be of interest to a lot of users; despite this, it is still not very common.
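The distance-dependent volume described above reduces to computing a per-speaker gain. An inverse-linear rolloff with a hypothetical 30 m cutoff is one simple choice (engines also use inverse-square or logarithmic rolloff); this sketch is illustrative, not the thesis's audio pipeline:

```python
import math

def positional_gain(listener, speaker, max_dist=30.0):
    """Volume gain in [0, 1] from speaker-to-listener distance:
    inverse-linear rolloff, silent beyond max_dist."""
    d = math.dist(listener, speaker)
    return max(0.0, 1.0 - d / max_dist)

print(positional_gain((0, 0, 0), (0, 0, 0)))    # 1.0  (same spot)
print(positional_gain((0, 0, 0), (15, 0, 0)))   # 0.5  (halfway out)
print(positional_gain((0, 0, 0), (40, 0, 0)))   # 0.0  (out of range)
```

The server can also use the cutoff to skip mixing and forwarding audio for pairs that are out of range, which is what makes large player counts feasible.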

This thesis investigates impacts of implementing a positional voice chat in the existing MMORPG Mortal Online by Star Vault. How is it built, what are the costs, how many users can it support and what do the users think of it? These are some of the questions answered within this project.

The design science research method has been selected as the scientific method. A product in the form of a positional voice chat library has been constructed. This library has been integrated into the existing game engine, and its usage has been evaluated by the game's end users.

Results show that a positional voice system that in theory supports up to 12,500 simultaneous users can be built from scratch and patched into an existing game in fewer than 600 man-hours. The system needs third-party libraries for threading, audio input/output, audio compression, network communication, and mathematics. All libraries used in the project are free for use in commercial products and do not require code that uses them to become open source.

Based on a survey taken by more than 200 users, the product received good ratings on Quality of Experience, and most users think having a positional voice chat in a game like Mortal Online is important. The results show a trend of young and less experienced users giving the highest average ratings on the quality, usefulness, and importance of the positional voice chat, suggesting it may be a good tool for attracting new players to a game.

• 35.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kommunikationssystem.
Distributed databases for Multi Mediation: Scalability, Availability & Performance2015Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave

Context: Multi Mediation is the process of collecting data from networks and network elements, pre-processing this data, and distributing it to various systems such as Big Data analysis, billing systems, network monitoring systems, and service assurance. With the growing demand for networks and the emergence of new services, the amount of data collected from networks is growing. This data needs to be organized efficiently, which can be done using databases. Although an RDBMS offers scale-up solutions to handle voluminous data and concurrent requests, this approach is expensive, so alternatives like distributed databases are attractive. Which distributed database is suitable for Multi Mediation needs to be investigated.

Objectives: In this research we analyze two distributed databases, MySQL Cluster 7.4.4 and Apache Cassandra 2.0.13, in terms of performance, scalability, and availability, and we also analyze the inter-relations between these three properties. Performance, scalability, and availability are quantified, and measurements are made in the context of a Multi Mediation system.

Methods: The methods used to carry out this research are both qualitative and quantitative. A qualitative study was made to select the databases for evaluation. A benchmarking harness application was designed to quantitatively evaluate the performance of each distributed database in the context of Multi Mediation. Several experiments were designed and performed with the benchmarking harness on the database clusters.

Results: The results collected include the average response time and average throughput of the distributed databases in various scenarios. The average throughput and average INSERT response time results favor the Apache Cassandra low-availability configuration, while the MySQL Cluster average SELECT response time is better than Apache Cassandra's for larger numbers of client threads, in both the high-availability and low-availability configurations.

Conclusions: Although Apache Cassandra outperforms MySQL Cluster, transaction support and ACID compliance should not be forgotten when selecting a database. Apart from contextual benchmarks, organizational choices, development costs, resource utilization, etc. are more influential parameters for the selection of a database within an organization. Further evaluation of distributed databases is still needed.
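
A benchmarking harness of the kind described reduces to timing individual requests and aggregating the measurements. This minimal sketch, with a stubbed `execute` callable standing in for a real INSERT or SELECT against a cluster, shows how the two reported metrics are typically derived:

```python
import time

def run_benchmark(execute, n_requests):
    """Time n_requests calls to execute() and return
    (average response time in s, throughput in requests/s)."""
    start = time.perf_counter()
    latencies = []
    for _ in range(n_requests):
        t0 = time.perf_counter()
        execute()  # stand-in for one database operation
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return sum(latencies) / len(latencies), n_requests / elapsed

# Stubbed workload: a no-op "query" (a real harness would issue
# concurrent requests from several client threads).
avg_latency, throughput = run_benchmark(lambda: None, 1000)
print(f"avg response time: {avg_latency:.6f}s, throughput: {throughput:.0f} req/s")
```

The thesis additionally varies the number of client threads and the cluster's availability configuration; a single-threaded loop like this only illustrates the measurement itself.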

• 36.
nVISO SA, CHE.
University of Castilla-La Mancha, ESP. University of Castilla-La Mancha, ESP. i4Ds Centre for Requirements Engineering, CHE. University of Edinburgh, GBR. Haute Ecole Specialisee de Suisse, CHE. RT-RK, SRB. SCIPROM SARL, CHE. Trinity College Dublin, IRL. Technical University Munich, DEU. Technical University of Athens, GRC. SYNYO GmbH, AUT. ARM Ltd., GBR. ZF Friedrichshafen AG, DEU. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
BONSEYES: Platform for Open Development of Systems of Artificial Intelligence2017Konferansepaper (Annet vitenskapelig)

The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project focuses on using artificial intelligence in low-power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It will bring about orders-of-magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will solve a causality problem for organizations that lack access to Data and Models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate the effectiveness and technical feasibility, and to quantify the real-world improvements in efficiency, security, performance, effort, and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. The Bonseyes platform capabilities are aimed at being aligned with the European FI-PPP activities and at taking advantage of its flagship project FIWARE. This paper provides a description of the project's motivation, goals, and preliminary work.

• 37.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Applying Simulation to the Problem of Detecting Financial Fraud2016Doktoravhandling, med artikler (Annet vitenskapelig)

This thesis introduces a financial simulation model covering two related financial domains: Mobile Payments and Retail Stores systems.

The problem we address in these domains is different types of fraud, although we limit ourselves to isolated cases of relatively straightforward fraud. The ultimate aim of this thesis, however, is to introduce our approach to using computer simulation for fraud detection and its applications in financial domains. Fraud is an important problem that impacts the whole economy. Currently there is a lack of public research into the detection of fraud, one important reason being the lack of transaction data, which is often sensitive. To address this problem we present a mobile money Payment Simulator (PaySim) and a Retail Store Simulator (RetSim), which allow us to generate synthetic transactional data containing both normal customer behaviour and fraudulent behaviour.

These simulations are Multi-Agent-Based Simulations (MABS) and were calibrated using real data from financial transactions. We developed agents that represent the clients and merchants in PaySim, and the customers and salesmen in RetSim. The normal behaviour is based on behaviour observed in data from the field and is codified in the agents as rules of transactions and interaction between clients and merchants, or customers and salesmen. Some of these agents were intentionally designed to act fraudulently, based on observed patterns of real fraud. We introduced known signatures of fraud into our model and simulations to test and evaluate our fraud detection methods. The resulting behaviour of the agents generates a synthetic log of all transactions produced by the simulation. This synthetic data can be used to further advance fraud detection research, without leaking sensitive information about the underlying data or breaking any non-disclosure agreements.

Using statistics and social network analysis (SNA) on real data, we calibrated the relations between our agents and generated realistic synthetic data sets that were verified against the domain and validated statistically against the original source.

We then used the simulation tools to model common fraud scenarios in order to ascertain exactly how effective fraud detection techniques are, such as the simplest form of statistical threshold detection, which is perhaps the most common in use. The preliminary results show that threshold detection is effective enough at keeping fraud losses at a set level, which means that there seems to be little economic room for improved fraud detection techniques.
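
The "simplest form of statistical threshold detection" mentioned above can be sketched as flagging any transaction whose amount lies more than a fixed number of standard deviations above the mean. This is a generic illustration of the technique, not the exact rule or parameters used in the thesis:

```python
from statistics import mean, stdev

def threshold_flags(amounts, k=2.0):
    """Flag amounts more than k sample standard deviations above the mean.
    k is an assumed tuning parameter."""
    mu, sigma = mean(amounts), stdev(amounts)
    limit = mu + k * sigma
    return [a for a in amounts if a > limit]

# Nine ordinary transactions and one outlier.
txns = [100, 95, 102, 98, 105, 99, 101, 97, 103, 1000]
print(threshold_flags(txns))  # [1000] -- only the outlier is flagged
```

In a simulation study, such a detector would be run over the synthetic transaction log, and the undetected fraudulent amounts summed to measure the residual loss at a given threshold.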

We also implemented other applications of the simulator tools, such as setting up a triage model and measuring the cost of fraud. This proved to be an important aid for managers who aim to prioritise fraud detection and want to know how much they should invest in it to keep losses below a desired limit, according to different experimented and expected fraud scenarios.

• 38.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Extending the RetSim Simulator for Estimating the Cost of fraud in the Retail Store Domain2015Inngår i: Proceedings of the European Modeling and Simulation Symposium, 2015, 2015Konferansepaper (Fagfellevurdert)

RetSim is a multi-agent based simulator (MABS) calibrated with real transaction data from one of the largest shoe retailers in Scandinavia. RetSim allows us to generate synthetic transactional data that can be publicly shared and studied without leaking business sensitive information, and still preserve the important characteristics of the data.

In this paper we extended the fraud model of RetSim to cover more cases of internal fraud perpetrated by the staff, and allowed inventory control to flag even more suspicious activity. We also generated a sufficient number of runs, using a range of fraud parameters, to cover a vast number of fraud scenarios that can be studied. We then used RetSim to simulate some of the more common retail fraud scenarios to ascertain the exact cost of fraud under different fraud parameters for each case.

• 39.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Using the RetSim simulator for fraud detection research2015Inngår i: International Journal of Simulation and Process Modelling, ISSN 1740-2123, E-ISSN 1740-2131, Vol. 10, nr 2Artikkel i tidsskrift (Fagfellevurdert)

Managing fraud is important for businesses, retail and financial alike. One method to manage fraud is by detection, where transactions etc. are monitored and suspicious behaviour is flagged for further investigation. There is currently a lack of public research in this area. The main reason is the sensitive nature of the data: publishing real financial transaction data would seriously compromise the privacy of both customers and companies alike. We propose to address this problem by building RetSim, a multi-agent based simulator (MABS) calibrated with real transaction data from one of the largest shoe retailers in Scandinavia. RetSim allows us to generate synthetic transactional data that can be publicly shared and studied without leaking business sensitive information, and still preserve the important characteristics of the data.

We then use RetSim to model two common retail fraud scenarios to ascertain exactly how effective the simplest form of statistical threshold detection could be. The preliminary results of our tested fraud detection method show that the threshold detection is effective enough at keeping fraud losses at a set level, so that there is little economic room for improved techniques.

• 40.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Social Simulation of Commercial and Financial Behaviour for Fraud Detection Research2014Inngår i: Advances in Computational Social Science and Social Simulation / [ed] Miguel, Amblard, Barceló & Madella, Barcelona, 2014Konferansepaper (Fagfellevurdert)

We present a social simulation model that covers three main financial services: banks, retail stores, and payments systems. Our aim is to address the problem of a lack of public data sets for fraud detection research in each of these domains, and to provide a variety of fraud scenarios such as money laundering, sales fraud (based on refunds and discounts), and credit card fraud. Currently, there is a general lack of public research concerning fraud detection in financial domains in general and these three in particular. One reason for this is the secrecy and sensitivity of the customer data that is needed to perform research. We present PaySim, RetSim, and BankSim as three case studies of social simulations for financial transactions using agent-based modelling. These simulators enable us to generate synthetic transaction data of the normal behaviour of customers, as well as known fraudulent behaviour. This synthetic data can be used to further advance fraud detection research, without leaking sensitive information about the underlying data. Using statistics and social network analysis (SNA) on real data, we can calibrate the relations between staff and customers and generate realistic synthetic data sets. The generated data represents real-world scenarios found in the original data, with the added benefit that this data can be shared with other researchers for testing similar detection methods without the concerns for privacy and other restrictions present when using the original data.

• 41.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Gjovik University College.
Using the RetSim Fraud Simulation Tool to set Thresholds for Triage of Retail Fraud2015Inngår i: SECURE IT SYSTEMS, NORDSEC 2015 / [ed] Sonja Buchegger, Mads Dam, Springer, 2015, Vol. 9417, s. 156-171Konferansepaper (Fagfellevurdert)

The investigation of fraud in business has been a staple for the digital forensics practitioner since the introduction of computers in business. Much of this fraud takes place in the retail industry. When trying to stop losses from insider retail fraud, triage, i.e. the quick identification of behaviour suspicious enough to warrant further investigation, is crucial, given the amount of normal, or insignificant, behaviour. It has previously been demonstrated that simple statistical threshold classification is a very successful way to detect fraud [Lopez-Rojas2015]. However, in order to do triage successfully, the thresholds have to be set correctly. We therefore present a simulation-based method to aid the user in accomplishing this, by simulating relevant fraud scenarios that are foreseen as possible and expected, in order to calculate optimal threshold limits. Compared with arbitrary thresholds, this method reduces the amount of labour spent on false positives and gives additional information, such as the total cost of a specific modelled fraud behaviour, for setting up a proper triage process. We argue that our method contributes to the allocation of resources for further investigations by optimizing the thresholds for triage and estimating the possible total cost of fraud. Using this method we manage to keep the losses below a desired percentage of sales, which the manager considers acceptable for keeping the business running properly.
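
The idea of using simulated fraud scenarios to set a threshold can be sketched as a search over candidate limits, keeping the highest threshold (fewest false positives, hence least triage labour) whose simulated loss still stays below an acceptable fraction of sales. The loss figures and budget below are hypothetical stand-ins for simulator output:

```python
def choose_threshold(candidates, simulate_loss, total_sales, max_loss_ratio=0.01):
    """Return the highest candidate threshold whose simulated fraud loss
    stays within max_loss_ratio of total sales, or None if none qualifies.
    simulate_loss(t) stands in for running the simulator at threshold t."""
    acceptable = [t for t in candidates
                  if simulate_loss(t) <= max_loss_ratio * total_sales]
    return max(acceptable) if acceptable else None

# Stubbed loss model: undetected fraud loss grows as the threshold rises.
losses = {100: 500.0, 200: 900.0, 300: 1500.0}
best = choose_threshold([100, 200, 300], losses.get, total_sales=100_000)
print(best)  # 200: loss 900 <= 1% of sales (1000), while 300 would lose 1500
```

Preferring the highest acceptable threshold is one reasonable policy; a real triage process might instead trade the residual loss against the measured cost of investigating false positives.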

• 42.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
RetSim: A ShoeStore Agent-Based Simulation for Fraud Detection2013Inngår i: 25th European Modeling and Simulation Symposium, EMSS 2013, 2013, s. 25-34Konferansepaper (Fagfellevurdert)

RetSim is an agent-based simulator of a shoe store based on the transactional data of one of the largest retail shoe sellers in Sweden. The aim of RetSim is the generation of synthetic data that can be used for fraud detection research. A statistical and Social Network Analysis (SNA) of relations between staff and customers was used to develop and calibrate the model. Our ultimate goal is for RetSim to be usable to model relevant scenarios to generate realistic data sets that can be used by academia, and others, to develop and reason about fraud detection methods without leaking any sensitive information about the underlying data. Synthetic data has the added benefit of being easier to acquire, faster and at less cost, for experimentation, even for those that have access to their own data. We argue that RetSim generates data that usefully approximates the relevant aspects of the real data.
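
At its core, an agent-based store simulation of this kind consists of agents with behaviour rules emitting a transaction log. This heavily simplified sketch, with fixed prices, a seeded random generator, and a toy discount-fraud signature, only illustrates the shape of such a simulation, not RetSim's calibrated model:

```python
import random

def simulate_store(n_salesmen, n_sales, fraud_ids=(), seed=42):
    """Emit a synthetic sales log; salesmen listed in fraud_ids apply an
    oversized discount (a toy stand-in for a real fraud signature)."""
    rng = random.Random(seed)  # seeded, so runs are reproducible
    log = []
    for step in range(n_sales):
        salesman = rng.randrange(n_salesmen)
        price = rng.choice([49.0, 79.0, 99.0])
        # Honest salesmen give 0-10% discounts; fraudulent ones give 90%.
        discount = 0.9 if salesman in fraud_ids else rng.choice([0.0, 0.1])
        log.append({"step": step, "salesman": salesman,
                    "paid": round(price * (1 - discount), 2)})
    return log

log = simulate_store(n_salesmen=5, n_sales=100, fraud_ids={0})
print(len(log))  # 100 synthetic transactions
```

The resulting log can be shared and analysed freely, since no real customer or business data appears in it; calibration against real data is what makes the distributions realistic.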

• 43.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Interactive Search-Based Software Testing: Development, Evaluation, and Deployment2017Doktoravhandling, med artikler (Annet vitenskapelig)
• 44.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Practitioner-Oriented Visualization in an Interactive Search-Based Software Test Creation Tool2013Konferansepaper (Fagfellevurdert)

Search-based software testing uses meta-heuristic search techniques to automate or partially automate testing tasks, such as test case generation or test data generation. It uses a fitness function to encode the quality characteristics that are relevant for a given problem, and guides the search towards acceptable solutions in a potentially vast search space. From an industrial perspective, this opens up the possibility of generating and evaluating large numbers of test cases without raising costs to unacceptable levels. First, however, the applicability of search-based software engineering in an industrial setting must be evaluated. In practice, it is difficult to develop a priori a fitness function that covers all practical aspects of a problem. Interaction with human experts offers access to experience that is otherwise unavailable and allows the creation of a more informed and accurate fitness function. Moreover, our industrial partner has already expressed the view that the knowledge and experience of domain specialists are more important to the overall quality of the systems they develop than software engineering expertise. In this paper we describe our application of Interactive Search-Based Software Testing (ISBST) in an industrial setting. We used SBST to search for test cases for an industrial software module, based in part on interaction with a human domain specialist. Our evaluation showed that such an approach is feasible, though it also identified potential difficulties relating to the interaction between the domain specialist and the system.
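
The interactive part of the idea, a fitness function whose emphasis a domain specialist can re-tune between search runs, can be sketched as a weighted sum over objective scores. The objective names and weights here are hypothetical, chosen only to illustrate the interaction loop:

```python
def fitness(candidate, weights):
    """Weighted-sum fitness over per-objective scores; the domain
    specialist adjusts `weights` between search iterations."""
    return sum(weights[name] * score for name, score in candidate.items())

# Two hypothetical objectives scored for one generated test case.
candidate = {"boundary_coverage": 0.5, "input_novelty": 0.25}

# Initial equal weights, then the specialist decides novelty matters more.
print(fitness(candidate, {"boundary_coverage": 1.0, "input_novelty": 1.0}))  # 0.75
print(fitness(candidate, {"boundary_coverage": 1.0, "input_novelty": 3.0}))  # 1.25
```

Re-running the search with the updated weights steers it towards candidates the specialist considers more valuable, which is the mechanism by which human experience enters the otherwise automated search.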

• 45.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Chalmers, SWE. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Transferring Interactive Search-Based Software Testing to Industry2018Inngår i: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 142, s. 156-170Artikkel i tidsskrift (Fagfellevurdert)

Context: Search-Based Software Testing (SBST), and the wider area of Search-Based Software Engineering (SBSE), is the application of optimization algorithms to problems in software testing and software engineering, respectively. New algorithms, methods, and tools are being developed and validated on benchmark problems. In previous work, we have also implemented and evaluated Interactive Search-Based Software Testing (ISBST) tool prototypes, with the goal of successfully transferring the technique to industry.

Objective: While SBST and SBSE solutions are often validated on benchmark problems, there is a need to validate them in an operational setting and to assess their performance in practice. The present paper discusses the development and deployment of SBST tools for use in industry and reflects on the transfer of these techniques to industry.

Method: In addition to previous work discussing the development and validation of an ISBST prototype, a new version of the prototype ISBST system was evaluated in the laboratory and in industry. This evaluation is based on an industrial System under Test (SUT) and was carried out with industrial practitioners. The Technology Transfer Model is used as a framework to describe the progression of the development and evaluation of the ISBST system through the first five of its seven steps.

Results: The paper presents a synthesis of previous work developing and evaluating the ISBST prototype, as well as an evaluation, in both academia and industry, of that prototype's latest version. In addition to the evaluation, the paper also discusses the lessons learned from this transfer.

Conclusions: This paper presents an overview of the development and deployment of the ISBST system in an industrial setting, using the framework of the Technology Transfer Model. We conclude that the ISBST system is capable of evolving useful test cases for that setting, though improvements are still required in the means the system uses to communicate that information to the user. In addition, a set of lessons learned from the project are listed and discussed. Our objective is to help other researchers who wish to validate search-based systems in industry, and to provide more information about the benefits and drawbacks of these systems.

• 46.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
Strategic Service Selection Problem for Transport Telematic Services- An Optimization Approach2014Inngår i: 2014 IEEE INTERNATIONAL CONFERENCE ON SERVICES COMPUTING (SCC 2014), IEEE Computer Society, 2014, s. 520-527Konferansepaper (Fagfellevurdert)

The selection, composition, and integration of Transport Telematic Services (TTSs) are crucial for achieving cooperative Intelligent Transport Systems (ITS). To enable future adaptation, models for selecting and composing TTSs need to take into account possible future modifications, upgrades, or downgrades of different TTSs without undermining the net benefits. To this end, a Strategic Service Selection Problem (SSSP) for TTSs is presented in this article. The problem involves selecting a set of TTSs that maximizes net societal benefits over a strategic time period, e.g., 10 years. The formulation of the problem makes it possible to study design alternatives that take into account future modifications, extensions, upgrades, or downgrades of different TTSs. Two decisive factors affecting the choices and modifications of TTSs are studied: 1) the effect of using governmental policies to mandate the introduction of TTSs, e.g., Road User Charging and eCall, and 2) the effect of allowing market forces to drive the choices of TTSs. Case study results indicate that, in determining combinations of TTSs that can be deployed over a period of 10 years, enforcing too many TTSs can retard the ability of the market to generate net benefits, even though the result may be that more TTSs are deployed.
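
Stripped of its multi-period and policy aspects, the underlying selection problem resembles a knapsack-style optimization: choose the subset of services maximizing net benefit under a resource constraint. The benefit/cost figures and the single budget constraint below are hypothetical, and brute force over subsets is only viable for a handful of services; the paper's model is considerably richer:

```python
from itertools import combinations

def best_selection(services, budget):
    """Pick the subset of (name, benefit, cost) services that maximizes
    net benefit (benefit - cost) while total cost stays within budget.
    Brute force over all subsets, for illustration only."""
    best, best_net = (), 0.0
    for r in range(1, len(services) + 1):
        for subset in combinations(services, r):
            cost = sum(c for _, _, c in subset)
            net = sum(b - c for _, b, c in subset)
            if cost <= budget and net > best_net:
                best, best_net = subset, net
    return [name for name, _, _ in best], best_net

# Hypothetical (name, benefit, cost) figures over a 10-year horizon.
ttss = [("eCall", 8.0, 5.0), ("RoadUserCharging", 12.0, 9.0), ("FleetMgmt", 6.0, 2.0)]
print(best_selection(ttss, budget=12.0))
```

A mandated TTS would appear here as a forced member of every candidate subset, which is how a policy constraint can crowd out higher-net-benefit combinations, mirroring the case study's observation.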

• 47.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
UIIT PMAS Arid Agriculture University, PAK. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Capital University of Science and Technology, PAK.
A Systematic Mapping of Test Case Generation Techniques Using UML Interaction Diagrams2019Inngår i: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, artikkel-id e2235Artikkel i tidsskrift (Fagfellevurdert)

Testing plays a vital role in assuring software quality. Among the activities performed during the testing process, test case generation is a challenging and labor-intensive task. Test case generation techniques based on UML models are getting the attention of researchers and practitioners. This study provides a systematic mapping of test case generation techniques based on interaction diagrams. The study compares the test case generation techniques regarding their capabilities and limitations, and it also assesses the reporting quality of the primary studies. It reveals that techniques based on UML interaction diagrams are mainly used for integration testing. The majority of the techniques use sequence diagrams as input models, while some use collaboration diagrams. A notable number of techniques use an interaction diagram together with some other UML diagram for test case generation. These techniques mainly focus on interaction, scenario, operational, concurrency, synchronization, and deadlock-related faults.

From the results of this study, we conclude that the studies presenting test case generation techniques using UML interaction diagrams failed to illustrate the use of a rigorous methodology, and that these techniques have not been empirically evaluated in an industrial context. Our study revealed the need for tool support to facilitate the transfer of solutions to industry.

• 48.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Regression testing goals: View of practitioners and researchers2017Inngår i: 24th Asia-Pacific Software Engineering Conference Workshops (APSECW), IEEE, 2017, s. 25-32Konferansepaper (Fagfellevurdert)

Context: Regression testing is a well-researched area. However, the majority of the regression testing techniques proposed by researchers are not getting the attention of practitioners. Communication gaps between industry and academia, and disparities in regression testing goals, are the main reasons. Close collaboration can help in bridging the communication gaps and resolving the disparities.

Objective: The study aims at exploring the views of academics and practitioners about the goals of regression testing. The purpose is to investigate the commonalities and differences in their viewpoints and to define some common goals for the success of regression testing.

Method: We conducted a focus group study with 7 testing experts from industry and academia: 4 testing practitioners from 2 companies and 3 researchers from 2 universities participated. We followed the GQM approach to elicit regression testing goals, information needs, and measures.

Results: 43 regression testing goals were identified by the participants, which were reduced to 10 on the basis of similarity among the identified goals. During the subsequent priority assignment process, 5 goals were discarded because the priority assigned to them was very low. Participants identified 47 information needs/questions required to evaluate the success of regression testing with reference to goal G5 (confidence), which were then reduced to 10 on the basis of similarity. Finally, we identified measures to gauge the information needs/questions corresponding to goal G5.

Conclusions: We observed that the participation level of practitioners and researchers during the elicitation of goals and questions was the same. We found a certain level of agreement between the participants regarding the regression testing definitions and goals, but some disagreement regarding the priorities of the goals. We also identified the need to implement a regression testing evaluation framework in the participating companies.

• 49.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Comparing Native and Hybrid Applications with focus on Features2016Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave

Nowadays, smartphones and smartphone applications are part of our daily life. There is a variety of different, mutually incompatible operating systems on the market, which is an obstacle for developers when it comes to developing a single application for several operating systems. Hybrid application development has therefore become a potential substitute, and the evolution of the hybrid approach has made companies consider it a viable alternative when producing mobile applications. This research paper aims to compare native and hybrid application development at the feature level, to provide scientific evidence for researchers and companies choosing a development approach, as well as to provide vital information about both native and hybrid applications.

The study is based on both a literature study and an empirical study. The sources used are Summon@BTH, Google Scholar, and IEEE Xplore. To select relevant articles, the snowballing approach was used, with inclusion and exclusion criteria.

The authors conclude that native development is a better way to develop more advanced applications that use more device hardware, while hybrid is a perfectly viable choice when developing content-centric applications.

• 50.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
Supplementary Material of: “Prognosis of dementia with machine learning and microssimulation techniques: a systematic literature review”.2016Annet (Annet vitenskapelig)

This document contains the supplementary material regarding the systematic literature review entitled: “Prognosis of dementia with machine learning and microssimulation techniques: a systematic literature review”.
