  • 1.
    ABBAS, FAHEEM
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Intelligent Container Stacking System at Seaport Container Terminal, 2016. Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

Context: The workload at seaport container terminals is increasing gradually, so terminal performance must improve to meet demand. The key section of a container terminal is the container stacking yard, which links the seaside and the landside, so its performance affects both. The main problem in this area is unproductive container moves. A well-planned stacking area is therefore needed to increase terminal performance and make maximum use of existing resources.

Objectives: In this work, we analyzed the existing container stacking system at the Helsingborg seaport container terminal, Sweden, investigated previously proposed solutions to the problem, and searched for the optimization technique that gives the best possible solution. We then suggested a solution, tested it, and analyzed the simulation-based results with respect to the desired solution.

Methods: To identify the problem, methods, and proposed solutions in the domain of container stacking yard management, a literature review was conducted using several e-resources/databases. A genetic algorithm (GA) with the best parametric values was used to obtain the best optimized solution. A discrete event simulation model for container stacking in the yard was built and integrated with the genetic algorithm. A mathematical model was proposed to show the dependency of cost minimization on the number of container moves.

Results: The GA achieved a high fitness value over the generations for storing 150 containers at the best locations in a block with 3 tier levels while minimizing unproductive moves in the yard. A comparison between the genetic algorithm and tabu search was made to verify whether the GA performed better. A simulation model with the GA was used to obtain simulation-based results and to show container handling using resources such as AGVs, yard cranes, and delivery trucks, together with the container stacking and retrieval system in the yard. The mathematical model showed that the container stacking cost is directly proportional to the number of moves.

Conclusions: We identified the key factor (unproductive moves) that underlies the other key factors (time and cost) and affects the performance of the stacking yard and, in turn, the whole seaport terminal. We focused on this drawback of the stacking system and proposed a solution that makes the system more efficient, saving both time and cost. A genetic algorithm is well suited to solving the unproductive-moves problem in a container stacking system.
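The abstract above applies a genetic algorithm to container stacking. As a loose illustration of the generic GA loop (selection, crossover, mutation), not the thesis's actual stacking model, here is a minimal sketch; the stand-in bit-string fitness and all parameter values are assumptions for illustration:

```python
# Minimal GA skeleton: truncation selection, one-point crossover, bit-flip
# mutation. The real fitness would score stacking plans by unproductive
# moves; "count of 1-bits" is a stand-in purely to show the loop.
import random

random.seed(0)

def fitness(ind):                     # stand-in objective ("OneMax")
    return sum(ind)

def evolve(pop_size=30, length=20, generations=40, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children                 # elitist replacement
    return max(pop, key=fitness)

best = evolve()
```

With elitism the best individual never degrades between generations, which is one reason the fitness-versus-generations curve mentioned in the abstract rises monotonically.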

  • 2.
    Abbireddy, Sharath
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
A Model for Capacity Planning in Cassandra: Case Study on Ericsson’s Voucher System, 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

Cassandra is a NoSQL (Not only Structured Query Language) database which serves large amounts of data with high availability. Cassandra data storage dimensioning, also known as Cassandra capacity planning, refers to predicting the amount of disk storage required when a particular product is deployed using Cassandra. This is an important phase in any product development lifecycle involving a Cassandra data storage system. Capacity planning is based on many factors, classified as Cassandra-specific and product-specific. This study identifies the different Cassandra-specific and product-specific factors affecting disk space in a Cassandra data storage system. Based on these factors, a model is built to predict the disk storage for Ericsson’s voucher system. A case study was conducted on Ericsson’s voucher system and its Cassandra cluster. Interviews were conducted with different Cassandra users within Ericsson R&D to learn their opinions on capacity planning approaches and the factors affecting disk space in Cassandra. Responses from the interviews were transcribed and analyzed using grounded theory. A total of 9 Cassandra-specific factors and 3 product-specific factors were identified and documented. Using these 12 factors, a model was built and used to predict the disk space required for the voucher system’s Cassandra. The factors affecting disk space for deploying Cassandra are now exhaustively identified, which makes the capacity planning process more efficient. Using these factors, the voucher system’s disk space for deployment was predicted successfully.
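The abstract does not give the 12-factor model itself. As a hypothetical illustration of what a factor-based disk-size estimate can look like, here is a sketch; the factor names, default values and formula below are assumptions for illustration, not the thesis's actual model:

```python
# Hypothetical capacity estimate: raw data scaled by replication, with
# headroom for compaction and savings from compression. All factor
# names and defaults are invented for illustration.
def estimated_disk_bytes(rows, avg_row_bytes, replication_factor=3,
                         compaction_overhead=0.5, compression_ratio=2.0):
    raw = rows * avg_row_bytes * replication_factor
    return raw * (1 + compaction_overhead) / compression_ratio

# e.g. 10 M vouchers of ~1 KB each, replication factor 3:
size = estimated_disk_bytes(10_000_000, 1024)
```

A real model of the kind the thesis builds would replace these illustrative defaults with the measured Cassandra-specific and product-specific factors.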

  • 3.
    Abdelraheem, Mohamed Ahmed
    et al.
    SICS Swedish ICT AB, SWE.
    Gehrmann, Christian
    SICS Swedish ICT AB, SWE.
    Lindström, Malin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nordahl, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Executing Boolean queries on an encrypted Bitmap index, 2016. In: CCSW 2016 - Proceedings of the 2016 ACM Cloud Computing Security Workshop, co-located with CCS 2016, Association for Computing Machinery (ACM), 2016, pp. 11-22. Conference paper (Refereed).
    Abstract [en]

    We propose a simple and efficient searchable symmetric encryption scheme based on a Bitmap index that evaluates Boolean queries. Our scheme provides a practical solution in settings where communications and computations are very constrained as it offers a suitable trade-off between privacy and performance.
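The scheme above evaluates Boolean queries on a Bitmap index. A plaintext sketch of the underlying bitmap mechanics follows; the paper's encryption layer is omitted, and the records and attributes are invented for illustration:

```python
# Bitmap index: one bit-vector per (attribute, value) pair, where bit i
# is set iff record i has that value. Boolean queries then reduce to
# bitwise AND/OR/NOT over the vectors.
records = [
    {"city": "Lund",  "type": "A"},
    {"city": "Malmo", "type": "B"},
    {"city": "Lund",  "type": "B"},
]

def build_index(records):
    index = {}
    for i, rec in enumerate(records):
        for attr, val in rec.items():
            index[(attr, val)] = index.get((attr, val), 0) | (1 << i)
    return index

index = build_index(records)

# Query: city == "Lund" AND type == "B"  -> bitwise AND of two vectors.
hits = index[("city", "Lund")] & index[("type", "B")]
matching = [i for i in range(len(records)) if (hits >> i) & 1]
# matching == [2]
```

Because each query is a handful of bitwise operations, the approach suits the constrained-communication, constrained-computation setting the paper targets.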

  • 4.
    Abdelrasoul, Nader
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Optimization Techniques For an Artificial Potential Fields Racing Car Controller, 2013. Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

Context. Building autonomous racing car controllers is a growing field of computer science which has received great attention lately. An approach named Artificial Potential Fields (APF) is widely used for path finding and obstacle avoidance in robotics and vehicle motion control systems. The use of APF results in a collision-free path; it can also be used to achieve other goals such as overtaking and maneuverability. Objectives. The aim of this thesis is to build an autonomous racing car controller that achieves good performance in terms of speed, time, and damage level. To fulfill this aim, the controller's choices must be optimal, because racing requires the highest possible performance, and the controller must be built using algorithms that do not incur high computational overhead. Methods. We used Particle Swarm Optimization (PSO) in combination with APF to achieve optimal car control. The Open Racing Car Simulator (TORCS) was used as a testbed for the proposed controller, and we conducted two experiments with different configurations to test the performance of our APF-PSO controller. Results. The obtained results showed that the APF-PSO controller performed well compared to top-performing controllers, and that the use of PSO enhanced performance compared to using APF only. High performance was demonstrated in solo driving and in racing competitions, with the exception of an increased level of damage; however, the damage level was not very high and did not result in a controller shutdown. Conclusions. Based on the obtained results we conclude that the use of PSO with APF yields high performance at low computational cost.

  • 5.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    García Martín, Eva
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Trend analysis to automatically identify heat program changes, 2017. In: Energy Procedia, Elsevier, 2017, Vol. 116, pp. 407-415. Conference paper (Refereed).
    Abstract [en]

The aim of this study is to improve the monitoring and control of heating systems located in customer buildings through the use of a decision support system. To achieve this, the proposed system applies a two-step classifier to detect manual changes to the temperature of the heating system. We use data from the Swedish company NODA, active in energy optimization and services for energy efficiency, to train and test the suggested system. The decision support system is evaluated through an experiment, and the results are validated by experts at NODA. The results show that the decision support system can detect changes within three days of their occurrence, considering only daily average measurements.

  • 6.
    Adamov, Alexander
    et al.
    Harkivskij Nacionalnij Universitet Radioelectroniki, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Cloud incident response model, 2016. In: Proceedings of 2016 IEEE East-West Design and Test Symposium, EWDTS 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016. Conference paper (Refereed).
    Abstract [en]

This paper addresses the problem of incident response in clouds. A conventional incident response model is formulated to serve as a basis for the cloud incident response model. Minimization of incident handling time is considered the key criterion of the proposed cloud incident response model; it is achieved at the expense of embedding redundancy into the cloud infrastructure, represented by Network and Security Controllers, and introducing a Security Domain for threat analysis and cloud forensics. These architectural changes are discussed and applied within the cloud incident response model. © 2016 IEEE.

  • 7.
    Ahmed, Qutub Uddin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Mujib, Saifullah Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Context Aware Reminder System: Activity Recognition Using Smartphone Accelerometer and Gyroscope Sensors Supporting Context-Based Reminder Systems, 2014. Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

Context. A reminder system offers flexibility in daily life activities and helps people stay independent. It not only helps with reminders of daily life activities but also serves, to a great extent, people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems such as disabilities or mild dementia. Traditional reminders, based on a fixed set of defined activities, are not enough to address this need in a wider context. To make the reminder more flexible, the user's current activities or context must be considered. To recognize the user's current activity, different types of sensors can be used; such sensors are available in smartphones, which can assist in building a more contextual reminder system. Objectives. To make a reminder context-based, it is important to identify the context, and the user's activities must be recognized at a particular moment. With this in mind, this research aims to understand the relevant context and activities, identify an effective way to recognize three different user activities (drinking, walking and jogging) using smartphone sensors (accelerometer and gyroscope), and propose a model that uses the properties of the recognized activity. Methods. This research combined a survey and interviews with an exploratory smartphone sensor experiment to recognize user activity. An online survey was conducted with 29 participants, and interviews were held in cooperation with Karlskrona Municipality; four elderly people participated in the interviews. For the experiment, data for three different user activities were collected using smartphone sensors and analyzed to identify the pattern of each activity. Moreover, a model is proposed to exploit the properties of the activity pattern. The performance of the proposed model was evaluated using the machine learning tool WEKA. Results. The survey and interviews helped us understand which activities of daily living should be considered when designing the reminder system, and how and when it should be used. For instance, most survey participants already use some sort of reminder system, most of them use a smartphone, and one of the most important tasks they forget is taking their medicine. These findings informed the experiment. From the experiment, different patterns were observed for the three activities: for walking and jogging the pattern is discrete, while for drinking the pattern is complex and can overlap with other activities or become noisy. Conclusions. The survey, interviews and background study provided a set of evidence that a reminder system based on users' activity is essential in daily life. The large number of smartphone users motivated this research to select smartphone sensors to identify user activity, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying simple mathematical calculations to recorded smartphone sensor (accelerometer and gyroscope) data. The approach was evaluated with 99% accuracy on the experimental data. The study concluded by proposing a model that uses the properties of the identified activities and by developing a prototype reminder system. Preliminary tests of the model were performed, but further empirical validation and verification are needed.
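The thesis above identifies activity patterns by applying simple mathematical calculations to recorded accelerometer data. A sketch of the kind of calculation that can separate such activities follows; the sample values and the use of per-window variance are illustrative assumptions, not the thesis's exact method:

```python
# Per-window features over (x, y, z) accelerometer samples: the scalar
# magnitude removes device orientation, and the variance of magnitudes
# within a window tends to be small for a still/drinking posture and
# large for walking or jogging.
import math

def magnitude(sample):                      # (x, y, z) -> scalar
    return math.sqrt(sum(v * v for v in sample))

def window_variance(samples):
    mags = [magnitude(s) for s in samples]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

# Made-up windows: near-still readings vs. strongly varying ones.
still = [(0.0, 0.1, 9.8), (0.1, 0.0, 9.8), (0.0, 0.1, 9.7)]
jog   = [(1.0, 2.0, 9.8), (4.0, 1.0, 12.0), (0.5, 3.0, 7.0)]
```

A classifier (such as those in WEKA, which the thesis uses for evaluation) would then be trained on features like these rather than on raw samples.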

  • 8.
    Akser, M.
    et al.
    Ulster University, GBR.
    Bridges, B.
    Ulster University, GBR.
    Campo, G.
    Ulster University, GBR.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Curran, K.
    Ulster University, GBR.
    Fitzpatrick, L.
    Ulster University, GBR.
    Hamilton, L.
    Ulster University, GBR.
    Harding, J.
    Ulster University, GBR.
    Leath, T.
    Ulster University, GBR.
    Lunney, T.
    Ulster University, GBR.
    Lyons, F.
    Ulster University, GBR.
    Ma, M.
    University of Huddersfield, GBR.
    Macrae, J.
    Ulster University, GBR.
    Maguire, T.
    Ulster University, GBR.
    McCaughey, A.
    Ulster University, GBR.
    McClory, E.
    Ulster University, GBR.
    McCollum, V.
    Ulster University, GBR.
    Mc Kevitt, P.
    Ulster University, GBR.
    Melvin, A.
    Ulster University, GBR.
    Moore, P.
    Ulster University, GBR.
    Mulholland, E.
    Ulster University, GBR.
    Muñoz, K.
    BijouTech, CoLab, Letterkenny, Co., IRL.
    O’Hanlon, G.
    Ulster University, GBR.
    Roman, L.
    Ulster University, GBR.
SceneMaker: Creative technology for digital storytelling, 2017. In: Lect. Notes Inst. Comput. Sci. Soc. Informatics Telecommun. Eng. / [ed] Brooks A.L., Brooks E., Springer Verlag, 2017, Vol. 196, pp. 29-38. Conference paper (Refereed).
    Abstract [en]

    The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include automated simulation of productions and education and training of actors, screenwriters and directors. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2017.

  • 9.
    Albinsson, Mattias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Andersson, Linus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Improving Quality of Experience through Performance Optimization of Server-Client Communication, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

In software engineering it is important to consider how a potential user experiences the system during usage. No software user will have a satisfying experience if they perceive the system as slow, unresponsive, unstable or as hiding information. Additionally, if the system restricts the users to a limited set of actions, their experience degrades further. To evaluate the effect these issues have on a user's perceived experience, a measure called Quality of Experience is applied.

In this work the foremost objective was to improve how a user experiences a system suffering from the previously mentioned issues when searching for large amounts of data. To achieve this objective, the system was evaluated to identify which issues were present and which affected the user-perceived Quality of Experience the most. The evaluated system was a warehouse management system developed and maintained by Aptean AB's office in Hässleholm, Sweden. The system consisted of multiple clients and a server, sending data over a network. The evaluation took the form of a case study analyzing the system's performance, together with a survey answered by Aptean staff to learn how the system was experienced when searching for large amounts of data. From the results, the three issues impacting Quality of Experience the most were identified: (1) interaction: a limited set of actions during a search; (2) transparency: limited representation of search progress and received data; (3) execution time: search completion taking a long time.

After the system was analyzed, hypothesized technological solutions were implemented to resolve the identified issues. The first solution divided the data into multiple partitions, the second decreased the data size sent over the network by applying compression, and the third combined the two. Following the implementations, a final set of measurements, together with the same survey, was performed to compare the solutions based on their performance and the improvement in perceived Quality of Experience.

The most significant improvement in perceived Quality of Experience was achieved by the data partitioning solution. While the combination of solutions offered a slight further improvement, it was primarily due to data partitioning, making that technology a more suitable solution for the identified issues than compression, which only slightly improved perceived Quality of Experience. When the data was partitioned, updates were sent more frequently, which not only allowed the user a larger set of actions during a search but also improved the information available in the client regarding search progress and received data. While data partitioning did not improve the execution time, it gave the user a first set of data quickly, not forcing the user to wait idly, so the system was experienced as fast. The results indicated that to increase the user's perceived Quality of Experience in systems with server-client communication, data partitioning offers several opportunities for improvement.
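The winning solution above delivers search results in partitions so the client receives data incrementally. A minimal sketch of the idea follows; the chunk size and the generator API are illustrative, not the thesis's implementation:

```python
# Instead of one blocking response, the server yields fixed-size
# partitions; the client can render (and allow actions on) the first
# chunk while later chunks are still being produced.
def partitioned(results, chunk_size=100):
    for i in range(0, len(results), chunk_size):
        yield results[i : i + chunk_size]

rows = list(range(250))
chunks = list(partitioned(rows))
first_batch = chunks[0]          # the user sees data after one chunk
```

The total work is unchanged, which matches the thesis's finding that partitioning improved perceived responsiveness rather than execution time.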

  • 10.
    Aleksandr, Polescuk
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Linking Residential Burglaries using the Series Finder Algorithm in a Swedish Context, 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

Context. A minority of criminals performs a majority of the crimes today. It is known that every criminal or group of offenders has, to some extent, a particular pattern (modus operandi) in how crimes are performed. Therefore, computational power can be employed to discover crimes that follow the same pattern and were possibly carried out by the same criminal. The goal of this thesis was to apply the existing Series Finder algorithm to a feature-rich dataset containing data about Swedish residential burglaries.

Objectives. The following objectives were achieved to complete this thesis: an existing Series Finder implementation was modified to fit the Swedish police force's dataset, and the MatLab code was converted to Python. Furthermore, an experiment was designed with appropriate metrics and statistical tests. Finally, the modified Series Finder implementation was evaluated against both Spatial-Temporal and Random models.

Methods. An experimental methodology was chosen to achieve the objectives. An initial experiment was performed to find the right parameters for the main experiments. Afterward, a proper investigation with dependent and independent variables was conducted.

Results. After the metric calculations and statistical tests, an accurate picture emerged of how each model performed. Series Finder showed better performance than the Random model but lower performance than the Spatial-Temporal model. The possible causes of one model performing better than another are discussed in the analysis and discussion section.

Conclusions. After completing the objectives and answering the research questions, it could be clearly seen how the Series Finder implementation performed against the other models. Despite its low performance, Series Finder still showed potential, as presented in future work.

  • 11.
    Amiri, Mohammad Reza Shams
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rohani, Sarmad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Automated Camera Placement using Hybrid Particle Swarm Optimization, 2014. Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

Context. Automatic placement of 3D models of surveillance cameras in an arbitrary floor plan containing obstacles is a challenging task. The problem becomes more complex when different types of regions of interest (RoI) and minimum resolutions are considered. An automatic camera placement decision support system (ACP-DSS) integrated into a 3D CAD environment could assist surveillance system designers with finding good camera settings under multiple constraints. Objectives. In this study we designed and implemented two subsystems: a camera toolset in SketchUp (CTSS) and a decision support system using an enhanced Particle Swarm Optimization (PSO) algorithm (HPSO-DSS). The objective for the proposed algorithm was good computational performance, so that a solution for the automatic camera placement (ACP) problem could be generated quickly. The new algorithm benefited from aspects of other heuristics, such as hill climbing and greedy algorithms, as well as a number of new enhancements. Methods. Both CTSS and ACP-DSS were designed and constructed using the information technology (IT) research framework. A state-of-the-art evolutionary optimization method, Hybrid PSO (HPSO), implemented to solve the ACP problem, formed the core of our decision support system. Results. The CTSS was evaluated by some of its potential users, who employed it and then answered a survey; the evaluation confirmed an outstanding level of satisfaction among the respondents. Various aspects of the HPSO algorithm were compared to two other algorithms (PSO and a genetic algorithm), all implemented to solve our ACP problem. Conclusions. The HPSO algorithm provided an efficient mechanism for solving the ACP problem in a timely manner. The integration of ACP-DSS into CTSS may help surveillance designers plan and validate the design of their security systems more easily. The quality of CTSS, as well as the solutions offered by ACP-DSS, was confirmed by a number of field experts.

  • 12.
    Amjad, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Malhi, Rohail Khan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Burhan, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
DIFFERENTIAL CODE SHIFTED REFERENCE IMPULSE-BASED COOPERATIVE UWB COMMUNICATION SYSTEM, 2013. Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

Cooperative Impulse Radio - Ultra Wideband (IR-UWB) communication is a radio technology popular for short-range communication systems, as it enables single-antenna mobiles in a multi-user environment to share their antennas by creating a virtual MIMO system to achieve transmit diversity. To improve cooperative IR-UWB system performance, we use Differential Code Shifted Reference (DCSR). Simulations are used to compute the Bit Error Rate (BER) of DCSR in a cooperative IR-UWB system using different numbers of decode-and-forward relays while changing the distance between the source and destination nodes. The results suggest that, compared to a Code Shifted Reference (CSR) cooperative IR-UWB communication system, the DCSR system performs better in terms of BER, power efficiency and channel capacity. The simulations are performed for both non-line-of-sight (NLOS) and line-of-sight (LOS) conditions, and the results confirm that the system performs better in a LOS channel environment. The simulation results also show that performance improves as the number of relay nodes is increased to a sufficiently large number.

  • 13.
Ananth, Indirajith Vijai
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Study on Assessing QoE of 3DTV Using Subjective Methods, 2013. Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

The ever-increasing popularity and enormous growth of the 3D movie industry is stimulating the penetration of 3D services into home entertainment systems. Providing a third dimension gives viewers an intense visual experience. As this is a new field, several research efforts are under way to measure the end user's viewing experience. Research groups including 3D TV manufacturers, service providers and standards organizations are interested in improving the user experience. Recent research in 3D video quality measurement has revealed uncertain issues as well as better-known results. Measuring perceptual stereoscopic video quality through subjective testing can provide practical results. This thesis studies and investigates three different rating scales (video quality, visual discomfort and sense of presence) and compares them through subjective testing, combined with two viewing distances, 3H and 5H, where H is the height of the display screen. This work shows that a single rating scale produces the same result as three different scales, and that viewing distance has little or no impact on the Quality of Experience (QoE) of 3DTV at 3H and 5H for symmetric coding impairments.

  • 14.
    Andersson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Silhouette-based Level of Detail: A comparison of real-time performance and image space metrics, 2016. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

Context. The geometric complexity of objects in games and other real-time applications is a crucial aspect of application performance. Such applications usually redraw the screen between 30 and 60 times per second, sometimes even more often, which can be a hard task in an environment with a high number of geometrically complex objects. The concept called Level of Detail, often abbreviated LoD, aims to alleviate the load on the hardware by introducing methods and techniques to minimize the amount of geometry while still maintaining the same, or a very similar, result.

Objectives. This study compares four often-used techniques: Static LoD, Unpopping LoD, Curved PN Triangles, and Phong Tessellation. Phong Tessellation is silhouette-based, and since the silhouette is considered one of the most important properties, the main aim is to determine how it performs compared to the other three techniques.

    Methods. The four techniques are implemented in a real-time application using the modern rendering API Direct3D 11. Data will be gathered from this application to use in several experiments in the context of both performance and image space metrics.

Conclusions. This study has shown that all of the techniques work in real time, but with varying results. From the experiments it can be concluded that the best technique to use is Unpopping LoD: it performs well and provides a good visual result with the least average amount of popping among the compared techniques. The dynamic techniques are not suitable as a substitute for Unpopping LoD, but further research could examine how they can be used together, and how the objects themselves can be designed with the dynamic techniques in mind.

  • 15.
    Andersson, Marcus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nilsson, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improving Integrity Assurances of Log Entries From the Perspective of Intermittently Disconnected Devices, 2014. Student thesis
    Abstract [en]

    It is common today in large corporate environments for system administrators to employ centralized systems for log collection and analysis. The log data can come from any device, from smart-phones to large-scale server clusters. During an investigation of a system failure or suspected intrusion, these logs may contain vital information. However, the trustworthiness of this log data must be confirmed. The objective of this thesis is to evaluate the state of the art and provide practical solutions and suggestions in the field of secure logging. In this thesis we focus on solutions that do not require a persistent connection to a central log management system. To this end a prototype logging framework was developed, including client, server and verification applications. The client employs different techniques of signing log entries. The focus of this thesis is to evaluate each signing technique from both a security and a performance perspective. This thesis evaluates "Traditional RSA-signing", "Traditional Hash-chains", "Itkis-Reyzin's asymmetric FSS scheme" and "RSA signing and tick-stamping with TPM", the latter being a novel technique developed by us. In our evaluations we recognized the inability of the evaluated techniques to detect so-called 'truncation attacks', therefore a truncation detection module was also developed, which can be used independently of, and side by side with, any signing technique. In this thesis we conclude that our novel Trusted Platform Module technique has the most to offer in terms of log security, although it does introduce a hardware dependency on the TPM. We have also shown that the truncation detection technique can be used to assure an external verifier of the number of log entries that have at least passed through the log client software.
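    As a rough illustration of the "Traditional Hash-chains" idea evaluated above (a sketch, not the thesis's actual framework; entry names and the seed are hypothetical), each log entry can be chained to its predecessor so that in-place tampering or reordering becomes detectable:

```python
import hashlib

def chain_logs(entries, seed=b"genesis"):
    """Build a hash chain: each link commits to the entry text and to
    the previous link, so modifying or reordering any earlier entry
    changes every subsequent link."""
    links = []
    prev = hashlib.sha256(seed).digest()
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        links.append(prev)
    return links

def verify_logs(entries, links, seed=b"genesis"):
    """Recompute the chain from the entries and compare link by link."""
    return links == chain_logs(entries, seed)

logs = ["boot ok", "login alice", "disk warning"]
chain = chain_logs(logs)
assert verify_logs(logs, chain)                                   # intact
assert not verify_logs(["boot ok", "login mallory", "disk warning"], chain)
```

    Note that a plain hash chain cannot detect truncation of the newest entries (dropping the tail of both lists still verifies), which is exactly the truncation attack that the separate detection module above addresses.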

  • 16.
    Andrej, Sekáč
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance evaluation based on data from code reviews, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control, but the stored data could also be used for evaluation of the software development process.

    Objectives. This thesis uses machine learning methods to approximate a review expert's performance evaluation function. Due to the limited size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on the performance. In this research we propose features and also analyse their relevance to development performance evaluation.

    Methods. This thesis uses Radial Basis Function networks as the regression algorithm for the performance evaluation approximation and Metric Based Regularisation as the semi-supervised learning method. For the analysis of the feature set and goodness of fit we use statistical tools together with manual analysis.

    Results. The semi-supervised learning method achieved an accuracy similar to supervised versions of the algorithm. The feature analysis showed that there is a significant negative correlation between the performance evaluation and three other features. A manual verification of learned models on unlabelled data achieved 73.68% accuracy.

    Conclusions. We have not managed to prove that the semi-supervised learning method used would perform better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to the change size, and the amount of code lines modified in later parts of development are relevant to the performance evaluation task with high probability. The achieved model accuracy of close to 75% leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in the performance evaluation approximation.

  • 17.
    Annavarjula, Vaishnavi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Computer-Vision Based Retinal Image Analysis for Diagnosis and Treatment, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context- Vision is one of the five elementary physiological senses. Vision is enabled via the eye, a very delicate sense organ which is highly susceptible to damage that results in loss of vision. The damage comes in the form of injuries or diseases such as diabetic retinopathy and glaucoma. While it is not possible to predict accidents, predicting the onset of disease in its earliest stages is highly attainable. Owing to the leaps in imaging technology, it is also possible to provide near-instant diagnosis by utilizing computer vision and image processing capabilities.

    Objectives- In this thesis, an algorithm is proposed and implemented to classify images of the retina into healthy images or two classes of unhealthy images, i.e., diabetic retinopathy and glaucoma, thus aiding diagnosis. Additionally, the algorithm is studied to investigate which image transformation is more feasible to implement within the scope of this algorithm and which region of the retina helps in accurate diagnosis.

    Methods- An experiment has been designed to facilitate the development of the algorithm. The algorithm is developed in such a way that it can accept all the values of a dataset concurrently and perform both the domain transforms independent of each other.

    Results- It is found that blood vessels help best in predicting disease associations, with the classifier giving an accuracy of 0.93 and a Cohen's kappa score of 0.90. Frequency-transformed images also presented good prediction accuracy: 0.93 on blood vessel images and 0.87 on optic disk images.

    Conclusions- It is concluded that blood vessels from the fundus images after frequency transformation gives the highest accuracy for the algorithm developed when the algorithm is using a bag of visual words and an image category classifier model.

    Keywords- Image Processing, Machine Learning, Medical Imaging

  • 18.
    Axelsson, Arvid
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Light Field Coding Using Panoramic Projection, 2014. Student thesis
    Abstract [en]

    A new generation of 3D displays provides depth perception without the need for glasses and allows the viewer to see content from many different directions. Providing video for these displays requires capturing the scene by several cameras at different viewpoints, the data from which together form light field video. Encoding such video with existing video coding requires a large amount of data, and it increases quickly with a higher number of views, which this application needs. One such coding is the multiview extension of High Efficiency Video Coding (MV-HEVC), which encodes a number of similar video streams as different layers. A new coding scheme for light field video, called Panoramic Light Field (PLF), is implemented and evaluated in this thesis. The main idea behind the coding is to project all points in a scene that are visible from any of the viewpoints to a single, global view, similar to how texture mapping maps a texture onto a 3D model in computer graphics. Whereas objects ordinarily shift position in the frame as the camera position changes, this is not the case when using this projection: a visible point in space is projected to the same image pixel regardless of viewpoint, resulting in large similarities between images from different viewpoints. The similarity between the layers in light field video helps to achieve more efficient compression when the projection is combined with existing multiview coding. In order to evaluate the scheme, 3D content was created and software was developed to encode it using PLF. Video using this coding is compared to existing technology: a straightforward encoding of the views using MV-HEVC. The results show that the PLF coding performs better on the sample content at lower quality levels, while it is worse at higher bitrates due to quality loss from the projection procedure. It is concluded that PLF is a promising technology, and suggestions are given for future research that may improve its performance further.

  • 19. Baca, Dejan
    et al.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    A Novel Security-Enhanced Agile Software Development Process Applied in an Industrial Setting, 2015. In: Proceedings 10th International Conference on Availability, Reliability and Security ARES 2015, IEEE Computer Society Digital Library, 2015. Conference paper (Refereed)
    Abstract [en]

    A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., the baseline process of the comparison outlined in this paper, required 2.7 employee hours for every risk identified in the analysis process, compared to, on average, 1.5 hours for SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., a more than fivefold increase. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure for security was 1% in the previous development process.

  • 20.
    Bachu, Rajesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A framework to migrate and replicate VMware Virtual Machines to Amazon Elastic Compute Cloud: Performance comparison between on premise and the migrated Virtual Machine, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context Cloud computing is the new trend in the IT industry. Traditionally, obtaining servers was quite time-consuming for companies: the whole process of researching what kind of hardware to buy, getting budget approval, purchasing the hardware and getting access to the servers could take weeks or months. In order to save time and reduce expenses, most companies are moving towards the cloud. One of the known cloud providers is Amazon Elastic Compute Cloud (EC2). Amazon EC2 makes it easy for companies to obtain virtual servers (known as compute instances) in the cloud quickly and inexpensively. Another advantage of Amazon EC2 is the flexibility it offers: companies can even import/export the Virtual Machines (VMs) that they have built, which meet the company's IT security, configuration, management and compliance requirements, into Amazon EC2.

    Objectives In this thesis, we investigate importing a VM running on VMware into Amazon EC2. In addition, we make a performance comparison between a VM running on VMware and the VM with same image running on Amazon EC2.

    Methods A case study was conducted to select a suitable method to migrate VMware VMs to Amazon EC2. In addition, an experiment was conducted to measure the performance of a Virtual Machine running on VMware and compare it with the same Virtual Machine running on EC2. We measure the performance in terms of CPU and memory utilization as well as disk read/write speed, using well-known open-source benchmarks from the Phoronix Test Suite (PTS).

    Results Importing VM snapshots (VMDK, VHD and RAW formats) to EC2 was investigated using three methods provided by AWS. The performance comparison was done by running each benchmark 25 times on each Virtual Machine.

    Conclusions Importing a VM to EC2 was successful only with the RAW format, and replication was not successful, as AWS installs some software and drivers while importing the VM to EC2. The migrated EC2 VM performs better than the on-premise VMware VM in terms of CPU and memory utilization and disk read/write speed.

  • 21.
    Bakhtyar, Shoaib
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Designing Electronic Waybill Solutions for Road Freight Transport, 2016. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In freight transportation, a waybill is an important document that contains essential information about a consignment. The focus of this thesis is on a multi-purpose electronic waybill (e-Waybill) service, which can provide the functions of a paper waybill and is capable of storing, at least, the information present in a paper waybill. In addition, the service can be used to support other existing Intelligent Transportation System (ITS) services by utilizing synergies with them. Additionally, information entities from the e-Waybill service are investigated for the purpose of knowledge-building concerning freight flows.

    A systematic review of the state of the art of the e-Waybill service reveals several limitations, such as a limited focus on supporting ITS services. Five different conceptual e-Waybill solutions (which can be seen as abstract system designs for implementing the e-Waybill service) are proposed. The solutions are investigated for functional and technical requirements (non-functional requirements), which can potentially impose constraints on a potential system for implementing the e-Waybill service. Further, the service is investigated for information and functional synergies with other ITS services. For the information synergy analysis, the required input information entities for different ITS services are identified; if at least one information entity can be provided by an e-Waybill at the right location, we regard it as a synergy. Additionally, a service design method has been proposed for supporting the process of designing new ITS services, which primarily utilizes functional synergies between the e-Waybill and different existing ITS services. The suggested method is applied for designing a new ITS service, i.e., the Liability Intelligent Transport System (LITS) service. The purpose of the LITS service is to support the process of identifying when and where a consignment has been damaged and who was responsible when the damage occurred. Furthermore, information entities from e-Waybills are utilized for building improved knowledge concerning freight flows. A freight and route estimation method has been proposed for building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.

    The results from this thesis can be used to support the choice of a practical e-Waybill service implementation that has the potential to provide high synergy with ITS services. This may lead to a higher utilization of ITS services and more sustainable transport, e.g., in terms of reduced congestion and emissions. Furthermore, the implemented e-Waybill service can be an enabler for collecting consignment and traffic data and converting the data into useful traffic information. In particular, the service can lead to increasing amounts of digitally stored data about consignments, which can lead to improved knowledge of the movement of freight and trucks. This knowledge may be helpful when making decisions concerning road taxes, fees, and infrastructure investments.

  • 22.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    On Improving Research Methodology Course at Blekinge Institute of Technology, 2016. Conference paper (Refereed)
    Abstract [en]

    The Research Methodology in Software Engineering and Computer Science (RM) course is a compulsory course that must be studied by graduate students at Blekinge Institute of Technology (BTH) prior to undertaking their thesis work. The course is focused on teaching research methods and techniques for data collection and analysis in the fields of Computer Science and Software Engineering. It is intended that the course should help students in practically applying appropriate research methods in different courses (in addition to the RM course), including their Master's theses. However, it is believed that there exist deficiencies in the course that negatively affect the course implementation (learning and assessment activities) as well as the performance of different participants (students, teachers, and evaluators). In this article our aim is to investigate potential deficiencies in the RM course at BTH in order to provide concrete evidence on the deficiencies faced by students, evaluators, and teachers in the course. Additionally, we suggest recommendations for resolving the identified deficiencies. Our findings, gathered through semi-structured interviews with students, teachers, and evaluators in the course, are presented in this article. By identifying a total of twenty-one deficiencies from different perspectives, we found that there exist critical deficiencies at different levels within the course. Furthermore, in order to overcome the identified deficiencies, we suggest seven recommendations that may be implemented at different levels within the course and the study program. Our suggested recommendations, if implemented, will help in resolving deficiencies in the course, which may lead to improved teaching and learning in the RM course at BTH.

  • 23.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Electronic Waybill Solutions: A Systematic Review. In: Journal of Special Topics in Information Technology and Management, ISSN 1385-951X, E-ISSN 1573-7667. Article in journal (Other academic)
    Abstract [en]

    A critical component in freight transportation is the waybill, a transport document that contains essential information about a consignment. Actors within the supply chain handle not only the freight but also vast amounts of information, which are often unclear due to various errors. An electronic waybill (e-Waybill) solution replaces the paper waybill electronically and improves on it, e.g., by ensuring error-free storage and flow of information. In this paper, a systematic review using the snowball method is conducted to investigate the state of the art of e-Waybill solutions. After performing three iterations of the snowball process, we identified eleven studies for further evaluation and analysis due to their strong relevance. The studies are mapped in relation to each other and a classification of the e-Waybill solutions is constructed. Most of the studies identified in our review support the benefits of electronic documents, including e-Waybills. Typically, most research papers reviewed support EDI (Electronic Data Interchange) for implementing e-Waybills. However, limitations exist due to high costs that make it less affordable for small organizations. Recent studies point to alternative technologies, which we have listed in this paper. Additionally, we show that most studies focus on the administrative benefits, but few studies investigate the potential of e-Waybill information for achieving services such as estimated time of arrival and real-time tracking and tracing.

  • 24.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Freight transport prediction using electronic waybills and machine learning, 2014. In: 2014 International Conference on Informative and Cybernetics for Computational Social Systems, IEEE Computer Society, 2014, 128-133 p. Conference paper (Refereed)
    Abstract [en]

    A waybill is a document that accompanies freight during transportation. The document contains essential information such as the origin and destination of the freight, the involved actors, and the type of freight being transported. We believe the information from a waybill, when presented in an electronic format, can be utilized for building knowledge about freight movement. This knowledge may be helpful for decision makers, e.g., freight transport companies and public authorities. In this paper, the results from a study of a Swedish transport company are presented using order data from a customer ordering database, which is, to a large extent, similar to the information present in paper waybills. We have used the order data for predicting the type of freight moving between a particular origin and destination. Additionally, we have evaluated a number of different machine learning algorithms based on their prediction performance. The evaluation was based on their weighted average true-positive and false-positive rates, weighted average area under the curve, and weighted average recall values. We conclude, from the results, that the data from a waybill, when available in an electronic format, can be used to improve knowledge about freight transport. Additionally, we conclude that, among the algorithms IBk, SMO, and LMT, IBk performed better by predicting the highest number of classes with higher weighted average values for the true-positive and false-positive rates and recall.

  • 25.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Holmgren, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A Data Mining Based Method for Route and Freight Estimation, 2015. In: Procedia Computer Science, Elsevier, 2015, Vol. 52, 396-403 p. Conference paper (Refereed)
    Abstract [en]

    We present a method which makes use of historical vehicle data and current vehicle observations in order to estimate 1) the route a vehicle has used and 2) the freight the vehicle carried along the estimated route. The method includes a learning phase and an estimation phase. In the learning phase, historical data about the movement of a vehicle and of the consignments allocated to the vehicle are used in order to build estimation models: one for route choice and one for freight allocation. In the estimation phase, the generated estimation models are used together with a sequence of observed positions for the vehicle as input in order to generate route and freight estimates. We have partly evaluated our method in an experimental study involving a medium-size Swedish transport operator. The results of the study indicate that supervised learning, in particular the algorithm Naive Bayes Multinomial Updatable, shows good route estimation performance even when a significant amount of information about where the vehicle has traveled is missing. For the freight estimation, we used a method based on averaging the consignments on the known historical trips for the estimated route. We argue that the proposed method might contribute to building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.
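    The abstract names Naive Bayes Multinomial Updatable (a Weka classifier) for route estimation. As a rough illustration of the underlying idea only, and not the paper's implementation, a minimal hand-rolled multinomial Naive Bayes over hypothetical road-segment observations might look like this (route names and segment identifiers are invented):

```python
import math
from collections import Counter, defaultdict

class RouteNB:
    """Minimal multinomial Naive Bayes: classes are routes, features
    are counts of observed positions (road segments) along a trip."""
    def __init__(self):
        self.seg_counts = defaultdict(Counter)  # route -> segment counts
        self.route_counts = Counter()           # route -> number of trips
        self.vocab = set()                      # all segments ever seen

    def update(self, segments, route):
        # Incremental training, analogous to the "Updatable" variant.
        self.route_counts[route] += 1
        self.seg_counts[route].update(segments)
        self.vocab.update(segments)

    def predict(self, segments):
        # Pick the route maximizing log prior + Laplace-smoothed
        # log likelihood of the observed segments.
        best, best_lp = None, -math.inf
        total = sum(self.route_counts.values())
        v = len(self.vocab)
        for route, n in self.route_counts.items():
            lp = math.log(n / total)
            denom = sum(self.seg_counts[route].values()) + v
            for s in segments:
                lp += math.log((self.seg_counts[route][s] + 1) / denom)
            if lp > best_lp:
                best, best_lp = route, lp
        return best

nb = RouteNB()
nb.update(["E22", "R28", "R29"], "route_A")
nb.update(["E22", "R30", "R15"], "route_B")
print(nb.predict(["R28"]))  # route_A, even with most positions unobserved
```

    The single-observation prediction above mirrors the abstract's point: the estimate can still be reasonable when much of the position information is missing.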

  • 26.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Holmgren, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Persson, Jan A.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Technical Requirements of the e-Waybill Service, 2016. In: International Journal of Computer and Communication Engineering, ISSN 2010-3743, Vol. 5, no 2, 130-140 p. Article in journal (Refereed)
    Abstract [en]

    An electronic waybill (e-Waybill) is a service whose purpose is to replace the paper waybill, a paper document that traditionally follows a consignment during transport. An important purpose of the e-Waybill is to achieve a paperless flow of information during freight transport. In this paper, we investigate five e-Waybill solutions, that is, system design specifications for the e-Waybill, regarding their non-functional (technical) requirements. In addition, we discuss how well existing technologies are able to fulfil the identified requirements. We have identified that information storage, synchronization and conflict management, access control, and communication are important categories of technical requirements of the e-Waybill service. We argue that the identified technical requirements can be used to support the process of designing and implementing the e-Waybill service.

  • 27.
    Bakhtyar, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Mbiydzenyuy, Gideon
    Netport Science Park, Karlshamn.
    Henesey, Lawrence
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A Simulation Study of the Electronic Waybill Service, 2015. In: Proceedings - EMS 2015: UKSim-AMSS 9th IEEE European Modelling Symposium on Computer Modelling and Simulation / [ed] David Al-Dabas, Gregorio Romero, Alessandra Orsoni, Athanasios Pantelous, IEEE Computer Society, 2015, 307-312 p. Conference paper (Refereed)
    Abstract [en]

    We present results from a simulation study, which was designed to investigate the potential positive impacts, i.e., the invoicing and processing time, and financial savings, when using an electronic waybill instead of paper waybills for road-based freight transportation. The simulation model is implemented in an experiment for three different scenarios, where the processing time for waybills at the freight loading and unloading locations in a particular scenario differs from the other scenarios. The results indicate that a saving of 65%–99% in the invoicing time can be achieved when using an electronic waybill instead of paper waybills. Our study can be helpful to decision makers, e.g., managers and staff dealing with paper waybills, to estimate the potential benefits when making decisions concerning the implementation of an electronic waybill solution for replacing paper waybills.

  • 28.
    Bala, Jaswanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Filtering estimated series of residential burglaries using spatio-temporal route calculations, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. According to the Swedish National Council for Crime Prevention, residential burglary crimes in Sweden have increased by 19% over the last decade, and only 5% of the total reported crimes were actually solved by the law enforcement agencies. In order to solve these cases quickly and efficiently, the law enforcement agencies have to look into possible linked serial crimes. Many studies have suggested linking crimes based on Modus Operandi and other characteristics. Sometimes crimes between which it is not possible to travel spatially within the reported times, but which have a similar Modus Operandi, are also grouped as linked crimes. Investigating such crimes could waste the resources of the law enforcement agencies.

    Objectives. In this study, we investigate the possibility of using the travel distance and travel duration between different crime locations when linking residential burglary crimes. A filtering method has been designed and implemented for filtering unlinked crimes out of the estimated linked crimes by utilizing the distance and duration values.

    Methods. The objectives in this study are satisfied by conducting an experiment. The travel distance and travel duration values are obtained from various online direction services. The filtering method was first validated on ground truth, represented by known linked crime series, and then used to filter out crimes from the estimated linked crimes.

    Results. The filtering method removed a total of 4% unlinked crimes from the estimated linked crime series when the travel mode is driving, and a total of 23% when the travel mode is walking. It was also found that a burglar takes, on average, 900 seconds (15 minutes) to commit a burglary.

    Conclusions. From this study it is evident that using spatial and temporal values in linking residential burglaries gives effective crime links in a series. Also, using Google Maps to obtain distance and duration values can increase the overall performance of the filtering method in linking crimes.
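    A minimal sketch of the feasibility test such a filter might apply (an illustration under stated assumptions, not the thesis's implementation): the timestamps are invented, the travel duration would in practice come from a direction service such as Google Maps, and the 900-second dwell time is the average commission time reported above.

```python
from datetime import datetime

def feasible(travel_seconds, end_first, start_second, dwell_seconds=900):
    """Keep a link between two burglaries only if the offender could
    finish the first scene (~900 s on average) and then travel to the
    second scene before it starts."""
    gap = (start_second - end_first).total_seconds()
    return gap >= travel_seconds + dwell_seconds

t1 = datetime(2016, 3, 1, 20, 0)   # first burglary reported to end
t2 = datetime(2016, 3, 1, 20, 40)  # second burglary reported to start
assert feasible(600, t1, t2)       # 10 min drive fits in the 40 min gap
assert not feasible(2400, t1, t2)  # 40 min drive plus dwell: infeasible link
```

    Pairs failing this test are the spatio-temporally impossible links that the filtering method removes from an estimated series.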

  • 29.
    Baskaravel, Yogaraj
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Implementation and evaluation of global router for Information-Centric Networking, 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. A huge majority of the current Internet traffic is information dissemination. Information-Centric Networking (ICN) is a future networking paradigm that focuses on global-level information dissemination. In ICN, the communication is defined in terms of requesting and providing Named Data Objects (NDO). NetInf is a future networking architecture based on Information-Centric Networking principles.

    Objectives. In this thesis, a global routing solution for ICN has been implemented. The authority part of an NDO's name is mapped to a set of routing hints, each with a priority value. Multiple NDOs can share the same authority part, and thus the first level of aggregation is provided. The routing hints are used to forward a request for an NDO towards a suitable copy of the NDO. The second level of aggregation is achieved by aggregating high-priority routing hints on low-priority routing hints. The performance and scalability of the routing implementation are evaluated with respect to global ICN requirements. Furthermore, some of the notable challenges in implementing global ICN routing are identified.

    Methods. The NetInf global routing solution is implemented by extending NEC's NetInf Router Platform (NNRP). A NetInf testbed is built over the Internet using the extended NNRP implementation. Performance measurements have been taken from the NetInf testbed and are discussed in detail in terms of routing scalability.

    Results. The performance measurements show that hop-by-hop transport has a significant impact on the overall request forwarding. A notable amount of time is taken for extracting and inserting binary objects such as routing hints at each router.

    Conclusions. A more suitable hop-by-hop transport mechanism can be evaluated and used with respect to global ICN requirements. The NetInf message structure can be redefined so that binary objects such as routing hints can be transmitted more efficiently. Apart from that, the performance of the global routing implementation appears to be reasonable. As the NetInf global routing solution provides two levels of aggregation, it can be scalable as well.

  • 30.
    Begnert, Joel
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tilljander, Rasmus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Combining Regional Time Stepping With Two-Scale PCISPH Method2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. In computer graphics, realistic looking fluid is often desired. Simulating realistic fluids is a time consuming and computationally expensive task, therefore, much research has been devoted to reducing the simulation time while maintaining the realism. Two of the more recent optimization algorithms within particle based simulations are two-scale simulation and regional time stepping (RTS). Both of them are based on the predictive-corrective incompressible smoothed particle hydrodynamics (PCISPH) algorithm.

    Objectives. These algorithms improve on two separate aspects of PCISPH, two-scale simulation reduces the number of particles and RTS focuses computational power on regions of the fluid where it is most needed. In this paper we have developed and investigated the performance of an algorithm combining them, utilizing both optimizations.

    Methods. We implemented both of the base algorithms, as well as PCISPH, before combining them. Therefore we had equal conditions for all algorithms when we performed our experiments, which consisted of measuring the time it took to run each algorithm in three different scene configurations.

    Results. Results showed that our combined algorithm on average was faster than the other three algorithms. However, our implementation of two-scale simulation gave results inconsistent with the original paper, showing a slower time than even PCISPH. This invalidates the results for our combined algorithm since it utilizes the same implementation.

    Conclusions. We see that our combined algorithm has potential to speed up fluid simulations, but since the two-scale implementation was incorrect, our results are inconclusive.
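    The regional time stepping idea referenced above can be sketched in a few lines: each region of the fluid advances with its own time step, capped by a CFL-style criterion on its fastest particle. This is only an illustrative sketch, not the thesis implementation; the particle fields, the `courant` factor, and the smoothing length `h` are assumptions made for the example.

```python
import math

def regional_timesteps(particles, regions, dt_max=0.01, courant=0.4, h=0.1):
    """Assign each region its own time step based on the fastest
    particle it contains (a CFL-style cap; illustrative only).

    particles: list of dicts with 'region' id and 'velocity' (vx, vy)
    regions: iterable of region ids
    """
    dt = {}
    for r in regions:
        vmax = max(
            (math.hypot(*p['velocity']) for p in particles if p['region'] == r),
            default=0.0,
        )
        # Quiet regions advance with the full step; fast regions are capped,
        # which is where the computational savings come from.
        dt[r] = dt_max if vmax == 0 else min(dt_max, courant * h / vmax)
    return dt
```

    A calm region thus takes the maximal step while a turbulent one is refined, concentrating work where the fluid needs it.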

  • 31.
    Berntsson, Fredrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Schengensamarbetet – Europas dröm2014Student thesis
    Abstract [sv]

    Denna uppsats klargör vad Schengensamarbetet är för något, varför det finns och hur det fungerar. Uppsatsen går igenom alla delar av samarbetet som till synes största del består av att avskaffa personkontrollerna mellan medlemsländerna.

  • 32. Beyene, Ayne A.
    et al.
    Welemariam, Tewelle
    Persson, Marie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improved concept drift handling in surgery prediction and other applications2015In: Knowledge and Information Systems, ISSN 0219-1377, Vol. 44, no 1, 177-196 p.Article in journal (Refereed)
    Abstract [en]

    The article presents a new algorithm for handling concept drift: the Trigger-based Ensemble (TBE) is designed to handle concept drift in surgery prediction but it is shown to perform well for other classification problems as well. At the primary care, queries about the need for surgical treatment are referred to a surgeon specialist. At the secondary care, referrals are reviewed by a team of specialists. The possible outcomes of this review are that the referral: (i) is canceled, (ii) needs to be complemented, or (iii) is predicted to lead to surgery. In the third case, the referred patient is scheduled for an appointment with a surgeon specialist. This article focuses on the binary prediction of case three (surgery prediction). The guidelines for the referral and the review of the referral are changed due to, e.g., scientific developments and clinical practices. Existing decision support is based on the expert systems approach, which usually requires manual updates when changes in clinical practice occur. In order to automatically revise decision rules, the occurrence of concept drift (CD) must be detected and handled. The existing CD handling techniques are often specialized; it is challenging to develop a more generic technique that performs well regardless of CD type. Experiments are conducted to measure the impact of CD on prediction performance and to reduce CD impact. The experiments evaluate and compare TBE to three existing CD handling methods (AWE, Active Classifier, and Learn++) on one real-world dataset and one artificial dataset. TBE significantly outperforms the other algorithms on both datasets but is less accurate on noisy synthetic variations of the real-world dataset.
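    The trigger-based idea — reacting only when evidence of drift appears, instead of updating on every batch — can be illustrated with a minimal accuracy-drop detector. This sketch is not the TBE algorithm itself (which manages an ensemble); the window size and margin are arbitrary assumptions for the example.

```python
from collections import deque

class DriftTrigger:
    """Minimal drift trigger: fire when the recent error rate exceeds
    the running baseline error rate by a margin. A triggered signal
    would prompt, e.g., training a new ensemble member."""

    def __init__(self, window=30, margin=0.15):
        self.recent = deque(maxlen=window)  # sliding window of errors
        self.seen = 0
        self.errors = 0
        self.margin = margin

    def update(self, correct):
        """Record one prediction outcome; return True if drift is signalled."""
        self.recent.append(0 if correct else 1)
        self.seen += 1
        self.errors += 0 if correct else 1
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        baseline = self.errors / self.seen
        recent_err = sum(self.recent) / len(self.recent)
        return recent_err > baseline + self.margin
```

    Feeding the trigger a stream of correct predictions keeps it silent; a sudden run of misclassifications, as when referral guidelines change, fires it.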

  • 33.
    Bilski, Mateusz
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Migration from blocking to non-blocking web frameworks2014Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    The problem of performance and scalability of web applications challenges most software companies. It is difficult to maintain the performance of a web application while the number of users continuously increases. The common solution to this problem is scaling. A web application can handle incoming and outgoing requests using blocking or non-blocking Input/Output operations. The way a single server handles requests affects its ability to scale and depends on the web framework used to build the application. This is especially important for Resource Oriented Architecture (ROA) based applications, which consist of distributed Representational State Transfer (REST) web services. This research was inspired by a real problem stated by a software company that was considering a migration to a non-blocking web framework but did not know the possible benefits. The objective of the research was to evaluate the influence of the web framework's type on the performance of ROA based applications and to provide guidelines for assessing the benefits of migrating from blocking to non-blocking JVM web frameworks. First, an internet ranking was used to obtain a list of the most popular web frameworks. Then, the web frameworks were used to conduct two experiments that investigated the influence of the web framework's type on the performance of ROA based applications. Next, consultations with software architects were arranged in order to find a method for approximating the performance of the overall application. Finally, the guidelines were prepared based on the consultations and the results of the experiments. Three blocking and non-blocking, highly ranked, JVM-based web frameworks were selected. The first experiment showed that non-blocking web frameworks can provide up to 2.5 times higher performance than blocking web frameworks in ROA based applications. 
The experiment performed on an existing application showed an average 27% performance improvement after the migration. The elaborated guidelines successfully convinced the company that provided the application for testing to conduct the migration in the production environment. The experiment results proved that the migration from blocking to non-blocking web frameworks increases the performance of a web application. The prepared guidelines can help software architects decide whether migration is worthwhile. However, the guidelines are context-dependent and further investigation is needed to make them more general.
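    The thesis studied JVM frameworks, but the blocking versus non-blocking contrast itself can be shown in any language. The asyncio sketch below is only an analogy, not the thesis benchmark: the sleeps stand in for downstream REST calls, and the request counts and delays are invented for the example.

```python
import asyncio
import time

async def handle_request(i, delay=0.01):
    # A non-blocking handler yields while waiting on downstream I/O
    # (simulated here with asyncio.sleep), so one thread can serve
    # many requests concurrently.
    await asyncio.sleep(delay)
    return i

def serve_blocking(n, delay=0.01):
    # Blocking style: each request occupies the thread for its full wait.
    results = []
    for i in range(n):
        time.sleep(delay)
        results.append(i)
    return results

async def serve_non_blocking(n, delay=0.01):
    # All waits overlap: total time is roughly one delay, not n delays.
    return await asyncio.gather(*(handle_request(i, delay) for i in range(n)))

if __name__ == "__main__":
    n = 20
    t0 = time.perf_counter()
    serve_blocking(n)
    print(f"blocking:     {time.perf_counter() - t0:.3f}s")
    t0 = time.perf_counter()
    asyncio.run(serve_non_blocking(n))
    print(f"non-blocking: {time.perf_counter() - t0:.3f}s")
```

    With I/O-bound handlers the non-blocking server finishes in roughly one delay instead of twenty, which is the effect the migration guidelines try to quantify for a real application.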

  • 34.
    Boddapati, Venkatesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Classifying Environmental Sounds with Image Networks2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Environmental Sound Recognition, unlike Speech Recognition, is an area that is still in the developing stages with respect to using Deep Learning methods. Sound can be converted into images by extracting spectrograms and the like. Object Recognition from images using deep Convolutional Neural Networks is a currently developing area holding high promise. The same technique has been studied and applied, but on image representations of sound.

    Objectives. In this study, investigation is done to determine the best possible accuracy of performing a sound classification task using existing deep Convolutional Neural Networks by comparing the data pre-processing parameters. Also, a novel method of combining different features into a single image is proposed and its effect tested. Lastly, the performance of an existing network that fuses Convolutional and Recurrent Neural architectures is tested on the selected datasets.

    Methods. In this study, experiments were conducted to analyze the effects of data pre-processing parameters on the best possible accuracy with two CNNs. An experiment was also conducted to determine whether the proposed method of feature combination is beneficial. Finally, an experiment to test the performance of a combined network was conducted.

    Results. GoogLeNet had the highest classification accuracy of 73% on 50-class dataset and 90-93% on 10-class datasets. The sampling rate and frame length values of the respective datasets which contributed to the high scores are 16kHz, 40ms and 8kHz, 50ms respectively. The proposed combination of features does not improve the classification accuracy. The fused CRNN network could not achieve high accuracy on the selected datasets.

    Conclusions. It is concluded that deep networks designed for object recognition can be successfully used to classify environmental sounds, and the pre-processing parameter values for achieving the best accuracy were determined. The novel method of feature combination does not significantly improve the accuracy compared to spectrograms alone. The fused network, which learns the spatial and temporal features from spectral images, performs poorly in the classification task compared to the convolutional network alone.
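    The representation step this line of work relies on — turning sound into a spectrogram "image" — can be sketched with a naive DFT. Real pipelines use an FFT library and mel scaling; the frame length and hop below are arbitrary, not the thesis's 40 ms/50 ms settings.

```python
import math

def spectrogram(signal, frame_len=64, hop=32):
    """Return a list of magnitude spectra, one per frame: the matrix
    that would be rendered as a spectrogram image for a CNN.
    Naive O(n^2) DFT per frame; illustration only."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        spectrum = []
        for k in range(frame_len // 2):  # keep the non-redundant half
            re = sum(x * math.cos(-2 * math.pi * k * n / frame_len)
                     for n, x in enumerate(frame))
            im = sum(x * math.sin(-2 * math.pi * k * n / frame_len)
                     for n, x in enumerate(frame))
            spectrum.append(math.hypot(re, im))
        frames.append(spectrum)
    return frames

# A pure tone concentrates its energy in a single frequency bin.
tone = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

    Each row of `spec` becomes one column of pixels in the image fed to the network, which is what lets an off-the-shelf image CNN classify audio.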

  • 35.
    Boddapati, Venkatesh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Petef, Andrej
    Sony Mobile Communications AB, SWE.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Classifying environmental sounds using image recognition networks2017In: Procedia Computer Science / [ed] Toro C.,Hicks Y.,Howlett R.J.,Zanni-Merk C.,Toro C.,Frydman C.,Jain L.C.,Jain L.C., Elsevier B.V. , 2017, Vol. 112, 2048-2056 p.Conference paper (Refereed)
    Abstract [en]

    Automatic classification of environmental sounds, such as dog barking and glass breaking, is becoming increasingly interesting, especially for mobile devices. Most mobile devices contain both cameras and microphones, and companies that develop mobile devices would like to provide functionality for classifying both videos/images and sounds. In order to reduce the development costs one would like to use the same technology for both of these classification tasks. One way of achieving this is to represent environmental sounds as images, and use an image classification neural network when classifying images as well as sounds. In this paper we consider the classification accuracy for different image representations (Spectrogram, MFCC, and CRP) of environmental sounds. We evaluate the accuracy for environmental sounds in three publicly available datasets, using two well-known convolutional deep neural networks for image recognition (AlexNet and GoogLeNet). Our experiments show that we obtain good classification accuracy for the three datasets. © 2017 The Author(s).

  • 36.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Anton, Borg
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Clustering residential burglaries using multiple heterogeneous variablesIn: International Journal of Information Technology & Decision MakingArticle in journal (Refereed)
    Abstract [en]

    To identify series of residential burglaries, detecting linked crimes performed by the same constellations of criminals is necessary. Comparison of crime reports today is difficult as crime reports traditionally have been written as unstructured text and often lack a common information-basis. Based on a novel process for collecting structured crime scene information the present study investigates the use of clustering algorithms to group similar crime reports based on combined crime characteristics from the structured form. Clustering quality is measured using Connectivity and Silhouette index, stability using Jaccard index, and accuracy is measured using Rand index and a Series Rand index. The performance of clustering using combined characteristics was compared with spatial characteristic. The results suggest that the combined characteristics perform better or similar to the spatial characteristic. In terms of practical significance, the presented clustering approach is capable of clustering cases using a broader decision basis.

  • 37.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Bala, Jaswanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Filtering Estimated Crime Series Based on Route Calculations on Spatio-temporal Data2016In: European Intelligence and Security Informatics Conference / [ed] Brynielsson J.,Johansson F., IEEE, 2016, 92-95 p.Conference paper (Refereed)
    Abstract [en]

    Law enforcement agencies strive to link serial crimes, most preferably based on physical evidence, such as DNA or fingerprints, in order to solve criminal cases more efficiently. However, physical evidence is more common at crime scenes in some crime categories than others. For crime categories with a relatively low occurrence of physical evidence it could instead be possible to link related crimes using soft evidence based on the perpetrators' modus operandi (MO). However, crime linkage based on soft evidence is associated with considerably higher error-rates, i.e. crimes being incorrectly linked. In this study, we investigate the possibility of filtering erroneous crime links based on travel time between crimes using web-based direction services, more specifically Google maps. A filtering method has been designed, implemented and evaluated using two data sets of residential burglaries, one with known links between crimes, and one with estimated links based on soft evidence. The results show that the proposed route-based filtering method removed 79% more erroneous crime links than the state-of-the-art method relying on Euclidean straight-line routes. Further, by analyzing travel times between crimes in known series it is indicated that burglars on average have up to 15 minutes for carrying out the actual burglary event. © 2016 IEEE.
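    The core feasibility test behind such route-based filtering can be sketched as follows. The travel-time lookup is a stand-in for the direction-service query described in the paper, and the 15-minute dwell allowance is borrowed from the paper's finding only as an illustrative default; the data layout is invented.

```python
def feasible_link(crime_a, crime_b, route_minutes, dwell_minutes=15):
    """Keep a candidate link only if one offender could plausibly have
    travelled between the two scenes in the time available.

    crime_a, crime_b: dicts with 'time' (minutes since a common epoch)
    and 'loc' (any location representation the routing function accepts).
    route_minutes: function(loc_a, loc_b) -> travel time in minutes,
    e.g. backed by a web-based direction service.
    """
    gap = abs(crime_a['time'] - crime_b['time'])
    travel = route_minutes(crime_a['loc'], crime_b['loc'])
    # Travel plus time spent committing the burglary must fit in the gap.
    return travel + dwell_minutes <= gap

def filter_links(links, route_minutes):
    """Drop candidate links that fail the travel-time feasibility test."""
    return [(a, b) for a, b in links if feasible_link(a, b, route_minutes)]
```

    Replacing `route_minutes` with a straight-line estimate reproduces the Euclidean baseline the paper compares against; a road-network-backed function is what removes the additional erroneous links.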

  • 38.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A statistical method for detecting significant temporal hotspots using LISA statistics2017Conference paper (Refereed)
    Abstract [en]

    This work presents a method for detecting statistically significant temporal hotspots, i.e. the date and time of events, which is useful for improved planning of response activities. Temporal hotspots are calculated using Local Indicators of Spatial Association (LISA) statistics. The temporal data is in a 7x24 matrix that represents a temporal resolution of weekdays and hours-in-the-day. Swedish residential burglary events are used in this work for testing the temporal hotspot detection approach. The presented method is, however, also useful for other events as long as they contain temporal information, e.g. attack attempts recorded by intrusion detection systems. By using the method for detecting significant temporal hotspots it is possible for domain-experts to gain knowledge about the temporal distribution of the events, and also to learn at which times mitigating actions could be implemented.
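    A LISA-style statistic on the 7x24 weekday-by-hour matrix can be sketched with a simplified local Moran's I. This is a generic textbook form, not the paper's exact procedure: queen-contiguity neighbours and the omission of any significance test are simplifying assumptions for the example.

```python
def local_moran(grid):
    """Simplified local Moran's I over a weekday-by-hour count matrix.

    grid: 7x24 list of lists of event counts. Returns a matrix of the
    same shape; large positive values mark cells that, together with
    their neighbours, deviate strongly from the mean in the same
    direction (temporal hotspot candidates). Neighbours are the up to 8
    surrounding cells (queen contiguity); no permutation test is done.
    Assumes the counts are not all identical (variance > 0).
    """
    rows, cols = len(grid), len(grid[0])
    n = rows * cols
    mean = sum(sum(r) for r in grid) / n
    var = sum((x - mean) ** 2 for r in grid for x in r) / n
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neigh = [
                grid[i + di][j + dj] - mean
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < rows and 0 <= j + dj < cols
            ]
            out[i][j] = (grid[i][j] - mean) / var * sum(neigh) / len(neigh)
    return out
```

    A weekend-night cluster of burglaries then scores far higher than an isolated quiet cell, which is the signal a domain expert would act on.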

  • 39.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Evaluating Temporal Analysis Methods Using Residential Burglary Data2016In: ISPRS International Journal of Geo-Information, Special Issue on Frontiers in Spatial and Spatiotemporal Crime Analytics, ISSN 2220-9964, Vol. 5, no 9, 1-22 p.Article in journal (Refereed)
    Abstract [en]

    Law enforcement agencies, as well as researchers, rely on temporal analysis methods in many crime analyses, e.g., spatio-temporal analyses. A number of temporal analysis methods are being used, but a structured comparison in different configurations is yet to be done. This study aims to fill this research gap by comparing the accuracy of five existing, and one novel, temporal analysis methods in approximating offense times for residential burglaries that often lack precise time information. The temporal analysis methods are evaluated in eight different configurations with varying temporal resolution, as well as the amount of data (number of crimes) available during analysis. A dataset of all Swedish residential burglaries reported between 2010 and 2014 is used (N = 103,029). From that dataset, a subset of burglaries with known precise offense times is used for evaluation. The accuracy of the temporal analysis methods in approximating the distribution of burglaries with known precise offense times is investigated. The aoristic and the novel aoristic_ext method perform significantly better than three of the traditional methods. Experiments show that the novel aoristic_ext method was most suitable for estimating crime frequencies in the day-of-the-year temporal resolution when reduced numbers of crimes were available during analysis. In the other configurations investigated, the aoristic method showed the best results. The results also show the potential of temporal analysis methods in approximating the temporal distributions of residential burglaries in situations when limited data are available.
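    The aoristic method evaluated above spreads each offense's unit weight uniformly over the hours in which it could have occurred (a burglary is typically bounded by when the resident left and returned). This is the generic textbook form at hour-of-day resolution, not the paper's aoristic_ext extension.

```python
def aoristic(crimes, resolution=24):
    """crimes: list of (start_hour, end_hour) intervals during which
    each offense could have occurred (end exclusive; the interval may
    wrap past midnight). Returns per-hour weights where every crime
    contributes a total weight of 1 spread uniformly over its hours."""
    weights = [0.0] * resolution
    for start, end in crimes:
        if end <= start:          # interval wraps past midnight
            end += resolution
        hours = [h % resolution for h in range(start, end)]
        for h in hours:
            weights[h] += 1.0 / len(hours)
    return weights
```

    A burglary known only to lie between 22:00 and 02:00 thus adds 0.25 to each of the four candidate hours, rather than being pinned to an arbitrary single time.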

  • 40.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Melander, Ulf
    En strukturerad metod för registrering och automatisk analys av brott2014In: The Past, the Present and the Future of Police Research: Proceedings from the fifth Nordic Police Research seminar / [ed] Rolf Granér och Ola Kronkvist, 2014Conference paper (Refereed)
    Abstract [sv]

    This article describes a method used in the police regions South, West, and Stockholm to collect structured crime scene information from residential burglaries, and how the collected information can be analyzed with automatic methods that can assist crime coordinators in their work. These automated analyses can be used as filtering or selection tools for residential burglaries and thereby make the work more efficient. Furthermore, the method can be used to estimate the probability that two crimes were committed by the same offender, which can help the police identify series of crimes. This is possible because offenders tend to commit crimes in a similar manner, and it is possible, based on structured crime scene information, to automatically find these patterns. The chapter presents and evaluates a prototype of an IT-based decision support system as well as two automatic methods for crime coordination.

  • 41.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Svensson, Martin
    Blekinge Institute of Technology, Faculty of Engineering, Department of Industrial Economics.
    Hildeby, Jonas
    Polisen, SWE.
    Predicting burglars' risk exposure and level of pre-crime preparation using crime scene data2018In: Intelligent Data Analysis, ISSN 1088-467X, Vol. 22, no 1, IDA 322-3210Article in journal (Refereed)
    Abstract [en]

    Objectives: The present study aims to extend current research on how offenders’ modus operandi (MO) can be used in crime linkage, by investigating the possibility to automatically estimate offenders’ risk exposure and level of pre-crime preparation for residential burglaries. Such estimations can assist law enforcement agencies when linking crimes into series and thus provide a more comprehensive understanding of offenders and targets, based on the combined knowledge and evidence collected from different crime scenes. Methods: Two criminal profilers manually rated offenders’ risk exposure and level of pre-crime preparation for 50 burglaries each. In an experiment we then analyzed to what extent 16 machine-learning algorithms could generalize both offenders’ risk exposure and preparation scores from the criminal profilers’ ratings onto 15,598 residential burglaries. All included burglaries contain structured and feature-rich crime descriptions which learning algorithms can use to generalize offenders’ risk and preparation scores from. Results: Two models created by Naïve Bayes-based algorithms showed the best performance, with an AUC of 0.79 and 0.77 for estimating offenders’ risk and preparation scores respectively. These algorithms were significantly better than most, but not all, of the other algorithms. Both scores showed promising distinctiveness between linked series, as well as consistency for crimes within series compared to randomly sampled crimes. Conclusions: Estimating offenders’ risk exposure and pre-crime preparation can complement traditional MO characteristics in the crime linkage process. The estimates also appear to work for cross-category crimes that otherwise lack comparable MO. Future work could focus on increasing the number of manually rated offenses as well as fine-tuning the Naïve Bayes algorithms to increase their estimation performance.
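    The AUC figures reported above can be computed directly from the rank order of a classifier's scores, without drawing the ROC curve: AUC equals the probability that a randomly chosen positive is scored above a randomly chosen negative. A minimal stdlib version (ties counted half, O(P·N) for clarity):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    labels: 1 for positive, 0 for negative; scores: classifier outputs."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in pos for q in neg
    )
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.79 therefore means a 79% chance that a linked-series burglary outranks a random one on the estimated score, which is what makes the score usable as a filtering aid.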

  • 42.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    Malmö University, SWE.
    Baca, Dejan
    Fidesmo AB, SWE.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Introducing a novel security-enhanced agile software development process2017In: International Journal of Secure Software Engineering, ISSN 1947-3036, E-ISSN 1947-3044, Vol. 8, no 2Article in journal (Refereed)
    Abstract [en]

    In this paper, a novel security-enhanced agile software development process, SEAP, is introduced. It has been designed, tested, and implemented at Ericsson AB, specifically in the development of a mobile money transfer system. Two important features of SEAP are 1) that it includes additional security competences, and 2) that it includes the continuous conduction of an integrated risk analysis for identifying potential threats. As a general finding of implementing SEAP in software development, the developers solve a large proportion of the risks in a timely, yet cost-efficient manner. The default agile software development process at Ericsson AB, i.e. where SEAP was not included, required significantly more employee hours spent for every risk identified compared to when integrating SEAP. The default development process left 50.0% of the risks unattended in the software version that was released, while the application of SEAP reduced that figure to 22.5%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.9%, a more than fivefold increase.

  • 43.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On the risk exposure of smart home automation systems2014In: Proceedings 2014 International Conferenceon Future Internet of Things and Cloud, IEEE Computer Society Digital Library, 2014Conference paper (Refereed)
    Abstract [en]

    A recent study has shown that more than every fourth person in Sweden feels that they have poor knowledge and control over their energy use, and that four out of ten would like to be more aware of and have better control over their consumption [5]. A solution is to provide householders with feedback on their energy consumption, for instance, through a smart home automation system [10]. Studies have shown that householders can reduce energy consumption by up to 20% when gaining such feedback [5] [10]. Home automation is a prime example of a smart environment built on various types of cyber-physical systems generating volumes of diverse, heterogeneous, complex, and distributed data from a multitude of applications and sensors. Thereby, home automation is also an example of an Internet of Things (IoT) scenario, where a communication network extends the present Internet by including everyday items and sensors [22]. Home automation is attracting more and more attention from commercial actors, such as energy suppliers, infrastructure providers, and third-party software and hardware vendors [8] [10]. Among the non-commercial stakeholders, there are various governmental institutions and municipalities, as well as end-users.

  • 44.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On Descriptive and Predictive Models for Serial Crime Analysis2014Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Law enforcement agencies regularly collect crime scene information. There exists, however, no detailed, systematic procedure for this. The data collected is affected by the experience or current condition of law enforcement officers. Consequently, the data collected might differ vastly between crime scenes. This is especially problematic when investigating volume crimes. Law enforcement officers regularly do manual comparison on crimes based on the collected data. This is a time-consuming process; especially as the collected crime scene information might not always be comparable. The structuring of data and introduction of automatic comparison systems could benefit the investigation process. This thesis investigates descriptive and predictive models for automatic comparison of crime scene data with the purpose of aiding law enforcement investigations. The thesis first investigates predictive and descriptive methods, with a focus on data structuring, comparison, and evaluation of methods. The knowledge is then applied to the domain of crime scene analysis, with a focus on detecting serial residential burglaries. This thesis introduces a procedure for systematic collection of crime scene information. The thesis also investigates impact and relationship between crime scene characteristics and how to evaluate the descriptive model results. The results suggest that the use of descriptive and predictive models can provide feedback for crime scene analysis that allows a more effective use of law enforcement resources. Using descriptive models based on crime characteristics, including Modus Operandi, allows law enforcement agents to filter cases intelligently. Further, by estimating the link probability between cases, law enforcement agents can focus on cases with higher link likelihood. This would allow a more effective use of law enforcement resources, potentially allowing an increase in clear-up rates.

  • 45.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Clustering Residential Burglaries Using Modus Operandi and Spatiotemporal Information2016In: International Journal of Information Technology and Decision Making, ISSN 0219-6220, Vol. 15, no 1, 23-42 p.Article in journal (Refereed)
    Abstract [en]

    To identify series of residential burglaries, detecting linked crimes performed by the same constellations of criminals is necessary. Comparison of crime reports today is difficult as crime reports traditionally have been written as unstructured text and often lack a common information-basis. Based on a novel process for collecting structured crime scene information, the present study investigates the use of clustering algorithms to group similar crime reports based on combined crime characteristics from the structured form. Clustering quality is measured using Connectivity and Silhouette index (SI), stability using Jaccard index, and accuracy is measured using Rand index (RI) and a Series Rand index (SRI). The performance of clustering using combined characteristics was compared with spatial characteristic. The results suggest that the combined characteristics perform better or similar to the spatial characteristic. In terms of practical significance, the presented clustering approach is capable of clustering cases using a broader decision basis.
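    The Rand index (RI) used for accuracy above compares two partitions pair-by-pair: it is the fraction of item pairs on which the clusterings agree, either by putting both items in the same cluster or both in different clusters. A minimal stdlib version, shown here as a generic sketch rather than the paper's evaluation code:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Agreement between two clusterings of the same items, each given
    as a list of cluster labels. 1.0 means identical partitions."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)
```

    Because only co-membership matters, the index is invariant to how clusters are numbered, which is exactly what is needed when comparing algorithmic groupings of crime reports against known series.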

  • 46.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Eliasson, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Detecting Crime Series Based on Route Estimation and Behavioral Similarity2017Conference paper (Refereed)
    Abstract [en]

    A majority of crimes are committed by a minority of offenders. Previous research has provided some support for the theory that serial offenders leave behavioral traces at the crime scene which could be used to link crimes to serial offenders. The aim of this work is to investigate to what extent it is possible to use geographic route estimations and behavioral data to detect serial offenders. Experiments were conducted using behavioral data from authentic burglary reports to investigate whether it was possible to find crime routes with high similarity. Further, burglary reports from serial offenders were used to investigate to what extent it was possible to detect serial offender crime routes. The results show that crime series with the same offender on average had a higher behavioral similarity than random crime series. Sets of crimes with high similarity, but without a known offender, would be interesting for law enforcement to investigate further. The algorithm is also evaluated on 9 crime series containing a maximum of 20 crimes per series. The results suggest that it is possible to detect crime series with high similarity using analysis of both geographic routes and behavioral data recorded at crime scenes.
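    Behavioral similarity between crime reports can be sketched as set overlap of the structured MO characteristics recorded at each scene. This is a generic illustration, not the paper's similarity measure; the feature names are invented for the example.

```python
from itertools import combinations

def mo_similarity(crime_a, crime_b):
    """Jaccard similarity over the sets of structured MO characteristics
    recorded at two crime scenes (1.0 = identical recorded behaviour)."""
    a, b = set(crime_a), set(crime_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def series_similarity(series):
    """Mean pairwise MO similarity over a candidate crime series; series
    with one offender are expected to score higher than random sets."""
    pairs = list(combinations(series, 2))
    return sum(mo_similarity(a, b) for a, b in pairs) / len(pairs)
```

    Ranking candidate crime routes by such a series score, alongside the route feasibility itself, is the kind of combined signal the abstract describes.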

  • 47.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Melander, Ulf
    Boeva, Veselka
    Detecting serial residential burglaries using clustering2014In: Expert Systems with Applications, ISSN 0957-4174, Vol. 41, no 11, 5252-5266 p.Article in journal (Refereed)
    Abstract [en]

    According to the Swedish National Council for Crime Prevention, law enforcement agencies solved approximately three to five percent of the reported residential burglaries in 2012. Internationally, studies suggest that a large proportion of crimes are committed by a minority of offenders. Law enforcement agencies are consequently required to detect series of crimes, or linked crimes. Comparing crime reports is difficult today, as no systematic or structured way of reporting crimes exists, nor any ability to search across multiple crime reports. This study presents a systematic data collection method for residential burglaries, together with a decision support system for comparing and analysing them. The decision support system consists of an advanced search tool and a plugin-based analytical framework. In order to find similar crimes, law enforcement officers have to review a large number of cases. The potential use of the cut-clustering algorithm to group crimes by their characteristics, and thereby reduce the number of crimes to review in residential burglary analysis, is investigated. The characteristics used are modus operandi, residential characteristics, stolen goods, spatial similarity, and temporal similarity. Clustering quality is measured using the modularity index and accuracy using the Rand index. The clustering solutions with the best quality scores were those based on residential characteristics, spatial proximity, and modus operandi, suggesting that the choice of characteristic used when grouping crimes can positively affect the end result. The results suggest that a high-quality clustering solution performs significantly better than a random guesser. In terms of practical significance, the presented clustering approach is capable of reducing the number of cases to review while keeping most connected cases. While the approach might miss some connections, it is also capable of suggesting new ones. The results also suggest that while crime series clustering is feasible, further investigation is needed.
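The modularity index used above as the quality measure is Newman's standard partition-quality score: the fraction of edges falling within communities minus the fraction expected under a random degree-preserving model. A minimal pure-Python sketch; the toy graph (two tightly linked groups of crimes joined by one cross-link) is illustrative, not the study's data:

```python
def modularity(edges, communities):
    """Newman modularity Q of a partition of an undirected graph."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        within = sum(1 for u, v in edges if u in comm and v in comm)
        deg_sum = sum(degree[n] for n in comm)
        # observed within-community edge fraction minus its expectation
        q += within / m - (deg_sum / (2 * m)) ** 2
    return q

# Two triangles of crimes joined by a single cross-link.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # ~0.357
```

Higher Q indicates a partition whose groups are more densely connected internally than chance would predict; putting all nodes in one community always yields Q = 0.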

  • 48.
    BRHANIE, BEKALU MULLU
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Multi-Label Classification Methods for Image Annotation2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 49. Brik, Bouziane
    et al.
    Lagraa, Nasreddine
    Abderrahmane, Lakas
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    DDGP: Distributed Data Gathering Protocol for vehicular networks2016In: Vehicular Communications, ISSN 2214-2096, Vol. 4, 15-29 p.Article in journal (Refereed)
    Abstract [en]

    Vehicular Ad-Hoc Networks (VANets) are an emerging research area offering a wide range of applications, including safety, road traffic efficiency, and infotainment. Recently, researchers have been studying the possibility of using deployed VANet applications for data collection. In this case, vehicles are considered mobile collectors that gather both real-time and delay-tolerant data and deliver them to interested entities. In this paper, we propose a novel Distributed Data Gathering Protocol (DDGP) for the collection of delay-tolerant as well as real-time data in both urban and highway environments. The main contribution of DDGP is a new medium access technique that enables vehicles to access the channel in a distributed way based on their location information. In addition, DDGP implements a new aggregation scheme which deletes redundant, expired, and undesired data. We provide an analytical proof of correctness of DDGP, in addition to a performance evaluation through an extensive set of simulation experiments. Our results indicate that DDGP enhances the efficiency and the reliability of the data collection process, outperforming existing schemes in terms of several criteria such as delay and message overhead, aggregation ratio, and data retransmission rate.
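The aggregation step described above (deleting redundant, expired, and undesired data before forwarding) can be sketched as a simple filter pass. This is a toy stand-in for DDGP's scheme, not the protocol itself; the record fields, TTL, and type names are hypothetical:

```python
def aggregate(records, now, ttl, unwanted_types=frozenset()):
    """Filter a batch of collected records before forwarding:
    drop expired entries, undesired data types, and redundant
    duplicates (same source vehicle and data type)."""
    seen = set()
    kept = []
    for rec in records:
        if now - rec["timestamp"] > ttl:       # expired
            continue
        if rec["type"] in unwanted_types:      # undesired
            continue
        key = (rec["sensor"], rec["type"])
        if key in seen:                        # redundant duplicate
            continue
        seen.add(key)
        kept.append(rec)
    return kept

records = [
    {"sensor": "v1", "type": "speed", "timestamp": 100},
    {"sensor": "v1", "type": "speed", "timestamp": 101},  # duplicate
    {"sensor": "v2", "type": "speed", "timestamp": 10},   # expired
    {"sensor": "v3", "type": "ads",   "timestamp": 102},  # undesired
]
print(len(aggregate(records, now=105, ttl=30, unwanted_types={"ads"})))  # 1
```

The point of such a pass is the reduction in forwarded traffic: it is this pruning that drives the message-overhead and aggregation-ratio gains the abstract reports.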

  • 50. Brik, Bouziane
    et al.
    Lagraa, Nasreddine
    Lakas, Abderrahmane
    Cherroun, Hadda
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    ECDGP: extended cluster-based data gathering protocol for vehicular networks2015In: Wireless Communications & Mobile Computing, ISSN 1530-8669, E-ISSN 1530-8677Article in journal (Refereed)