1 - 50 of 432
  • 1.
    Abbas, Faheem
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Intelligent Container Stacking System at Seaport Container Terminal (2016). Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context: The workload at seaport container terminals is increasing gradually, so terminal performance must improve to meet demand. The key section of a container terminal is the container stacking yard, which links the seaside and the landside, and its performance therefore affects both. The main problem in this area is unproductive moves of containers. A well-planned stacking area is needed to increase terminal performance and maximize the utilization of existing resources.

    Objectives: In this work, we analyzed the existing container stacking system at the Helsingborg seaport container terminal, Sweden, reviewed previously proposed solutions to the problem, and identified the most suitable optimization technique. We then proposed a solution, tested it, and analyzed the simulation-based results against the desired outcome.

    Methods: To identify the problem, existing methods and proposed solutions in the domain of container stacking yard management, a literature review was conducted using several e-resources/databases. A genetic algorithm (GA) with the best parameter values is used to obtain the best solution. A discrete event simulation model for container stacking in the yard was built and integrated with the genetic algorithm, and a mathematical model was proposed to show how cost minimization depends on the number of container moves.

    Results: The GA achieved a high fitness value over the generations for storing 150 containers at the best locations in a block with 3 tier levels while minimizing unproductive moves in the yard. A comparison between the genetic algorithm and Tabu Search was made to verify whether the GA performed better than the other algorithm. A simulation model with the GA was used to obtain simulation-based results and to show container handling using resources such as AGVs, yard cranes and delivery trucks, together with the container stacking and retrieval system in the yard. The mathematical model showed that the container stacking cost is directly proportional to the number of moves.

    Conclusions: We identified the key factor (unproductive moves) that underlies the other key factors (time and cost) and affects the performance of the stacking yard and, in turn, the whole seaport terminal. We focused on this drawback of the stacking system and proposed a solution that makes the system more efficient, saving both time and cost. A genetic algorithm is a suitable approach for solving the unproductive-moves problem in a container stacking system.
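
    A minimal sketch (container departure data, yard layout and GA settings are all invented, not taken from the thesis) of the cost idea above: stacking cost is taken as proportional to the number of unproductive moves, counted here as containers stacked on top of containers that must depart earlier, and a simple genetic algorithm searches over stacking orders to minimise that count.

        import random

        DEPARTURE = [5, 1, 4, 2, 8, 3, 7, 6, 9, 0, 11, 10]   # invented departure order per container
        N_STACKS, MAX_TIER = 4, 3                            # invented yard block: 4 stacks, 3 tiers

        def decode(perm):
            """Fill stacks left to right, bottom up, in the order given by the permutation."""
            stacks = [[] for _ in range(N_STACKS)]
            for c in perm:
                for s in stacks:
                    if len(s) < MAX_TIER:
                        s.append(c)
                        break
            return stacks

        def unproductive_moves(stacks):
            """A container is badly placed if it sits above one that must depart earlier."""
            bad = 0
            for s in stacks:
                for i, c in enumerate(s):
                    if any(DEPARTURE[below] < DEPARTURE[c] for below in s[:i]):
                        bad += 1
            return bad                                       # cost is proportional to this count

        def order_crossover(a, b):
            i, j = sorted(random.sample(range(len(a)), 2))
            child = [None] * len(a)
            child[i:j] = a[i:j]
            rest = [x for x in b if x not in child]
            k = 0
            for idx in range(len(a)):
                if child[idx] is None:
                    child[idx] = rest[k]
                    k += 1
            return child

        def ga(generations=200, pop_size=30):
            pop = [random.sample(range(len(DEPARTURE)), len(DEPARTURE)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda p: unproductive_moves(decode(p)))
                elite = pop[:pop_size // 2]
                children = []
                while len(elite) + len(children) < pop_size:
                    a, b = random.sample(elite, 2)
                    c = order_crossover(a, b)
                    if random.random() < 0.2:                # swap mutation
                        x, y = random.sample(range(len(c)), 2)
                        c[x], c[y] = c[y], c[x]
                    children.append(c)
                pop = elite + children
            best = min(pop, key=lambda p: unproductive_moves(decode(p)))
            return best, unproductive_moves(decode(best))

        best, cost = ga()
        print("best stacking order:", best, "unproductive moves:", cost)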

    Full text (pdf)
    fulltext
  • 2.
    Abbireddy, Sharath
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Model for Capacity Planning in Cassandra: Case Study on Ericsson’s Voucher System (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Cassandra is a NoSQL (Not only Structured Query Language) database which serves large amounts of data with high availability. Cassandra data storage dimensioning, also known as Cassandra capacity planning, refers to predicting the amount of disk storage required when a particular product is deployed using Cassandra. This is an important phase in any product development lifecycle involving a Cassandra data storage system. The capacity planning is based on many factors which are classified as Cassandra specific and product specific. This study identifies the different Cassandra-specific and product-specific factors affecting the disk space in a Cassandra data storage system. Based on these factors, a model is built to predict the disk storage for Ericsson’s voucher system. A case study is conducted on Ericsson’s voucher system and its Cassandra cluster. Interviews were conducted with different Cassandra users within Ericsson R&D to learn their opinions on capacity planning approaches and the factors affecting disk space for Cassandra. Responses from the interviews were transcribed and analyzed using grounded theory. A total of 9 Cassandra-specific factors and 3 product-specific factors are identified and documented. Using these 12 factors, a model was built and used to predict the disk space required for the voucher system’s Cassandra. The factors affecting disk space for deploying Cassandra are now exhaustively identified, which makes the capacity planning process more efficient. Using these factors, the voucher system’s disk space for deployment is predicted successfully.

    Full text (pdf)
    fulltext
  • 3.
    Abdelraheem, Mohamed Ahmed
    et al.
    SICS Swedish ICT AB, SWE.
    Gehrmann, Christian
    SICS Swedish ICT AB, SWE.
    Lindström, Malin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Nordahl, Christian
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Executing Boolean queries on an encrypted Bitmap index (2016). In: CCSW 2016 - Proceedings of the 2016 ACM Cloud Computing Security Workshop, co-located with CCS 2016, Association for Computing Machinery (ACM), 2016, pp. 11-22. Conference paper (Refereed).
    Abstract [en]

    We propose a simple and efficient searchable symmetric encryption scheme based on a Bitmap index that evaluates Boolean queries. Our scheme provides a practical solution in settings where communications and computations are very constrained as it offers a suitable trade-off between privacy and performance.
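
    For illustration only, a plain (unencrypted) bitmap index evaluating Boolean queries with bitwise AND/OR; the records and attributes are invented, and the paper's actual contribution, running such queries over encrypted bitmaps, is deliberately not reproduced here.

        # Build one bitmap (a Python int used as a bit vector) per attribute value.
        records = [                                   # invented toy data
            {"city": "Lund", "dept": "IT"},
            {"city": "Karlskrona", "dept": "IT"},
            {"city": "Lund", "dept": "HR"},
            {"city": "Karlskrona", "dept": "HR"},
        ]

        index = {}                                    # (attribute, value) -> bitmap
        for row_id, rec in enumerate(records):
            for attr, val in rec.items():
                index[(attr, val)] = index.get((attr, val), 0) | (1 << row_id)

        def rows(bitmap):
            return [i for i in range(len(records)) if bitmap >> i & 1]

        # Boolean query: city = Lund AND dept = IT  ->  bitwise AND of the two bitmaps
        print(rows(index[("city", "Lund")] & index[("dept", "IT")]))    # [0]

        # city = Lund OR dept = HR  ->  bitwise OR
        print(rows(index[("city", "Lund")] | index[("dept", "HR")]))    # [0, 2, 3]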

  • 4.
    Abdelrasoul, Nader
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Optimization Techniques For an Artificial Potential Fields Racing Car Controller (2013). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Building autonomous racing car controllers is a growing field of computer science which has been receiving great attention lately. An approach named Artificial Potential Fields (APF) is widely used for path finding and obstacle avoidance in robotics and vehicle motion control systems. The use of APF results in a collision-free path; it can also be used to achieve other goals such as overtaking and maneuverability. Objectives. The aim of this thesis is to build an autonomous racing car controller that can achieve good performance in terms of speed, time, and damage level. To fulfill our aim we need to achieve optimality in the controller choices because racing requires the highest possible performance. Also, we need to build the controller using algorithms that do not result in high computational overhead. Methods. We used Particle Swarm Optimization (PSO) in combination with APF to achieve optimal car controlling. The Open Racing Car Simulator (TORCS) was used as a testbed for the proposed controller, and we conducted two experiments with different configurations to test the performance of our APF-PSO controller. Results. The obtained results showed that the APF-PSO controller achieved good performance compared to top performing controllers. The results also showed that the use of PSO enhanced performance compared to using APF only. High performance was demonstrated in solo driving and in racing competitions, with the exception of an increased level of damage; however, the damage level was not very high and did not result in a controller shutdown. Conclusions. Based on the obtained results we conclude that the use of PSO with APF results in high performance at a low computational cost.
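
    A minimal sketch of the artificial potential field idea underlying such a controller: the target attracts, nearby obstacles repel, and the resulting force gives the steering direction. The gains k_att, k_rep and the obstacle influence radius below are invented; they are exactly the kind of parameters a PSO run would tune.

        import math

        def apf_force(car, target, obstacles, k_att=1.0, k_rep=50.0, influence=10.0):
            """Sum of an attractive pull toward the target and repulsive pushes from obstacles.
            k_att, k_rep and influence are the tunable parameters a PSO run would optimise."""
            fx = k_att * (target[0] - car[0])
            fy = k_att * (target[1] - car[1])
            for ox, oy in obstacles:
                dx, dy = car[0] - ox, car[1] - oy
                d = math.hypot(dx, dy)
                if 0 < d < influence:                        # only nearby obstacles repel
                    push = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
                    fx += push * dx / d
                    fy += push * dy / d
            return fx, fy

        # Steering direction for an invented situation: car at origin, target ahead, one obstacle nearby.
        print(apf_force(car=(0.0, 0.0), target=(100.0, 0.0), obstacles=[(5.0, 1.0)]))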

    Full text (pdf)
    FULLTEXT01
  • 5.
    Abghari, Shahrooz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Data Modeling for Outlier Detection (2018). Licentiate thesis, comprising papers (Other academic).
    Abstract [en]

    This thesis explores the data modeling for outlier detection techniques in three different application domains: maritime surveillance, district heating, and online media and sequence datasets. The proposed models are evaluated and validated under different experimental scenarios, taking into account specific characteristics and setups of the different domains.

    Outlier detection has been studied and applied in many domains. Outliers arise due to different reasons such as fraudulent activities, structural defects, health problems, and mechanical issues. The detection of outliers is a challenging task that can reveal system faults, fraud, and save people's lives. Outlier detection techniques are often domain-specific. The main challenge in outlier detection relates to modeling the normal behavior in order to identify abnormalities. The choice of model is important, i.e., an incorrect choice of data model can lead to poor results. This requires a good understanding and interpretation of the data, the constraints, and the requirements of the problem domain. Outlier detection is largely an unsupervised problem due to unavailability of labeled data and the fact that labeled data is expensive.

    We have studied and applied a combination of both machine learning and data mining techniques to build data-driven and domain-oriented outlier detection models. We have shown the importance of data preprocessing as well as feature selection in building suitable methods for data modeling. We have taken advantage of both supervised and unsupervised techniques to create hybrid methods. For example, we have proposed a rule-based outlier detection system based on open data for the maritime surveillance domain. Furthermore, we have combined cluster analysis and regression to identify manual changes in the heating systems at the building level. Sequential pattern mining for identifying contextual and collective outliers in online media data have also been exploited. In addition, we have proposed a minimum spanning tree clustering technique for detection of groups of outliers in online media and sequence data. The proposed models have been shown to be capable of explaining the underlying properties of the detected outliers. This can facilitate domain experts in narrowing down the scope of analysis and understanding the reasons of such anomalous behaviors. We have also investigated the reproducibility of the proposed models in similar application domains.

    Full text (pdf)
    fulltext
  • 6.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Gustafsson, Jörgen
    Ericsson AB.
    Shaikh, Junaid
    Ericsson AB.
    Outlier Detection for Video Session Data Using Sequential Pattern Mining (2018). In: ACM SIGKDD Workshop On Outlier Detection De-constructed, 2018. Conference paper (Refereed).
    Abstract [en]

    The growth of Internet video and over-the-top transmission techniques has enabled online video service providers to deliver high-quality video content to viewers. To maintain and improve the quality of experience, video providers need to detect unexpected issues that can highly affect the viewers' experience. This requires analyzing massive amounts of video session data in order to find unexpected sequences of events. In this paper we combine sequential pattern mining and clustering to discover such event sequences. The proposed approach applies sequential pattern mining to find frequent patterns by considering contextual and collective outliers. In order to distinguish between the normal and abnormal behavior of the system, we initially identify the most frequent patterns. Then a clustering algorithm is applied on the most frequent patterns. The generated clustering model together with the Silhouette Index are used for further analysis of less frequent patterns and detection of potential outliers. Our results show that the proposed approach can detect outliers at the system level.

    Full text (pdf)
    fulltext
  • 7.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ickin, Selim
    Ericsson, SWE.
    Gustafsson, Jörgen
    Ericsson, SWE.
    A Minimum Spanning Tree Clustering Approach for Outlier Detection in Event Sequences (2018). In: 2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA) / [ed] Wani M.A., Sayed-Mouchaweh M., Lughofer E., Gama J., Kantardzic M., IEEE, 2018, pp. 1123-1130, article id 8614207. Conference paper (Refereed).
    Abstract [en]

    Outlier detection has been studied in many domains. Outliers arise due to different reasons such as mechanical issues, fraudulent behavior, and human error. In this paper, we propose an unsupervised approach for outlier detection in a sequence dataset. The proposed approach combines sequential pattern mining, cluster analysis, and a minimum spanning tree algorithm in order to identify clusters of outliers. Initially, the sequential pattern mining is used to extract frequent sequential patterns. Next, the extracted patterns are clustered into groups of similar patterns. Finally, the minimum spanning tree algorithm is used to find groups of outliers. The proposed approach has been evaluated on two different real datasets, i.e., smart meter data and video session data. The obtained results have shown that our approach can be applied to narrow down the space of events to a set of potential outliers and facilitate domain experts in further analysis and identification of system level issues.
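
    A minimal sketch of the minimum spanning tree clustering step described above, with invented pattern vectors and cut-off: build an MST over the points, cut edges longer than a threshold, and treat the smallest remaining components as groups of potential outliers. The sequential-pattern-mining step that produces the vectors in the paper is not reproduced here.

        import math

        points = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (30, 0)]  # invented pattern vectors

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def minimum_spanning_tree(pts):
            """Prim's algorithm: grow the tree one vertex at a time, remembering each added edge."""
            in_tree, edges = {0}, []
            while len(in_tree) < len(pts):
                u, v = min(((i, j) for i in in_tree for j in range(len(pts)) if j not in in_tree),
                           key=lambda e: dist(pts[e[0]], pts[e[1]]))
                edges.append((u, v, dist(pts[u], pts[v])))
                in_tree.add(v)
            return edges

        def components(n, kept_edges):
            """Union-find over the kept edges; each root gives one cluster."""
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for u, v, _ in kept_edges:
                parent[find(u)] = find(v)
            groups = {}
            for i in range(n):
                groups.setdefault(find(i), []).append(i)
            return list(groups.values())

        mst = minimum_spanning_tree(points)
        kept = [e for e in mst if e[2] <= 5.0]                 # invented cut-off for "long" edges
        clusters = components(len(points), kept)
        outlier_groups = [c for c in clusters if len(c) <= 2]  # smallest clusters flagged as outliers
        print(clusters, "->", outlier_groups)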

  • 8.
    Abghari, Shahrooz
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    García Martín, Eva
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Trend analysis to automatically identify heat program changes (2017). In: Energy Procedia, Elsevier, 2017, Vol. 116, pp. 407-415. Conference paper (Refereed).
    Abstract [en]

    The aim of this study is to improve the monitoring and controlling of heating systems located at customer buildings through the use of a decision support system. To achieve this, the proposed system applies a two-step classifier to detect manual changes of the temperature of the heating system. We apply data from the Swedish company NODA, active in energy optimization and services for energy efficiency, to train and test the suggested system. The decision support system is evaluated through an experiment and the results are validated by experts at NODA. The results show that the decision support system can detect changes within three days after their occurrence and only by considering daily average measurements.
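
    A rough illustration, not the published method, of detecting a manual heat-program change from daily average measurements: flag days that deviate from a baseline and confirm a change only after several consecutive deviating days, so detection happens within a few days. Readings, window and thresholds are invented; the actual system uses a trained two-step classifier validated by NODA.

        daily_avg_temp = [21.0, 21.2, 20.9, 21.1, 23.4, 23.6, 23.5, 23.4]   # invented daily averages

        def detect_change(series, window=4, threshold=1.5, persist=3):
            """Step 1: flag days deviating from the mean of an initial baseline window.
            Step 2: report a change only after `persist` consecutive deviating days."""
            baseline = sum(series[:window]) / window
            streak = 0
            for day in range(window, len(series)):
                streak = streak + 1 if abs(series[day] - baseline) > threshold else 0
                if streak >= persist:
                    return day                               # index of the day the change is confirmed
            return None

        print(detect_change(daily_avg_temp))                 # -> 6: confirmed on the third deviating day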

    Full text (pdf)
    fulltext
  • 9.
    Adamov, Alexander
    et al.
    Harkivskij Nacionalnij Universitet Radioelectroniki, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Cloud incident response model (2016). In: Proceedings of 2016 IEEE East-West Design and Test Symposium, EWDTS 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016. Conference paper (Refereed).
    Abstract [en]

    This paper addresses the problem of incident response in clouds. A conventional incident response model is formulated to be used as a basis for the cloud incident response model. Minimization of incident handling time is considered the key criterion of the proposed cloud incident response model, which can be achieved at the expense of embedding redundancy into the cloud infrastructure, represented by Network and Security Controllers, and introducing a Security Domain for threat analysis and cloud forensics. These architectural changes are discussed and applied within the cloud incident response model. © 2016 IEEE.

  • 10.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radioelectronics, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    The state of ransomware: Trends and mitigation techniques (2017). In: Proceedings of 2017 IEEE East-West Design and Test Symposium, EWDTS 2017, Institute of Electrical and Electronics Engineers Inc., 2017, article id 8110056. Conference paper (Refereed).
    Abstract [en]

    This paper contains an analysis of the payload of the popular ransomware for Windows, Android, Linux, and MacOSX platforms. Namely, VaultCrypt (CrypVault), TeslaCrypt, NanoLocker, Trojan-Ransom.Linux.Cryptor, Android Simplelocker, OSX/KeRanger-A, WannaCry, Petya, NotPetya, Cerber, Spora, and Serpent ransomware were put under the microscope. A set of characteristics was proposed to be used for the analysis. The purpose of the analysis is generalization of the collected data that describes behavior and design trends of modern ransomware. The objective is to suggest ransomware threat mitigation techniques based on the obtained information. The novelty of the paper is the analysis methodology based on the chosen set of 13 key characteristics that helps to determine similarities and differences throughout the list of ransomware put under analysis. Most of the ransomware samples presented were manually analyzed by the authors, eliminating contradictions in descriptions of ransomware behavior published by different malware research laboratories through verification of the payload of the latest ransomware versions. © 2017 IEEE.

  • 11.
    Adidamu, Naga Shruti
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bheemisetty, Shanmukha Sai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Assessment of Ixia BreakingPoint Virtual Edition: Evolved Packet Gateway (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Full text (pdf)
    fulltext
  • 12.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ilie, Dragos
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Designing a Secure IoT System Architecture from a Virtual Premise for a Collaborative AI Lab (2019). Conference paper (Refereed).
    Abstract [en]

    IoT systems are increasingly composed out of flexible, programmable, virtualised, and arbitrarily chained IoT elements and services using portable code. Moreover, they might be sliced, i.e. allowing multiple logical IoT systems (network + application) to run on top of a shared physical network and compute infrastructure. However, implementing and designing particularly security mechanisms for such IoT systems is challenging since a) promising technologies are still maturing, and b) the relationships among the many requirements, technologies and components are difficult to model a-priori.

    The aim of the paper is to define design cues for the security architecture and mechanisms of future, virtualised, arbitrarily chained, and eventually sliced IoT systems. Hereby, our focus is laid on the authorisation and authentication of user, host, and code integrity in these virtualised systems. The design cues are derived from the design and implementation of a secure virtual environment for distributed and collaborative AI system engineering using so called AI pipelines. The pipelines apply chained virtual elements and services and facilitate the slicing of the system. The virtual environment is denoted for short as the virtual premise (VP). The use-case of the VP for AI design provides insight into the complex interactions in the architecture, leading us to believe that the VP concept can be generalised to the IoT systems mentioned above. In addition, the use-case permits to derive, implement, and test solutions. This paper describes the flexible architecture of the VP and the design and implementation of access and execution control in virtual and containerised environments. 

    Full text (pdf)
    fulltext
  • 13.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ilie, Dragos
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Privacy and DRM Requirements for Collaborative Development of AI Application (2019). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2019, article id 3233268. Conference paper (Refereed).
    Abstract [en]

    The use of data is essential for the capabilities of Data-driven Artificial intelligence (AI), Deep Learning and Big Data analysis techniques. This data usage, however, raises intrinsically the concerns on data privacy. In addition, supporting collaborative development of AI applications across organisations has become a major need in AI system design. Digital Rights Management (DRM) is required to protect intellectual property in such collaboration. As a consequence of DRM, privacy threats and privacy-enforcing mechanisms will interact with each other.

    This paper describes the privacy and DRM requirements in collaborative AI system design using AI pipelines. It describes the relationships between DRM and privacy and outlines the threats against these non-functional features. Finally, the paper provides first security architecture to protect against the threats on DRM and privacy in collaborative AI design using AI pipelines. 

    Full text (pdf)
    fulltext
  • 14.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ilie, Dragos
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Towards Privacy Requirements for Collaborative Development of AI Applications (2018). In: 14th Swedish National Computer Networking Workshop (SNCNW), 2018. Conference paper (Refereed).
    Abstract [en]

    The use of data is essential for the capabilities of Data-driven Artificial Intelligence (AI), Deep Learning and Big Data analysis techniques. The use of data, however, intrinsically raises concerns about data privacy, in particular for the individuals that provide the data. Hence, data privacy is considered one of the main non-functional features of the Next Generation Internet. This paper describes the privacy challenges and requirements for collaborative AI application development. We investigate the constraints of using digital rights management for supporting collaboration to address the privacy requirements in the regulation.

    Full text (pdf)
    fulltext
  • 15.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Flexible Privacy and High Trust in the Next Generation Internet: The Use Case of a Cloud-based Marketplace for AI (2017). Conference paper (Refereed).
    Abstract [en]

    Cloudified architectures facilitate resource access and sharing which is independent from physical locations. They permit high availability of resources at low operational costs. These advantages, however, do not come for free. End users might fear that they lose control over the location of their data and, thus, of their autonomy in deciding to whom the data is communicated. Thus, strong privacy and trust concerns arise for end users. In this work we review and investigate privacy and trust requirements for Cloud systems in general and for a cloud-based marketplace (CMP) for AI in particular. We investigate whether and how the current privacy and trust dimensions can be applied to Clouds and to the design of a CMP. We also propose the concept of a "virtual premise" for enabling "Privacy-by-Design" [1] in Clouds. The idea of a "virtual premise" might not be a universal solution for any privacy requirement. However, we expect that it provides flexibility in designing privacy in Clouds, thus leading to higher trust.

    Full text (pdf)
    fulltext
  • 16.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tutschku, Kurt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Privacy and trust in cloud-based marketplaces for AI and data resources (2017). In: IFIP Advances in Information and Communication Technology, Springer New York LLC, 2017, Vol. 505, pp. 223-225. Conference paper (Refereed).
    Abstract [en]

    The processing of the huge amounts of information from the Internet of Things (IoT) has become challenging. Artificial Intelligence (AI) techniques have been developed to handle this task efficiently. However, they require annotated data sets for training, while manual preprocessing of the data sets is costly. The H2020 project “Bonseyes” has suggested a “Market Place for AI”, where the stakeholders can engage trustfully in business around AI resources and data sets. The MP permits trading of resources that have high privacy requirements (e.g. data sets containing patient medical information) as well as ones with low requirements (e.g. fuel consumption of cars) for the sake of its generality. In this abstract we review trust and privacy definitions and provide a first requirement analysis for them with regards to Cloud-based Market Places (CMPs). The comparison of definitions and requirements allows for the identification of the research gap that will be addressed by the main authors PhD project. © IFIP International Federation for Information Processing 2017.

  • 17.
    Ahmed, Qutub Uddin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Mujib, Saifullah Bin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Context Aware Reminder System: Activity Recognition Using Smartphone Accelerometer and Gyroscope Sensors Supporting Context-Based Reminder Systems (2014). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. A reminder system offers flexibility in daily life activities and helps people stay independent. A reminder system not only helps with reminders for daily life activities, but also serves to a great extent people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems such as disabilities or mild dementia. Traditional reminders, which are based on a set of predefined activities, are not enough to address the necessity in a wider context. To make the reminder more flexible, the user's current activities or contexts need to be considered. To recognize the user's current activity, different types of sensors can be used. These sensors are available in Smartphones, which can assist in building a more contextual reminder system. Objectives. To make a reminder context based, it is important to identify the context, and the user's activities need to be recognized at a particular moment. Keeping this notion in mind, this research aims to understand the relevant context and activities, identify an effective way to recognize a user's three different activities (drinking, walking and jogging) using Smartphone sensors (accelerometer and gyroscope), and propose a model that uses the properties of the activity recognition. Methods. This research combined a survey and interviews with an exploratory Smartphone sensor experiment to recognize the user's activity. An online survey was conducted with 29 participants and interviews were held in cooperation with the Karlskrona Municipality. Four elderly people participated in the interviews. For the experiment, data for three different user activities were collected using Smartphone sensors and analyzed to identify the pattern of each activity. Moreover, a model is proposed to exploit the properties of the activity patterns. The performance of the proposed model was evaluated using the machine learning tool WEKA. Results. The survey and interviews helped in understanding which activities of daily living should be considered when designing the reminder system, and how and when it should be used. For instance, most of the participants in the survey already use some sort of reminder system, most of them use a Smartphone, and one of the most important tasks they forget is to take their medicine. These findings informed the experiment. From the experiment, different patterns were observed for the three activities. For walking and jogging, the pattern is distinct. For the drinking activity, on the other hand, the pattern is complex and can sometimes overlap with other activities or become noisy. Conclusions. The survey, interviews and background study provided a set of evidence that a reminder system based on users' activity is essential in daily life. The large number of Smartphone users motivated this research to use Smartphone sensors to identify users' activity, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying simple mathematical calculations to recorded Smartphone sensor (accelerometer and gyroscope) data. The approach was evaluated with 99% accuracy on the experimental data. The study concludes by proposing a model that uses the properties of the activity identification and by developing a prototype of a reminder system. Preliminary tests were performed on the model, but further empirical validation and verification are needed.
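
    A minimal sketch of the kind of windowed accelerometer-magnitude statistics such an experiment can rely on to separate activities; the samples and the rule of thumb below are invented, and the thesis evaluates its actual model in WEKA rather than with fixed thresholds.

        import math

        # Invented tri-axial accelerometer samples (x, y, z) for one window of data.
        window = [(0.1, 9.8, 0.2), (0.5, 10.4, 0.1), (-0.3, 9.1, 0.4),
                  (0.8, 11.0, -0.2), (-0.6, 8.7, 0.3), (0.2, 9.9, 0.0)]

        def features(samples):
            """Mean and standard deviation of the acceleration magnitude over the window."""
            mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
            mean = sum(mags) / len(mags)
            std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
            return mean, std

        mean, std = features(window)
        # Invented rule of thumb: more vigorous activities show larger magnitude variation.
        label = "jogging" if std > 1.0 else "walking" if std > 0.3 else "drinking/still"
        print(round(mean, 2), round(std, 2), label)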

    Full text (pdf)
    FULLTEXT01
  • 18.
    Akser, M.
    et al.
    Ulster University, GBR.
    Bridges, B.
    Ulster University, GBR.
    Campo, G.
    Ulster University, GBR.
    Cheddad, Abbas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Curran, K.
    Ulster University, GBR.
    Fitzpatrick, L.
    Ulster University, GBR.
    Hamilton, L.
    Ulster University, GBR.
    Harding, J.
    Ulster University, GBR.
    Leath, T.
    Ulster University, GBR.
    Lunney, T.
    Ulster University, GBR.
    Lyons, F.
    Ulster University, GBR.
    Ma, M.
    University of Huddersfield, GBR.
    Macrae, J.
    Ulster University, GBR.
    Maguire, T.
    Ulster University, GBR.
    McCaughey, A.
    Ulster University, GBR.
    McClory, E.
    Ulster University, GBR.
    McCollum, V.
    Ulster University, GBR.
    Mc Kevitt, P.
    Ulster University, GBR.
    Melvin, A.
    Ulster University, GBR.
    Moore, P.
    Ulster University, GBR.
    Mulholland, E.
    Ulster University, GBR.
    Muñoz, K.
    BijouTech, CoLab, Letterkenny, Co., IRL.
    O’Hanlon, G.
    Ulster University, GBR.
    Roman, L.
    Ulster University, GBR.
    SceneMaker: Creative technology for digital storytelling (2018). In: Lect. Notes Inst. Comput. Sci. Soc. Informatics Telecommun. Eng. / [ed] Brooks A.L., Brooks E., Springer Verlag, 2018, Vol. 196, pp. 29-38. Conference paper (Refereed).
    Abstract [en]

    The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include automated simulation of productions and education and training of actors, screenwriters and directors. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2017.

    Full text (pdf)
    fulltext
  • 19.
    Albinsson, Mattias
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Andersson, Linus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Improving Quality of Experience through Performance Optimization of Server-Client Communication (2016). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In software engineering it is important to consider how a potential user experiences the system during usage. No software user will have a satisfying experience if they perceive the system as slow, unresponsive, unstable or hiding information. Additionally, if the system restricts the users to only having a limited set of actions, their experience will further degrade. In order to evaluate the effect these issues have on a user’s perceived experience, a measure called Quality of Experience is applied.

    In this work the foremost objective was to improve how a user experienced a system suffering from the previously mentioned issues when searching for large amounts of data. To achieve this objective the system was evaluated to identify the issues present and which issues were affecting the user-perceived Quality of Experience the most. The evaluated system was a warehouse management system developed and maintained by Aptean AB’s office in Hässleholm, Sweden. The system consisted of multiple clients and a server, sending data over a network. Evaluation of the system was in the form of a case study analyzing its performance, together with a survey performed by Aptean staff to gain knowledge of how the system was experienced when searching for large amounts of data. From the results, the three issues impacting Quality of Experience the most were identified: (1) interaction: a limited set of actions during a search, (2) transparency: limited representation of search progress and received data, (3) execution time: search completion taking a long time.

    After the system was analyzed, hypothesized technological solutions were implemented to resolve the identified issues. The first solution divided the data into multiple partitions, the second decreased data size sent over the network by applying compression and the third was a combination of the two technologies. Following the implementations, a final set of measurements together with the same survey was performed to compare the solutions based on their performance and improvement gained in perceived Quality of Experience.

    The most significant improvement in perceived Quality of Experience was achieved by the data partitioning solution. While the combination of solutions offered a slight further improvement, it was primarily thanks to data partitioning, making that technology a more suitable solution for the identified issues compared to compression, which only slightly improved perceived Quality of Experience. When the data was partitioned, updates were sent more frequently, which not only allowed the user a larger set of actions during a search but also improved the information available in the client regarding search progress and received data. While data partitioning did not improve the execution time, it offered the user a first set of data quickly, not forcing the user to wait idly, and therefore made the system feel fast. The results indicated that to increase the user’s perceived Quality of Experience for systems with server-client communication, data partitioning offers several opportunities for improvement.

    Full text (pdf)
    fulltext
  • 20.
    Aleksandr, Polescuk
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Linking Residential Burglaries using the Series Finder Algorithm in a Swedish Context (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. A minority of criminals performs a majority of the crimes today. It is known that every criminal or group of offenders has, to some extent, a particular pattern (modus operandi) in how crimes are performed. Therefore, computers' computational power can be employed to discover crimes that follow the same pattern and are possibly carried out by the same criminal. The goal of this thesis was to apply the existing Series Finder algorithm to a feature-rich dataset containing data about Swedish residential burglaries.

    Objectives. The following objectives were achieved to complete this thesis: an existing Series Finder implementation was modified to fit the Swedish police force's dataset and its MatLab code was converted to Python; an experiment setup was designed with appropriate metrics and statistical tests; and the modified Series Finder implementation was evaluated against both Spatial-Temporal and Random models.

    Methods. The experimental methodology was chosen in order to achieve the objectives. An initial experiment was performed to find right parameters to use for main experiments. Afterward, a proper investigation with dependent and independent variables was conducted.

    Results. After calculating the metrics and applying the statistical tests, a clear picture emerged of how each model performed. Series Finder showed better performance than the Random model, but lower performance than the Spatial-Temporal model. The possible causes of one model performing better than another are discussed in the analysis and discussion section.

    Conclusions. After completing objectives and answering research questions, it could be clearly seen how the Series Finder implementation performed against other models. Despite its low performance, Series Finder still showed potential, as presented in future work.

    Full text (pdf)
    fulltext
  • 21.
    Amiri, Mohammad Reza Shams
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Rohani, Sarmad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Automated Camera Placement using Hybrid Particle Swarm Optimization (2014). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Automatic placement of surveillance cameras' 3D models in an arbitrary floor plan containing obstacles is a challenging task. The problem becomes more complex when different types of region of interest (RoI) and minimum resolution are considered. An automatic camera placement decision support system (ACP-DSS) integrated into a 3D CAD environment could assist the surveillance system designers with the process of finding good camera settings considering multiple constraints. Objectives. In this study we designed and implemented two subsystems: a camera toolset in SketchUp (CTSS) and a decision support system using an enhanced Particle Swarm Optimization (PSO) algorithm (HPSO-DSS). The objective for the proposed algorithm was to have a good computational performance in order to quickly generate a solution for the automatic camera placement (ACP) problem. The new algorithm benefited from different aspects of other heuristics such as hill-climbing and greedy algorithms as well as a number of new enhancements. Methods. Both CTSS and ACP-DSS were designed and constructed using the information technology (IT) research framework. A state-of-the-art evolutionary optimization method, Hybrid PSO (HPSO), implemented to solve the ACP problem, was the core of our decision support system. Results. The CTSS is evaluated by some of its potential users after employing it and later answering a conducted survey. The evaluation of CTSS confirmed an outstanding satisfactory level of the respondents. Various aspects of the HPSO algorithm were compared to two other algorithms (PSO and Genetic Algorithm), all implemented to solve our ACP problem. Conclusions. The HPSO algorithm provided an efficient mechanism to solve the ACP problem in a timely manner. The integration of ACP-DSS into CTSS might aid the surveillance designers to adequately and more easily plan and validate the design of their security systems. The quality of CTSS as well as the solutions offered by ACP-DSS were confirmed by a number of field experts.
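
    A generic particle swarm optimisation loop of the kind the decision support system builds on. The objective function here is only an invented stand-in (the real system scores camera coverage of regions of interest in a 3D CAD model), and the hybrid hill-climbing/greedy enhancements of HPSO are not reproduced.

        import random

        def objective(pos):
            """Invented stand-in for a camera-placement score (here: minimise a simple bowl)."""
            return sum((x - 3.0) ** 2 for x in pos)

        def pso(dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            xs = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(particles)]
            vs = [[0.0] * dim for _ in range(particles)]
            pbest = [list(x) for x in xs]                  # personal best positions
            gbest = min(pbest, key=objective)              # global best position
            for _ in range(iters):
                for i in range(particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vs[i][d] = (w * vs[i][d]
                                    + c1 * r1 * (pbest[i][d] - xs[i][d])
                                    + c2 * r2 * (gbest[d] - xs[i][d]))
                        xs[i][d] += vs[i][d]
                    if objective(xs[i]) < objective(pbest[i]):
                        pbest[i] = list(xs[i])
                gbest = min(pbest, key=objective)
            return gbest

        print(pso())   # converges toward (3.0, 3.0) for the stand-in objective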

    Full text (pdf)
    FULLTEXT01
  • 22.
    Amjad, Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Malhi, Rohail Khan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Burhan, Muhammad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    DIFFERENTIAL CODE SHIFTED REFERENCE IMPULSE-BASED COOPERATIVE UWB COMMUNICATION SYSTEM (2013). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Cooperative Impulse Response – Ultra Wideband (IR-UWB) communication is a radio technology very popular for short range communication systems as it enables single-antenna mobiles in a multi-user environment to share their antennas by creating virtual MIMO to achieve transmit diversity. In order to improve the cooperative IR-UWB system performance, we are going to use Differential Code Shifted Reference (DCSR). The simulations are used to compute Bit Error Rate (BER) of DCSR in cooperative IR-UWB system using different numbers of Decode and Forward relays while changing the distance between the source node and destination nodes. The results suggest that when compared to Code Shifted Reference (CSR) cooperative IR-UWB communication system; the DCSR cooperative IR-UWB communication system performs better in terms of BER, power efficiency and channel capacity. The simulations are performed for both non-line of sight (N-LOS) and line of sight (LOS) conditions and the results confirm that system has better performance under LOS channel environment. The simulation results also show that performance improves as we increase the number of relay nodes to a sufficiently large number.

    Full text (pdf)
    FULLTEXT01
  • 23.
    Ananth, Indirajith Vijai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Study on Assessing QoE of 3DTV Using Subjective Methods (2013). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    The ever increasing popularity and enormous growth of the 3D movie industry is the stimulating phenomenon behind the penetration of 3D services into home entertainment systems. Providing a third dimension gives an intense visual experience to the viewers. Being a new field, there is much ongoing research on measuring the end user's viewing experience. Research groups including 3D TV manufacturers, service providers and standards organizations are interested in improving the user experience. Recent research in 3D video quality measurement has revealed uncertain issues as well as more well-known results. Measuring perceptual stereoscopic video quality by subjective testing can provide practical results. This thesis studies and investigates three different rating scales (Video Quality, Visual Discomfort and Sense of Presence) and compares them by subjective testing, combined with two viewing distances at 3H and 5H, where H is the height of the display screen. This thesis work shows that a single rating scale produces the same result as three different scales, and that viewing distance has little or no impact on the Quality of Experience (QoE) of 3DTV for the 3H and 5H distances for symmetric coding impairments.

    Full text (pdf)
    FULLTEXT01
  • 24.
    Andersson, Jonas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Silhouette-based Level of Detail: A comparison of real-time performance and image space metrics (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. The geometric complexity of objects in games and other real-time applications is a crucial aspect concerning the performance of the application. Such applications usually redraw the screen between 30-60 times per second, sometimes even more often, which can be a hard task in an environment with a high number of geometrically complex objects. The concept called Level of Detail, often abbreviated LoD, aims to alleviate the load on the hardware by introducing methods and techniques to minimize the amount of geometry while still maintaining the same, or very similar result.

    Objectives. This study will compare four of the often used techniques, namely Static LoD, Unpopping LoD, Curved PN Triangles, and Phong Tessellation. Phong Tessellation is silhouette-based, and since the silhouette is considered one of the most important properties, the main aim is to determine how it performs compared to the other three techniques.

    Methods. The four techniques are implemented in a real-time application using the modern rendering API Direct3D 11. Data will be gathered from this application to use in several experiments in the context of both performance and image space metrics.

    Conclusions. This study has shown that all of the techniques used work in real time, but with varying results. From the experiments it can be concluded that the best technique to use is Unpopping LoD. It has good performance and provides a good visual result with the least average amount of popping of the compared techniques. The dynamic techniques are not suitable as a substitute for Unpopping LoD, but further research could be conducted to examine how they can be used together, and how the objects themselves can be designed with the dynamic techniques in mind.

    Full text (pdf)
    fulltext
  • 25.
    Andersson, Marcus
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Nilsson, Alexander
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Improving Integrity Assurances of Log Entries From the Perspective of Intermittently Disconnected Devices (2014). Student thesis.
    Abstract [en]

    It is common today in large corporate environments for system administrators to employ centralized systems for log collection and analysis. The log data can come from any device, from smart-phones to large-scale server clusters. During an investigation of a system failure or suspected intrusion these logs may contain vital information. However, the trustworthiness of this log data must be confirmed. The objective of this thesis is to evaluate the state of the art and provide practical solutions and suggestions in the field of secure logging. In this thesis we focus on solutions that do not require a persistent connection to a central log management system. To this end a prototype logging framework was developed, including client, server and verification applications. The client employs different techniques for signing log entries. The focus of this thesis is to evaluate each signing technique from both a security and a performance perspective. This thesis evaluates "Traditional RSA-signing", "Traditional Hash-chains", "Itkis-Reyzin's asymmetric FSS scheme" and "RSA signing and tick-stamping with TPM", the latter being a novel technique developed by us. In our evaluations we recognized the inability of the evaluated techniques to detect so-called 'truncation attacks'; therefore a truncation detection module was also developed, which can be used independently of and side by side with any signing technique. In this thesis we conclude that our novel Trusted Platform Module technique has the most to offer in terms of log security; however, it does introduce a hardware dependency on the TPM. We have also shown that the truncation detection technique can be used to assure an external verifier of the number of log entries that have at least passed through the log client software.
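
    A minimal sketch of the traditional hash-chain technique evaluated in the thesis: each log entry is stored with a hash covering both the entry and the previous hash, so in-place modification is detected at verification, while truncation of the newest entries alone is not, which is exactly the gap the thesis's truncation-detection module targets. The log entries below are invented.

        import hashlib

        def chain(entries):
            """Return (entry, hash) pairs where each hash covers the entry and the previous hash."""
            prev, out = b"genesis", []
            for e in entries:
                h = hashlib.sha256(prev + e.encode()).hexdigest()
                out.append((e, h))
                prev = h.encode()
            return out

        def verify(chained):
            prev = b"genesis"
            for e, h in chained:
                if hashlib.sha256(prev + e.encode()).hexdigest() != h:
                    return False
                prev = h.encode()
            return True

        log = chain(["user alice logged in", "config changed", "user alice logged out"])
        print(verify(log))                        # True
        log[1] = ("config unchanged", log[1][1])  # tamper with an entry in place
        print(verify(log))                        # False: modification is detected
        print(verify(log[:1]))                    # True: truncation alone is not detected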

    Full text (pdf)
    FULLTEXT01
  • 26.
    Andrej, Sekáč
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance evaluation based on data from code reviews (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process.

    Objectives. This thesis uses machine learning methods to approximate a review expert's performance evaluation function. Due to the limited size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on the performance. In this research we propose features and also analyse their relevance to development performance evaluation.

    Methods. This thesis uses Radial Basis Function networks as the regression algorithm for the performance evaluation approximation and Metric Based Regularisation as the semi-supervised learning method. For the analysis of feature set and goodness of fit we use statistical tools with manual analysis.

    Results. The semi-supervised learning method achieved a similar accuracy to supervised versions of the algorithm. The feature analysis showed that there is a significant negative correlation between the performance evaluation and three other features. A manual verification of learned models on unlabelled data achieved 73.68% accuracy. Conclusions. We have not managed to prove that the semi-supervised learning method used would perform better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to the change size and the amount of code lines modified in later parts of development are relevant to the performance evaluation task with high probability. The achieved accuracy of models close to 75% leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in the performance evaluation approximation.
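
    A minimal sketch of Radial Basis Function regression of the kind used for approximating the evaluation function: Gaussian basis activations form a design matrix and the output weights are fitted by least squares. The toy data, centres and width are invented, and the semi-supervised Metric Based Regularisation step from the thesis is not included.

        import numpy as np

        # Invented toy data: one review feature (e.g. comment-to-change-size ratio) -> evaluation score.
        X = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1])
        y = np.array([2.0, 2.6, 3.5, 3.9, 3.1, 2.2])

        centres = np.linspace(0.0, 1.2, 5)            # RBF centres spread over the feature range
        width = 0.3

        def design_matrix(x):
            """Gaussian RBF activations for each sample/centre pair, plus a bias column."""
            phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
            return np.hstack([phi, np.ones((len(x), 1))])

        weights, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)   # least-squares output layer
        x_new = np.array([0.6])
        print(design_matrix(x_new) @ weights)         # predicted evaluation score for a new review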

    Full text (pdf)
    fulltext
  • 27.
    Angelova, Milena
    et al.
    Technical University of Sofia-branch Plovdiv, BUL.
    Vishnu Manasa, Devagiri
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Linde, Peter
    Blekinge Tekniska Högskola, Biblioteket.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    An expertise recommender system based on data from an institutional repository (DiVA) (2018). In: Proceedings of the 22nd edition of the International Conference on ELectronic PUBlishing: From Projects to Sustainable Infrastructure, ELPUB 2018 / [ed] Chan L., Mounier P., OpenEdition Press, 2018. Conference paper (Refereed).
    Abstract [en]

    Finding experts in academia is an important practical problem, e.g. recruiting reviewers for reviewing conference, journal or project submissions, partner matching for research proposals, finding relevant M.Sc. or Ph.D. supervisors, etc. In this work, we discuss an expertise recommender system that is built on data extracted from the Blekinge Institute of Technology (BTH) instance of the institutional repository system DiVA (Digital Scientific Archive). DiVA is a publication and archiving platform for research publications and student essays used by 46 publicly funded universities and authorities in Sweden and the rest of the Nordic countries (www.diva-portal.org). The DiVA classification system is based on the Swedish Higher Education Authority (UKÄ) and Statistics Sweden's (SCB) three-level classification system. Using the classification terms associated with student M.Sc. and B.Sc. theses published in the DiVA platform, we have developed a prototype system which can be used to identify and recommend subject thesis supervisors in academia.
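
    A minimal sketch of the matching idea behind such a recommender, assuming invented supervisor names and classification terms: each candidate supervisor is profiled by the classification terms of previously supervised theses and ranked by cosine similarity to the query terms. The actual prototype works on DiVA classification data (UKÄ/SCB subject terms) rather than these toy profiles.

        from collections import Counter
        import math

        supervised_theses = {                         # invented supervisor -> classification terms
            "Supervisor A": ["machine learning", "outlier detection", "clustering", "machine learning"],
            "Supervisor B": ["computer networks", "qoe", "video streaming"],
            "Supervisor C": ["image analysis", "machine learning", "medical imaging"],
        }

        def cosine(a, b):
            shared = set(a) & set(b)
            num = sum(a[t] * b[t] for t in shared)
            den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return num / den if den else 0.0

        def recommend(query_terms, top=2):
            q = Counter(query_terms)
            profiles = {name: Counter(terms) for name, terms in supervised_theses.items()}
            ranked = sorted(profiles, key=lambda n: cosine(q, profiles[n]), reverse=True)
            return ranked[:top]

        print(recommend(["machine learning", "medical imaging"]))   # e.g. ['Supervisor C', 'Supervisor A']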

    Fulltekst (pdf)
    fulltext
  • 28.
    Annavarjula, Vaishnavi
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Computer-Vision Based Retinal Image Analysis for Diagnosis and Treatment, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Oppgave
    Abstract [en]

    Context- Vision is one of the five elementary physiological senses. Vision is enabled via the eye, a very delicate sense organ which is highly susceptible to damage that results in loss of vision. The damage comes in the form of injuries or diseases such as diabetic retinopathy and glaucoma. While it is not possible to predict accidents, predicting the onset of disease in its earliest stages is highly attainable. Owing to the leaps in imaging technology, it is also possible to provide near-instant diagnosis by utilizing computer vision and image processing capabilities.

    Objectives- In this thesis, an algorithm is proposed and implemented to classify images of the retina into healthy images or two classes of unhealthy images, i.e., diabetic retinopathy and glaucoma, thus aiding diagnosis. Additionally, the algorithm is studied to investigate which image transformation is more feasible to implement within the scope of this algorithm and which region of the retina helps in accurate diagnosis.

    Methods- An experiment has been designed to facilitate the development of the algorithm. The algorithm is developed in such a way that it can accept all the values of a dataset concurrently and perform both domain transforms independently of each other.

    Results- It is found that blood vessels help best in predicting disease associations, with the classifier giving an accuracy of 0.93 and a Cohen's kappa score of 0.90. Frequency-transformed images also gave good prediction accuracy, with 0.93 on blood vessel images and 0.87 on optic disk images.

    Conclusions- It is concluded that blood vessels from the fundus images after frequency transformation give the highest accuracy for the developed algorithm when it uses a bag of visual words and an image category classifier model.
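
    For orientation, the bag-of-visual-words pipeline mentioned above can be sketched as follows. This is a rough Python analogue (OpenCV ORB features, a k-means vocabulary and a linear SVM), not the thesis' toolchain; the noise "images" and class labels are placeholders.

        # Rough bag-of-visual-words sketch; placeholder data stands in for fundus images.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        train_images = [rng.integers(0, 256, (256, 256), dtype=np.uint8) for _ in range(6)]
        labels = ["healthy", "diabetic_retinopathy", "glaucoma"] * 2

        orb = cv2.ORB_create(nfeatures=300)

        def descriptors(gray):
            _, des = orb.detectAndCompute(gray, None)
            return des if des is not None else np.empty((0, 32), dtype=np.uint8)

        # Build a visual vocabulary by clustering all local descriptors.
        all_des = np.vstack([descriptors(img) for img in train_images]).astype(np.float32)
        vocab = KMeans(n_clusters=min(50, len(all_des)), random_state=0).fit(all_des)

        def bovw_histogram(gray):
            hist = np.zeros(vocab.n_clusters)
            des = descriptors(gray).astype(np.float32)
            if len(des):
                for word in vocab.predict(des):
                    hist[word] += 1
            return hist / max(hist.sum(), 1.0)   # normalised word-frequency histogram

        X = np.array([bovw_histogram(img) for img in train_images])
        clf = LinearSVC().fit(X, labels)          # the image category classifier
        print(clf.predict(X[:2]))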

    Keywords-Image Processing, Machine Learning, Medical Imaging

    Fulltekst (pdf)
    fulltext
  • 29.
    Atchukatla, Mahammad suhail
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Algorithms for efficient VM placement in data centers: Cloud Based Design and Performance Analysis, 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Oppgave
    Abstract [en]

    Context: Recent trends show that cloud computing adoption is continuously increasing in every organization. The demand for cloud datacenters has therefore increased tremendously over time, resulting in significantly increased resource utilization of the datacenters. In this thesis work, research was carried out on optimizing the energy consumption by packing virtual machines in the datacenter. The CloudSim simulator was used for evaluating bin-packing algorithms, and for the practical implementation the OpenStack cloud computing environment was chosen as the platform for this research.

     

    Objectives:  In this research, our objectives are as follows

    • Perform simulation of algorithms in CloudSim simulator.
    • Estimate and compare the energy consumption of different packing algorithms.
    • Design an OpenStack testbed to implement the Bin packing algorithm.

     

    Methods:

    We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit and Enhanced Best Fit algorithms. A heuristic model is designed for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the design of the algorithms for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in an OpenStack environment.
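
    The bin-packing view of VM placement behind these algorithms can be illustrated with a minimal First Fit Decreasing sketch along the lines below; it uses a single resource dimension and invented capacities, and is not the thesis implementation.

        # Minimal First Fit Decreasing sketch for VM placement (one resource: vCPU).
        def first_fit_decreasing(vm_cpu_demands, host_capacity):
            """Return a list of hosts, each a list of the VM demands placed on it."""
            hosts = []
            for demand in sorted(vm_cpu_demands, reverse=True):   # largest VMs first
                for host in hosts:
                    if sum(host) + demand <= host_capacity:       # first host with room
                        host.append(demand)
                        break
                else:
                    hosts.append([demand])                        # open a new host
            return hosts

        vms = [4, 8, 1, 4, 2, 1, 6, 2]                # invented vCPU demands
        placement = first_fit_decreasing(vms, host_capacity=10)
        print(len(placement), "hosts used:", placement)
        # Fewer active hosts generally means lower total energy consumption.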

     

    Results:

    In most cases the Enhanced Best Fit algorithm gives the better results. Results are obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work. The comparison of results indicates that the total energy consumption of the data center is reduced without affecting potential service level agreements.

     

    Conclusions:

    The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and to minimize the energy consumption of the physical machines by shutting down the unused ones. The results indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.

    Fulltekst (pdf)
    BTH2018Atchukatla
  • 30.
    Avutu, Neeraj
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Evaluation of MongoDB on Amazon Web Service and OpenStack, 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Oppgave
    Abstract [en]

    Context

    MongoDB is an open-source, scalable, NoSQL database that distributes the data over many commodity servers. It avoids a single point of failure by copying and storing the data in different locations. MongoDB uses a master-slave design rather than the ring topology used by Cassandra. Virtualization is the technique of running multiple virtual machines on a single host and utilizing them. It is the fundamental technology that allows cloud computing to provide resource sharing among the users.

    Objectives

    To study and identify MongoDB and virtualization on AWS and OpenStack. Experiments were conducted to identify the CPU utilization when MongoDB instances are deployed on AWS and on a physical server arrangement, and to understand the effect of replication in the MongoDB instances on throughput, CPU utilization and latency.

    Methods

    Initially, a literature review is conducted to design the experiment around the mentioned problems. A three-node MongoDB cluster runs on Amazon EC2 and OpenStack Nova with Ubuntu 16.04 LTS as the operating system. Latency, throughput and CPU utilization were measured using this setup. This procedure was repeated for a five-node MongoDB cluster and a three-node production cluster with the six workload types of YCSB.
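
    As an aside, the kind of measurement YCSB automates can be approximated with a few lines of pymongo; the sketch below is only illustrative, with a placeholder connection URI and invented workload, and is not the benchmark used in the thesis.

        # Illustrative latency/throughput measurement with pymongo (not YCSB itself).
        import time
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")   # placeholder cluster URI
        coll = client["ycsb_like"]["usertable"]
        coll.drop()

        n_ops, latencies = 1000, []
        start = time.perf_counter()
        for i in range(n_ops):
            t0 = time.perf_counter()
            coll.insert_one({"_id": i, "field0": "x" * 100})  # simple write workload
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start

        print(f"throughput: {n_ops / elapsed:.1f} ops/s")
        print(f"avg latency: {1000 * sum(latencies) / n_ops:.2f} ms")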

    Results

    Virtualization overhead has been identified in terms of CPU utilization, and the effects of virtualization on MongoDB are established in terms of CPU utilization, latency and throughput.

    Conclusions

    It is concluded that latency decreases and throughput increases as the number of nodes increases. Due to replication, an increase in latency was observed.

    Fulltekst (pdf)
    BTH2018Avutu
  • 31.
    Axelsson, Arvid
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Light Field Coding Using Panoramic Projection, 2014. Oppgave
    Abstract [en]

    A new generation of 3d displays provides depth perception without the need for glasses and allows the viewer to see content from many different directions. Providing video for these displays requires capturing the scene by several cameras at different viewpoints, the data from which together forms light field video. Encoding such video with existing video coding requires a large amount of data, and the amount increases quickly with a higher number of views, which this application needs. One such coding is the multiview extension of High Efficiency Video Coding (mv-hevc), which encodes a number of similar video streams as different layers. A new coding scheme for light field video, called Panoramic Light Field (plf), is implemented and evaluated in this thesis. The main idea behind the coding is to project all points in a scene that are visible from any of the viewpoints to a single, global view, similar to how texture mapping maps a texture onto a 3d model in computer graphics. Whereas objects ordinarily shift position in the frame as the camera position changes, this is not the case when using this projection: a visible point in space is projected to the same image pixel regardless of viewpoint, resulting in large similarities between images from different viewpoints. The similarity between the layers in light field video helps to achieve more efficient compression when the projection is combined with existing multiview coding. In order to evaluate the scheme, 3d content was created and software was developed to encode it using plf. Video using this coding is compared to existing technology: a straightforward encoding of the views using mv-hevc. The results show that the plf coding performs better on the sample content at lower quality levels, while it is worse at higher bitrates due to quality loss from the projection procedure. It is concluded that plf is a promising technology, and suggestions are given for future research that may improve its performance further.
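
    The projection idea, mapping every visible scene point to fixed coordinates in one shared panoramic image, can be illustrated with a generic equirectangular mapping such as the sketch below. This is not the thesis' exact plf mapping; the image size and projection centre are arbitrary choices.

        # Generic illustration: a 3d scene point maps to the same panorama pixel
        # regardless of which camera observed it.
        import math

        def to_panorama(point, centre=(0.0, 0.0, 0.0), width=1024, height=512):
            x, y, z = (p - c for p, c in zip(point, centre))
            r = math.sqrt(x * x + y * y + z * z)
            lon = math.atan2(x, z)             # -pi..pi      -> column
            lat = math.asin(y / r)             # -pi/2..pi/2  -> row
            u = int((lon + math.pi) / (2 * math.pi) * (width - 1))
            v = int((lat + math.pi / 2) / math.pi * (height - 1))
            return u, v

        # The same scene point gives the same pixel no matter the viewpoint:
        print(to_panorama((1.0, 0.5, 2.0)))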

    Fulltekst (pdf)
    FULLTEXT01
  • 32. Baca, Dejan
    et al.
    Boldt, Martin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Jacobsson, Andreas
    A Novel Security-Enhanced Agile Software Development Process Applied in an Industrial Setting, 2015. Inngår i: Proceedings 10th International Conference on Availability, Reliability and Security ARES 2015, IEEE Computer Society Digital Library, 2015. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., the baseline process of the comparison outlined in this paper, required 2.7 employee hours for every risk identified in the analysis process compared to, on average, 1.5 hours for SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., more than a five-fold increase. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure for security was 1% in the previous development process.

  • 33.
    Bachu, Rajesh
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A framework to migrate and replicate VMware Virtual Machines to Amazon Elastic Compute Cloud: Performance comparison between on premise and the migrated Virtual Machine, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Oppgave
    Abstract [en]

    Context Cloud computing is the new trend in the IT industry. Traditionally, obtaining servers was quite time-consuming for companies. The whole process of researching what kind of hardware to buy, getting budget approval, purchasing the hardware and getting access to the servers could take weeks or months. In order to save time and reduce expenses, most companies are moving towards the cloud. One of the well-known cloud providers is Amazon Elastic Compute Cloud (EC2). Amazon EC2 makes it easy for companies to obtain virtual servers (known as compute instances) in a cloud quickly and inexpensively. Another advantage of using Amazon EC2 is the flexibility it offers, so companies can even import/export the Virtual Machines (VM) that they have built and that meet the company's IT security, configuration, management and compliance requirements into Amazon EC2.

    Objectives In this thesis, we investigate importing a VM running on VMware into Amazon EC2. In addition, we make a performance comparison between a VM running on VMware and the VM with same image running on Amazon EC2.

    Methods A case study has been conducted to select a persistent method to migrate VMware VMs to Amazon EC2. In addition, experimental research was conducted to measure the performance of a virtual machine running on VMware and compare it with the same virtual machine running on EC2. We measure the performance in terms of CPU and memory utilization as well as disk read/write speed, using well-known open-source benchmarks from the Phoronix Test Suite (PTS).

    Results The investigation of importing VM snapshots (VMDK, VHD and RAW format) to EC2 was done using three methods provided by AWS. The comparison of performance was done by running each benchmark 25 times on each virtual machine.

    Conclusions Importing a VM to EC2 was successful only with the RAW format, and replication was not successful as AWS installs some software and drivers while importing the VM to EC2. The migrated EC2 VM performs better than the on-premise VMware VM in terms of CPU and memory utilization and disk read/write speed.

    Fulltekst (pdf)
    fulltext
  • 34.
    Bakhtyar, Shoaib
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Designing Electronic Waybill Solutions for Road Freight Transport, 2016. Doktoravhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    In freight transportation, a waybill is an important document that contains essential information about a consignment. The focus of this thesis is on a multi-purpose electronic waybill (e-Waybill) service, which can provide the functions of a paper waybill, and which is capable of storing, at least, the information present in a paper waybill. In addition, the service can be used to support other existing Intelligent Transportation System (ITS) services by utilizing synergies with those existing services. Additionally, information entities from the e-Waybill service are investigated for the purpose of knowledge-building concerning freight flows.

    A systematic review on the state-of-the-art of the e-Waybill service reveals several limitations, such as a limited focus on supporting ITS services. Five different conceptual e-Waybill solutions (which can be seen as abstract system designs for implementing the e-Waybill service) are proposed. The solutions are investigated for functional and technical requirements (non-functional requirements), which can potentially impose constraints on a potential system for implementing the e-Waybill service. Further, the service is investigated for information and functional synergies with other ITS services. For the information synergy analysis, the required input information entities for different ITS services are identified; if at least one information entity can be provided by an e-Waybill at the right location, we regard it as a synergy. Additionally, a service design method has been proposed for supporting the process of designing new ITS services, which primarily utilizes functional synergies between the e-Waybill and different existing ITS services. The suggested method is applied for designing a new ITS service, i.e., the Liability Intelligent Transport System (LITS) service. The purpose of the LITS service is to support the process of identifying when and where a consignment has been damaged and who was responsible when the damage occurred. Furthermore, information entities from e-Waybills are utilized for building improved knowledge concerning freight flows. A freight and route estimation method has been proposed for building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.

    The results from this thesis can be used to support the choice of practical e-Waybill service implementation, which has the possibility to provide high synergy with ITS services. This may lead to a higher utilization of ITS services and more sustainable transport, e.g., in terms of reduced congestion and emissions. Furthermore, the implemented e-Waybill service can be an enabler for collecting consignment and traffic data and converting the data into useful traffic information. In particular, the service can lead to increasing amounts of digitally stored data about consignments, which can lead to improved knowledge on the movement of freight and trucks. The knowledge may be helpful when making decisions concerning road taxes, fees, and infrastructure investments.

    Fulltekst (pdf)
    fulltext
  • 35.
    Bakhtyar, Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ghazi, Ahmad Nauman
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    On Improving Research Methodology Course at Blekinge Institute of Technology, 2016. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The Research Methodology in Software Engineering and Computer Science (RM) course is a compulsory course that must be studied by graduate students at Blekinge Institute of Technology (BTH) prior to undertaking their thesis work. The course is focused on teaching research methods and techniques for data collection and analysis in the fields of Computer Science and Software Engineering. It is intended that the course should help students in practically applying appropriate research methods in different courses (in addition to the RM course), including their Master's theses. However, it is believed that there exist deficiencies in the course due to which the course implementation (learning and assessment activities) as well as the performance of different participants (students, teachers, and evaluators) are affected negatively. In this article our aim is to investigate potential deficiencies in the RM course at BTH in order to provide concrete evidence of the deficiencies faced by students, evaluators, and teachers in the course. Additionally, we suggest recommendations for resolving the identified deficiencies. Our findings, gathered through semi-structured interviews with students, teachers, and evaluators in the course, are presented in this article. By identifying a total of twenty-one deficiencies from different perspectives, we found that there exist critical deficiencies at different levels within the course. Furthermore, in order to overcome the identified deficiencies, we suggest seven recommendations that may be implemented at different levels within the course and the study program. Our suggested recommendations, if implemented, will help in resolving deficiencies in the course, which may lead to improved teaching and learning in the RM course at BTH.

    Fulltekst (pdf)
    On Improving Research Methodology Course at Blekinge Institute of Technology
  • 36.
    Bakhtyar, Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Henesey, Lawrence
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Electronic Waybill Solutions: A Systematic Review. Manuskript (preprint) (Annet vitenskapelig)
    Abstract [en]

    A critical component in freight transportation is the waybill, which is a transport document that contains essential information about a consignment. Actors within the supply chain handle not only the freight but also vast amounts of information, which are often unclear due to various errors. An electronic waybill (e-Waybill) solution replaces the paper waybill electronically and in a better way, e.g., by ensuring error-free storage and flow of information. In this paper, a systematic review using the snowball method is conducted to investigate the state-of-the-art of e-Waybill solutions. After performing three iterations of the snowball process, we identified eleven studies for further evaluation and analysis due to their strong relevancy. The studies are mapped in relation to each other and a classification of the e-Waybill solutions is constructed. Most of the studies identified in our review support the benefits of electronic documents, including e-Waybills. Typically, most research papers reviewed support EDI (Electronic Data Interchange) for implementing e-Waybills. However, limitations exist due to high costs that make it less affordable for small organizations. Recent studies point to alternative technologies, which we have listed in this paper. Additionally, we show from our research that most studies focus on the administrative benefits, but few studies investigate the potential of e-Waybill information for achieving services such as estimated time of arrival and real-time tracking and tracing.

  • 37.
    Bakhtyar, Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Henesey, Lawrence
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Freight transport prediction using electronic waybills and machine learning, 2014. Inngår i: 2014 International Conference on Informative and Cybernetics for Computational Social Systems, IEEE Computer Society, 2014, s. 128-133. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    A waybill is a document that accompanies the freight during transportation. The document contains essential information such as the origin and destination of the freight, the involved actors, and the type of freight being transported. We believe that the information from a waybill, when presented in an electronic format, can be utilized for building knowledge about freight movement. The knowledge may be helpful for decision makers, e.g., freight transport companies and public authorities. In this paper, the results from a study of a Swedish transport company are presented using order data from a customer ordering database, which is, to a large extent, similar to the information present in paper waybills. We have used the order data for predicting the type of freight moving between a particular origin and destination. Additionally, we have evaluated a number of different machine learning algorithms based on their prediction performance. The evaluation was based on their weighted average true-positive and false-positive rates, weighted average area under the curve, and weighted average recall values. We conclude from the results that the data from a waybill, when available in an electronic format, can be used to improve knowledge about freight transport. Additionally, we conclude that among the algorithms IBk, SMO, and LMT, IBk performed best by predicting the highest number of classes with higher weighted average true-positive, false-positive, and recall values.
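
    The evaluation style, comparing classifiers by weighted-average metrics, can be sketched roughly as below. This is a hedged illustration on synthetic data using scikit-learn stand-ins for the Weka algorithms (IBk is a k-nearest-neighbour learner, SMO a support vector machine); it is not the paper's experiment.

        # Comparing classifiers with weighted-average metrics on synthetic data.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_predict
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.metrics import recall_score, f1_score

        X, y = make_classification(n_samples=600, n_classes=4,
                                   n_informative=6, random_state=0)

        for name, clf in [("IBk-like kNN", KNeighborsClassifier(n_neighbors=5)),
                          ("SMO-like SVM", SVC(kernel="rbf", gamma="scale"))]:
            pred = cross_val_predict(clf, X, y, cv=5)
            print(name,
                  "weighted recall:", round(recall_score(y, pred, average="weighted"), 3),
                  "weighted F1:", round(f1_score(y, pred, average="weighted"), 3))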

    Fulltekst (pdf)
    fulltext
  • 38.
    Bakhtyar, Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Holmgren, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Data Mining Based Method for Route and Freight Estimation, 2015. Inngår i: Procedia Computer Science, Elsevier, 2015, Vol. 52, s. 396-403. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We present a method which makes use of historical vehicle data and current vehicle observations in order to estimate 1) the route a vehicle has used and 2) the freight the vehicle carried along the estimated route. The method includes a learning phase and an estimation phase. In the learning phase, historical data about the movement of a vehicle and of the consignments allocated to the vehicle are used in order to build estimation models: one for route choice and one for freight allocation. In the estimation phase, the generated estimation models are used together with a sequence of observed positions for the vehicle as input in order to generate route and freight estimates. We have partly evaluated our method in an experimental study involving a medium-size Swedish transport operator. The results of the study indicate that supervised learning, in particular the algorithm Naive Bayes Multinomial Updatable, shows good route estimation performance even when a significant amount of information about where the vehicle has traveled is missing. For the freight estimation, we used a method based on averaging the consignments on the historically known trips for the estimated route. We argue that the proposed method might contribute to building improved knowledge, e.g., in national road administrations, on the movement of trucks and freight.
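
    The estimation-phase idea can be illustrated with a small multinomial Naive Bayes sketch: partially observed road segments are treated as a bag of tokens and used to predict the most likely known route. The segment identifiers, routes and library choice (scikit-learn rather than the Weka implementation named above) are assumptions for illustration.

        # Route estimation from a partial sequence of observed road segments.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # Learning phase: historical trips (segment identifiers) and the routes used.
        trips = ["s1 s2 s3 s4", "s1 s2 s3 s5", "s7 s8 s3 s4", "s7 s8 s9"]
        routes = ["route-1", "route-1", "route-2", "route-3"]

        vectorizer = CountVectorizer()
        X = vectorizer.fit_transform(trips)                  # bag-of-segments counts
        model = MultinomialNB().fit(X, routes)

        # Estimation phase: only some positions have been observed so far.
        partial = vectorizer.transform(["s1 s3"])
        print(model.predict(partial)[0])                     # most likely route
        print(dict(zip(model.classes_, model.predict_proba(partial)[0].round(2))))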

    Fulltekst (pdf)
    fulltext
  • 39.
    Bakhtyar, Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Holmgren, Johan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Persson, Jan A.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Technical Requirements of the e-Waybill Service, 2016. Inngår i: International Journal of Computer and Communication Engineering, ISSN 2010-3743, Vol. 5, nr 2, s. 130-140. Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    An electronic waybill (e-Waybill) is a service whose purpose is to replace the paper waybill, which is a paper document that traditionally follows a consignment during transport. An important purpose of the e-Waybill is to achieve a paperless flow of information during freight transport. In this paper, we investigate five e-Waybill solutions, that is, system design specifications for the e-Waybill, regarding their non-functional (technical) requirements. In addition, we discuss how well existing technologies are able to fulfil the identified requirements. We have identified that information storage, synchronization and conflict management, access control, and communication are important categories of technical requirements of the e-Waybill service. We argue that the identified technical requirements can be used to support the process of designing and implementing the e-Waybill service.

    Fulltekst (pdf)
    fulltext
  • 40.
    Bakhtyar, Shoaib
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Mbiydzenyuy, Gideon
    Netport Science Park, Karlshamn.
    Henesey, Lawrence
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Simulation Study of the Electronic Waybill Service, 2015. Inngår i: Proceedings - EMS 2015: UKSim-AMSS 9th IEEE European Modelling Symposium on Computer Modelling and Simulation / [ed] David Al-Dabas, Gregorio Romero, Alessandra Orsoni, Athanasios Pantelous, IEEE Computer Society, 2015, s. 307-312. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We present results from a simulation study, which was designed for investigating the potential positive impacts, i.e., the invoicing and processing time, and financial savings, when using an electronic waybill instead of paper waybills for road-based freight transportation. The simulation model is implemented in an experiment for three different scenarios, where the processing time for waybills at the freight loading and unloading locations in a particular scenario differs from other scenarios. The results indicate that a saving of 65%–99% in the invoicing time can be achieved when using an electronic waybill instead of paper waybills. Our study can be helpful to decision makers, e.g., managers and staff dealing with paper waybills, to estimate the potential benefits when making decisions concerning the implementation of an electronic waybill solution for replacing paper waybills.
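
    A toy version of such a comparison can be sketched as follows; the processing-time distributions are invented for illustration and are not the parameters of the study's three scenarios.

        # Toy comparison of total invoicing-related handling time, paper vs electronic.
        import random

        random.seed(1)

        def simulate(n_consignments, handling_time):
            """handling_time(): minutes spent on waybill data per consignment."""
            return sum(handling_time() + random.uniform(1, 3) for _ in range(n_consignments))

        paper = simulate(1000, lambda: random.uniform(5, 15))          # manual re-entry
        electronic = simulate(1000, lambda: random.uniform(0.1, 0.5))  # automatic transfer

        print(f"paper:      {paper:8.0f} min")
        print(f"electronic: {electronic:8.0f} min")
        print(f"saving:     {100 * (1 - electronic / paper):.0f}%")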

    Fulltekst (pdf)
    fulltext
  • 41.
    Bala, Jaswanth
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Filtering estimated series of residential burglaries using spatio-temporal route calculations, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Oppgave
    Abstract [en]

    Context. According to the Swedish National Council for Crime Prevention, residential burglary crimes in Sweden increased by 19% over the last decade, and only 5% of the total crimes reported were actually solved by the law enforcement agencies. In order to solve these cases quickly and efficiently, the law enforcement agencies have to look into possible linked serial crimes. Many studies have suggested linking crimes based on Modus Operandi and other characteristics. Sometimes crimes that cannot be travelled between spatially within the reported times, but that have a similar Modus Operandi, are also grouped as linked crimes. Investigating such crimes could possibly waste the resources of the law enforcement agencies.

    Objectives. In this study, we investigate the possibility of using travel distance and travel duration between different crime locations when linking residential burglary crimes. A filtering method has been designed and implemented for filtering unlinked crimes out of the estimated linked crimes by utilizing the distance and duration values.
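
    The filtering idea can be sketched as a simple feasibility test: a pair of crimes stays linked only if the time between them allows for the travel duration plus some minimum time spent at the scene. The sketch below is illustrative only; the 900-second dwell time echoes the figure reported in the Results below, and the other numbers are invented.

        # Spatio-temporal feasibility filter for an estimated crime series.
        MIN_TIME_AT_SCENE = 900  # seconds

        def feasible(crime_a, crime_b, travel_seconds):
            """crime_x = (earliest_epoch, latest_epoch) of the reported occurrence window."""
            gap = crime_b[0] - crime_a[1]        # shortest possible time between the crimes
            return gap >= travel_seconds + MIN_TIME_AT_SCENE

        def filter_series(series, travel_lookup):
            """Keep only consecutive pairs that are spatio-temporally feasible."""
            kept = [series[0]]
            for prev, nxt in zip(series, series[1:]):
                if feasible(prev, nxt, travel_lookup(prev, nxt)):
                    kept.append(nxt)
            return kept

        # Example: a fixed 600 s drive stands in for a directions-service lookup.
        series = [(0, 3600), (5400, 9000), (9200, 12000)]
        print(filter_series(series, lambda a, b: 600))   # the third crime is filtered out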

    Methods. The objectives in this study are satisfied by conducting an experiment. The travel distance and travel duration values are obtained from various online direction services. The filtering method was first validated on ground truth represented by known linked crime series and then it was used to filter out crimes from the estimated linked crimes.

    Results. The filtering method removed a total of 4% of unlinked crimes from the estimated linked crime series when the travel mode is driving, whereas it removed a total of 23% when the travel mode is walking. It was also found that a burglar takes on average 900 seconds (15 minutes) to commit a burglary.

    Conclusions. From this study it is evident that the use of spatial and temporal values in linking residential burglaries gives effective crime links in a series. Also, the use of Google Maps for obtaining distance and duration values can increase the overall performance of the filtering method in linking crimes.

    Fulltekst (pdf)
    fulltext
  • 42.
    Baskaravel, Yogaraj
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Implementation and evaluation of global router for Information-Centric Networking, 2014. Independent thesis Advanced level (degree of Master (Two Years)). Oppgave
    Abstract [en]

    Context. A huge majority of the current Internet traffic is information dissemination. Information-Centric Networking (ICN) is a future networking paradigm that focuses on global-level information dissemination. In ICN, the communication is defined in terms of requesting and providing Named Data Objects (NDO). NetInf is a future networking architecture based on Information-Centric Networking principles.

    Objectives. In this thesis, a global routing solution for ICN has been implemented. The authority part of an NDO's name is mapped to a set of routing hints, each with a priority value. Multiple NDOs can share the same authority part, and thus the first level of aggregation is provided. The routing hints are used to forward a request for an NDO towards a suitable copy of the NDO. The second level of aggregation is achieved by aggregating high-priority routing hints on low-priority routing hints. The performance and scalability of the routing implementation are evaluated with respect to global ICN requirements. Furthermore, some of the notable challenges in implementing global ICN routing are identified.

    Methods. The NetInf global routing solution is implemented by extending NEC's NetInf Router Platform (NNRP). A NetInf testbed is built over the Internet using the extended NNRP implementation. Performance measurements have been taken from the NetInf testbed and discussed in detail in terms of routing scalability.

    Results. The performance measurements show that hop-by-hop transport has a significant impact on the overall request forwarding. A notable amount of time is taken for extracting and inserting binary objects such as routing hints at each router.

    Conclusions. A more suitable hop-by-hop transport mechanism can be evaluated and used with respect to global ICN requirements. The NetInf message structure can be redefined so that binary objects such as routing hints can be transmitted more efficiently. Apart from that, the performance of the global routing implementation appears to be reasonable. As the NetInf global routing solution provides two levels of aggregation, it can be scalable as well.
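
    The two-level lookup described above can be sketched roughly as follows: an authority is resolved to prioritised hints, and higher-priority hints are in turn resolved via lower-priority entries until a next hop is found. The table contents and the ni:// name format shown are illustrative assumptions, not the NNRP implementation.

        # Illustrative two-level routing-hint lookup for an NDO name.
        hint_table = {
            # authority -> list of (priority, hint); higher priority is preferred
            "example.org": [(30, "hint://cache-eu"), (10, "hint://origin")],
        }
        aggregation = {
            # high-priority hint -> list of (priority, lower-level hint / next hop)
            "hint://cache-eu": [(20, "router-a.example.net")],
            "hint://origin":   [(20, "router-b.example.net")],
        }

        def next_hop(ndo_name):
            authority = ndo_name.split("/")[2]               # e.g. ni://authority/digest
            hints = sorted(hint_table.get(authority, []), reverse=True)
            for _, hint in hints:                            # try the best hint first
                candidates = sorted(aggregation.get(hint, []), reverse=True)
                if candidates:
                    return candidates[0][1]
            return None

        print(next_hop("ni://example.org/sha-256;abc123"))   # router-a.example.net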

    Fulltekst (pdf)
    FULLTEXT01
  • 43.
    Begnert, Joel
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tilljander, Rasmus
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Combining Regional Time Stepping With Two-Scale PCISPH Method, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Oppgave
    Abstract [en]

    Context. In computer graphics, realistic-looking fluid is often desired. Simulating realistic fluids is a time-consuming and computationally expensive task; therefore, much research has been devoted to reducing the simulation time while maintaining the realism. Two of the more recent optimization algorithms within particle-based simulations are two-scale simulation and regional time stepping (RTS). Both of them are based on the predictive-corrective incompressible smoothed particle hydrodynamics (PCISPH) algorithm.

    Objectives. These algorithms improve on two separate aspects of PCISPH, two-scale simulation reduces the number of particles and RTS focuses computational power on regions of the fluid where it is most needed. In this paper we have developed and investigated the performance of an algorithm combining them, utilizing both optimizations.

    Methods. We implemented both of the base algorithms, as well as PCISPH, before combining them. Therefore we had equal conditions for all algorithms when we performed our experiments, which consisted of measuring the time it took to run each algorithm in three different scene configurations.

    Results. Results showed that our combined algorithm on average was faster than the other three algorithms. However, our implementation of two-scale simulation gave results inconsistent with the original paper, showing a slower time than even PCISPH. This invalidates the results for our combined algorithm since it utilizes the same implementation.

    Conclusions. We see that our combined algorithm has potential to speed up fluid simulations, but since the two-scale implementation was incorrect, our results are inconclusive.

    Fulltekst (pdf)
    fulltext
  • 44.
    Berntsson, Fredrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Schengensamarbetet – Europas dröm, 2014. Oppgave
    Abstract [sv]

    This essay clarifies what the Schengen cooperation is, why it exists and how it works. The essay goes through all parts of the cooperation, which for the most part apparently consists of abolishing border controls on persons between the member states.

    Fulltekst (pdf)
    FULLTEXT01
  • 45.
    Bertoni, Alessandro
    et al.
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för maskinteknik.
    Dasari, Siva Krishna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Hallstedt, Sophie
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för strategisk hållbar utveckling.
    Petter, Andersson
    GKN Aerospace Systems , SWE.
    Model-based decision support for value and sustainability assessment: Applying machine learning in aerospace product development, 2018. Inngår i: DS92: Proceedings of the DESIGN 2018 15th International Design Conference / [ed] Marjanović D., Štorga M., Škec S., Bojčetić N., Pavković N., The Design Society, 2018, Vol. 6, s. 2585-2596. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    This paper presents a prescriptive approach toward the integration of value and sustainability models in an automated decision support environment enabled by machine learning (ML). The approach allows the concurrent multidimensional analysis of design cases, complementing mechanical simulation results with value and sustainability assessment. ML makes it possible to deal with both qualitative and quantitative data and to create surrogate models for quicker design space exploration. The approach has been developed and preliminarily implemented in collaboration with a major aerospace sub-system manufacturer.
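
    The surrogate-model idea mentioned above can be sketched in a few lines: fit a cheap regressor on a limited set of (design parameters, simulation output) samples and query it instead of the expensive simulation during design space exploration. The stand-in simulation function and parameter ranges below are invented for illustration.

        # Surrogate-model sketch: a cheap regressor replaces an expensive simulation.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def expensive_simulation(x):                 # placeholder for an FE/CFD run
            return x[0] ** 2 + 3 * x[1] + np.sin(x[2])

        rng = np.random.default_rng(0)
        X_train = rng.uniform(-1, 1, size=(40, 3))   # sampled design points
        y_train = np.array([expensive_simulation(x) for x in X_train])

        surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

        X_explore = rng.uniform(-1, 1, size=(10_000, 3))   # cheap to evaluate via the surrogate
        best = X_explore[np.argmin(surrogate.predict(X_explore))]
        print("promising design point:", best.round(3))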

    Fulltekst (pdf)
    fulltext
  • 46. Beyene, Ayne A.
    et al.
    Welemariam, Tewelle
    Persson, Marie
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Improved concept drift handling in surgery prediction and other applications, 2015. Inngår i: Knowledge and Information Systems, ISSN 0219-1377, Vol. 44, nr 1, s. 177-196. Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    The article presents a new algorithm for handling concept drift: the Trigger-based Ensemble (TBE) is designed to handle concept drift in surgery prediction, but it is shown to perform well for other classification problems as well. In primary care, queries about the need for surgical treatment are referred to a surgeon specialist. In secondary care, referrals are reviewed by a team of specialists. The possible outcomes of this review are that the referral: (i) is canceled, (ii) needs to be complemented, or (iii) is predicted to lead to surgery. In the third case, the referred patient is scheduled for an appointment with a surgeon specialist. This article focuses on the binary prediction of case three (surgery prediction). The guidelines for the referral and the review of the referral change due to, e.g., scientific developments and clinical practices. Existing decision support is based on the expert systems approach, which usually requires manual updates when changes in clinical practice occur. In order to automatically revise decision rules, the occurrence of concept drift (CD) must be detected and handled. The existing CD handling techniques are often specialized; it is challenging to develop a more generic technique that performs well regardless of CD type. Experiments are conducted to measure the impact of CD on prediction performance and to reduce CD impact. The experiments evaluate and compare TBE to three existing CD handling methods (AWE, Active Classifier, and Learn++) on one real-world dataset and one artificial dataset. TBE significantly outperforms the other algorithms on both datasets but is less accurate on noisy synthetic variations of the real-world dataset.
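
    As a rough illustration of the trigger idea in general (not the TBE algorithm itself), the sketch below monitors accuracy on a recent window and trains a new ensemble member when accuracy drops; the window size, threshold, base learner and synthetic drifting stream are arbitrary choices.

        # Generic trigger-based ensemble sketch for streams with concept drift.
        from collections import deque
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        class TriggeredEnsemble:
            """Toy drift handler: retrain on recent data when windowed accuracy drops."""
            def __init__(self, window=200, threshold=0.7):
                self.members = []
                self.recent = deque(maxlen=window)    # recent (x, y) pairs
                self.correct = deque(maxlen=window)   # 1 if the ensemble was right
                self.threshold = threshold

            def predict(self, x):
                if not self.members:
                    return 0
                votes = [int(m.predict([x])[0]) for m in self.members]
                return max(set(votes), key=votes.count)          # majority vote

            def update(self, x, y):
                self.correct.append(int(self.predict(x) == y))   # prequential evaluation
                self.recent.append((x, y))
                window_full = len(self.correct) == self.correct.maxlen
                if not self.members or (window_full and np.mean(self.correct) < self.threshold):
                    X, Y = zip(*self.recent)                     # trigger: fit a new member
                    self.members.append(DecisionTreeClassifier(max_depth=5).fit(list(X), list(Y)))
                    self.correct.clear()

        rng = np.random.default_rng(0)
        model = TriggeredEnsemble()
        for t in range(2000):
            x = list(rng.normal(size=3))
            y = int(x[0] > 0) if t < 1000 else int(x[1] > 0)     # the concept drifts at t = 1000
            model.update(x, y)
        print("ensemble members trained:", len(model.members))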

    Fulltekst (pdf)
    fulltext
  • 47.
    Bilski, Mateusz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Migration from blocking to non-blocking web frameworks, 2014. Independent thesis Advanced level (degree of Master (One Year)). Oppgave
    Abstract [en]

    The problem of performance and scalability of web applications is faced by most software companies. It is difficult to maintain the performance of a web application while the number of users is continuously increasing. The common solution for this problem is scalability. A web application can handle incoming and outgoing requests using blocking or non-blocking Input/Output operations. The way that a single server handles requests affects its ability to scale and depends on the web framework that was used to build the web application. It is especially important for Resource Oriented Architecture (ROA) based applications, which consist of distributed Representational State Transfer (REST) web services. This research was inspired by a real problem stated by a software company that was considering the migration to a non-blocking web framework but did not know the possible benefits. The objective of the research was to evaluate the influence of the web framework's type on the performance of ROA based applications and to provide guidelines for assessing the profits of migration from blocking to non-blocking JVM web frameworks. First, an internet ranking was used to obtain the list of the most popular web frameworks. Then, the web frameworks were used to conduct two experiments that investigated the influence of the web framework's type on the performance of ROA based applications. Next, consultations with software architects were arranged in order to find a method for approximating the performance of the overall application. Finally, the guidelines were prepared based on the consultations and the results of the experiments. Three blocking and non-blocking, highly ranked, JVM based web frameworks were selected. The first experiment showed that the non-blocking web frameworks can provide performance up to 2.5 times higher than blocking web frameworks in ROA based applications. The experiment performed on an existing application showed an average 27% performance improvement after the migration. The elaborated guidelines successfully convinced the company that provided the application for testing to conduct the migration in the production environment. The experiment results proved that the migration from blocking to non-blocking web frameworks increases the performance of a web application. The prepared guidelines can help software architects to decide whether it is worth migrating. However, the guidelines are context-dependent and further investigation is needed to make them more general.
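
    The blocking versus non-blocking distinction the thesis builds on can be illustrated, in Python rather than a JVM framework, by timing many slow simulated I/O calls handled sequentially by one blocked thread versus concurrently by one event loop. The delays and request counts below are arbitrary.

        # Blocking vs non-blocking handling of slow I/O-bound "requests".
        import asyncio
        import time

        def blocking_handler():
            time.sleep(0.1)           # e.g. waiting on a downstream REST call

        async def non_blocking_handler():
            await asyncio.sleep(0.1)  # the event loop serves other requests meanwhile

        def serve_blocking(n):
            start = time.perf_counter()
            for _ in range(n):        # one thread, requests processed strictly one by one
                blocking_handler()
            return time.perf_counter() - start

        async def serve_non_blocking(n):
            start = time.perf_counter()
            await asyncio.gather(*(non_blocking_handler() for _ in range(n)))
            return time.perf_counter() - start

        print("blocking, 50 requests:     %.2f s" % serve_blocking(50))
        print("non-blocking, 50 requests: %.2f s" % asyncio.run(serve_non_blocking(50)))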

    Fulltekst (pdf)
    FULLTEXT01
  • 48.
    Boddapati, Venkatesh
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Classifying Environmental Sounds with Image Networks, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hp. Oppgave
    Abstract [en]

    Context. Environmental Sound Recognition, unlike Speech Recognition, is an area that is still in the developing stages with respect to using Deep Learning methods. Sound can be converted into images by extracting spectrograms and the like. Object Recognition from images using deep Convolutional Neural Networks is a currently developing area holding high promise. The same technique has been studied and applied, but on image representations of sound.
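
    As a rough illustration of the sound-to-image step, the sketch below computes a log-mel spectrogram with librosa and saves it as an image a CNN could consume; the file name and parameter values are placeholders, not the thesis' exact pre-processing settings.

        # Turning an audio clip into a spectrogram image.
        import numpy as np
        import librosa
        import matplotlib.pyplot as plt

        y, sr = librosa.load("dog_bark.wav", sr=16000)             # resample to 16 kHz
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                             hop_length=512, n_mels=128)
        mel_db = librosa.power_to_db(mel, ref=np.max)              # log scale

        plt.imsave("dog_bark_spectrogram.png", mel_db, origin="lower", cmap="magma")
        print("image shape (mel bands x frames):", mel_db.shape)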

    Objectives. In this study, we investigate the best possible accuracy for a sound classification task using existing deep Convolutional Neural Networks by comparing the data pre-processing parameters. Also, a novel method of combining different features into a single image is proposed and its effect tested. Lastly, the performance of an existing network that fuses Convolutional and Recurrent Neural architectures is tested on the selected datasets.

    Methods. Experiments were conducted to analyze the effects of data pre-processing parameters on the best possible accuracy with two CNNs. An experiment was also conducted to determine whether the proposed method of feature combination is beneficial or not. Finally, an experiment was conducted to test the performance of a combined network.

    Results. GoogLeNet had the highest classification accuracy of 73% on the 50-class dataset and 90-93% on the 10-class datasets. The sampling rate and frame length values of the respective datasets which contributed to the high scores are 16 kHz with 40 ms and 8 kHz with 50 ms, respectively. The proposed combination of features does not improve the classification accuracy. The fused CRNN network could not achieve high accuracy on the selected datasets.

    Conclusions. It is concluded that deep networks designed for object recognition can be successfully used to classify environmental sounds, and the pre-processing parameter values for achieving the best accuracy were determined. The novel method of feature combination does not significantly improve the accuracy when compared to spectrograms alone. The fused network, which learns the spatial and temporal features from spectral images, performs poorly in the classification task when compared to the convolutional network alone.

    Fulltekst (pdf)
    fulltext
  • 49.
    Boddapati, Venkatesh
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Petef, Andrej
    Sony Mobile Communications AB, SWE.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Classifying environmental sounds using image recognition networks, 2017. Inngår i: Procedia Computer Science / [ed] Toro C., Hicks Y., Howlett R.J., Zanni-Merk C., Frydman C., Jain L.C., Elsevier B.V., 2017, Vol. 112, s. 2048-2056. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Automatic classification of environmental sounds, such as dog barking and glass breaking, is becoming increasingly interesting, especially for mobile devices. Most mobile devices contain both cameras and microphones, and companies that develop mobile devices would like to provide functionality for classifying both videos/images and sounds. In order to reduce the development costs one would like to use the same technology for both of these classification tasks. One way of achieving this is to represent environmental sounds as images, and use an image classification neural network when classifying images as well as sounds. In this paper we consider the classification accuracy for different image representations (Spectrogram, MFCC, and CRP) of environmental sounds. We evaluate the accuracy for environmental sounds in three publicly available datasets, using two well-known convolutional deep neural networks for image recognition (AlexNet and GoogLeNet). Our experiments show that we obtain good classification accuracy for the three datasets. © 2017 The Author(s).
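
    To make the image-network reuse concrete, the sketch below runs a pretrained GoogLeNet from torchvision on a spectrogram image; the final layer is swapped for the sound classes and would still need fine-tuning. The toolchain, image path and class count are assumptions for illustration, not the authors' setup.

        # Feeding a spectrogram image to a pretrained image recognition network.
        import torch
        from torchvision import models, transforms
        from PIL import Image

        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        model = models.googlenet(weights="IMAGENET1K_V1")       # torchvision >= 0.13 weights API
        model.fc = torch.nn.Linear(model.fc.in_features, 10)    # e.g. 10 environmental sound classes
        model.eval()                                            # fine-tuning on sounds still needed

        img = Image.open("dog_bark_spectrogram.png").convert("RGB")
        with torch.no_grad():
            logits = model(preprocess(img).unsqueeze(0))        # shape: (1, 10)
        print(logits.argmax(dim=1))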

  • 50.
    Boeva, Veselka
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Angelova, Milena
    Technical University Sofia, BUL.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Rosander, Oliver
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Tsiporkova, Elena
    Collective Center for the Belgian Technological Industry, BEL.
    Evolutionary clustering techniques for expertise mining scenarios, 2018. Inngår i: ICAART 2018 - Proceedings of the 10th International Conference on Agents and Artificial Intelligence, Volume 2 / [ed] van den Herik J., Rocha A.P., SciTePress, 2018, Vol. 2, s. 523-530. Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The problem addressed in this article concerns the development of evolutionary clustering techniques that can be applied to adapt the existing clustering solution to a clustering of newly collected data elements. We are interested in clustering approaches that are specially suited for adapting clustering solutions in the expertise retrieval domain. This interest is inspired by practical applications such as expertise retrieval systems where the information available in the system database is periodically updated by extracting new data. The experts available in the system database are usually partitioned into a number of disjoint subject categories. It is becoming impractical to re-cluster this large volume of available information. Therefore, the objective is to update the existing expert partitioning by the clustering produced on the newly extracted experts. Three different evolutionary clustering techniques are considered to be suitable for this scenario. The proposed techniques are initially evaluated by applying the algorithms on data extracted from the PubMed repository. Copyright © 2018 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.
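
    The scenario itself, adapting an existing partition when a batch of newly extracted experts arrives, can be sketched with a generic incremental clustering step as below; this only illustrates the setting and is not one of the three evolutionary techniques evaluated in the paper. The data is synthetic.

        # Adapting an existing clustering to newly collected data, incrementally.
        import numpy as np
        from sklearn.cluster import MiniBatchKMeans

        rng = np.random.default_rng(0)
        existing_profiles = rng.normal(size=(500, 20))   # e.g. term vectors of known experts

        model = MiniBatchKMeans(n_clusters=6, random_state=0)
        model.partial_fit(existing_profiles)             # clustering of the existing experts

        # Later: a periodic update arrives with newly extracted experts. Instead of
        # re-clustering everything, the existing solution is nudged by the new batch.
        new_profiles = rng.normal(loc=0.3, size=(40, 20))
        model.partial_fit(new_profiles)

        print("cluster assignments of the new experts:", np.bincount(model.predict(new_profiles)))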
