Search results 101-150 of 435
  • 101.
    Delgado, Sergio Mellado
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Velasco, Alberto Díaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Indoor Positioning using the Android Platform, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    In recent years, there has been a great increase in the development of wireless technologies and location services, and numerous projects in the location field have arisen. In addition, with the appearance of the open Android operating system, wireless technologies are being developed faster than ever. This project approaches the design and development of a system that combines wireless, location and Android technologies in the implementation of an indoor positioning system. As a result, an Android application has been obtained which detects the position of a phone in a simple and useful way. The application is based on Android's WifiManager API. It combines the data stored in a SQL database with the Wi-Fi data received at any given time; the position of the user is then determined with the algorithm that has been implemented. This application is able to obtain the position of any person who is inside a building with Wi-Fi coverage, and display it on the screen of any device running the Android operating system. Besides the estimation of the position, the system displays a map that helps the user see in which quadrant of the room they are positioned in real time. The system has been designed with a simple interface so that people without technical knowledge can use it. Finally, several tests and simulations of the system have been carried out to assess its operation and accuracy. The performance of the system has been verified in two different places, and changes have been made in the Java code to improve its precision and effectiveness. The tests showed that the placement of the access points (APs) and the configuration of the wireless network are important factors that should be taken into account to avoid interference and errors in the estimation of the position as much as possible.

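    The abstract does not spell out the matching algorithm, but Wi-Fi fingerprinting of this kind is typically a nearest-neighbour search in RSSI space. A minimal sketch under that assumption, with an invented radio map keyed by room quadrant (the thesis stores this data in a SQL database):

```python
import math

# Hypothetical offline radio map: position label -> {AP BSSID: mean RSSI in dBm}.
# The thesis keeps this in SQL; a dict stands in for it here.
RADIO_MAP = {
    "room1_q1": {"aa:bb:cc:00:00:01": -42, "aa:bb:cc:00:00:02": -67},
    "room1_q2": {"aa:bb:cc:00:00:01": -55, "aa:bb:cc:00:00:02": -49},
}

def estimate_position(scan: dict[str, int], default_rssi: int = -100) -> str:
    """Return the radio-map label closest to the live scan in RSSI space."""
    def distance(fingerprint: dict[str, int]) -> float:
        aps = set(scan) | set(fingerprint)
        return math.sqrt(sum(
            (scan.get(ap, default_rssi) - fingerprint.get(ap, default_rssi)) ** 2
            for ap in aps))
    return min(RADIO_MAP, key=lambda label: distance(RADIO_MAP[label]))

print(estimate_position({"aa:bb:cc:00:00:01": -44, "aa:bb:cc:00:00:02": -65}))
```
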
  • 102.
    Demir, Muhammed Fatih
    et al.
    Karatay Üniversitesi, TUR.
    Cankirli, Aysenur
    Karatay Üniversitesi, TUR.
    Karabatak, Begum
    Turkcell, Nicosia, CYP.
    Yavariabdi, Amir
    Karatay Üniversitesi, TUR.
    Mendi, Engin
    Karatay Üniversitesi, TUR.
    Kusetogullari, Hüseyin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Real-Time Resistor Color Code Recognition using Image Processing in Mobile Devices, 2018. In: 9th International Conference on Intelligent Systems 2018: Theory, Research and Innovation in Applications, IS 2018 - Proceedings / [ed] Jardim-Goncalves, R; Mendonca, JP; Jotsov, V; Marques, M; Martins, J; Bierwolf, R, Institute of Electrical and Electronics Engineers Inc., 2018, pp. 26-30. Conference paper (Refereed).
    Abstract [en]

    This paper proposes a real-time video analysis algorithm to read the resistance value of a resistor using a color recognition technique. To achieve this, firstly, nonlinear filtering is applied to the input video frame to smooth intensity variations and remove impulse noise. After that, a photometric invariants technique is employed to transfer the video frame from RGB color space to Hue-Saturation-Value (HSV) color space, which decreases the sensitivity of the proposed method to illumination changes. Next, a region of interest is defined to automatically detect the resistor's colors, and then a Euclidean distance-based clustering strategy is employed to recognize the color bars. The proposed method provides a wide range of color classification, covering twelve colors. In addition, it requires relatively little computational time, which makes it suitable for real-time mobile video applications. The experiments are performed on a variety of test videos, and the results show that the proposed method has a low error rate compared to other resistor color code recognition mobile applications. © 2018 IEEE.

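    As a rough illustration of the HSV-plus-Euclidean-distance step described above, here is a standard-library sketch; the reference HSV values and the four-colour palette are invented, not the paper's twelve-colour table:

```python
import colorsys

# Illustrative reference colors for a few resistor bands, in HSV with all
# components in [0, 1]; the paper's twelve-color palette would extend this.
REFERENCE = {
    "red":    (0.00, 0.90, 0.80),
    "orange": (0.08, 0.90, 0.90),
    "yellow": (0.15, 0.90, 0.95),
    "brown":  (0.08, 0.70, 0.40),
}

def classify_band(r: int, g: int, b: int) -> str:
    """Map an RGB sample from the region of interest to its nearest reference color."""
    hsv = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    def dist(ref):  # squared Euclidean distance in HSV space
        return sum((x - y) ** 2 for x, y in zip(hsv, ref))
    return min(REFERENCE, key=lambda name: dist(REFERENCE[name]))

print(classify_band(200, 40, 30))  # -> 'red'
```
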
  • 103.
    Devagiri, Vishnu Manasa
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Splicing Forgery Detection and the Impact of Image Resolution, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: The usage of digital images has risen sharply in recent years. Digital images are used in many areas, such as medicine and warfare. As the images are used to make many important decisions, it is necessary to know whether the images are clean or forged. This thesis considers the area of splicing forgery, and also analyzes the impact of low-resolution images on the considered algorithms.

    Objectives: Through this thesis, we try to improve the detection rate of splicing forgery detection. We also examine how the examined splicing forgery detection algorithm performs on low-resolution images and with the considered classification algorithms (classifiers).

    Methods: The research methods used in this research are Implementation and Experimentation. Implementation was used to answer the first research question i.e., to improve the detection rate in splicing forgery. Experimentation was used to answer the second research question. The results of the experiment were analyzed using statistical analysis to find out how the examined algorithm works on different image resolutions and on the considered classifiers.

    Results: A one-tailed Wilcoxon signed-rank test was conducted to compare which algorithm performs better. The T+ value obtained was less than T0, so the null hypothesis was rejected and the alternative hypothesis, which states that Algorithm 2 (our enhanced version of the algorithm) performs better than Algorithm 1 (the original algorithm), was accepted. Experiments were conducted and the accuracy of the algorithms in the different cases was noted; ROC curves were plotted to obtain the AUC parameter. The accuracy and AUC parameters were used to determine the performance of the algorithms.

    Conclusions: After the results were analyzed using statistical analysis, we came to the conclusion that Algorithm 2 performs better than Algorithm 1 in detecting forged images. It was also observed that Algorithm 1 improves its performance on low-resolution images when trained on original images and tested on images of different resolutions, whereas Algorithm 2 improves its performance when trained and tested on images of the same resolution. There was not much variance in the performance of either algorithm on images of different resolutions. Regarding the classifiers, Algorithm 1 performs best with linear SVM, whereas Algorithm 2 performs best with the simple tree classifier.

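    The one-tailed Wilcoxon signed-rank test used in the analysis is available in SciPy; a sketch with invented accuracy pairs (the real inputs would be the per-case results of Algorithm 1 and Algorithm 2):

```python
from scipy.stats import wilcoxon

# Hypothetical paired detection accuracies for the two algorithms; the real
# values would come from the experiment runs described above.
acc_algorithm1 = [0.71, 0.68, 0.74, 0.70, 0.66, 0.73, 0.69, 0.72]
acc_algorithm2 = [0.78, 0.74, 0.79, 0.75, 0.70, 0.80, 0.73, 0.77]

# One-tailed test: H1 says Algorithm 2 scores higher than Algorithm 1.
stat, p_value = wilcoxon(acc_algorithm2, acc_algorithm1, alternative="greater")
print(f"test statistic = {stat}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: Algorithm 2 performs significantly better.")
```
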
  • 104.
    Devagiri, Vishnu Manasa
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Cheddad, Abbas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Splicing Forgery Detection and the Impact of Image Resolution, 2017. In: Proceedings of the 9th International Conference on Electronics, Computers and Artificial Intelligence - ECAI 2017, IEEE, 2017. Conference paper (Refereed).
    Abstract [en]

    With the development of the Internet and the increase in online storage space, there has been an explosion in the volume of videos and images circulating online. An important part of digital forensics work is to scrutinise some of these images in order to make important decisions. Digital tampering of images can impede the reliability of these decisions. Through this paper we attempt to improve the detection rate of splicing forgery. We also examine how well the examined splicing forgery detection algorithm works on low-resolution images. The aim of this paper is to enhance the accuracy of an existing algorithm. A one-tailed Wilcoxon signed-rank test was utilised to compare the performance of the different algorithms.

  • 105.
    Diyar, Jamal
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Post-Pruning of Random Forests, 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract (translated from Swedish)

    Context. Ensemble methods continue to attract attention in machine learning. Since machine learning techniques that generate a single classifier or predictor have shown signs of limited capability in some contexts, ensemble methods have emerged as alternative methods for achieving better predictive performance. One of the most interesting and effective ensemble algorithms introduced in recent years is Random Forests. To ensure that Random Forests achieves high predictive accuracy, a large number of trees usually needs to be used. The result of using a larger number of trees to increase predictive accuracy is a complex model that can be difficult to interpret or analyze. The large number of trees also places higher demands on both storage space and computing power.

    Objectives. This thesis explores the possibility of automatically simplifying models generated by Random Forests in order to reduce model size and increase interpretability while preserving or improving predictive accuracy. The purpose of the thesis is twofold. We first compare and empirically evaluate different post-pruning techniques. The second part of the thesis examines the relationship between predictive accuracy and model interpretability.

    Methods. The primary research method used to conduct this study is experimentation. All pruning techniques are implemented in Python. Five different datasets were used to train, evaluate and validate the models.

    Results. There is no significant difference in predictive performance between the compared techniques, and none of the examined pruning techniques is superior in every respect. The experiments also showed a trade-off between interpretability and accuracy, at least for the studied configurations: a positive change in the model's interpretability is accompanied by a negative change in its accuracy.

    Conclusions. It is possible to reduce the size of a complex Random Forests model while maintaining or improving predictive accuracy. Moreover, the choice of pruning technique depends on the application area and the amount of training data available. Finally, significantly simplified models may be less accurate, but they tend to be perceived as more understandable.

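    The thesis compares several post-pruning techniques; as a generic illustration of the idea (not necessarily one of the techniques studied), one can rank the trees of a trained scikit-learn forest on a validation set and keep only the best few:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank the individual trees by validation accuracy and keep the best 20.
scores = [tree.score(X_val, y_val) for tree in forest.estimators_]
best = np.argsort(scores)[::-1][:20]
pruned = [forest.estimators_[i] for i in best]

def predict_majority(trees, X):
    votes = np.stack([t.predict(X) for t in trees])
    return np.round(votes.mean(axis=0))  # majority vote for binary labels

full_acc = forest.score(X_val, y_val)
pruned_acc = (predict_majority(pruned, X_val) == y_val).mean()
print(f"200 trees: {full_acc:.3f}  ->  20 trees: {pruned_acc:.3f}")
```
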
  • 106.
    Donthula, Sushmitha
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    INFLUENCE OF DESIGN ELEMENTS IN MOBILE APPLICATIONS ON USER EXPERIENCE OF ELDERLY PEOPLE: An Experiment approach, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Technology in the field of health care has taken a step forward in making everyday health maintenance easy. With the gradual increase in the elderly population, it is important to provide them with the facilities gained from the use of technology. However, the elderly are observed to be reluctant to use new technology such as mobile applications. In this thesis, an effort is made to overcome this barrier by studying both the user experience of the elderly and the user interface design of an m-health application, and by analyzing the relation between them.

    Objectives: The thesis focuses on the user interface design elements responsible for an increase in the user experience of the elderly, to create a base for mobile application developers to design m-health applications with improved usability.

    Methods: A quasi-experiment is conducted to measure user experience with a selected sample from the elderly population. Data for the experiment is collected through interviews with the selected sample.

    Results: The user experience of the elderly is analyzed with the original Glucosio application and with a prototype of the Glucosio application. The user experience in the two cases is compared, and a conclusion about the relation between user experience and the user interface design of the m-health application is drawn.

    Conclusions: From the analysis, we can conclude that the combined user interface design of an m-health application, when designed according to the interests of elderly people, can increase the user experience of the elderly while using the application. Besides, it increases the usability of the application, so that the elderly population benefits from advanced mobile technologies for their health promotion.

  • 107.
    Duravkin, Ievgen
    et al.
    Kharkiv Natl Univ Radioelect, UKR.
    Loktionova, Anastasiya
    Kharkiv Natl Univ Radioelect, UKR.
    Carlsson, Anders
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Method of Slow-Attack Detection, 2014. In: 2014 First International Scientific-Practical Conference Problems of Infocommunications Science and Technology (PIC S&T), IEEE, 2014, pp. 171-172. Conference paper (Refereed).
    Abstract [en]

    An analysis of how low-intensity HTTP attacks are carried out was performed, and scenarios of the Slowloris, Slow POST and Slow READ attacks were described. Features distinguishing this type of attack from low-level "denial of service" attacks were identified: they do not require a large amount of resources from the attacking machine, and they are difficult to detect, since their parameters are similar to legitimate traffic. For each type of attack, the characteristic features were highlighted, along with the HTTP-request parameters that permit detection of such attacks. Finally, the mathematical tools for building models of systems that detect these types of attacks on the basis of the obtained parameters were analyzed.

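    A minimal sketch of one detection idea the paper's parameters point at: timing how slowly a client completes its HTTP request headers. All thresholds here are invented, not taken from the paper:

```python
import time

# Invented thresholds for a Slowloris-style check: legitimate clients finish
# their request headers quickly; slow-attack clients trickle bytes for minutes.
MAX_HEADER_TIME = 10.0    # seconds allowed to complete the request headers
MIN_BYTES_PER_SEC = 50.0  # sustained rates below this are suspicious
GRACE_PERIOD = 5.0        # do not judge the rate before this many seconds

class HeaderTimer:
    """Track one client connection while its HTTP headers are being received."""

    def __init__(self):
        self.started = time.monotonic()
        self.received = 0
        self.complete = False

    def feed(self, chunk: bytes) -> None:
        self.received += len(chunk)
        if b"\r\n\r\n" in chunk:          # a blank line ends the HTTP headers
            self.complete = True

    def suspicious(self) -> bool:
        if self.complete:
            return False
        elapsed = time.monotonic() - self.started
        rate = self.received / elapsed if elapsed > 0 else float("inf")
        return elapsed > MAX_HEADER_TIME or (elapsed > GRACE_PERIOD
                                             and rate < MIN_BYTES_PER_SEC)
```
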
  • 108.
    Edbro, Oskar
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Hansson, Annika
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Identifying high risk targets in a corporate multi-user network, 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
  • 109.
    Ekramian, Elnaz
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Automation of the test methods for packet loss and packet loss duration in the Ixia platform, 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Today's technology has a strong tendency towards automation. The tremendous improvement of science in recent years has brought new ideas for accelerating scientific processes, and such acceleration is inseparable from automation. This thesis deals with the automation of the manual tests used to analyze packet loss and packet loss duration in a network. These two metrics were chosen based on their importance in communication technology, and on the weak points found in the manual processes. The experiment is done on the Ixia platform, which was an appropriate test bed for designing an automation framework. After comprehensive research on networks and communication, packet loss and packet loss duration were chosen as two important parameters that are tested several times per day; based on the properties relevant for automation, these two metrics had priority over other metrics. A framework was created that corresponds to the manual test process. For this purpose, the Tcl programming language is used. The script written in this high-level language can communicate with the graphical user interface, configure all connected devices, measure the mentioned metrics and ultimately save the results in a CSV file. Finally, the main objective of the project was reached: to show how positively an automatic method can affect the quality of testing in terms of accuracy, time and manpower savings.

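    The thesis automates this in Tcl against the Ixia tooling; the metric arithmetic and the CSV step can be sketched in Python as follows, with invented counter values and the common approximation that loss duration is lost frames divided by the constant transmit rate (an assumption, not necessarily the thesis's exact formula):

```python
import csv

def packet_loss_pct(tx_frames: int, rx_frames: int) -> float:
    """Packet loss as a percentage of transmitted frames."""
    return 100.0 * (tx_frames - rx_frames) / tx_frames

def loss_duration_s(tx_frames: int, rx_frames: int, tx_rate_fps: float) -> float:
    """Approximate loss duration: lost frames divided by the transmit rate."""
    return (tx_frames - rx_frames) / tx_rate_fps

results = [
    # (test name, tx frames, rx frames, tx rate in frames/s) - invented data
    ("failover_1", 1_000_000, 998_734, 10_000.0),
    ("failover_2", 1_000_000, 995_120, 10_000.0),
]

with open("loss_report.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["test", "loss_pct", "loss_duration_s"])
    for name, tx, rx, rate in results:
        writer.writerow([name, f"{packet_loss_pct(tx, rx):.4f}",
                         f"{loss_duration_s(tx, rx, rate):.3f}"])
```
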
  • 110.
    Elahi, Haroon
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Boosted-Window Ensemble, 2014. Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Context. The problem of obtaining predictions from stream data involves training on the labeled instances and suggesting class values for the unseen stream instances. The nature of data-stream environments makes this task complicated. The large number of instances, the possibility of changes in the data distribution, and the presence of noise and drifting concepts are just some of the factors that add complexity to the problem. Various supervised-learning algorithms have been designed by putting together efficient data-sampling, ensemble-learning and incremental-learning methods. The performance of such an algorithm depends on the chosen methods. This leaves an opportunity to design new supervised-learning algorithms by using different combinations of constituent methods.

    Objectives. This thesis work proposes a fast and accurate supervised-learning algorithm for performing predictions on data-streams. The algorithm, called Boosted-Window Ensemble (BWE), is built using the mixture-of-experts technique. BWE uses a sliding window, online boosting and incremental learning for data-sampling, ensemble-learning, and maintaining a consistent state with the current stream data, respectively. In this regard, a new sliding window method is introduced, which uses partial updates for sliding the window on the data-stream and is called Partially-Updating Sliding Window (PUSW). An investigation is carried out comparing two variants of the sliding window and three different ensemble-learning methods in order to choose the superior methods.

    Methods. The thesis uses an experimentation approach for evaluating the Boosted-Window Ensemble (BWE). CPU-time and prediction accuracy are used as performance indicators, where CPU-time is the execution time in seconds. The benchmark algorithms include Accuracy-Updated Ensemble1 (AUE1), Accuracy-Updated Ensemble2 (AUE2), and Accuracy-Weighted Ensemble (AWE). The experiments use nine synthetic and five real-world datasets for generating performance estimates. The asymptotic Friedman test and the Wilcoxon signed-rank test are used for hypothesis testing, and the Wilcoxon-Nemenyi-McDonald-Thompson test is used for post-hoc analysis.

    Results. The hypothesis testing suggests that: 1) for both the synthetic and real-world datasets, the Boosted-Window Ensemble (BWE) has significantly lower CPU-time values than two of the benchmark algorithms (Accuracy-Updated Ensemble1 (AUE1) and Accuracy-Weighted Ensemble (AWE)); 2) BWE returns similar prediction accuracy to AUE1 and AWE for the synthetic datasets; 3) BWE returns similar prediction accuracy to all three benchmark algorithms for the real-world datasets.

    Conclusions. The experimental results demonstrate that the proposed algorithm can be as accurate as the state-of-the-art benchmark algorithms while obtaining predictions from stream data. The results further show that the use of the Partially-Updating Sliding Window results in lower CPU-time for BWE compared with the chunk-based sliding window method used in AUE1, AUE2, and AWE.

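    The abstract does not give the exact update rule of PUSW; the following sketch only captures the general idea of a window that slides by a partial step instead of a whole chunk, under assumptions of our own:

```python
from collections import deque

class PartiallyUpdatingSlidingWindow:
    """Sketch of a window that slides by replacing only a fraction of its
    contents at a time, rather than discarding a whole chunk. The exact PUSW
    update rule is not given in the abstract; this is an assumption."""

    def __init__(self, size: int, update_fraction: float = 0.25):
        self.size = size
        self.step = max(1, int(size * update_fraction))
        self.window = deque(maxlen=size)   # oldest items fall out automatically
        self.buffer = []

    def add(self, instance) -> bool:
        """Buffer one stream instance; return True when the window just slid."""
        self.buffer.append(instance)
        if len(self.buffer) >= self.step:
            # Partial update: only `step` oldest items are displaced.
            self.window.extend(self.buffer)
            self.buffer.clear()
            return True   # signal: time to update the ensemble members
        return False
```
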
  • 111.
    Elhorr, Suzanne
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    The three dimensional relation between user system experience, user satisfaction, and user acceptance, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. The subject presented in this research is the fact that people resist IT induced change and want to maintain their current situation when implementing a new information system.  If no strategy is set to deal with it, resistance to change leads to Information System failure.

    Objectives. In this study, the author is investigating how to anticipate and handle resistance to change when implementing a new information system in order to succeed. This is followed by introducing the factors affecting user satisfaction which in turn affects user acceptance.

    Methods. The data collection involves interviews in order to assemble appropriate, justifiable and relevant data, in addition to surveys to measure and validate the hypotheses in this thesis. The banking sector in Lebanon was selected as the source of data collection.

    Results. Three factors (perceived ease of use (PEOU), perceived usefulness (PU), and user involvement) interact to satisfy the user and hence lead the user to accept change.

    Conclusions. Based on the studies conducted so far on this topic, there exists an indirect relationship between the three factors discussed in this thesis, user satisfaction, and user acceptance. The more the user finds the system easy to use (a simple way of working, with less effort) and useful (the extent to which the person's work is improved), and the more he or she is involved, the more satisfied the user is, and hence the more willing to accept the change, which leads to system success.

  • 112.
    Elmir, Ahmad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    PaySim Financial Simulator, 2016. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The lack of legitimate datasets on mobile money transactions to perform research on in the domain of fraud detection is a big problem today in the scientific community. Part of the problem is the intrinsic private nature of mobile transactions; not much information can be exploited. This will leave the researchers with the burden of first harnessing the dataset before performing the actual research on it. The dataset corresponds to the set of data on which the research is to be performed. This thesis discusses a solution to such a problem, namely the PaySim simulator. PaySim is a financial simulator that simulates mobile money transactions based on an original dataset. We present a solution to ultimately yield the possibility to simulate mobile money transactions in such a way that they become similar to the original dataset. The similarity, or congruity, will be measured by calculating the error rate between the synthetic dataset and the original dataset. With technology frameworks such as "Agent Based" simulation techniques, and the application of mathematical statistics, it can be demonstrated that the synthetic data is as prudent as the original dataset. The aim of this thesis is to demonstrate with statistical models that PaySim can be used as a tool for the intents of financial simulations.

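    A sketch of the error-rate comparison between original and synthetic data described above, using invented per-type transaction counts (the type names follow PaySim's conventions, but the numbers are made up):

```python
from collections import Counter

# Invented aggregates: transaction counts per type, original vs synthetic.
original  = Counter({"CASH_IN": 140_000, "CASH_OUT": 220_000, "PAYMENT": 210_000})
synthetic = Counter({"CASH_IN": 133_500, "CASH_OUT": 231_000, "PAYMENT": 204_000})

def error_rate(orig: Counter, synth: Counter) -> float:
    """Relative absolute error of the synthetic counts against the original."""
    total = sum(orig.values())
    keys = set(orig) | set(synth)
    return sum(abs(orig[k] - synth[k]) for k in keys) / total

print(f"error rate = {error_rate(original, synthetic):.3%}")
```
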
  • 113.
    Engqvist, Markus
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Mori Soto, Karen
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Defining a Process for Statistical Analysis of Vulnerability Management using KPI, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 300 HE credits. Student thesis.
    Abstract [en]

    In today's connected society, with rapidly advancing technology, there is an interest in offering technical services in our day-to-day life. Since these services are used to handle sensitive information and money, there are demands for increased information security. Sometimes errors occur in these systems that put the security of both parties at risk. These systems should be secured to maintain secure operations even when vulnerabilities occur.

    Outpost24 is one company that specializes in vulnerability management. By using their scanning tool OUTSCAN™, Outpost24 can identify vulnerabilities in network components, such as firewalls, switches, printers, devices, servers, workstations and other computer systems. These results are then stored in a database. Within this study, the authors work together with Outpost24 on this data. The goal is to define a process for generating vulnerability reports for the company. The process will perform a statistical analysis of the data and present the findings.

    To solve the task, a report was created and the process was documented along the way. The work began with a background study of Key Performance Indicators (KPIs), in which the most common security KPIs were identified from similar works. A tool was also developed to help with the analysis. This resulted in a statistical analysis using Outpost24's dataset. By presenting the data formatted by the KPIs, trends could be identified. This showed an overall trend of increasing vulnerabilities and the necessity for organizations to spend resources on security. The KPIs offer other possibilities, such as creating a baseline for security evaluation using data from one year. In the future, one could use the KPIs to compare how the security situation has changed.

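    One common security KPI of the kind surveyed, vulnerabilities per scanned host per year, can be computed from raw findings like this; the findings themselves are invented, not Outpost24's data:

```python
from collections import defaultdict
from datetime import date

# Invented scan findings: (scan date, host, severity) tuples; real input
# would come from the OUTSCAN database described above.
findings = [
    (date(2016, 3, 1), "10.0.0.5", "high"),
    (date(2016, 3, 1), "10.0.0.9", "medium"),
    (date(2017, 3, 1), "10.0.0.5", "high"),
    (date(2017, 3, 1), "10.0.0.7", "high"),
    (date(2017, 3, 1), "10.0.0.9", "low"),
]

# KPI: vulnerabilities per scanned host, per year.
per_year = defaultdict(lambda: {"vulns": 0, "hosts": set()})
for day, host, severity in findings:
    per_year[day.year]["vulns"] += 1
    per_year[day.year]["hosts"].add(host)

for year in sorted(per_year):
    entry = per_year[year]
    print(year, round(entry["vulns"] / len(entry["hosts"]), 2), "vulns/host")
```
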
  • 114.
    Ericsson, Jimmy
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Säkerheten i Android-applikationers nätverkskommunikation [The security of Android applications' network communication], 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract (translated from Swedish)

    Context. More and more everyday tasks can now be carried out via smartphones. In step with this increasingly widespread use, the development of applications for the Android operating system is growing. A decisive factor is therefore the security aspect, above all concerning the data sent in communication with the internet (network communication).

    Aims. The goal and purpose of this thesis is to show how secure the network communication of various Android applications is today.

    Methods. The thesis uses two methods: an experimental part and an analysis part. The experimental method is used to perform a number of tests in an experiment. To carry out the investigation, an Android emulator is used and the traffic is intercepted while different applications are in use. The investigation covers ten applications tested on two different versions of the Android system, for a total of 20 tests. The project then moves on to an analysis part, in which the results of the tests are analyzed manually with a network traffic analysis tool.

    Results. The results of the experiment and the analysis show that there are differences in security between the applications, and also between the Android versions used.

    Conclusions. Some of the tested applications posed a security risk because they used insecure transfer protocols. A difference between the two Android versions was also discovered, concerning the TLS protocol.

    Keywords: smartphones, Android applications, network communication

  • 115.
    Bergenholtz, Erik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    False Alarm Reduction in Maritime Surveillance, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. A large portion of all transportation in the world consists of voyages over the sea. Systems such as the Automatic Identification System (AIS) have been developed to aid in the surveillance of maritime traffic and to help keep the number of accidents and illegal activities down. In recent years, a lot of time and effort has gone into automated surveillance of maritime traffic, with the purpose of finding and reporting behaviour deviating from what is considered normal. An issue with many of the present approaches is inaccuracy and the number of false positives that follow from it.

    Objectives. This study continues the work presented by Woxberg and Grahn in 2015. In their work they used quadtrees to improve upon the existing tool STRAND, created by Osekowska et al. STRAND utilizes potential fields to build a model of normal behaviour from received AIS data, which can then be used to detect anomalies in the traffic. The goal of this study is to further improve the system by adding statistical analysis to reduce the number of false positives detected by Grahn and Woxberg's implementation.

    Method. The method for reducing false positives proposed in this thesis uses the charge in overlapping potential fields to approximate a normal distribution of the charge in the area. If a charge is too similar to that of the overlapping potential fields, the detection is dismissed as a false positive. A series of experiments was run to find out which of the methods proposed by the thesis are best suited for this application.

    Results. The tested methods for estimating the normal distribution of a cell in the potential field, i.e. the unbiased formula for estimating the standard deviation and a version using Kalman filtering, both find as many of the confirmed anomalies as the base implementation, i.e. 9/12. Furthermore, both suggested methods reduce the number of false positives by 11.5% in comparison to the base implementation, bringing the rate of false positives down to 17.7%. However, there are indications that the unbiased method has more promise.

    Conclusion. Both proposed methods work as intended and perform equally well. There are, however, indications that the unbiased method may be better despite the test results, but a new, extended set of training data is needed to confirm or deny this. The two methods only work if the examined overlapping potential fields are independent of each other, which means that the methods cannot be applied to anomalies of the positional variety. Constructing a filter for these anomalies is left for future study.

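    A sketch of the proposed filter, assuming each detection carries its potential-field charge and the charges of the overlapping fields; the z-score threshold is an invented parameter:

```python
import statistics

def is_false_positive(charge: float, overlapping_charges: list[float],
                      z_threshold: float = 2.0) -> bool:
    """Dismiss a detection whose potential-field charge is too similar to the
    charges of the overlapping fields. The threshold value is an assumption."""
    if len(overlapping_charges) < 2:
        return False
    mean = statistics.mean(overlapping_charges)
    # Unbiased sample standard deviation, as in the thesis's first method.
    stdev = statistics.stdev(overlapping_charges)
    if stdev == 0:
        return charge == mean
    return abs(charge - mean) / stdev < z_threshold

print(is_false_positive(5.1, [4.8, 5.3, 5.0, 4.9]))  # True: looks normal
```
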
  • 116.
    Erlandsson, Fredrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Human Interactions on Online Social Media: Collecting and Analyzing Social Interaction Networks, 2018. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Online social media, such as Facebook, Twitter, and LinkedIn, provides users with services that enable them to interact both globally and instantly. The nature of social media interactions follows a constantly growing pattern that requires selection mechanisms to find and analyze interesting data. These interactions on social media can be modeled into interaction networks, which enable network-based and graph-based methods to model and understand users' behaviors on social media. These methods could also benefit the field of complex networks in terms of finding initial seeds in the information cascade model. This thesis aims to investigate how to efficiently collect user-generated content and interactions from online social media sites. A novel method for data collection, developed through exploratory research that included prototyping, is presented as part of the research results of this thesis.


    Analysis of social data requires data that covers all the interactions in a given domain, which has shown to be difficult to handle in previous work. An additional contribution from the research conducted is that a novel method of crawling that extracts all social interactions from Facebook is presented. Over the period of the last few years, we have collected 280 million posts from public pages on Facebook using this crawling method. The collected posts include 35 billion likes and 5 billion comments from 700 million users. The data collection is the largest research dataset of social interactions on Facebook, enabling further and more accurate research in the area of social network analysis.


    With the extracted data, it is possible to illustrate interactions between different users that do not necessarily have to be connected. Methods using the same data to identify and cluster different opinions in online communities have also been developed and evaluated. Furthermore, a proposed method is used and validated for finding appropriate seeds for information cascade analyses and for identifying influential users. Based upon the conducted research, it appears that the data mining approach association rule learning can be used successfully to identify influential users with high accuracy. In addition, the same method can also be used for identifying seeds in an information cascade setting, with no significant difference from other network-based methods. Finally, the privacy-related consequences of posting online are an important area for users to consider. Mitigating privacy risks therefore contributes to a secure environment, and methods to protect user privacy are presented.

  • 117.
    Erlandsson, Fredrik
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    On social interaction metrics: social network crawling based on interestingness, 2014. Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    With the high use of online social networks we are entering the era of big data. With limited resources, it is important to evaluate and prioritize interesting data. This thesis addresses the following aspects of social network analysis: efficient data collection, social interaction evaluation, and user privacy concerns. It is possible to collect data from online social networks via their open APIs; however, systematic and efficient collection of online social network data is still challenging. To improve the quality of the data collection process, prioritizing methods are statistically evaluated. Results suggest that the collection time can be reduced by up to 48% by prioritizing the collection of posts. Evaluation of social interactions also requires data that covers all the interactions in a given domain. This has previously been hard to do, but the proposed crawler is capable of extracting all social interactions from a given page. With the extracted data it is, for instance, possible to illustrate indirect interactions between different users that do not necessarily have to be connected. Methods using the same data to identify and cluster different opinions in online communities have been developed; these methods are evaluated with the tool Linguistic Inquiry and Word Count. The privacy of the content produced and of the users' private information provided on social networks is important to protect, and users must be aware of the consequences of posting in online social networks in terms of privacy. Methods to protect user privacy are presented. The proposed crawler has, over a period of 20 months, collected over 38 million posts from public pages on Facebook, covering 4 billion likes and 340 million comments from over 280 million users. The performed data collection yielded one of the largest research datasets of social interactions on Facebook today, enabling qualitative research in the form of social network analysis.

  • 118.
    Erlandsson, Fredrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boldt, Martin
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Johnson, Henric
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Privacy threats related to user profiling in online social networks, 2012. Conference paper (Refereed).
    Abstract [en]

    The popularity of Online Social Networks (OSNs) has increased the visibility of users' profiles and of the interactions performed between users. In this paper we structure different privacy threats related to OSNs and describe six different types of privacy threats. One of these threats, named public information harvesting, has not previously been documented, so we present it in further detail, together with the results from a proof-of-concept implementation of the threat. The basis of the attack is the gathering of user interactions from various open groups on Facebook, which are then transformed into a social interaction graph. Since the data gathered from the OSN originates from open groups, the attack could be executed by any third party connected to the Internet, independently of the users' privacy settings. In addition to presenting the different privacy threats, we also propose a range of different protection techniques.

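    Building the social interaction graph from harvested open-group data might look like the following networkx sketch, with invented author-commenter pairs:

```python
import networkx as nx

# Invented sample of harvested open-group activity: (post author, commenter).
interactions = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("alice", "bob"), ("dave", "alice"),
]

G = nx.Graph()
for author, commenter in interactions:
    if G.has_edge(author, commenter):
        G[author][commenter]["weight"] += 1   # repeated interaction: heavier tie
    else:
        G.add_edge(author, commenter, weight=1)

# Users become linked through interactions, even without being friends.
print(G.number_of_nodes(), "users,", G.number_of_edges(), "interaction edges")
print("strongest tie:", max(G.edges(data="weight"), key=lambda e: e[2]))
```
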
  • 119.
    Erlandsson, Fredrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Borg, Anton
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bródka, Piotr
    Wrocław University of Technolog, POL.
    Predicting User Participation in Social Media, 2016. In: Lecture Notes in Computer Science / [ed] Wierzbicki A., Brandes U., Schweitzer F., Pedreschi D., Springer, 2016, vol. 9564, pp. 126-135. Conference paper (Other academic).
    Abstract [en]

    Online social networking services like Facebook provide a popular way for users to participate in different communication groups and discuss relevant topics with each other. While users tend to have an impact on each other, it is important to better understand and ...

  • 120.
    Erlandsson, Fredrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bródka, Piotr
    Wrocław University of Science and Technology, POL.
    Boldt, Martin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Do We Really Need To Catch Them All?: A New User-Guided Social Media Crawling Method, 2017. In: Entropy, ISSN 1099-4300, E-ISSN 1099-4300, vol. 19, no. 12, article id 686. Article in journal (Refereed).
    Abstract [en]

    With the growing use of popular social media services like Facebook and Twitter it is hard to collect all content from the networks without access to the core infrastructure or paying for it. Thus, if all content cannot be collected, one must consider which data are of most importance. In this work we present a novel User-Guided Social Media Crawling method (USMC) that is able to collect data from social media, utilizing the wisdom of the crowd to decide the order in which user-generated content should be collected, to cover as many user interactions as possible. USMC is validated by crawling 160 Facebook public pages, containing 368 million users and 1.3 billion interactions, and it is compared with two other crawling methods. The results show that it is possible to cover approximately 75% of the interactions on a Facebook page by sampling just 20% of its posts, and at the same time reduce the crawling time by 53%. What is more, the social network constructed from the 20% sample has more than 75% of the users and edges compared to the social network created from all posts, and has a very similar degree distribution.

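    The core USMC idea, letting prior interaction counts decide the crawl order, can be illustrated with a toy post list; the numbers are invented and do not reproduce the paper's results:

```python
# Invented post list: (post id, number of interactions on the post).
posts = [("p1", 940), ("p2", 310), ("p3", 120), ("p4", 45), ("p5", 12),
         ("p6", 880), ("p7", 9), ("p8", 400), ("p9", 77), ("p10", 5)]

total = sum(n for _, n in posts)
sample_size = int(0.2 * len(posts))          # crawl only 20% of the posts

# USMC-style ordering: let prior popularity decide which posts to crawl first.
by_popularity = sorted(posts, key=lambda p: p[1], reverse=True)
sampled = by_popularity[:sample_size]
covered = sum(n for _, n in sampled)

print(f"crawling {sample_size}/{len(posts)} posts covers "
      f"{covered / total:.0%} of interactions")
```
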
  • 121.
    Erlandsson, Fredrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bródka, Piotr
    Wrocƚaw University of Technology, POL.
    Borg, Anton
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Seed selection for information cascade in multilayer networks, 2018. In: Studies in Computational Intelligence / [ed] Cherifi H., Cherifi C., Musolesi M., Karsai M., Springer-Verlag New York, 2018, vol. 689, pp. 426-436. Conference paper (Refereed).
    Abstract [en]

    Information spreading is an interesting field in the domain of online social media. In this work, we investigate how different seed selection strategies affect spreading processes simulated using the independent cascade model on eighteen multilayer social networks. Fifteen networks are built based on user interaction data extracted from Facebook public pages, and three of them are multilayer networks downloaded from a public repository (two of them being Twitter networks). The results indicate that various state-of-the-art seed selection strategies for single-layer networks, like K-Shell or VoteRank, do not perform so well on multilayer networks and are outperformed by Degree Centrality.

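    Degree-centrality seed selection, the strategy that came out on top, is straightforward; this sketch uses a toy single-layer graph, whereas the paper ranks nodes in multilayer networks (aggregating degree over layers before ranking is our assumption, not the paper's stated procedure):

```python
import networkx as nx

# Toy single-layer stand-in for one layer of a multilayer network.
G = nx.karate_club_graph()

def degree_seeds(graph: nx.Graph, k: int) -> list:
    """Pick the k highest-degree nodes as seeds for an independent cascade."""
    ranked = sorted(graph.degree, key=lambda pair: pair[1], reverse=True)
    return [node for node, _ in ranked[:k]]

print(degree_seeds(G, 5))
```
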
  • 122.
    Erlandsson, Fredrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Bródka, Piotr
    Wrocƚaw University of Technology, POL.
    Borg, Anton
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Finding Influential Users in Social Media Using Association Rule Learning, 2016. In: Entropy, ISSN 1099-4300, E-ISSN 1099-4300, vol. 18, no. 5. Article in journal (Refereed).
    Abstract [en]

    Influential users play an important role in online social networks, since users tend to have an impact on one another. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site depends on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out whether a topic will be interesting or not. In this article, we propose association rule learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified by association rule learning were compared to the results from Degree Centrality and PageRank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.

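    A bare-bones version of association rule learning over post participation: each post is treated as a "transaction" of users, and rules with high confidence link users who tend to appear together. The data and the confidence threshold are invented, and this is only the rule-mining step, not the paper's full influence ranking:

```python
from itertools import combinations
from collections import Counter

# Invented "transactions": the set of users participating in each post.
posts = [
    {"u1", "u2", "u3"}, {"u1", "u2"}, {"u2", "u3", "u4"},
    {"u1", "u2", "u3"}, {"u2", "u4"},
]

support = Counter()        # how many posts each user appears in
pair_support = Counter()   # how many posts each user pair co-occurs in
for participants in posts:
    for u in participants:
        support[u] += 1
    for a, b in combinations(sorted(participants), 2):
        pair_support[(a, b)] += 1

# Confidence of rule a -> b: how often b joins a post once a has joined.
for (a, b), n in pair_support.items():
    for antecedent, consequent in ((a, b), (b, a)):
        conf = n / support[antecedent]
        if conf >= 0.8:
            print(f"{antecedent} -> {consequent}  (confidence {conf:.2f})")
```
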
  • 123.
    Erlandsson, Fredrik
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Nia, Roozbeh
    Boldt, Martin
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Johnson, Henric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Wu, S. Felix
    Crawling Online Social Networks, 2015. In: Second European Network Intelligence Conference (ENIC 2015), IEEE Computer Society, 2015, pp. 9-16. Conference paper (Refereed).
    Abstract [en]

    Researchers put a tremendous amount of time and effort into crawling information from online social networks. With the variety and the vast amount of information shared on online social networks today, different crawlers have been designed to capture several types of information. We have developed a novel crawler called SINCE. This crawler differs significantly from other existing crawlers in terms of efficiency and crawling depth: it retrieves all interactions related to every single post. In addition, we are able to understand interaction dynamics, enabling support for making informed decisions on what content to re-crawl in order to get the most recent snapshot of interactions. Finally, we evaluate our crawler against other existing crawlers in terms of completeness and efficiency. Over the last few years we have crawled public communities on Facebook, resulting in over 500 million unique Facebook users, 50 million posts, 500 million comments and over 6 billion likes.

  • 124.
    Espinosa, Javier
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Clustering of Image Search Results to Support Historical Document Recognition, 2014. Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Context. Image searching in historical handwritten documents is a challenging problem in computer vision and pattern recognition. The number of documents that have been digitalized is increasing every day, and the task of finding occurrences of a selected sub-image in a collection of documents is of special interest to historians and genealogists.

    Objectives. This thesis develops a technique for image searching in historical documents, divided into three phases. First, the document is segmented into sub-images according to the words on it. These sub-images are described by a feature vector with measurable attributes of their content. Based on these vectors, a clustering algorithm computes the distance between vectors to decide which images match the one selected by the user.

    Methods. The research methodology is experimentation. A quasi-experiment is designed based on repeated measures over a single group of data. The image processing, feature selection, and clustering approach are the independent variables, whereas the accuracy measurements are the dependent variable. This design provides a measurement net based on a set of outcomes related to each other.

    Results. The statistical analysis is based on the F1 score to measure the accuracy of the experimental results. This test analyses the accuracy of the experiment with regard to the true positives, false positives, and false negatives detected. The average F-measure for the experiment conducted is F1 = 0.59, whereas the actual performance value of the method is a matching ratio of 66.4%.

    Conclusions. This thesis provides a starting point for developing a search engine for historical document collections based on pattern recognition. The main research findings are focused on image enhancement and segmentation for degraded documents, and on image matching based on feature definition and cluster analysis.

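    For reference, the F1 score used in the analysis is the harmonic mean of precision and recall; the counts below are invented and do not reproduce the reported F1 = 0.59:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts for one query word, purely to show the arithmetic.
print(round(f1_score(tp=83, fp=31, fn=42), 2))
```
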
  • 125.
    Fayyaz, Ali Raza
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Munir, Madiha
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Evaluation of PHP Frameworks (CakePHP and CodeIgniter) in relation to the Object-Relational Mapping, with respect to Load Testing, 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Information technology plays an important role in creating innovation in business. Due to the increasing demand for information technology, web development has become an important field. PHP is an open-source language that is widely used in web development. PHP is used to develop dynamic web pages and has the ability to connect to a database. PHP has some good features, i.e. cross-platform compatibility, scalability and efficient execution, and it is an open-source technology. These features make PHP a good choice for web development. However, the maintenance of an application becomes difficult and performance is considerably reduced if PHP is used without its frameworks. To resolve these issues, different frameworks have been introduced by web development communities on the internet. These frameworks are based on the Model-View-Controller design pattern, and they provide different common functionalities and classes in the form of helpers, components and plug-ins to reduce development time. Due to features like robustness, scalability, maintainability and performance, these frameworks are mostly used for web development in PHP, with performance being considered the most important factor.

    Objectives: The objective of this thesis is to compare and analyze the effect of the data abstraction layer (ORM) on the performance of two PHP frameworks, CakePHP and CodeIgniter. CakePHP has built-in support for object-relational mapping (ORM), whereas CodeIgniter has no built-in ORM support. We considered load testing and stress testing to measure the performance of these two frameworks.

    Methods: We performed an experiment to show the performance of the CakePHP (with ORM) and CodeIgniter (no ORM) frameworks. We developed two applications, one in each PHP framework, with the same scope and design, and measured the performance of these applications with respect to load testing using an automated testing tool. The results were obtained by testing the performance of both applications on local and live servers.

    Conclusions: After analyzing the results, we concluded that CodeIgniter is useful for small and medium-sized applications, whereas CakePHP is good for large and enterprise-level applications, as under stress conditions CakePHP performed better than CodeIgniter in both the local and the live environment.

  • 126.
    Felizardo, Katia
    et al.
    Federal University of Technology, BRA.
    De Souza, Erica
    Federal University of Technology, BRA.
    Falbo, Ricardo
    Federal University of Esp'rito Santo, BRA.
    Vijaykumar, Nandamudi
    Instituto Nacional de Pesquisas Espaciais, BRA.
    Mendes, Emilia
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Nakagawa, Elisayumi
    Universidade de Sao Paulo, BRA.
    Defining protocols of systematic literature reviews in software engineering: A survey, 2017. In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017 / [ed] Felderer, M; Olsson, HH; Skavhaug, A, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 202-209, article id 8051349. Conference paper (Refereed).
    Abstract [en]

    Context: Despite being defined during the first phase of the Systematic Literature Review (SLR) process, the protocol is usually refined when other phases are performed. Several researchers have reported their experiences in applying SLRs in Software Engineering (SE); however, there is still a lack of studies discussing the iterative nature of the protocol definition, especially how it should be perceived by researchers conducting SLRs. Objective: The main goal of this study is to perform a survey aiming to identify: (i) the perception of SE researchers related to protocol definition; (ii) the activities of the review process that typically lead to protocol refinements; and (iii) which protocol items are refined in those activities. Method: A survey was performed with 53 SE researchers. Results: Our results show that: (i) protocol definition and pilot testing are the two activities that most often lead to further protocol refinements; (ii) the data extraction form is the most modified item. Besides that, this study confirmed the iterative nature of the protocol definition. Conclusions: An iterative pilot test can facilitate refinements in the protocol. © 2017 IEEE.

  • 127.
    Fenn, Edward
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Fornling, Eric
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Mapping and identifying misplaced devices on a network by use of metadata, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract (translated from Swedish)

    Context. The placement of devices in a network has become a security issue for most companies. Since a misplaced device can compromise an entire network, and by extension a company, it is essential to keep track of what is placed where. Knowledge is the key to success, and knowing your network structure is crucial for making the network secure. Large networks can, however, be hard to keep track of if employees can add or remove devices, making it difficult for the administrator to stay constantly up to date on what is located where.

    Objectives. This study focuses on the creation of an analysis method for mapping a network based on metadata from the network. The analysis method is then implemented in a tool that automatically maps the network from the metadata selected in the analysis method. The motivation and goal of this study is to create a method that improves network mapping for the purpose of identifying misplaced devices, and to achieve a greater understanding of the impact misplaced devices can have on a network.

    Method. The metadata was analyzed by manually searching through the metadata that Outpost24 AB's vulnerability scanner collected while looking for vulnerabilities in a network. By analyzing the metadata, we could single out individual pieces that we considered necessary to identify a device's type. These attributes were then implemented in a probability function that determined the type of a device, based on the information in the metadata. The result from this probability function was then presented visually as a graph. An algorithm that emitted warnings when it found misconfigured subnets was then run against the results of the probability function.

    Results. The method proposed in this report was determined to be roughly 30,878 times faster than previous methods, i.e. manually searching through the metadata. However, the proposed method is not as exact: it has an identification rate of 80-93% of the devices on the network, and a correct identification rate of the device type for 95-98% of the identified devices. This is in contrast to the previous method, which had identification rates of 80-93% and 100%, respectively. The proposed method also identified 48.9% of all subnets as misconfigured.

    Conclusions. To summarize, the proposed method proves that it is possible to identify misplaced devices on a network from an analysis of the network's metadata. The proposed method is also considerably faster than previous methods, but needs further development to reach the same identification rate as previous methods. This work can be seen as a proof of concept regarding identification of devices based on metadata, and therefore needs further development to reach its full potential.
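
    A minimal sketch of the kind of probability function described above: each device type accumulates evidence weights from observed metadata attributes, and the highest-scoring type is chosen. The attribute names and weights are invented for illustration; the thesis does not publish its exact scoring (Python):

        from collections import defaultdict

        # Hypothetical evidence weights: metadata attribute -> [(device type, weight)].
        # The attributes and weights actually used in the thesis are assumptions here.
        EVIDENCE = {
            "open_port_80":   [("web_server", 0.6), ("printer", 0.2)],
            "open_port_9100": [("printer", 0.9)],
            "open_port_22":   [("server", 0.4)],
            "banner_iis":     [("web_server", 0.8)],
        }

        def classify(observed_attributes):
            """Return the most likely device type and a normalized confidence."""
            scores = defaultdict(float)
            for attribute in observed_attributes:
                for device_type, weight in EVIDENCE.get(attribute, []):
                    scores[device_type] += weight
            if not scores:
                return "unknown", 0.0
            best = max(scores, key=scores.get)
            return best, scores[best] / sum(scores.values())

        print(classify({"open_port_80", "banner_iis"}))  # ('web_server', 0.875)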

  • 128. Forsman, Mattias
    et al.
    Glad, Andreas
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ilie, Dragos
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kommunikationssystem.
    Algorithms for Automated Live Migration of Virtual Machines2015Ingår i: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 101, s. 110-126Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We present two strategies to balance the load in a system with multiple virtual machines (VMs) through automated live migration. When the push strategy is used, overloaded hosts try to migrate workload to less loaded nodes. On the other hand, when the pull strategy is employed, the lightly loaded hosts take the initiative to offload overloaded nodes. The performance of the proposed strategies was evaluated through simulations. We have discovered that the strategies complement each other, in the sense that each strategy comes out as “best” under different types of workload. For example, the pull strategy is able to quickly re-distribute the load of the system when the load is in the low-to-medium range, while the push strategy is faster when the load is medium-to-high. Our evaluation shows that when adding or removing a large number of virtual machines in the system, the “best” strategy can re-balance the system in 4–15 minutes.
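
    A toy sketch of the push strategy described above (not the authors' simulator): each overloaded host hands off its smallest VM to the least-loaded host that can absorb it without itself becoming overloaded. The threshold and the load model are illustrative assumptions:

        OVERLOAD = 0.85   # assumed CPU-utilization threshold per host

        def push_round(hosts):
            """One round of the push strategy: each overloaded host migrates its
            smallest VM to the least-loaded host that can absorb it.

            hosts: dict host name -> list of VM loads (fractions of capacity)."""
            def load(h):
                return sum(hosts[h])
            for h in [h for h in hosts if load(h) > OVERLOAD]:
                vm = min(hosts[h])                  # cheapest VM to migrate
                targets = [t for t in hosts
                           if t != h and load(t) + vm <= OVERLOAD]
                if targets:
                    t = min(targets, key=load)      # least-loaded receiver
                    hosts[h].remove(vm)
                    hosts[t].append(vm)             # "migrate" the VM

        hosts = {"a": [0.5, 0.4, 0.2], "b": [0.1], "c": [0.3]}
        push_round(hosts)
        print(hosts)   # the 0.2 VM moves from host a to host b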

  • 129.
    Gadila Swarajya, Haritha Reddy
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Empirical Investigation on Measurement of Game Immersion using Real World Dissociation Factor2016Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Context: Games involve people to a large extent, and players relate themselves to the game characters; this is commonly known as game immersion. Generally, some players play games for enjoyment, some for stress relaxation and so on. Game immersion is usually used to describe the degree of involvement with a game. When people play games, they don’t necessarily realize that they have been dissociated from the surrounding world. Real world dissociation (RWD) can be defined as the situation where a player is less aware of the surroundings outside the game than of what is happening in the game itself. The RWD factor is expected to measure losing track of time, lack of awareness of surroundings, and mental transportation.

    Objectives: In this thesis, we measure and compare the difference in game immersion between experienced and inexperienced players using RWD factor. In addition, the study involves exploring the significance of game immersion and various approaches used to measure it.

    Methods: In this study, a literature review has been carried out to explore the meaning of game immersion, and a user study in the form of an experiment has been conducted to measure game immersion between experienced and inexperienced gamers. Game immersion has been measured using the real world dissociation (RWD) factor. After the experiment was conducted, a statistical technique was applied to measure the difference in game immersion between the two groups.

    Results: The empirical investigation on the measurement of game immersion has been done using the RWD factor. The results show that the significance value is less than 0.05, and hence the null hypothesis is rejected for both games. The measurable difference has been calculated using Cohen’s d effect size between experienced and inexperienced players. The Cohen’s d value between experienced and inexperienced players is 0.7423 for Dota 2 and 0.8383 for CS:GO.

    Conclusions: After analyzing the data and calculating the effect size, the overall results show that the inexperienced group of players is more immersed than the experienced group when measured by the RWD factor. Hence it can be concluded that, irrespective of the game played, inexperienced players are more dissociated from the real world than experienced players.
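
    For reference, Cohen’s d as used above is the difference in group means divided by the pooled standard deviation; a minimal computation with made-up immersion scores, not the thesis data:

        import math

        def cohens_d(group_a, group_b):
            """Cohen's d using the pooled standard deviation of two samples."""
            na, nb = len(group_a), len(group_b)
            ma = sum(group_a) / na
            mb = sum(group_b) / nb
            va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
            vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
            pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
            return (ma - mb) / pooled_sd

        inexperienced = [4.1, 4.5, 3.9, 4.8]   # invented immersion scores
        experienced   = [3.2, 3.6, 3.0, 3.5]
        print(round(cohens_d(inexperienced, experienced), 3))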

  • 130.
    Gajvelly, Chakravarthy
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Approaches for estimating the Uniqueness of linked residential burglaries2016Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Context: According to the Swedish National Council for Crime Prevention, residential burglary crimes increased by 2% in 2014 compared to 2013, and by 19% in the past decade. Law enforcement agencies could only solve three to five percent of the crimes reported in 2012. Multiple studies in the field of crime analysis report that most residential burglaries are committed by a relatively small number of offenders. Thus, the law enforcement agencies need to investigate the possibility of linking crimes into crime series.

    Objectives: This study presents the computation of a median crime, which is the centre-most crime in a crime series, calculated using the statistical concept of the median. This approach is used to calculate the uniqueness of a crime series consisting of linked residential burglaries. The burglaries are characterised using temporal and spatial features and modus operandi.

    Methods: A quasi experiment with repeated measures is chosen as the research method. The burglaries are linked based on their characteristics (features) by building a statistical model using the logistic regression algorithm to formulate estimated crime series. The study uses the median crime as an approach for computing the uniqueness of linked burglaries. The measure of uniqueness is compared between estimated series and legally verified known series. In addition, the study compares the uniqueness of estimated and known series to randomly selected crimes. The measure of uniqueness is used to assess the feasibility of using the formulated estimated series for investigation by the law bodies.

    Results: The statistical model built for linking crimes achieved an AUC = 0.964, R² = 0.770 and Dxy = 0.900 during internal evaluation, and achieved AUC = 0.916 for predictions on the test data set and AUC = 0.85 for predictions on the known series data set. The uniqueness measure of estimated series ranges from 0.526 to 0.715, and from 0.359 to 0.442 for known series, corresponding to different series. The uniqueness of randomly selected crimes ranges from 0.522 to 0.726 for estimated series and from 0.636 to 0.743 for known series. The values obtained are analysed and evaluated using an independent two-sample t-test, Cohen’s d and the Kolmogorov–Smirnov test. From this analysis, it is evident that the uniqueness measure for estimated series is high compared to the known series and closely matches randomly selected crimes. The uniqueness of known series is clearly low compared to both the estimated series and randomly selected crimes.

    Conclusion: The present study concludes that estimated series formulated using the statistical model have high uniqueness measures and need to be further filtered to be used by the law bodies.
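
    A condensed sketch of the linking step described in the methods, assuming a scikit-learn pipeline and invented pairwise features (distance apart, days apart, modus operandi similarity); the thesis' actual features and data are not reproduced here:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Each row describes a *pair* of burglaries: [km apart, days apart, MO similarity]
        X = np.array([[0.4, 1, 0.9], [12.0, 30, 0.1], [0.8, 2, 0.7],
                      [25.0, 90, 0.2], [1.5, 3, 0.8], [18.0, 45, 0.3]])
        y = np.array([1, 0, 1, 0, 1, 0])   # 1 = same series (linked), 0 = not linked

        model = LogisticRegression().fit(X, y)
        probs = model.predict_proba(X)[:, 1]     # P(linked) per crime pair
        print("AUC:", roc_auc_score(y, probs))   # 1.0 on this toy, separable data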

  • 131.
    García Martín, Eva
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Energy Efficiency in Machine Learning: A position paper2017Ingår i: 30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017, May 15–16, 2017, Karlskrona, Sweden / [ed] Niklas Lavesson, Linköping: Linköping University Electronic Press, 2017, Vol. 137, s. 68-72Konferensbidrag (Refereegranskat)
    Abstract [en]

    Machine learning algorithms are usually evaluated and developed in terms of predictive performance. Since these types of algorithms often run on large-scale data centers, they account for a significant share of the energy consumed in many countries. This position paper argues for the reasons why developing energy efficient machine learning algorithms is of great importance.

  • 132.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Is it ethical to avoid error analysis?2017Konferensbidrag (Refereegranskat)
    Abstract [en]

    Machine learning algorithms tend to create more accurate models with the availability of large datasets. In some cases, highly accurate models can hide the presence of bias in the data. There are several studies published that tackle the development of discriminatory-aware machine learning algorithms. We center on the further evaluation of machine learning models by doing error analysis, to understand under what conditions the model is not working as expected. We focus on the ethical implications of avoiding error analysis, from a falsification of results and discrimination perspective. Finally, we show different ways to approach error analysis in non-interpretable machine learning algorithms such as deep learning.
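
    One concrete way to begin the error analysis argued for above is to break a model's errors down per subgroup instead of reporting a single accuracy figure; a minimal sketch with hypothetical group labels:

        from collections import defaultdict

        def error_rates_by_group(y_true, y_pred, groups):
            """Error rate per subgroup; large gaps between groups flag potential bias."""
            errors, counts = defaultdict(int), defaultdict(int)
            for t, p, g in zip(y_true, y_pred, groups):
                counts[g] += 1
                errors[g] += int(t != p)
            return {g: errors[g] / counts[g] for g in counts}

        y_true = [1, 0, 1, 1, 0, 1, 0, 0]
        y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
        groups = ["a", "a", "b", "b", "a", "b", "a", "b"]   # hypothetical subgroups
        print(error_rates_by_group(y_true, y_pred, groups)) # {'a': 0.0, 'b': 0.75}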

  • 133.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Energy Efficiency Analysis of the Very Fast Decision Tree Algorithm2017Ingår i: Trends in Social Network Analysis: Information Propagation, User Behavior Modeling, Forecasting, and Vulnerability Assessment / [ed] Rokia Missaoui, Talel Abdessalem, Matthieu Latapy, Cham, Switzerland: Springer, 2017, s. 229-252Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    Data mining algorithms are usually designed to optimize a trade-off between predictive accuracy and computational efficiency. This paper introduces energy consumption and energy efficiency as important factors to consider during data mining algorithm analysis and evaluation. We conducted an experiment to illustrate how energy consumption and accuracy are affected when varying the parameters of the Very Fast Decision Tree (VFDT) algorithm. These results are compared with a theoretical analysis of the algorithm, indicating that energy consumption is affected by the parameter design and that it can be reduced significantly while maintaining accuracy.

  • 134.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Energy Efficiency in Data Stream Mining2015Ingår i: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 2015, s. 1125-1132Konferensbidrag (Refereegranskat)
    Abstract [en]

    Data mining algorithms are usually designed to optimize a trade-off between predictive accuracy and computational efficiency. This paper introduces energy consumption and energy efficiency as important factors to consider during data mining algorithm analysis and evaluation. We extended the CRISP (Cross Industry Standard Process for Data Mining) framework to include energy consumption analysis. Based on this framework, we conducted an experiment to illustrate how energy consumption and accuracy are affected when varying the parameters of the Very Fast Decision Tree (VFDT) algorithm. The results indicate that energy consumption can be reduced by up to 92.5% (557 J) while maintaining accuracy.

  • 135.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Identification of Energy Hotspots: A Case Study of the Very Fast Decision Tree2017Ingår i: GPC 2017: Green, Pervasive, and Cloud Computing / [ed] Au M., Castiglione A., Choo KK., Palmieri F., Li KC., Cham, Switzerland: Springer, 2017, Vol. 10232, s. 267-281Konferensbidrag (Refereegranskat)
    Abstract [en]

    Large-scale data centers account for a significant share of the energy consumption in many countries. Machine learning technology requires intensive workloads and thus drives demand for substantial power and cooling capacity in data centers. It is time to explore green machine learning. The aim of this paper is to profile a machine learning algorithm with respect to its energy consumption and to determine the causes behind this consumption. The first scalable machine learning algorithm able to handle large volumes of streaming data is the Very Fast Decision Tree (VFDT), which outputs competitive results in comparison to algorithms that analyze data from static datasets. Our objectives are to: (i) establish a methodology that profiles the energy consumption of decision trees at the function level, (ii) apply this methodology in an experiment to obtain the energy consumption of the VFDT, (iii) conduct a fine-grained analysis of the functions that consume most of the energy, providing an understanding of that consumption, (iv) analyze how different parameter settings can significantly reduce the energy consumption. The results show that by addressing the most energy-intensive part of the VFDT, the energy consumption can be reduced by up to 74.3%.
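
    A sketch in the spirit of objective (i), profiling at the function level with wall-clock time as a crude stand-in for energy (the paper measures actual energy; converting time to joules would require a power model, which is omitted here). The function names are hypothetical:

        import time
        from collections import defaultdict
        from functools import wraps

        PROFILE = defaultdict(float)   # function name -> accumulated seconds

        def profiled(fn):
            """Accumulate time spent per function as a crude hotspot indicator."""
            @wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    PROFILE[fn.__name__] += time.perf_counter() - start
            return wrapper

        @profiled
        def attempt_split(counts):          # hypothetical tree function
            return sum(counts) > 200

        @profiled
        def update_statistics(counts, x):   # hypothetical tree function
            counts[x % len(counts)] += 1

        counts = [0] * 8
        for x in range(100_000):
            update_statistics(counts, x)
            attempt_split(counts)

        # Rank "hotspots" by accumulated time, largest first
        for name, seconds in sorted(PROFILE.items(), key=lambda kv: -kv[1]):
            print(f"{name:20s} {seconds:.4f}s")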

  • 136.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Hoeffding Trees with nmin adaptationManuskript (preprint) (Övrigt vetenskapligt)
    Abstract [en]

    Machine learning software accounts for a significant amount of the energy consumed in data centers. These algorithms are usually optimized towards predictive performance, i.e. accuracy, and scalability. This is the case for data stream mining algorithms. Although these algorithms are adaptive to the incoming data, they have fixed parameters from the beginning of the execution, which leads to energy hotspots. We present dynamic parameter adaptation for data stream mining algorithms to trade off energy efficiency against accuracy during runtime. To validate this approach, we introduce the nmin adaptation method to improve parameter adaptation in Hoeffding trees. This method dynamically adapts the number of instances needed to make a split (nmin) and thereby reduces the overall energy consumption. We created an experiment to compare the Very Fast Decision Tree algorithm (VFDT, the original Hoeffding tree algorithm) with nmin adaptation and the standard VFDT. The results show that VFDT with nmin adaptation consumes up to 89% less energy than the standard VFDT, trading off a few percent of accuracy. Our approach can be used to trade off energy consumption against predictive and computational performance in the drive towards resource-aware machine learning.
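
    A heavily simplified sketch of the nmin adaptation idea: the Hoeffding bound eps = sqrt(R² ln(1/δ) / (2n)) tells how many instances n are needed before the observed gap between the two best split attributes can be trusted, so rather than re-evaluating splits every fixed nmin instances, the tree estimates the n at which the current gap would satisfy the bound and waits that long. Names and structure are illustrative, not the authors' code:

        import math

        def adapted_nmin(gap, value_range=1.0, delta=1e-7, nmin_floor=200):
            """Smallest n for which the Hoeffding bound
            eps = sqrt(value_range**2 * ln(1/delta) / (2 * n)) falls below the
            observed gap between the two best split attributes; until that many
            instances have arrived, split attempts are skipped."""
            if gap <= 0:
                return nmin_floor
            n = value_range ** 2 * math.log(1.0 / delta) / (2.0 * gap ** 2)
            return max(nmin_floor, math.ceil(n))

        # A small gap means many more instances are needed before a split can
        # succeed, so the tree skips the energy cost of futile re-evaluations.
        print(adapted_nmin(gap=0.05))   # 3224: wait long before trying again
        print(adapted_nmin(gap=0.30))   # 200: the floor value applies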

  • 137.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Hoeffding Trees with nmin adaptation2018Ingår i: The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2018), IEEE, 2018, s. 70-79Konferensbidrag (Refereegranskat)
    Abstract [en]

    Machine learning software accounts for a significant amount of the energy consumed in data centers. These algorithms are usually optimized towards predictive performance, i.e. accuracy, and scalability. This is the case for data stream mining algorithms. Although these algorithms are adaptive to the incoming data, they have fixed parameters from the beginning of the execution. We have observed that having fixed parameters leads to unnecessary computations, thus making the algorithm energy inefficient. In this paper we present the nmin adaptation method for Hoeffding trees. This method adapts the value of the nmin parameter, which significantly affects the energy consumption of the algorithm. The method reduces unnecessary computations and memory accesses, thus reducing the energy, while the accuracy is only marginally affected. We experimentally compared VFDT (Very Fast Decision Tree, the first Hoeffding tree algorithm) and CVFDT (Concept-adapting VFDT) with VFDT-nmin (VFDT with nmin adaptation). The results show that VFDT-nmin consumes up to 27% less energy than the standard VFDT, and up to 92% less energy than CVFDT, trading off a few percent of accuracy on a few datasets.

  • 138.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    How to Measure Energy Consumption in Machine Learning Algorithms2019Ingår i: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): ECMLPKDD 2018: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases Workshops. Lecture Notes in Computer Science. Springer, Cham, 2019, Vol. 11329, s. 243-255Konferensbidrag (Refereegranskat)
    Abstract [en]

    Machine learning algorithms are responsible for a significant amount of computations. These computations are increasing with the advancements in different machine learning fields. For example, fields such as deep learning require algorithms to run for weeks, consuming vast amounts of energy. While there is a trend towards optimizing machine learning algorithms for performance and energy consumption, there is still little knowledge on how to estimate an algorithm’s energy consumption. Currently, a straightforward cross-platform approach to estimate energy consumption for different types of algorithms does not exist. For that reason, well-known researchers in computer architecture have published extensive works on approaches to estimate energy consumption. This study presents a survey of methods to estimate energy consumption and maps them to specific machine learning scenarios. Finally, we illustrate our mapping suggestions with a case study, where we measure energy consumption in a big data stream mining scenario. Our ultimate goal is to bridge the gap that currently exists in estimating energy consumption in machine learning scenarios.
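
    On Linux machines with Intel RAPL support, one of the measurement routes such a survey covers is the powercap sysfs counter; a hedged sketch that reads the package-energy counter before and after a workload (the path exists only on supported hardware, and counter wrap-around is ignored in this toy):

        import time

        RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0, microjoules

        def measure_energy(workload):
            """Energy consumed by `workload` in joules, via the RAPL package counter."""
            with open(RAPL) as f:
                before = int(f.read())
            workload()
            with open(RAPL) as f:
                after = int(f.read())
            return (after - before) / 1e6   # microjoules -> joules

        joules = measure_energy(lambda: sum(i * i for i in range(10_000_000)))
        print(f"{joules:.2f} J")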

  • 139.
    García-Martín, Eva
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Extraction and Energy Efficient Processing of Streaming Data2017Licentiatavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    The interest in machine learning algorithms is increasing, in parallel with the advancements in hardware and software required to mine large-scale datasets. Machine learning algorithms account for a significant amount of the energy consumed in data centers, which impacts global energy consumption. However, machine learning algorithms are optimized towards predictive performance and scalability. Algorithms with low energy consumption are necessary for embedded systems and other resource-constrained devices, and desirable for platforms that require many computations, such as data centers. Data stream mining investigates how to process potentially infinite streams of data without the need to store all the data. This ability is particularly useful for companies that are generating data at a high rate, such as social networks.

    This thesis investigates algorithms in the data stream mining domain from an energy efficiency perspective. The thesis comprises two parts. The first part explores how to extract and analyze data from Twitter, with a pilot study that investigates a correlation between hashtags and followers. The second and main part investigates how energy is consumed and optimized in an online learning algorithm suitable for data stream mining tasks.

    The second part of the thesis focuses on analyzing, understanding, and reformulating the Very Fast Decision Tree (VFDT) algorithm, the original Hoeffding tree algorithm, into an energy-efficient version. It presents three key contributions. First, it shows how energy varies in the VFDT from a high-level view by tuning different parameters. Second, it presents a methodology to identify energy bottlenecks in machine learning algorithms, by portraying the functions of the VFDT that consume the largest amount of energy. Third, it introduces dynamic parameter adaptation for Hoeffding trees, a method to dynamically adapt the parameters of Hoeffding trees to reduce their energy consumption. The results show an average energy reduction of 23% on the VFDT algorithm.

  • 140.
    Georgsson, Adam
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Christensson, Olof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Visualization of training data reported by football players2018Självständigt arbete på grundnivå (kandidatexamen), 10 poäng / 15 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Background. Data from training sessions is gathered by a trainer from the players with the goal of analyzing and getting an overview of how the team is performing. The collected data is represented in tabular form, and over time the effort to interpret it becomes more demanding.

    Objectives. This thesis’ goal is to find out if there is a solution where collecting, processing and representing training data from football players can ease and improve the trainer’s analysis of the team.

    Methods. A dataset is received from a football trainer, and it contains information about training sessions for his team of football players. The dataset is used to find a suitable method and visualize the data. Feedback from the trainer is used to determine what works and what does not. Furthermore, a survey with examples of visualization is given to the players and the trainer to get an understanding of how the selected charts are interpreted.

    Results. Representing the attributes of greatest importance from the received dataset requires a chain of views (usage flow) to be introduced, from a primary view to a quaternary view. Each step in the chain tightens the level of detail represented. The box plot proved to be an appropriate choice for providing an overview of the team’s training data.

    Conclusions. Visualizing training data gives the trainer a significant advantage regarding team analysis. With box plots the trainer gets an overview of the team and can then dig into more detailed data while interacting with the charts.
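
    A minimal example of the box-plot overview the thesis settles on, using matplotlib and invented training-load numbers (the thesis' actual tool and data are not shown):

        import matplotlib.pyplot as plt

        # Invented self-reported training load per player and session (1-10 scale)
        sessions = {
            "Mon": [4, 5, 6, 5, 7, 4, 5],
            "Wed": [6, 7, 8, 6, 9, 7, 6],
            "Fri": [3, 4, 5, 4, 6, 3, 5],
        }

        fig, ax = plt.subplots()
        ax.boxplot(list(sessions.values()), labels=list(sessions.keys()))
        ax.set_ylabel("Reported training load (1-10)")
        ax.set_title("Team overview per session")
        plt.show()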

  • 141.
    Gholami, Omid
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Inst Technol, Karlskrona, Sweden.
    Sotskov, Y. N.
    Natl Acad Sci Belarus, BLR.
    Werner, F.
    Otto von Guericke Univ, DEU.
    Zatsiupo, A. S.
    Servolux, Mogilev, BLR.
    Heuristic Algorithms to Maximize Revenue and the Number of Jobs Processed on Parallel Machines2019Ingår i: Automation and remote control, ISSN 0005-1179, E-ISSN 1608-3032, Vol. 80, nr 2, s. 297-316Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    A set of jobs has to be processed on parallel machines. For each job, a release time and a due date are given, and the job must be processed no later than its due date. If the job is completed no later than its due date, a benefit is earned. Otherwise, the job is rejected and the benefit is discarded. The criterion under consideration is to maximize the weighted sum of the benefits and the number of jobs processed in time. Some properties of the objective function are found which allow the construction of an optimal schedule. We develop a simulated annealing algorithm, a tabu search algorithm, and a genetic algorithm for solving this problem. The developed algorithms were tested on moderate and large instances with up to 500 jobs and 50 machines. Some recommendations are given showing how to use the obtained results and developed algorithms in production planning.
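
    A greedy baseline for the problem described above (not one of the paper's three metaheuristics): process jobs in due-date order, place each on the machine that can finish it earliest, and reject jobs that would miss their due date. The job tuples are illustrative:

        def greedy_schedule(jobs, n_machines):
            """jobs: list of (release, duration, due, benefit). Returns accepted
            jobs and total benefit; jobs that cannot meet their due date are
            rejected, as in the problem statement."""
            free_at = [0] * n_machines            # earliest free time per machine
            accepted, total_benefit = [], 0
            for release, duration, due, benefit in sorted(jobs, key=lambda j: j[2]):
                m = min(range(n_machines), key=lambda i: free_at[i])
                finish = max(free_at[m], release) + duration
                if finish <= due:                 # accept only if processed in time
                    free_at[m] = finish
                    accepted.append((release, duration, due, benefit))
                    total_benefit += benefit
            return accepted, total_benefit

        jobs = [(0, 3, 5, 10), (1, 2, 4, 7), (0, 4, 6, 9), (2, 1, 3, 5)]
        print(greedy_schedule(jobs, n_machines=2))   # two jobs accepted, benefit 12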

  • 142.
    Gholami, Omid
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Sotskov, Yuri
    United Institute Of Informatics Problems, BLR.
    Werner, Frank
    Otto-von-Guericke-Universität, DEU.
    A genetic algorithm for hybrid job-shop problems with minimizing the makespan and mean flow time2018Ingår i: Journal of Advanced Manufacturing Systems, ISSN 0219-6867, Vol. 17, nr 4, s. 461-486Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We address a generalization of the classical job-shop problem which is called a hybrid job-shop problem. The criteria under consideration are the minimization of the makespan and mean flow time. In the hybrid job-shop, machines of type k are available for processing the specific subset O^k of the given operations. Each set O^k may be partitioned into subsets for their processing on the machines of type k. Solving the hybrid job-shop problem implies the solution of two subproblems: an assignment of all operations from the set O^k to the machines of type k, and finding optimal sequences of the operations for their processing on each machine. In this paper, a genetic algorithm is developed to solve these two subproblems simultaneously. For solving the subproblems, a special chromosome is used in the genetic algorithm based on a mixed graph model. We compare our genetic algorithm with a branch-and-bound algorithm and three other recent heuristic algorithms from the literature. Computational results for benchmark instances with 10 jobs and up to 50 machines show that the proposed genetic algorithm is rather efficient for both criteria. Compared with the other heuristics, the new algorithm most often gives an optimal solution, and the average percentage deviation from the optimal function value is about 4%.

  • 143.
    Gholami, Omid
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Törnquist Krasemann, Johanna
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    A Heuristic Approach to Solving the Train Traffic Re-Scheduling Problem in Real Time2018Ingår i: Algorithms, ISSN 1999-4893, E-ISSN 1999-4893, Vol. 11, nr 4, s. 1-18, artikel-id 55Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Effectiveness in managing disturbances and disruptions in railway traffic networks, when they inevitably do occur, is a significant challenge, both from a practical and theoretical perspective. In this paper, we propose a heuristic approach for solving the real-time train traffic re-scheduling problem. This problem is here interpreted as a blocking job-shop scheduling problem, and a hybrid of the mixed graph and alternative graph is used for modelling the infrastructure and traffic dynamics on a mesoscopic level. A heuristic algorithm is developed and applied to resolve the conflicts by re-timing, re-ordering, and locally re-routing the trains. A part of the Southern Swedish railway network from Karlskrona centre to Malmö city is considered for an experimental performance assessment of the approach. The network consists of 290 block sections, and for a one-hour time horizon with around 80 active trains, the algorithm generates a solution in less than ten seconds. A benchmark with the corresponding mixed-integer program formulation, solved by commercial state-of-the-art solver Gurobi, is also conducted to assess the optimality of the generated solutions.
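
    A drastically simplified version of the re-timing step in the heuristic above: scan block-section occupations in entry-time order and delay a train until the block ahead is free (re-ordering, re-routing, and the propagation of a delay along a train's subsequent blocks are all omitted in this toy):

        def retime(occupations):
            """occupations: list of [train, block, entry, exit].
            Delays later trains so that no two trains occupy a block at once."""
            occupations.sort(key=lambda o: o[2])
            block_free_at = {}                   # block -> time it becomes free
            delays = {}
            for occ in occupations:
                train, block, entry, exit_ = occ
                free = block_free_at.get(block, 0)
                if entry < free:                 # conflict: wait for the block
                    shift = free - entry
                    entry, exit_ = entry + shift, exit_ + shift
                    delays[train] = delays.get(train, 0) + shift
                    occ[2], occ[3] = entry, exit_
                block_free_at[block] = exit_
            return delays

        occ = [["T1", "B7", 0, 4], ["T2", "B7", 2, 6], ["T3", "B9", 1, 3]]
        print(retime(occ))   # {'T2': 2} -- T2 waits until T1 clears block B7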

  • 144.
    Ghorbanian, Sara
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Fryklund, Glenn
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Improving DLP system security2014Studentuppsats (Examensarbete)
    Abstract [en]

    Context. Data leakage prevention (DLP) systems are designed to prevent leakage and loss of secret sensitive data while not affecting employees’ workflow. The aim is to have a system covering every possible leakage point that exists. Even if these are covered, there are ways of hiding information, such as obfuscating a zip archive within an image file; detecting this hidden information and preventing it from leaking is a difficult task. Companies pay a great deal for these solutions and yet, as we uncover, the information is not safe. Objectives. In this thesis we evaluate four different existing types of DLP systems on the market today, disclose their weaknesses and find ways of improving their security. Methods. The four DLP systems tested in this study cover agentless, agent-based, hybrid and regular-expression DLP tools. The test cases simulate potential leakage points via everyday file transfer applications and media such as USB, Skype, email, etc. Results. We present a hypothetical solution in order to amend these weaknesses and to improve the efficiency of DLP systems today. In addition to these evaluations and experiments, a complementing proof-of-concept solution has been developed that can be integrated with other DLP solutions. Conclusions. We conclude that the existing DLP systems are still in need of improvement; none of the tested DLP solutions fully covered the possible leakage points that could exist in the corporate world. There is a need for continued evaluation of DLP systems, covering aspects and leakage points not addressed in this thesis, as well as a follow-up on our suggested solution.
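
    As a taste of the detection problem mentioned in the context (a zip archive hidden inside an image), the sketch below scans a file for the ZIP local-file-header signature PK\x03\x04; real DLP content inspection is far more involved:

        ZIP_MAGIC = b"PK\x03\x04"   # ZIP local file header signature

        def find_embedded_zip(path):
            """Return byte offsets of ZIP signatures inside a file; an offset
            past 0 in an image file suggests appended or embedded archive data."""
            with open(path, "rb") as f:
                data = f.read()
            offsets, pos = [], data.find(ZIP_MAGIC)
            while pos != -1:
                offsets.append(pos)
                pos = data.find(ZIP_MAGIC, pos + 1)
            return offsets

        # Demo: a fake "image" with a zip header appended after the JPEG bytes
        with open("suspect.jpg", "wb") as f:
            f.write(b"\xff\xd8\xff\xe0" + b"\x00" * 64 + ZIP_MAGIC + b"rest-of-zip")
        print(find_embedded_zip("suspect.jpg"))   # [68]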

  • 145. Ghorbanian, Sara
    et al.
    Fryklund, Glenn
    Axelsson, Stefan
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    DO DATA LOSS PREVENTION SYSTEMS REALLY WORK?2015Ingår i: ADVANCES IN DIGITAL FORENSICS XI, 2015, s. 341-357Konferensbidrag (Refereegranskat)
    Abstract [en]

    The threat of insiders stealing valuable corporate data continues to escalate. The inadvertent exposure of internal data has also become a major problem. Data loss prevention systems are designed to monitor and block attempts at exposing sensitive data to the outside world. They have become very popular, to the point where forensic investigators have to take these systems into account. This chapter describes the first experimental analysis of data loss prevention systems that attempts to ascertain their effectiveness at stopping the unauthorized exposure of sensitive data and the ease with which the systems could be circumvented. Four systems are evaluated (three of them in detail). The results point to considerable weaknesses in terms of general effectiveness and the ease with which the systems could be disabled.

  • 146.
    Goteti, Aniruddh
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Machine Learning Approach to the Design of Autonomous Construction Equipment applying Data-Driven Decision Support Tool2019Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Design engineers working in the construction machinery industry face many complexities and uncertainties while taking important decisions during the design of construction equipment. These complexities can be reduced by the implementation of a data-driven decision support tool, which can predict the behaviour of the machine under operational complexity and give valuable insights to the design engineer. This data-driven decision support tool must be supported by a suitable machine learning algorithm. The focus of this thesis is to find such an algorithm, which can predict the behaviour of a machine and can later be involved in the development of data-driven decision support tools. To find such a solution, the regression performance of four supervised machine learning regression algorithms, namely Support Vector Machine Regression, Bayesian Ridge Regression, Decision Tree Regression and Random Forest Regression, is evaluated. The evaluation was done on data sets collected on site, extracted from the autonomous construction machine by the Product Development Research Lab (P.D.R.L). An experiment is chosen as the research methodology based on the quantitative format of the data set. The sensor data is extracted from the autonomous machine in time-series format, which in turn is converted to supervised data with the help of the sliding-window method. The four chosen algorithms are then trained on the mentioned data sets and evaluated with certain performance metrics (MSE, RMSE, MAE, training time). Based on the rigorous data collection, experimentation and analysis, the Bayesian Ridge regressor is found to be the best of the compared algorithms in terms of all performance metrics, and is chosen as the optimal algorithm to be used in the development of a data-driven decision support tool meant for design engineers working in the construction industry.
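
    A compact sketch of the sliding-window conversion together with one of the compared regressors (scikit-learn's BayesianRidge); the window width and the synthetic sensor signal are assumptions, not the thesis data:

        import numpy as np
        from sklearn.linear_model import BayesianRidge
        from sklearn.metrics import mean_absolute_error, mean_squared_error

        def sliding_window(series, width):
            """Turn a 1-D time series into supervised (X, y) pairs:
            X = the previous `width` values, y = the next value."""
            X = np.array([series[i:i + width] for i in range(len(series) - width)])
            y = np.array(series[width:])
            return X, y

        t = np.linspace(0, 20, 500)
        series = np.sin(t) + 0.05 * np.random.randn(500)   # synthetic sensor signal

        X, y = sliding_window(series, width=10)
        split = int(0.8 * len(X))                          # time-ordered split
        model = BayesianRidge().fit(X[:split], y[:split])
        pred = model.predict(X[split:])
        print("MAE :", mean_absolute_error(y[split:], pred))
        print("RMSE:", mean_squared_error(y[split:], pred) ** 0.5)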

  • 147.
    Gundreddy, Rohith Reddy
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Evaluation of MMAPv1 and WiredTiger Storage Engines in MongoDB: An Experiment2017Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Context. As the data world entered the Web 2.0 era, vast amounts of structured, semi-structured and unstructured data are growing enormously. Structured data can be handled efficiently by SQL databases, but to handle unstructured and semi-structured data, NoSQL databases have been introduced. NoSQL databases can be broadly classified into four types – key-value, column-oriented, document-oriented and graph-oriented. MongoDB is one such NoSQL database, which comes under the category of document-oriented databases. The data in MongoDB is stored using storage engines. MongoDB currently uses two different storage engines – MMAPv1 and WiredTiger.

    Objectives. This study focuses on presenting a performance evaluation of two data storage engines, MMAPv1 and WiredTiger, emphasizing on certain metrics which will be obtained from the literature review. This thesis aims to show which storage engine is better while using different workloads.

    Methods. A literature study is done to obtain knowledge on the performance evaluation of the MongoDB database compared with other SQL and NoSQL databases. The YCSB benchmarking tool has been chosen to evaluate the performance of the storage engines. Later, to show which storage engine is better on different workloads, penalties have been calculated.

    Results. The literature search resulted in obtaining four metrics – Execution time, Throughput, CPU Utilization and Memory Utilization as the metrics which best comply with presenting the evaluation of two storage engines, MMAPv1 and WiredTiger. The experiment resulted in generation of penalties that indicate which storage engine is better than the other and in which scenarios.

    Conclusions. MMAPv1 shows better performance when the workloads are read favorable. On the other hand, WiredTiger shows better performance when the workloads are write favorable and also when the workloads are neutral (equal amounts of reads and writes).
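
    The storage engine is chosen when the server starts (e.g. mongod --storageEngine wiredTiger), so the same client-side loop can be timed against either engine; a tiny pymongo read/write loop, far simpler than YCSB, might look as follows (host, database and collection names are placeholders):

        import time
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")   # server started with the
        coll = client.bench.docs                            # storage engine under test
        coll.drop()

        N = 10_000
        start = time.perf_counter()
        for i in range(N):                                  # write-heavy phase
            coll.insert_one({"_id": i, "payload": "x" * 100})
        write_s = time.perf_counter() - start

        start = time.perf_counter()
        for i in range(N):                                  # read-heavy phase
            coll.find_one({"_id": i})
        read_s = time.perf_counter() - start

        print(f"writes: {N / write_s:.0f} ops/s, reads: {N / read_s:.0f} ops/s")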

  • 148.
    Gunnam, Sri Ganesh Sai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Investigation of Different DASH Players: Retrieval Strategy & Quality of Experience of DASH2018Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Dynamic Adaptive Streaming over HTTP (DASH) is a convenient approach to transfer videos in an adaptive and dynamic way to the user. Therefore, this system makes the best use of the available bandwidth. In this thesis, we investigate DASH based on data collected from lab experiments and user experiments. The objectives include investigating how three different DASH players behave under different network conditions, and up to which limit the players tolerate disturbances.

    We summarize the outcome of lab experiments on DASH under different adverse conditions, and compare the lab results with user quality of experience under the same conditions, to see to what extent users can tolerate the disturbances in the different DASH players.
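
    The adaptation that DASH players perform is typically rate-based: pick the highest representation whose bitrate fits the measured throughput with a safety margin. A minimal selection rule with an invented bitrate ladder (the tested players' real algorithms differ):

        LADDER_KBPS = [235, 750, 1750, 4300, 8000]   # invented representation ladder

        def pick_bitrate(throughput_kbps, safety=0.8):
            """Highest representation fitting within a safety margin of throughput."""
            fitting = [b for b in LADDER_KBPS if b <= throughput_kbps * safety]
            return fitting[-1] if fitting else LADDER_KBPS[0]

        for measured in (300, 1200, 6000):
            print(measured, "->", pick_bitrate(measured), "kbps")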

  • 149.
    Guo, Yang
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier. Blekinge Institute of Technology.
    Bai, Guohua
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier. Blekinge Institute of Technology.
    Yao, Yong
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik. Blekinge Institute of Technology.
    A new Software Framework for Heterogeneous Knowledge Sharing in Healthcare system2016Konferensbidrag (Refereegranskat)
    Abstract [en]

    Today’s demand for healthcare is increasing dramatically, driven by the aging population and growing expectations during the past few years. This leads to the need for substantial healthcare services with innovative technologies developed by both industry and academia. Designing an efficient healthcare system is, however, a sophisticated process due to different research issues and the requirement to provide high-quality healthcare services. Connected to this requirement, the focus of many studies done so far is widely laid on a well-known problem called knowledge sharing. In recent years, knowledge sharing has arisen as one of the most demanding applications, with reference to the dynamic interactivity among different healthcare actors and the complex data structures involved in this application. A suitable solution approach to knowledge sharing can enhance the efficiency of healthcare delivery, and thus improve the quality of healthcare services. The corresponding development tasks can be accomplished by using different methodologies such as analytical approaches, simulation experiments and practical measurements on the real healthcare system. In our work, the problem of heterogeneous knowledge sharing in the healthcare system is considered. Here, the heterogeneous aspect is expressed in terms of different healthcare actors and their associated characterizations. To do this, we suggest a new software framework, which mainly consists of three components. The first component is the ontology-based activity theory, which is used to scientifically represent the healthcare actors together with their relationships and interactions. The second component is an overlay decision maker, which is responsible for dealing with decision-making activities such as appointment scheduling. Its advantage is to jointly consider various healthcare parameters and different algorithms for decision-making purposes. Based on these two components, the third component provides the theoretical models to conduct the numerical analysis and performance evaluation of the particular healthcare service.

  • 150.
    Guo, Yang
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier. Blekinge Institute of Technology.
    Yao, Yong
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    On Performance of Prioritized Appointment Scheduling for Healthcare2017Ingår i: TRANSACTIONS ON EMERGING TELECOMMUNICATIONS TECHNOLOGIESArtikel i tidskrift (Refereegranskat)
    Abstract [en]

    Designing the appointment scheduling is a challenging task in the development of a healthcare system. An efficient solution approach can provide high quality of healthcare service between care providers (CPs) and care receivers (CRs). In this paper, we consider a healthcare system with heterogeneous CRs in terms of urgent and routine CRs. Our suggested model assumes that the system gives service priority to the urgent CRs by allowing them to interrupt ongoing routine appointments. An appointment handoff scheme is suggested for the interrupted routine appointments, so that the routine CRs can attempt to re-establish the appointment scheduling with other available CPs. With these considerations, we study the scheduling performance of the system using a Markov-chain-based modeling approach. The numerical analysis is reported and a simulation experiment is conducted to validate the numerical results.
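
    A toy discrete-time rendering of the priority rule described above: urgent CRs preempt ongoing routine appointments, and interrupted routine CRs are handed off to the next CP that becomes free (the paper's Markov-chain analysis is not reproduced here; all rates are invented):

        import random

        def simulate(n_cps=3, steps=10_000, p_urgent=0.05, p_routine=0.15, p_done=0.2):
            """Each care provider (CP) is None (free), 'U' (urgent) or 'R' (routine)."""
            cps = [None] * n_cps
            waiting = 0                    # interrupted routine CRs awaiting handoff
            interrupted = handed_off = 0
            for _ in range(steps):
                # ongoing appointments finish with probability p_done per step
                cps = [None if s and random.random() < p_done else s for s in cps]
                # interrupted routine CRs re-establish with any CP that has freed up
                while waiting and None in cps:
                    cps[cps.index(None)] = "R"
                    waiting -= 1
                    handed_off += 1
                if random.random() < p_urgent:          # urgent arrival has priority
                    if None in cps:
                        cps[cps.index(None)] = "U"
                    elif "R" in cps:                    # preempt a routine appointment
                        cps[cps.index("R")] = "U"
                        interrupted += 1
                        waiting += 1
                if random.random() < p_routine and None in cps:
                    cps[cps.index(None)] = "R"          # new routine arrival
            return {"interrupted": interrupted, "handed_off": handed_off}

        print(simulate())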
