1 - 13 of 13
  • 1.
    Axelsson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison of user accuracy and speed when performing 3D game target practice using a computer monitor and virtual reality headset (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Consumer grade Virtual Reality (VR) headsets are on the rise, and with them comes an increasing number of digital games which support VR. How players perceive the gameplay and how well they perform at the game's tasks can be key factors in designing new games.

    This master's thesis aims to evaluate whether a user can perform a game task, specifically target practice, in less time and/or more accurately when using a VR headset as opposed to a computer screen and mouse. To gather statistics and measure the differences, an experiment was conducted using a test application developed alongside this report. The experiment recorded accuracy scores and time taken in tests performed by 35 test participants using both a VR headset and a computer screen.

    The resulting data sets are presented in the results chapter of this report. A Kolmogorov-Smirnov normality test and Student's paired samples t-test were performed on the data to test for statistically significant differences. After analysis, the results are reviewed and discussed, and conclusions are drawn.

    This study concludes that, when performing the experiment, the use of a VR headset decreased the user's accuracy and, to a lesser extent, also increased the time the user took to hit all targets. An argument was made that most users' longer previous experience with a computer screen and mouse gave this method an unfair advantage. With equally long training, VR use might score similar results.
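The paired samples t-test mentioned in the abstract can be sketched in a few lines of stdlib Python. The scores below are invented placeholder numbers, not the thesis data, and the helper is a hand-rolled illustration rather than the author's analysis code.

```python
import math
import statistics

# Hypothetical accuracy scores (percent of targets hit) for the same
# participants under both conditions; illustrative numbers only.
monitor = [92.0, 88.5, 95.0, 90.0, 85.5, 93.0, 89.0, 91.5]
vr      = [85.0, 84.0, 90.5, 86.0, 80.0, 88.5, 83.0, 87.0]

def paired_t(a, b):
    """Return the paired-samples t statistic and degrees of freedom."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

t, df = paired_t(monitor, vr)
print(f"t = {t:.2f} with {df} degrees of freedom")
```

The t statistic is then compared against the t distribution with n - 1 degrees of freedom; in practice a library routine such as `scipy.stats.ttest_rel` would compute the p-value as well.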

  • 2.
    Ekelund, Stefan
    et al.
    Blekinge Institute of Technology.
    Bengter, Johan
    Blekinge Institute of Technology.
    Inlevelse genom realism: En undersökning om relation mellan inlevelse och realism [Immersion through realism: A study of the relationship between immersion and realism] (2016). Independent thesis, Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    For this bachelor's thesis we have investigated how we can make use of realism in order to create immersion in a virtual environment. Our goal is to investigate the relationship between realism and immersion in order to create an understanding of how to create immersion and build better gaming experiences. To investigate the problem area, we created a 3D environment where we used realism as a foundation to discuss our choices and the choices made in other games. We concluded that there is no perfect method to fully measure and define immersion, but we found strong connections between realism and immersion. In the end we felt that you should always work with realism as a foundation when attempting to create immersion in a virtual environment.

  • 3.
    Gustafsson, Fanny
    Blekinge Institute of Technology.
    Investigating whether Elements of Fun in a Gamification Tool Increases Students' Test Results in School: Comparing a Gamification Tool and an Educational Test (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    To achieve an environment where students think it is more fun to learn in school, a gamification tool could be applied. Today, as the digital world evolves, these tools are more commonly used. There must be an understanding of how these tools affect students in order to use them in school environments. This study investigates whether elements of fun in a gamification tool increase students' test results, by comparing a gamification tool and an educational test in school. Data was gathered from two different tests, one with the gamification tool Kahoot and one without it, performed by 16 middle-school students. As part of the experiment, the students answered a survey including questions about whether they learned something from using a digital tool in school. Five of the students participated in an interview, whose purpose was to open up further discussions that could provide valuable information for answering the research question. The results of this study show that the test results did not increase with the use of the gamified tool. Most of the students had fewer right answers in Kahoot compared to the traditional test. No statistically significant change occurred between the learning tools, so the null hypothesis could not be rejected. According to the survey, most of the students thought it was more fun using a digital tool in school. The result shows that the use of a gamification tool affected students' learning negatively, given the decrease in performance on the test in Kahoot compared to the traditional test. According to the interview, the notable difference between the learning tools is that the gamified tool could potentially create a stressful environment in the classroom compared to the traditional written test. This could be the reason for the decrease in performance on the test in Kahoot.
    This experiment creates room for further research in this field; a suggestion would be to repeat it on a larger scale and to learn more about the elements of fun in learning environments, to get more valid results.

  • 4.
    Johansson, Christian
    et al.
    NODA, SWE.
    Bergkvist, Markus
    NODA, SWE.
    Geysen, Davy
    EnergyVille, BEL.
    De Somer, Oscar
    EnergyVille, BEL.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Vanhoudt, Dirk
    EnergyVille, BEL.
    Operational Demand Forecasting In District Heating Systems Using Ensembles Of Online Machine Learning Algorithms (2017). In: 15th International Symposium on District Heating and Cooling (DHC15-2016) / [ed] Ulseth, R., Elsevier Science BV, 2017, p. 208-216. Conference paper (Refereed)
    Abstract [en]

    Heat demand forecasting is in one form or another an integrated part of most optimisation solutions for district heating and cooling (DHC). Since DHC systems are demand driven, the ability to forecast this behaviour becomes an important part of most overall energy efficiency efforts. This paper presents the current status and results from extensive work in the development, implementation and operational service of online machine learning algorithms for demand forecasting. Recent results and experiences are compared to results predicted by previous work done by the authors. The prior work, based mainly on certain decision tree based regression algorithms, is expanded to include other forms of decision tree solutions as well as neural network based approaches. These algorithms are analysed both individually and combined in an ensemble solution. Furthermore, the paper also describes the practical implementation and commissioning of the system in two different operational settings where the data streams are analysed online in real-time. It is shown that the results are in line with expectations based on prior work, and that the demand predictions have a robust behaviour within acceptable error margins. Applications of such predictions in relation to intelligent network controllers for district heating are explored and the initial results of such systems are discussed. (C) 2017 The Authors. Published by Elsevier Ltd.
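The ensemble idea the abstract describes, several online learners each updated as demand observations stream in and their forecasts combined, can be sketched roughly as follows. The models, learning rate, feature and data here are all invented for illustration; the paper's actual system uses decision-tree and neural-network learners on real district heating data.

```python
class EwmaForecaster:
    """Predicts an exponentially weighted mean of past demand."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.level = None
    def predict(self, _features):
        return self.level if self.level is not None else 0.0
    def update(self, _features, observed):
        if self.level is None:
            self.level = observed
        else:
            self.level += self.alpha * (observed - self.level)

class SgdForecaster:
    """Online linear model on outdoor temperature, trained by SGD."""
    def __init__(self, lr=0.005):
        self.w, self.b, self.lr = 0.0, 0.0, lr
    def predict(self, features):
        return self.w * features["temp"] + self.b
    def update(self, features, observed):
        err = self.predict(features) - observed
        self.w -= self.lr * err * features["temp"]
        self.b -= self.lr * err

def ensemble_predict(models, features):
    # Combine the individual forecasts by simple averaging.
    return sum(m.predict(features) for m in models) / len(models)

models = [EwmaForecaster(), SgdForecaster()]
stream = [({"temp": -5.0}, 120.0), ({"temp": 0.0}, 100.0),
          ({"temp": 5.0}, 80.0), ({"temp": 10.0}, 60.0)]
for features, demand in stream:        # online: predict, then learn
    forecast = ensemble_predict(models, features)
    for m in models:
        m.update(features, demand)
print(f"next forecast: {ensemble_predict(models, {'temp': 8.0}):.1f}")
```

Averaging is the simplest combination rule; weighted combinations that favor whichever member has recently forecast best are a common refinement in online settings.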

  • 5.
    Kaspersson, Max
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Facial Realism through Wrinkle Maps: The Perceived Impact of Different Dynamic Wrinkle Implementations (2015). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Real-time rendering has many challenges to overcome, one of them being character realism. One way to move towards realism is to use wrinkle maps. Although wrinkle maps are already used in several games, there might be room for improvement: common practice suggests using two wrinkle maps; however, if this number can be reduced, both texture usage and workload might be reduced as well.

    Objectives. To determine whether or not it is possible to reduce the number of wrinkle maps from two to one without having any significant impact on the perceived realism of a character.

    Methods. After a base character model was created, a setup in Maya was made so that dynamic wrinkles could be displayed on the character using both one and two wrinkle maps. The face was animated and rendered, displaying emotions using both techniques. A two-alternative forced choice experiment was then conducted where the participants selected which implementation they perceived as most realistic, with both implementations displaying the same facial expression under the same lighting conditions.

    Results. Results showed that some facial expressions had more of an impact on the perceived realism than others, favoring two wrinkle maps in every case where there was a significant difference. The expressions with the most impact were the ones that required different kinds of wrinkles in the same area of the face, such as the forehead, where one variant of wrinkles runs in a more vertical manner and the other variant runs horizontally along the forehead.

    Conclusions. Using one wrinkle map cannot fully replicate the effect of using two when it comes to realism. The difference between the implementations is dependent on the expression being displayed.

  • 6.
    Lundberg, Lars
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Melander, Christian
    Compuverde AB.
    Cache Support in a High Performance Fault-Tolerant Distributed Storage System for Cloud and Big Data (2015). In: 2015 IEEE 29th International Parallel and Distributed Processing Symposium Workshops, IEEE Computer Society, 2015, p. 537-546. Conference paper (Refereed)
    Abstract [en]

    Due to the trends towards Big Data and Cloud Computing, one would like to provide large storage systems that are accessible by many servers. A shared storage can, however, become a performance bottleneck and a single point of failure. Distributed storage systems provide a shared storage to the outside world, but internally they consist of a network of servers and disks, thus avoiding the performance bottleneck and single-point-of-failure problems. We introduce a cache in a distributed storage system. The cache system must be fault tolerant so that no data is lost in case of a hardware failure. This requirement excludes the use of the common write-invalidate cache consistency protocols. The cache is implemented and evaluated in two steps. The first step focuses on design decisions that improve the performance when only one server uses the same file. In the second step we extend the cache with features that focus on the case when more than one server accesses the same file. The cache improves the throughput significantly compared to having no cache. The two-step evaluation approach makes it possible to quantify how different design decisions affect the performance of different use cases.

  • 7.
    Martinsen, Jan Kasper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Isberg, Anders
    Sony Mobile Communications AB Lund, SWE.
    Combining thread-level speculation and just-in-time compilation in Google's V8 JavaScript engine (2017). In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 29, no 1, article id e3826. Article in journal (Refereed)
    Abstract [en]

    Thread-level speculation can be used to take advantage of multicore architectures for JavaScript in web applications. We extend previous studies with these main contributions: we implement thread-level speculation in the state-of-the-art just-in-time-enabled JavaScript engine V8 and make the measurements in the Chromium web browser, both from Google, instead of using an interpreted JavaScript engine. We evaluate the combination of thread-level speculation and just-in-time compilation on 15 very popular web applications, 20 HTML5 demos from the JS1K competition, and 4 Google Maps use cases. The performance is evaluated on two, four, and eight cores. The results clearly show that it is possible to successfully combine thread-level speculation and just-in-time compilation. This makes it possible to take advantage of multicore architectures for web applications while hiding the details of parallel programming from the programmer. Further, our results show an average speedup for the combination of thread-level speculation and just-in-time compilation by a factor of almost 3 on four cores and over 4 on eight cores, without changing any of the JavaScript source code.

  • 8.
    Martinsen, Jan Kasper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Isberg, Anders
    Sony Mobile Communications AB.
    The Effects of Parameter Tuning in Software Thread-Level Speculation in JavaScript Engines (2015). In: ACM Transactions on Architecture and Code Optimization, ISSN 1544-3566, Vol. 11, no 4. Article in journal (Refereed)
    Abstract [en]

    JavaScript is a sequential programming language that has a large potential for parallel execution in Web applications. Thread-level speculation can take advantage of this, but it has a large memory overhead. In this article, we evaluate the effects of adjusting various parameters for thread-level speculation. Our results clearly show that thread-level speculation is a useful technique for taking advantage of multicore architectures for JavaScript in Web applications, that nested speculation is required in thread-level speculation, and that the execution characteristics of Web applications significantly reduce the needed memory, the number of threads, and the depth of our speculation.

  • 9.
    Martinsen, Jan Kasper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Isberg, Anders
    Sony Mobile Communications AB.
    Sundström, Henrik
    Sony Mobile Communications AB.
    Reducing Memory in Software-Based Thread-Level Speculation for JavaScript Virtual Machine Execution of Web Applications (2014). In: 2014 IEEE International Conference on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC, CSS, ICESS), Elsevier, 2014, p. 181-184. Conference paper (Refereed)
    Abstract [en]

    Thread-level speculation has been used to take advantage of multicore processors in virtual execution environments for the sequential JavaScript scripting language. While the results are promising, the memory overhead is high. Here we propose to reduce the memory usage by limiting the checkpoint depth, based on an in-depth study of the memory and execution time effects. We also propose an adaptive heuristic to dynamically adjust the checkpoints. We evaluate this using 15 web applications on an 8-core computer. The results show that the memory overhead of thread-level speculation is reduced by over 90% as compared to storing all checkpoints. Further, the performance is often better than when storing all the checkpoints and at worst 4% slower.

  • 10.
    Nagadevara, Venkatesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Evaluation of Intrusion Detection Systems under Denial of Service Attack in a Virtual Environment (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Intrusion detection systems are widely used for detecting malicious traffic in many industries, and they use a variety of technologies. Each IDS has a different architecture and is deployed for detecting malicious activity. An intrusion detection system has a set of rules which can be defined based on requirements. Therefore, choosing an intrusion detection system appropriate for a given environment is not an easy task.

    Objectives. The goal of this research is to evaluate three of the most used open source intrusion detection systems in terms of performance, and to give details about the different types of attacks that can be detected using an intrusion detection system. The tools selected are Snort, Suricata and OSSEC.

    Methods. The experiment is conducted using TCP, SCAN, ICMP and FTP attacks. Each experiment was run at different traffic rates, under both normal and malicious traffic, with all rules active. All tests are conducted in a virtual environment.

    Results. The performance of each IDS is measured using CPU usage, memory usage, packet loss and the number of alerts generated. These results are calculated for both normal and malicious traffic.

    Conclusions. We conclude that results vary between the IDSs for different traffic rates. Notably, Snort showed better performance in alert identification, and OSSEC in overall IDS performance. The results indicate that fewer alerts are generated when traffic rates are high, which is due to packet loss. Overall, OSSEC provides better performance, while Snort provides better performance and accuracy for alert detection.

  • 11.
    Nilsson, Eric
    et al.
    Intel Corp., SWE.
    Aarno, Daniel
    Intel Corp., SWE.
    Carstensen, Erik
    Intel Corp., SWE.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Accelerating Graphics in the Simics Full-System Simulator (2015). Conference paper (Refereed)
    Abstract [en]

    Virtual platforms provide benefits to developers in terms of a more rapid development cycle, since development may begin before next-generation hardware is available. However, there is a distinct lack of graphics virtualization in industry-grade virtual platforms, leading to performance issues that may reduce the benefits virtual platforms otherwise have over execution on actual hardware. This paper demonstrates graphics acceleration by means of paravirtualizing OpenGL ES in the Wind River Simics full-system simulator. We propose a solution for paravirtualized graphics using magic instructions to share memory between target and host systems, and present an implementation utilizing this method. The study illustrates the benefits and drawbacks of paravirtualized graphics acceleration and presents a performance analysis of strengths and weaknesses compared to software rasterization. Additionally, benchmarks are devised to stress key aspects of the solution, such as communication latency and computationally intensive applications. We assess paravirtualization as a viable method to accelerate graphics in system simulators; it reduces frame times by up to 34 times compared to software rasterization. Furthermore, magic instructions are identified as the primary bottleneck of communication latency in the implementation.

  • 12.
    Posse, Oliver
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tomanović, Ognjen
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Evaluation of Data Integrity Methods in Storage: Oracle Database (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. It is very common today that e-commerce systems store sensitive client information. The database administrators of these types of systems have access to this sensitive client information and are able to manipulate it. Therefore, data integrity is of core importance in these systems and methods to detect fraudulent behavior need to be implemented.

    Objectives. The objective of this thesis is to implement and evaluate the features and performance impact of different methods for achieving data integrity in a database, Oracle to be more exact.

    Methods. Five methods for achieving data integrity were tested. The methods were tested in a controlled environment. Three of them were tested and performance evaluated by a tool emulating a real-life e-commerce scenario. The focus of this thesis is to evaluate the performance impact and the fraud detection ability of the implemented methods.

    Results. This paper evaluates traditional Digital signature, Linked timestamping applied to a Merkle hash tree, and Auditing, both performance impact and feature wise. Two more methods were implemented and tested in a controlled environment: Merkle hash tree and Digital watermarking. We showed results from the empirical analysis, data verification and transaction performance. Our evaluation confirmed our hypothesis that traditional Digital signature is faster than Linked timestamping.

    Conclusions. In this thesis we conclude that when choosing a data integrity method to implement, it is of great importance to know which type of operation is more frequently used. Our experiments show that the Digital signature method performed better than Linked timestamping and Auditing. Our experiments also showed that applying Digital signature, Linked timestamping and Auditing decreased performance by 4%, 12% and 27% respectively, which is a relatively small price to pay for data integrity.
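The Merkle hash tree that the Linked timestamping method is applied to can be illustrated in a few lines. This is a generic sketch of the data structure, not the Oracle-specific implementation evaluated in the thesis; the row contents are invented.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of byte strings."""
    if not leaves:
        return _h(b"")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [_h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

rows = [b"order:1;amount:100", b"order:2;amount:250", b"order:3;amount:75"]
root = merkle_root(rows)

# Any tampering with a row changes the root, which is what makes the
# tree usable for detecting fraudulent modification of stored data.
tampered = [b"order:1;amount:100", b"order:2;amount:999", b"order:3;amount:75"]
print(root != merkle_root(tampered))   # True
```

Because only the root needs to be timestamped or signed, verification cost grows logarithmically with the number of rows, which is the property that makes the combination with linked timestamping attractive.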

  • 13.
    Westphal, Florian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Efficient Document Image Binarization using Heterogeneous Computing and Interactive Machine Learning (2018). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Large collections of historical document images have been collected by companies and government institutions for decades. More recently, these collections have been made available to a larger public via the Internet. However, to make accessing them truly useful, the contained images need to be made readable and searchable. One step in that direction is document image binarization, the separation of text foreground from page background. This separation makes the text shown in the document images easier to process by humans and other image processing algorithms alike. While reasonably well-working binarization algorithms exist, it is not sufficient to just be able to perform the separation of foreground and background well. This separation also has to be achieved in an efficient manner, in terms of execution time, but also in terms of the training data used by machine learning based methods. This is necessary to make binarization not only theoretically possible, but also practically viable.

    In this thesis, we explore different ways to achieve efficient binarization in terms of execution time by improving the implementation and the algorithm of a state-of-the-art binarization method. We find that parameter prediction, as well as mapping the algorithm onto the graphics processing unit (GPU) help to improve its execution performance. Furthermore, we propose a binarization algorithm based on recurrent neural networks and evaluate the choice of its design parameters with respect to their impact on execution time and binarization quality. Here, we identify a trade-off between binarization quality and execution performance based on the algorithm’s footprint size and show that dynamically weighted training loss tends to improve the binarization quality. Lastly, we address the problem of training data efficiency by evaluating the use of interactive machine learning for reducing the required amount of training data for our recurrent neural network based method. We show that user feedback can help to achieve better binarization quality with less training data and that visualized uncertainty helps to guide users to give more relevant feedback.
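To make the binarization task itself concrete, here is classic global (Otsu) thresholding over a toy grayscale scanline: dark ink pixels are separated from light paper pixels by the threshold that maximizes between-class variance. This is only an illustration of the task; the thesis works with far more capable learning-based methods, and the pixel values below are invented.

```python
def otsu_threshold(pixels):
    """Pick the 0-255 threshold that maximizes between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]                 # pixels at or below threshold t
        if w_bg in (0, total):
            continue
        sum_bg += t * hist[t]
        w_fg = total - w_bg
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy "scanline": ink values around 30-60, paper around 200-230.
scanline = [210, 220, 35, 40, 225, 55, 30, 200, 230, 45]
t = otsu_threshold(scanline)
binary = [0 if p <= t else 1 for p in scanline]
print(binary)   # [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
```

Global thresholding breaks down on stained or unevenly lit historical pages, which is precisely why adaptive and learning-based binarization methods such as those studied in the thesis are needed.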
