  • 101.
    Hu, Yan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Exploring Biometrics as an Evaluation Technique for Digital Game Addiction Prevention, 2018. In: Journal of Behavioral Addictions, ISSN 2062-5871, E-ISSN 2063-5303, Vol. 7, p. 15-15. Article in journal (Other academic)
  • 102.
    Hyltegård, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Two Anti-aliasing Methods for Creating a Uniform Look, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Rendering at good quality requires anti-aliasing to reduce jagged edges. A challenge arises when the image to be rendered consists of different elements, such as a GUI and a background: a single anti-aliasing method may not handle all of them, since some methods are not applicable to certain elements within a rendering. Combining two different anti-aliasing methods on different elements can, however, make some parts appear blurrier than the rest, as some methods are prone to introducing unwanted blur in specific situations.

    Objectives. The goal of this thesis is to present a method for applying anti-aliasing to an image containing different elements while rendering it at good quality and keeping a uniform look.

    Method. An experiment in the form of a user study was conducted to find a suitable method for creating a uniform look. 26 respondents participated, rating a number of images by how uniform each was perceived to be.

    Results. The results did not meet the author's prediction that FXAA would help create a uniform look when applied last as a final post-processing effect. The respondents' opinions varied, and all three methods presented in the experiment were perceived to display a uniform look.

    Conclusions. Either anti-aliasing cannot affect images strongly enough for the result to be perceived as non-uniform, at least for the two anti-aliasing methods that were tested, or the material presented in the survey did not give the respondents a clear enough basis for judgement.

  • 103.
    Höglund, Sofie
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Portningsprocessen för en applikation: Att porta en applikation mellan två motorer [The porting process for an application: Porting an application between two engines], 2015. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

    Context. It is becoming increasingly common for consumers to want to use games and applications on all kinds of devices and platforms, and one way to make this possible is porting. Porting is used to reuse code, objects and functions without having to redo everything from scratch. It then becomes efficient to bring a game or an application to a new platform with little time and not much extra planning.

    Objectives. The study looks for a way to perform a port between two engines, and to identify and analyse the difficulties that can arise when moving a game/application between two systems. An example application, a game, was used to carry out the port and the evaluation, and four engines were examined: Away3D, Unity3D, Papervision3D and Flare3D. The port was evaluated through performance measurements of Frames Per Second (FPS), Random Access Memory (RAM), file size, and the time it took to perform the port compared with creating the original application.

    Methods. Three methods were used in the study: literature search, implementation and experiment. These were used to find a process for performing a port in an efficient way. Experiments on the implemented and the original application were carried out to see whether the port was done efficiently and whether the example application improved or deteriorated.

    Results. The two engines used in the implementation were compared with each other to see which of the two is the most efficient and suitable for this example application. The implementation showed that the ported application was more efficient than the original application, and that porting saves time compared with writing an entirely new application.

    Conclusions. Porting is a very good option for creating new versions and updates of games and applications on new platforms. Much can be reused, and little time needs to be spent on designing the game/application and its functionality. What can be tricky, however, is that some functions and objects cannot be used directly but must be rewritten and adapted to the new engine.

  • 104.
    Jamil, Momin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Yang, Xin-She
    Middlesex University, GBR.
    Sequence optimization for integrated radar and communication systems using meta-heuristic multiobjective methods, 2017. In: 2017 IEEE Radar Conference, RadarConf 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 0502-0507. Conference paper (Refereed)
    Abstract [en]

    In real-world engineering problems, several conflicting objective functions often have to be optimized simultaneously. Typically, the objective functions of these problems are too complex to solve using derivative-based optimization methods. Integration of navigation and radar functionality with communication applications is such a problem. Designing sequences for these systems is a difficult task. This task is further complicated by the following factors: (i) conflicting requirements on autocorrelation and crosscorrelation characteristics; (ii) the associated cost functions might be irregular and may have several local minima. Traditional or gradient-based optimization methods may face challenges or be unsuitable for solving such a complex problem. In this paper, we pose the simultaneous optimization of the autocorrelation and crosscorrelation characteristics of Oppermann sequences as a multiobjective problem. We compare the performance of prominent state-of-the-art multiobjective evolutionary meta-heuristic algorithms to design Oppermann sequences for integrated radar and communication systems. © 2017 IEEE.
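    As an aside, the two competing objectives above can be made concrete with a small sketch: random ±1 sequence pairs (standing in for Oppermann sequence parameters, which are not reproduced here) are scored on autocorrelation sidelobe energy and cross-correlation energy, and the Pareto-nondominated candidates are kept. This is illustrative only; the paper uses evolutionary multiobjective algorithms rather than random search.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def autocorr_sidelobe_energy(s):
        # Energy in all non-zero-lag autocorrelation terms (lower is better).
        full = np.correlate(s, s, mode="full")
        return float(np.sum(full**2) - full[len(s) - 1]**2)

    def crosscorr_energy(s1, s2):
        # Total cross-correlation energy between the pair (lower is better).
        return float(np.sum(np.correlate(s1, s2, mode="full")**2))

    # Random +/-1 candidate pairs stand in for parameterized sequence families.
    pairs = [rng.choice([-1.0, 1.0], size=(2, 16)) for _ in range(200)]
    scores = [(autocorr_sidelobe_energy(a) + autocorr_sidelobe_energy(b),
               crosscorr_energy(a, b)) for a, b in pairs]

    # Keep candidates not dominated in both objectives by any other candidate.
    front = [p for i, p in enumerate(scores)
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for j, q in enumerate(scores) if j != i)]
    print(sorted(front)[:5])
    ```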

  • 105.
    Jamil, Momin
    et al.
    Harman/Becker Automotive Systems GmbH.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Synthesizing Cross-Ambiguity Function Using Improved Bat Algorithm, 2017. In: Recent Advances in Swarm Intelligence and Evolutionary Computation / [ed] Xin-She Yang, Springer Publishing Company, 2017, p. 179-202. Chapter in book (Refereed)
  • 106.
    Jerčić, Petar
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    The Effects of Emotions and Their Regulation on Decision-making Performance in Affective Serious Games, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion-regulation can help to mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a new angle, introducing technological methods for practicing emotion-regulation, in which meaningful biofeedback information communicates a player's affective state and informs a series of gameplay choices. These findings motivate the notion that, in the decision context of serious games, one would benefit from awareness and regulation of such emerging emotions.

    This thesis explores the design and evaluation methods for creating serious games where emotion-regulation can be practiced using physiological biofeedback measures. Furthermore, it investigates emotions and the effect of emotion-regulation on decision performance in serious games. Using the psychophysiological methods in the design of such games, emotions and their underlying neural mechanism have been explored.

    The results showed the benefits of practicing emotion-regulation in serious games: decision-making performance increased for the individuals who down-regulated high levels of arousal while experiencing positive valence. Moreover, it also increased for the individuals who received the necessary biofeedback information. The results also suggested that emotion-regulation strategies (i.e., cognitive reappraisal) are highly dependent on the serious game context; the reappraisal strategy was shown to benefit the decision-making tasks investigated in this thesis. The results further suggested that, using psychophysiological methods in emotionally arousing serious games, the interplay between the sympathetic and parasympathetic pathways can be mapped through the underlying emotions that activate them. Following this conjecture, the results identified the optimal arousal level for increased performance of an individual on a decision-making task, by carefully balancing the activation of those two pathways. The investigations also validated these findings in the collaborative serious game context, where robot collaborators were found to elicit diverse affect in their human partners, influencing performance on a decision-making task. The evidence further suggested that arousal is equally or more important than valence for decision-making performance, but that once optimal arousal has been reached, a further increase in performance may be achieved by regulating valence. Finally, the results showed that the serious games designed in this thesis elicited high physiological arousal and positive valence, which makes them suitable as research platforms for investigating how these emotions influence the activation of the sympathetic and parasympathetic pathways and performance on a decision-making task.

    Taking these findings into consideration, the serious games designed in this thesis allowed for the training of the cognitive-reappraisal emotion-regulation strategy on decision-making tasks. This thesis suggests that, using evaluated design and development methods, it is possible to design and develop serious games that provide a helpful environment in which individuals can practice emotion-regulation by raising awareness of emotions, and subsequently improve their decision-making performance.

  • 107.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Hagelbäck, Johan
    Linnaeus University, SWE.
    Lindley, Craig
    CSIRO ICT Centre, AUS.
    Physiological Affect and Performance in a Collaborative Serious Game Between Humans and an Autonomous Robot, 2018. In: Lect. Notes Comput. Sci., Springer Verlag, 2018, Vol. 11112, p. 127-138. Conference paper (Refereed)
    Abstract [en]

    This paper sets out to examine how elicited physiological affect influences the performance of human participants collaborating with robot partners on a shared serious game task, and to investigate the physiological affect underlying such proximate human-robot collaboration. The participants collaboratively played a turn-taking version of the serious game Tower of Hanoi, where physiological affect was investigated in a valence-arousal space. Arousal was inferred from galvanic skin response data, while valence was inferred from electrocardiography data. It was found that the robot collaborators elicited higher physiological affect, with regard to both arousal and valence, than their human counterparts. Furthermore, comparable performance between all collaborators was found on the serious game task. © 2018, IFIP International Federation for Information Processing.
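    To make the shared task concrete, here is a minimal sketch of a turn-taking Tower of Hanoi: the optimal move sequence is generated recursively and the two collaborators simply alternate in executing it. The player labels are placeholders; the study's actual game logic, pacing, and robot interface are not described here.

    ```python
    def hanoi_moves(n, src=0, aux=1, dst=2):
        # Classic recursion: move n-1 discs aside, move the largest, move them back.
        if n == 0:
            return []
        return (hanoi_moves(n - 1, src, dst, aux) + [(src, dst)]
                + hanoi_moves(n - 1, aux, src, dst))

    pegs = [list(range(4, 0, -1)), [], []]   # four discs on the left peg
    players = ["human", "robot"]             # turn-taking collaborators
    for turn, (src, dst) in enumerate(hanoi_moves(4)):
        disc = pegs[src].pop()
        pegs[dst].append(disc)
        print(f"{players[turn % 2]:5s} moves disc {disc}: peg {src} -> peg {dst}")
    print("solved:", pegs)
    ```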

  • 108.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sennersten, Charlotte
    CSIRO Mineral Resources, AUS.
    Lindley, Craig
    Intelligent Sensing and Systems Laboratory, CSIRO ICT Centre, AUS.
    Modeling cognitive load and physiological arousal through pupil diameter and heart rate, 2018. In: Multimedia Tools and Applications, ISSN 1380-7501, E-ISSN 1573-7721. Article in journal (Refereed)
    Abstract [en]

    This study investigates individuals' cognitive load processing abilities while engaged in a decision-making task in serious games, to explore how a substantial cognitive load dominates over the physiological arousal effect on pupil diameter. A serious game was presented to the participants, which displayed on-line biofeedback based on physiological measurements of arousal. In such a dynamic decision-making environment, the pupil diameter was analyzed in relation to the heart rate, to evaluate whether the former could be a useful measure of the cognitive abilities of individuals. As the pupil might reflect both cognitive activity and physiological arousal, the pupillary response will show an arousal effect only when the cognitive demands of the situation are minimal. Evidence shows that in a situation where a substantial level of cognitive activity is required, only that activity will be observable in the pupil diameter, dominating over the physiological arousal effect indicated by the pupillary response. It is suggested that it might be possible to design serious games tailored to the cognitive abilities of an individual player, using the proposed physiological measurements to observe the moment when such dominance occurs. © 2018, The Author(s).
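    The kind of analysis described above can be sketched with synthetic data: pupil diameter and heart rate are generated over two minutes, with a cognitive-load burst added to the pupil signal, and their relation is examined over sliding windows. This is a stand-in for the study's eye-tracker and heart-rate recordings, not its actual analysis pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 10                                   # samples per second (assumed)
    t = np.arange(0, 120, 1 / fs)             # two minutes of recording
    heart_rate = 70 + 5 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 1, t.size)
    # Pupil diameter loosely tracks arousal (heart rate) plus noise...
    pupil = 3.0 + 0.01 * (heart_rate - 70) + rng.normal(0, 0.02, t.size)
    # ...except during a cognitive-load burst, where load dominates arousal.
    pupil[(t > 60) & (t < 80)] += 0.5

    window = 10 * fs                          # 10-second windows
    for start in range(0, t.size - window, window):
        sl = slice(start, start + window)
        r = np.corrcoef(pupil[sl], heart_rate[sl])[0, 1]
        print(f"{t[start]:5.0f}-{t[start] + 10:3.0f} s  mean pupil "
              f"{pupil[sl].mean():.2f} mm  r = {r:+.2f}")
    ```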

  • 109.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sennersten, Charlotte
    CSIRO Mineral Resources, AUS.
    Lindley, Craig
    Intelligent Sensing and Systems Laboratory, AUS.
    The effect of cognitive load on physiological arousal in a decision-making serious game, 2017. In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2017, p. 153-156, article id 8056587. Conference paper (Refereed)
    Abstract [en]

    The aim of this paper is to investigate how a substantial cognitive load overshadows the physiological arousal effect, in an attempt to study the cognitive abilities of participants engaged in decision-making tasks in serious games. Participants were engaged in a dynamic serious game environment displaying online biofeedback based on physiological measurements of arousal. The pupil diameter was analyzed in relation to the heart rate during a challenging decision-making task. It was found that the moment when a substantial cognitive load overshadows the physiological arousal effect is observable in the pupil diameter in relation to the heart rate. © 2017 IEEE.

  • 110.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Practicing Emotion-Regulation Through Biofeedback on the Decision-Making Performance in the Context of Serious Games: A Systematic Review, 2019. In: Entertainment Computing, ISSN 1875-9521, E-ISSN 1875-953X, Vol. 29, p. 75-86. Article in journal (Refereed)
    Abstract [en]

    Evidence shows that emotions critically influence human decision-making. Therefore, emotion-regulation using biofeedback has been extensively investigated, and serious games have emerged as a valuable tool for such investigations in the decision-making context. This review sets out to investigate the scientific evidence regarding the effects of practicing emotion-regulation through biofeedback on decision-making performance in the context of serious games. A systematic search of five electronic databases (Scopus, Web of Science, IEEE, PubMed Central, Science Direct), followed by author searches and snowballing, was conducted from each publication's year of inception to October 2018. The search identified 16 randomized controlled experiment/quasi-experiment studies that quantitatively assessed performance on decision-making tasks in serious games, involving students, military, and brain-injured participants. It was found that the participants who raised awareness of emotions and increased their skill of emotion-regulation were able to successfully regulate their arousal, which resulted in better decision performance, reaction time, and attention scores on the decision-making tasks. It is suggested that serious games provide an effective platform, validated through evaluative and playtesting studies, that supports the acquisition of the emotion-regulation skill through direct (visual) and indirect (gameplay) biofeedback presentation on decision-making tasks.

  • 111.
    Jerčić, Petar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wen, Wei
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics. Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Hagelbäck, Johan
    Linnaeus University, SWE.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    The Effect of Emotions and Social Behavior on Performance in a Collaborative Serious Game Between Humans and Autonomous Robots, 2018. In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 10, no 1, p. 115-129. Article in journal (Refereed)
    Abstract [en]

    The aim of this paper is to investigate performance in a collaborative human-robot interaction on a shared serious game task, and, furthermore, the effect of elicited emotions and perceived social behavior categories on players' performance. The participants collaboratively played a turn-taking version of the Tower of Hanoi serious game, together with human and robot collaborators. The elicited emotions were analyzed with regard to the arousal and valence variables, computed from the Geneva Emotion Wheel questionnaire. Moreover, the perceived social behavior categories were obtained by analyzing and grouping replies to the Interactive Experiences and Trust and Respect questionnaires. The results did not show a statistically significant difference in participants' performance between the human and robot collaborators. Moreover, all of the collaborators elicited similar emotions, but the human collaborator was perceived as more credible and socially present than the robot one. It is suggested that using robot collaborators might be as efficient as using human ones in the context of serious game collaborative tasks.

  • 112.
    Johansson, Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Procedurally Generated Lightning Bolts Using Tessellation and Stream-Output: A GPU Based Approach, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
  • 113.
    Karlsson, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Evaluation of the Complexity of Procedurally Generated Maze Algorithms, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Procedural Content Generation (PCG) in video games can be used as a tool for efficiently producing large varieties of new content with less manpower, making it ideal for smaller teams of developers who want to compete with games made by larger teams. One particular facet of PCG is the generation of mazes. Designers who want their game to feature mazes also need to know how to evaluate maze complexity, in order to know which maze best fits the difficulty curve.

    Objectives. This project aims to investigate the difference in complexity between the maze generation algorithms recursive backtracker (RecBack), Prim's algorithm (Prims), and recursive division (RecDiv), in terms of completion time, when solved using a depth-first-search (DFS) algorithm. In order to understand which parameters affect completion time/complexity, possible connections between completion time and the distribution of branching paths, the distribution of corridors, and the length of the path traversed by DFS are investigated.

    Methods. The main methodology was an implementation in the form of a C# application, which randomly generated 100 mazes for each algorithm at five different maze grid resolutions (16x16, 32x32, 64x64, 128x128, 256x256). Each of the generated mazes was solved using a DFS algorithm, whose traversed nodes, solving path, and completion time were recorded. Additionally, branch distribution and corridor distribution data were gathered for each generated maze.

    Results. The results showed that mazes generated by Prims algorithm had the lowest complexity (shortest completion time), the shortest solving path, the lowest number of traversed nodes, and the lowest proportion of 2-branches, but the highest proportion of all other branch types. Additionally, Prims had the highest proportion of 4-6-length paths, but the lowest proportion of 2- and 3-length paths. Mazes generated by RecDiv had intermediate complexity, an intermediate solving path, intermediate traversed nodes, an intermediate proportion of all branch types, and the highest proportion of 2-length paths, but the lowest proportion of 4-6-length paths. Finally, mazes generated by RecBack showed the opposite statistics to Prims: the highest complexity, the longest solving path, the highest number of traversed nodes, and the highest proportion of 2-branches, but the lowest proportion of all other branch types, and the highest proportion of 3-length paths but the lowest of 2-length paths.

    Conclusions. Prims algorithm had the lowest complexity, RecDiv intermediate complexity, and RecBack the highest complexity. Increased solving path length, more traversed nodes, and increased proportions of 2-branches all seem to correlate with increased complexity. However, the corridor distribution results are too diverse to discern a pattern affecting completion time from the data alone.
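    For reference, the sketch below reproduces the gist of one generator/solver pairing from the study: a recursive backtracker carves a maze as a spanning tree, and a DFS solver reports the traversed-node count and solution-path length used above as complexity indicators. It is a Python stand-in for the thesis' C# application and omits timing.

    ```python
    import random

    def recursive_backtracker(w, h, seed=0):
        # Depth-first carving: walk to random unvisited neighbours, backtrack at dead ends.
        random.seed(seed)
        open_walls = {}                      # cell -> set of connected neighbours
        visited, stack = {(0, 0)}, [(0, 0)]
        while stack:
            x, y = stack[-1]
            nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < w and 0 <= y + dy < h
                    and (x + dx, y + dy) not in visited]
            if nbrs:
                nxt = random.choice(nbrs)
                open_walls.setdefault((x, y), set()).add(nxt)
                open_walls.setdefault(nxt, set()).add((x, y))
                visited.add(nxt)
                stack.append(nxt)
            else:
                stack.pop()
        return open_walls

    def dfs_solve(open_walls, start, goal):
        # Returns (traversed node count, solution path length).
        stack, seen, parent, traversed = [start], {start}, {}, 0
        while stack:
            cell = stack.pop()
            traversed += 1
            if cell == goal:
                path = [cell]
                while path[-1] != start:
                    path.append(parent[path[-1]])
                return traversed, len(path)
            for nxt in open_walls.get(cell, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    parent[nxt] = cell
                    stack.append(nxt)

    maze = recursive_backtracker(16, 16)
    print(dfs_solve(maze, (0, 0), (15, 15)))
    ```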

  • 114.
    Karlsson, Christoffer
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    The performance impact from processing clipped triangles in state-of-the-art games, 2018. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Modern game applications pressure hardware to its limits, which affects how graphics hardware and APIs are designed. In games, rendering geometry plays a vital role, and the implementation of optimization techniques, such as view frustum culling, is generally necessary to meet the quality expected by customers. Failing to optimize a game application can potentially lead to higher system requirements or less quality in terms of visual effects and content. Many optimization techniques, and studies of their performance, exist. However, no research was found that analyzed the utilization of computational resources in the GPU in state-of-the-art games.

    Objectives. The aim of this thesis was to investigate the potential problem of commercial game applications wasting computational resources. Specifically, the focus was set on the triangle data processed in the geometry stage of the graphics pipeline, and the amount of triangles discarded through clipping.

    Methods. The objectives were met by conducting a case study and an empirical data analysis of the amount of triangles and entire draw calls that were discarded through clipping, as well as the vertex data size and the time spent on processing these triangles, in eight games. The data was collected using Triangelplockaren, a tool which collects the triangle data that reaches the rasterizer stage. This data was then analyzed and discussed through relational findings in the results.

    Results. The results consisted of 30 captures of benchmark and gameplay sessions. The average of each captured session was used to make observations and draw conclusions.

    Conclusions. This study showed evidence of noteworthy amounts of data being processed by the GPU that are later discarded through clipping in the graphics pipeline. This was seen in all of the game applications included in this study. While it was impossible to draw conclusions regarding the direct impact on performance, it is safe to say that the proportion of discarded geometry relative to the geometry processed was significant in each of the analyzed cases, and in many cases extreme.
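    The discard case the thesis measures can be illustrated with a small test: a triangle whose three clip-space vertices all lie outside the same frustum plane is rejected entirely by clipping, so any work spent on it earlier in the pipeline is wasted. The sketch assumes OpenGL-style clip coordinates (-w ≤ z ≤ w) and a toy vertex distribution, not data from the studied games.

    ```python
    import numpy as np

    def fully_clipped(tri):
        # tri: 3x4 array of homogeneous clip-space vertices (x, y, z, w).
        x, y, z, w = tri[:, 0], tri[:, 1], tri[:, 2], tri[:, 3]
        outside_tests = [x < -w, x > w, y < -w, y > w, z < -w, z > w]
        # All three vertices outside one plane -> the whole triangle is discarded.
        return any(np.all(outside) for outside in outside_tests)

    rng = np.random.default_rng(2)
    tris = rng.uniform(-2.0, 2.0, size=(10000, 3, 4))
    tris[..., 3] = 1.0                        # w = 1 for this toy distribution
    discarded = sum(fully_clipped(t) for t in tris)
    print(f"{discarded / len(tris):.1%} of triangles fully clipped")
    ```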

  • 115.
    Karlsson, Christoffer
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Schachtschabel, Lukas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Legible Tone Mapping: An evaluation of text processed by tone mapping operators, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Tone mapping operators (TMO) are designed to reduce the dynamic range of high dynamic range images so that they can be presented on standard dynamic range display devices. Many operators focus on creating perceptually similar images.

    Objectives. This thesis aims to investigate how different TMOs reproduce photographed text. The underlying reason is to test the contrast reproduction of each TMO.

    Methods. An experiment has been performed in order to investigate the legibility of photographed and tone mapped text. A user study was conducted, in which 18 respondents partook, rating how much of the text in each photograph they found to be legible.

    Results. Due to low participation, the results of the experiment are mostly inconclusive. However, some tendencies have been observed and analyzed, and they fall in line with previous work within the area.

    Conclusions. The main conclusion that can be drawn from the results is that the TMO presented by Kuang [11] is rated as better than the TMOs by Fattal [7] and Kim and Kautz [10].
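    For context, a global tone mapping operator can be sketched in a few lines; the Reinhard-style operator below is not one of the TMOs evaluated in the thesis, but it shows the dynamic-range compression step that determines how well dark and bright text regions survive on a standard display.

    ```python
    import numpy as np

    def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
        # hdr: linear luminance values, possibly spanning many orders of magnitude.
        log_avg = np.exp(np.mean(np.log(hdr + eps)))  # log-average luminance
        scaled = key * hdr / log_avg                  # expose to a mid-grey "key"
        return scaled / (1.0 + scaled)                # compress [0, inf) into [0, 1)

    hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # synthetic radiances
    print(np.round(reinhard_tonemap(hdr), 3))
    ```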

  • 116.
    Karlsson, Julia
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Using graphical attributes to influence the perception of safety in a 3D environment, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Most games make use of graphics to create an environment that fits the mood they wish to convey. To use a game's graphical attributes, such as colour, shape and texture, to their utmost ability, it helps to know how these are perceived.

    Objective. This paper tries to determine how graphical attributes such as colour, texture, and shape affect the perceived safety of a path inside a 3D environment.

    Method. To reach the objective, an experiment was conducted with 20 participants. The experiment was a two-alternative forced-choice (2AFC) test of 38 pairs of images, where each pair contained two versions of a tunnel entrance scene rendered using different graphical attributes. Each difference was based around either colour (warm and cold colour schemes), shape (round, wide, angular and thin), or texture (rugged, neutral and sterile).

    Results. The results varied compared with what was expected. For instance, wider shapes were seen as safer than thinner shapes, and rounder shapes were likewise perceived as safer than angular shapes. Although a few participants preferred the cold colour scheme, the warmer colour scheme was seen as safer by the majority. And while the sterile texture was expected to be perceived as less safe than the neutral texture but safer than the rugged one, it was actually the one most commonly seen as safe.

    Conclusions. The main conclusion is that colour, texture and shape can be applied to change the perception of safety in a scene. However, when opposing attributes are used in combination, the result may depend on how dominant each attribute is. The dominance of graphical attributes could be an interesting topic for future work.

  • 117.
    Kaspersson, Max
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Facial Realism through Wrinkle Maps: The Perceived Impact of Different Dynamic Wrinkle Implementations, 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Real-time rendering has many challenges to overcome, one of them being character realism. One way to move towards realism is to use wrinkle maps. Although already used in several games, there might be room for improvement: common practice suggests using two wrinkle maps, but if this number can be reduced, both texture usage and workload might be reduced as well.

    Objectives. To determine whether or not it is possible to reduce the number of wrinkle maps from two to one without any significant impact on the perceived realism of a character.

    Methods. After a base character model was created, a setup was made in Maya so that dynamic wrinkles could be displayed on the character using both one and two wrinkle maps. The face was animated and rendered, displaying emotions using both techniques. A two-alternative forced choice experiment was then conducted in which the participants selected which implementation, displaying the same facial expression under the same lighting condition, they perceived as most realistic.

    Results. Results showed that some facial expressions had more of an impact on the perceived realism than others, favoring two wrinkle maps in every case where there was a significant difference. The expressions with the most impact were the ones that required different kinds of wrinkles in the same area of the face, such as the forehead, where one variant of wrinkles runs in a more vertical manner and the other runs horizontally along the forehead.

    Conclusions. Using one wrinkle map cannot fully replicate the effect of using two when it comes to realism. The difference between the implementations is dependent on the expression being displayed.

  • 118.
    Kodide, Alekhya
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Outage probability of multiple relay networks over kappa-mu shadowed fading, 2016. In: 2016 10th International Conference on Signal Processing and Communication Systems, ICSPCS 2016 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2016. Conference paper (Refereed)
    Abstract [en]

    In this paper, we study the outage probability of an opportunistic multiple relay communication system over κ-μ shadowed fading channels for both amplify-and-forward and decode-and-forward relaying protocols. We first provide an exact analysis of the impact of κ-μ shadowed fading on the outage probability of the system. To simplify the obtained analytical expression of the outage probability, we further approximate the κ-μ shadowed fading channel by a Nakagami-m fading model. In order to illustrate the suitability of the approximation, numerical results are provided revealing that the exact outage probability using κ-μ shadowed fading and the results obtained by approximating the fading by the Nakagami-m fading model match well as long as the parameters of the shadowed fading translate to integer values of the fading severity parameter m. Further, Monte-Carlo simulations have been conducted to validate the derived analytical expressions of the outage probability. Finally, the effect of network parameters such as the average transmit power at the source and the relay, the impact of the number of relays, the influence of the transmission distances and the fading parameters on the outage probability of the considered system is also illustrated through numerical examples. © 2016 IEEE.
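    The Monte-Carlo validation step mentioned above can be sketched for the Nakagami-m approximation: the squared Nakagami-m envelope (the channel power gain) is Gamma-distributed, so outage is estimated by counting how often the instantaneous SNR falls below a threshold. This is a single-link toy, not the paper's relay network or its exact κ-μ shadowed analysis.

    ```python
    import numpy as np

    def outage_nakagami(m, avg_snr_db, threshold_db, n=1_000_000, seed=3):
        rng = np.random.default_rng(seed)
        avg_snr = 10 ** (avg_snr_db / 10)
        # Power gain of a unit-mean Nakagami-m channel: Gamma(shape=m, scale=1/m).
        gain = rng.gamma(shape=m, scale=1.0 / m, size=n)
        return np.mean(avg_snr * gain < 10 ** (threshold_db / 10))

    for m in (1, 2, 4):                       # m = 1 reduces to Rayleigh fading
        print(f"m={m}: outage = {outage_nakagami(m, avg_snr_db=10, threshold_db=3):.4f}")
    ```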

  • 119.
    Kåvemark, Nils
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Miloradovic, Stevan
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Particle Systems: A Comparison Between Octree-based and Screen Space Particle Collision, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Real-time applications like video games use particle systems for special effects, to add visual aesthetics and realism. However, when complex behaviour that requires interaction between particles and geometry is desired, problems arise: the developer has to choose between having consistent and precise collisions or higher performance. This thesis examines two particle collision implementations that try to solve these problems.

    Objectives. The objective of this thesis is to create an application that has support for two collision methods and to compare them on performance and accuracy, to decide which one is more suitable for real-time applications.

    Methods. To answer the proposed research questions, the implementation methodology was used; as a result, a 3D application was created that renders using the graphics API OpenGL. A simple GPGPU implementation was made of each method, to allow a fairer comparison. To be able to measure performance, the application logs the frame time every frame. A fixed time step was added to the main loop, allowing the user to stop the application at a certain time and capture images of the scene, which are then used for pixel comparison to measure accuracy.

    Results. Screen space particle collision is almost three times faster than the octree-based method. The two methods behaved differently, both in the real-time simulation and at specific time steps, resulting in a loss of accuracy for the screen space particle collision.

    Conclusions. The tests allowed the authors to show that screen space particle collision is faster and scales better than the octree-based method. However, it lacked precision, as shown by the comparison of the images taken from the test. For particle simulations that require consistent and accurate collision checks, the octree-based method is better, because screen space particle collision can produce false collision checks and has problems with hidden geometry.
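    The hidden-geometry weakness noted above follows directly from how the screen space test works, sketched here: a particle is projected into the depth buffer and a collision is reported when it would end up behind the stored scene depth. Anything occluded from the camera never enters the buffer, so it can never be collided with. The buffer contents below are synthetic.

    ```python
    import numpy as np

    W, H = 64, 64
    depth = np.full((H, W), 1.0)              # far plane everywhere...
    depth[:, 32:] = 0.4                       # ...except a wall on the right half

    def collides(p):
        # p = (x, y, z) in normalized device coordinates, z in [0, 1].
        px = int((p[0] * 0.5 + 0.5) * (W - 1))
        py = int((p[1] * 0.5 + 0.5) * (H - 1))
        if not (0 <= px < W and 0 <= py < H):
            return False                      # off-screen: no depth information
        return p[2] >= depth[py, px]          # at or behind stored depth -> hit

    print(collides((0.5, 0.0, 0.5)))          # True: behind the wall
    print(collides((-0.5, 0.0, 0.5)))         # False: open area
    ```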

  • 120.
    Künkel, Rebecca
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Lomander, Jens
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    How can digitalization and AI be used to increase the usage of medically prescribed physical activity?: A qualitative and quantitative study on the digitalization of physical activity on prescription, 2018. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Medically prescribed physical activity (PaP) is a relatively unknown term; however, it is nothing new. Used as an alternative to traditional treatment, it aims to reduce the use of chemical medication where possible. This method has proven successful for conditions such as high blood pressure, heart attacks, diabetes, depression, etc. Issues with this treatment stem from the fact that, as it is today, it is more inconvenient and harder to follow up than the alternatives. In this study we have examined an alternative way to treat people with PaP by developing an AI coach for a PaP-focused smartphone application, whose goal is to motivate patients to fulfil their PaP schedule. We perform both an experiment with the developed AI and a literature study in order to evaluate the efficiency of an AI coach in comparison with today's method of real-life meetings and little to no contact between patients and prescribers. With this paper we show that AI-made coaching messages can increase the motivation of the users of a PaP-focused application.

  • 121.
    Lambrant, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    A preferred visual appearance for game avatars based on color theory, 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Colors are an important aspect of video games; they play a key role when designing everything from characters to world objects. Therefore, designers and developers need to know which colors are preferred over others.

    Objectives. This paper tries to determine which color setting is the most preferred on a game avatar.

    Methods. To do this, an experiment was conducted with 15 participants. They performed a two-alternative forced choice (2AFC) test with 236 pairs of pictures. All of the 236 pairs were based on color harmonies and displayed on an avatar and a cube. The color harmonies that were used sprang from the three primary colors of the RGB color wheel, which served as a base in this experiment. The collected results went through a chi-square test.

    Results. Some interesting results were generated by the experiment. For instance, the most preferred color harmony for the avatar was the split-complementary with its base in the primary color red. Second to that was the triad color harmony, built on the three primary colors red, green and blue. The color harmonies based on green were, at zero percent, the least preferred of all harmonies. On the other hand, the color harmonies based on blue were generally the most preferred among all of the harmonies.

    Conclusion. The main conclusion, answering the research question, was that the most preferred color harmony for the avatar was split-complementary red. Some conclusions were also made that could help establish a more general preference for all kinds of avatars, if this experiment were repeated on a larger scale.
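    The chi-square step named in the methods can be sketched for a single image pair; the counts below are invented for illustration and are not the thesis' data. Under the null hypothesis of no preference, each alternative is expected to be chosen by half of the participants.

    ```python
    from scipy.stats import chisquare

    chosen_a, chosen_b = 11, 4                 # 15 participants, one 2AFC pair
    stat, p = chisquare([chosen_a, chosen_b])  # expected 7.5 / 7.5 under H0
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")
    ```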

  • 122.
    Lambrant, Andreas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Luro, Francisco Lopez
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Avatar Preference Selection in Game Design Based on Color Theory, 2016. In: Proceedings of the ACM SIGGRAPH Symposium on Applied Perception, ACM Digital Library, 2016, p. 15-18. Conference paper (Refereed)
    Abstract [en]

    Selecting color schemes for game objects is an important task. It can be valuable to game designers to know what colors are preferred. Principles of color theory are important to select appropriate colors. This paper presents a perceptual experiment that evaluates some basic principles of color theory applied to game objects to study if a particular combination is preferred. An experiment was conducted with 15 participants who performed a two-alternative forced choice (2AFC) preference experiment using 236 pairs of images each. The pairs were based on color harmonies derived from the colors red, green, and blue. The color harmonies were evaluated against each other and included analogous, complementary, split-complementary, triad, and warm and cool colors. A high and low saturation condition was also included. The color harmonies were applied to an existing game character (avatar) and a new object (cube) to study any potential differences in the results. The initial results show that some color harmonies, in particular triad and split-complementary, were generally preferred over others meaning that it is important to take into account these aspects in game design. Additional results also show that color harmonies with a base in green were not as popular as red and blue color harmonies.
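    The harmonies evaluated above can be derived by rotating hue around a colour wheel, as in this sketch. Note one assumption: colorsys rotates on the RGB/HSV wheel, whereas artists' colour theory often uses the RYB wheel, so the exact companion colours may differ from the stimuli used in the experiment.

    ```python
    import colorsys

    def harmony(base_rgb, offsets_deg):
        # Rotate the base colour's hue by each offset, keeping saturation/value.
        h, s, v = colorsys.rgb_to_hsv(*base_rgb)
        return [colorsys.hsv_to_rgb((h + off / 360.0) % 1.0, s, v)
                for off in offsets_deg]

    red = (1.0, 0.0, 0.0)
    print("complementary      ", harmony(red, [0, 180]))
    print("split-complementary", harmony(red, [0, 150, 210]))
    print("triad              ", harmony(red, [0, 120, 240]))
    ```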

  • 123.
    Larsson, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Movement Prediction Algorithms for High Latency Games: A Testing Framework for 2D Racing Games, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. In multiplayer games, player information takes time to reach other players because of network latency. This can cause inconsistencies because the actions from the other players are delayed. To increase consistency, movement prediction can be used to display other players closer to their actual position.

    Objectives. The goal was to compare different prediction methods and see how well they do in a 2D racing game.

    Methods. A testing framework was made to make it easy to implement new methods and obtain test results. Experiments were conducted to gather racing data from participants, which was then used to analyze the performance of the methods offline. The distance error between the predicted position and the real position was used to measure performance.

    Results. Out of the implemented algorithms, Input Prediction had the lowest average distance error at all latencies. All methods tested did better than Dead Reckoning above 600 ms. Unlike the other algorithms tested, the stored-data algorithms did not do worse when predicting on a curvy part of the track.

    Conclusions. Different methods suit different games and applications, and movement prediction should be tailored to its environment for best accuracy. Due to Input Prediction's simple nature and its results here, it is a worthy contender as the go-to algorithm for games.
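    The offline evaluation loop described in the methods can be sketched as follows: replay a recorded 2D path, predict each position from latency-old state, and accumulate the distance error. Dead Reckoning is shown because it is fully specified by position and velocity; the path is synthetic, and Input Prediction would instead replay the latency-old inputs through the car model.

    ```python
    import math

    def dead_reckoning(pos, vel, latency):
        # Extrapolate the last known position linearly along the known velocity.
        return (pos[0] + vel[0] * latency, pos[1] + vel[1] * latency)

    dt = 0.02                                 # recorded path sampled at 50 Hz
    path = [(10 * i * dt, 20 * math.sin(0.5 * i * dt)) for i in range(500)]

    latency = 0.2                             # 200 ms, i.e. 10 samples old
    lag = int(latency / dt)
    errors = []
    for i in range(lag + 1, len(path)):
        old, older = path[i - lag], path[i - lag - 1]
        vel = ((old[0] - older[0]) / dt, (old[1] - older[1]) / dt)
        pred = dead_reckoning(old, vel, latency)
        errors.append(math.dist(pred, path[i]))
    print(f"mean error at {latency * 1000:.0f} ms: {sum(errors) / len(errors):.3f}")
    ```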

  • 124.
    Larsson, Jarl
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Performance of Physics-Driven Procedural Animation of Character Locomotion: For Bipedal and Quadrupedal Gait, 2015. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. Animation of character locomotion is an important part of computer animation and games. It is a vital aspect in achieving believable behaviour and articulation for virtual characters. For games there is also often a need to support real-time reactive behaviour in an animation as a response to direct or indirect user interaction, which has given rise to procedural solutions for generating animation of locomotion.

    Objectives. In this thesis the performance aspects of procedurally generating animation of locomotion within real-time constraints are evaluated, for bipeds and quadrupeds, and for simulations of several characters. A general pose-driven feedback algorithm for physics-driven character locomotion is implemented for this purpose.

    Methods. The execution time of the locomotion algorithm is evaluated using an automated experiment process, in which real-time gait simulations of incrementing character population count are instantiated and measured, for the bipedal and quadrupedal gaits. The simulations are measured for both serial and parallel executions of the locomotion algorithm.

    Results. Simulations of up to and including 100 characters are performance measured, providing an overview of the slowdown rate when increasing the character count in the simulations, as well as the performance relations between bipeds and quadrupeds.

    Conclusions. The experiment concludes that the evaluated algorithm on its own exhibits a relatively small performance impact that scales almost linearly for the evaluated population sizes. Due to the relatively low performance impact, it is also concluded that a broader measurement of the locomotion algorithm that includes and compares different physics solvers is of interest for future experiments.

  • 125.
    Larsson, Sebastian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Petri, Ossian
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Content evaluation of StarCraft maps using Neuroevolution, 2016. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Games are becoming larger and the amount of assets required is increasing. Game studios turn toward procedural generation to ease the load of asset creation. After a game is released, the studio wants to extend the longevity of its creation. One way of doing this is to open up the game for community-created add-ons and assets, or to utilize some procedural content generation. Both community-created assets and procedural generation come with a classification problem: filtering out the undesirable content.

    Objectives. This thesis attempts to create a method to evaluate community-generated StarCraft maps with the help of machine learning.

    Methods. Metrics were manually extracted from StarCraft maps, and ratings from community repositories. This data is used to train neural networks using NeuroEvolution of Augmenting Topologies (NEAT). The method is compared with Sequential Minimal Optimization (SMO) and ZeroR.

    Results and Conclusions. The problem turned out to be more difficult than initially thought. The results using NEAT are marginally better than SMO and ZeroR. The suspected reason for this is insufficient input data and/or bad input parameters. Further experimentation could be conducted with deep learning to try to find a suitable solution to this problem.

  • 126.
    Liljeson, Mattias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Mohlin, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Software defect prediction using machine learning on test and source code metrics, 2014. Student thesis
    Abstract [en]

    Context. Software testing is the process of finding faults in software while executing it. The results of the testing are used to find and correct faults. Software defect prediction estimates where faults are likely to occur in source code. The results from the defect prediction can be used to optimize testing and ultimately improve software quality. Machine learning, which concerns computer programs learning from data, is used to build prediction models which then can be used to classify data.

    Objectives. In this study we, in collaboration with Ericsson, investigated whether software metrics from source code files combined with metrics from their respective tests predict faults with better prediction performance compared to using only metrics from the source code files.

    Methods. A literature review was conducted to identify inputs for an experiment. The experiment was applied on one repository from Ericsson to identify the best performing set of metrics.

    Results. The prediction performance results of three metric sets are presented and compared with each other. Wilcoxon's signed-rank tests are performed on four different performance measures for each metric set and each machine learning algorithm to demonstrate significant differences in the results.

    Conclusions. We conclude that metrics from tests can be used to predict faults. However, the combination of source code metrics and test metrics does not outperform using only source code metrics. Moreover, we conclude that models built with metrics from the test metric set, with minimal information of the source code, can in fact predict faults in the source code.
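    The statistical step in the abstract can be sketched as below: Wilcoxon's signed-rank test on paired performance measures (say, per-fold AUC) for two metric sets. The numbers are invented for illustration; they are not the Ericsson data, and a high p-value here merely mirrors the thesis' finding that combining metric sets gives no significant improvement.

    ```python
    from scipy.stats import wilcoxon

    # Hypothetical per-fold AUCs for two metric sets (same folds, paired).
    auc_source_only = [0.71, 0.74, 0.69, 0.73, 0.72, 0.70, 0.75, 0.68, 0.74, 0.71]
    auc_combined    = [0.72, 0.73, 0.70, 0.73, 0.71, 0.71, 0.74, 0.69, 0.73, 0.72]

    stat, p = wilcoxon(auc_source_only, auc_combined)
    print(f"W = {stat}, p = {p:.3f}")          # large p: no significant difference
    ```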

  • 127.
    Lindberg, Magnus
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Imitation-Learning based Agent playing Super Mario, 2014. Student thesis
    Abstract [en]

    Context. Developing an Artificial Intelligence (AI) agent that can predict and act in all possible situations in the dynamic environments that modern video games often consist of is nearly impossible to do in advance, and would cost a lot of money and time to create by hand. Creating a learning AI agent that could learn by itself by studying its environment with the help of Reinforcement Learning (RL) would simplify this task. Another feature that is often required is AI agents with natural behavior, and one attempt to solve that problem is to imitate a human by using Imitation Learning (IL).

    Objectives. The purpose of this investigation is to study whether it is possible to create a learning AI agent that can play and complete some levels in a platform game by combining the two learning techniques RL and IL.

    Methods. To be able to investigate the research question, an implementation was made that combines one RL technique and one IL technique. By letting a set of human players play the game, their behavior is saved and applied to the agents. RL is then used to train and tweak the agents' playing performance. A couple of experiments were executed to evaluate the differences between the trained agents and their respective human teachers.

    Results. The results of these experiments showed promising indications that the agents, during different phases of the experiments, behaved similarly to their human trainers. The agents also performed well when compared to other already existing ones.

    Conclusions. To conclude, there are promising results for creating dynamic agents with natural behavior through the combination of RL and IL, and with additional adjustments the approach could perform even better as a learning AI with a more natural behavior.

  • 128.
    Lindqvist, Sebastian
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Performance Evaluation of Boids on the GPU and CPU, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Agent based models are used to simulate complex systems by using multiple agents that follow a set of rules. One such model is the boid model which is used to simulate movements of synchronized groups of animals. Executing agent based models partially or fully on the GPU has previously shown to increase performance, opening up the possibility for larger simulations. However, few articles have previously compared a full GPU implementation of the boid model with a multi-threaded CPU implementation.

    Objectives. The objectives of this thesis are to find out how parallel execution of the boid model performs when executed on the CPU and GPU respectively, based on the variables frames per second and average boid computation time per frame.

    Methods. A performance benchmark experiment is set up in which three implementations of the boid model are developed and tested.

    Results. The collected data is summarized in both tables and graphs, showing the result of the experiment for frames per second and average boid computation time per frame. Additionally, the average results are summarized in two tables.

    Conclusions. For the largest flock size, the GPGPU implementation performs best, with an average FPS 42 times that of the single-core implementation, while the multi-core implementation achieves an average FPS 6 times better than the single-core implementation. For the smallest flock size, the single-core implementation is the most efficient, while the GPGPU implementation has a 1.6 times slower average update time and the multi-core implementation an average update time 11 times slower than the single-core implementation.
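    All three implementations in the thesis share the same per-boid update; a minimal single-core version of the classic three rules (cohesion, alignment, separation) is sketched below in vectorized form. The weights and neighbourhood radius are arbitrary choices, and the GPGPU and multi-core versions would parallelize this same computation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 256
    pos = rng.uniform(0, 100, (n, 2))
    vel = rng.uniform(-1, 1, (n, 2))

    def step(pos, vel, radius=10.0, dt=0.1):
        diff = pos[:, None, :] - pos[None, :, :]       # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1)
        mask = (dist < radius) & (dist > 0)            # neighbours of each boid
        count = np.maximum(mask.sum(axis=1, keepdims=True), 1)
        centre = (mask[..., None] * pos[None, :, :]).sum(axis=1) / count
        avg_vel = (mask[..., None] * vel[None, :, :]).sum(axis=1) / count
        away = (mask[..., None] * diff).sum(axis=1)    # push away from neighbours
        vel = vel + 0.01 * (centre - pos) + 0.05 * (avg_vel - vel) + 0.02 * away
        return pos + vel * dt, vel

    for _ in range(100):
        pos, vel = step(pos, vel)
    print("mean speed:", np.linalg.norm(vel, axis=1).mean().round(2))
    ```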

  • 129.
    Ljungberg, Christian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Nilsson, Erik
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Reduction of surveillance video playback time using event-based playback: based on object tracking metadata, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 130.
    Lundström, Emrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Vector Displacement Mapping, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

    Context. Displacement mapping is a technique used in 3D games to create geometric detail without requiring triangle meshes of undesirable geometric complexity. The technique also has other uses in 3D games, for example terrain geometry. It adds detail by displacing geometry, in connection with tessellation, along a normal direction or along another specified direction. Vector displacement mapping is a similar technique; the difference is that vector displacement mapping displaces geometry in three dimensions.

    Objectives. The purpose of this work is to explore vector displacement mapping in the context of 3D games and to indicate that the technique can be used in 3D games just like displacement mapping. The work compares vector displacement mapping with displacement mapping to discern differences in execution time between the techniques' central differences. The differences in execution time are contrasted with a discussion of the techniques' graphics memory usage.

    Methods. The comparison is based on an implementation of both techniques together with tessellation. Performance measurements are generated with the implementation as a basis. The implementation uses Direct3D 11.

    Results. The results obtained through the comparison show that the execution times between the techniques' central differences vary only slightly. The graphics memory usage differs between the techniques by a factor of 3 or a factor of 4, with vector displacement mapping using more graphics memory.

    Conclusions. The conclusion drawn from the results is that vector displacement mapping can replace displacement mapping in situations where overhanging geometry is a desired result. Further discussion covers conclusions, limitations and future research related to the work.
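    The core difference discussed above can be shown in a few lines: scalar displacement moves a vertex only along its normal, while vector displacement applies a full 3D offset sampled from the map, which is what permits overhangs, at roughly 3x the map storage. The vertex data below is made up for illustration.

    ```python
    import numpy as np

    vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    normals  = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])

    scalar_map = np.array([0.2, 0.5])          # one height value per vertex
    vector_map = np.array([[0.1, 0.2, 0.0],    # full 3D offset per vertex:
                           [-0.3, 0.5, 0.1]])  # sideways components give overhangs

    print(vertices + normals * scalar_map[:, None])  # displacement mapping
    print(vertices + vector_map)                     # vector displacement mapping
    ```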

  • 131.
    Lövgren, Hans
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Utilizing state-of-art NeuroES and GPGPU to optimize Mario AI2014Student thesis
    Abstract [en]

    Context. Reinforcement Learning (RL) is a time consuming effort that requires a lot of computational power as well. There are mainly two approaches to improving RL efficiency: the theoretical mathematics and algorithmic approach, or the practical implementation approach. In this study, the approaches are combined in an attempt to reduce time consumption.

    Objectives. We investigate whether modern hardware and software, GPGPU, combined with state-of-art Evolution Strategies, CMA-Neuro-ES, can potentially increase the efficiency of solving RL problems.

    Methods. In order to do this, both an implementational and an experimental research method are used. The implementational research mainly involves developing and setting up an experimental framework in which to measure efficiency through benchmarking. In this framework, the GPGPU/ES solution is later developed. Using this framework, experiments are conducted on a conventional sequential solution as well as our own parallel GPGPU solution.

    Results. The results indicate that utilizing GPGPU and state-of-art ES when attempting to solve RL problems can be more efficient in terms of time consumption in comparison to a conventional and sequential CPU approach.

    Conclusions. We conclude that our proposed solution requires additional work and research but that it shows promise already in this initial study. As the study is focused primarily on generating benchmark performance data from the experiments, it lacks data on RL efficiency and thus motivation for using our approach. However, we do conclude that the suggested GPGPU approach does allow less time-consuming RL problem solving.
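    As a rough illustration of the ES family the thesis builds on (not the actual CMA-Neuro-ES), a minimal (1, λ) evolution strategy looks as follows; note that the λ fitness evaluations per generation are independent, which is exactly what a GPGPU implementation can batch:

    ```python
    import numpy as np

    def simple_es(fitness, dim, lam=16, sigma=0.1, generations=100, seed=0):
        """Minimal (1, lambda) evolution strategy, illustrative only:
        sample lambda children around the parent, keep the best."""
        rng = np.random.default_rng(seed)
        parent = rng.normal(size=dim)
        for _ in range(generations):
            children = parent + sigma * rng.normal(size=(lam, dim))
            scores = np.array([fitness(c) for c in children])  # parallelizable
            parent = children[np.argmax(scores)]
        return parent

    # Toy usage: maximize -sum(w^2), i.e. drive the weights toward zero.
    best = simple_es(lambda w: -np.sum(w ** 2), dim=8)
    ```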

  • 132.
    Ma, Liyao
    et al.
    University of Jinan, CHN.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Han, Chunyan
    University of Jinan, CHN.
    Learning Decision Forest from Evidential Data: the Random Training Set Sampling Approach2017In: International Conference on Systems and Informatics, IEEE, 2017, p. 1423-1428Conference paper (Refereed)
    Abstract [en]

    To learn decision trees from uncertain data modelled by mass functions, the random training set sampling approach for learning belief decision forests is proposed. Given an uncertain training set, a collection of simple belief decision trees is trained separately, each on a new set drawn by random sampling from the original one. The final prediction is then made by majority voting. After discussing the selection of parameters for belief decision forests, experiments on Balance scale data are carried out for performance validation. Results show that, under different kinds of uncertainty, the proposed method yields a clear improvement in classification accuracy.
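    Stripped of the evidential machinery (mass functions, belief trees), the ensemble scheme itself, training each tree on a random sample and combining by majority vote, can be sketched with ordinary decision trees; the names and parameters below are illustrative:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def train_forest(X, y, n_trees=25, sample_frac=0.8, seed=0):
        """Train each tree on its own random sample of the training set."""
        rng = np.random.default_rng(seed)
        forest = []
        for _ in range(n_trees):
            idx = rng.choice(len(X), size=int(sample_frac * len(X)), replace=True)
            forest.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        return forest

    def predict_majority(forest, X):
        """Final prediction by majority vote (assumes integer class labels)."""
        votes = np.stack([t.predict(X) for t in forest])   # (n_trees, n_samples)
        return np.array([np.bincount(col).argmax() for col in votes.T])
    ```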

  • 133.
    Ma, Liyao
    et al.
    University of Jinan, CHN.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Han, Chunyan
    University of Jinan, CHN.
    Training Instance Random Sampling Based Evidential Classification Forest Algorithms2018In: 2018 21st International Conference on Information Fusion, FUSION 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 883-888Conference paper (Refereed)
    Abstract [en]

    Modelling and handling epistemic uncertainty with belief function theory, different ways to learn classification forests from evidential training data are explored. In this paper, multiple base classifiers are learned on uncertain training subsets generated by the training instance random sampling approach. For base classifier learning, with the tool of the evidential likelihood function, Gini impurity intervals of uncertain datasets are calculated for attribute splitting, and consonant mass functions of labels are generated for leaf node prediction. The construction of a Gini impurity based belief binary classification tree is proposed and then compared with the C4.5 belief classification tree. For the base classifier combination strategy, both the evidence combination method for consonant mass function outputs and the majority voting method for precise label outputs are discussed. The performances of the different proposed algorithms are compared and analysed with experiments on the UCI Balance scale dataset. © 2018 ISIF
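    The splitting criterion named here builds on the standard Gini impurity: for a node whose class proportions are $p_1, \dots, p_K$,

    $$G = 1 - \sum_{k=1}^{K} p_k^2,$$

    and the paper's evidential extension computes an interval $[\underline{G}, \overline{G}]$ for $G$ when class labels are known only through an evidential likelihood (the interval computation itself is specific to the paper and not reproduced here).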

  • 134.
    Ma, Liyao
    et al.
    University of Jinan, CHN.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Li, Ziyi
    University of Science and Technology of China, CHN.
    Bagging likelihood-based belief decision trees2017In: 20th International Conference on Information Fusion, Fusion 2017: Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 321-326, article id 8009664Conference paper (Refereed)
    Abstract [en]

    To embed ensemble techniques into belief decision trees for performance improvement, the bagging algorithm is explored. Simple belief decision trees based on entropy intervals extracted from the evidential likelihood are constructed as the base classifiers, and a combination of individual trees promises to lead to better classification accuracy. Requiring no extra querying cost, bagging belief decision trees can obtain good classification performance by simple belief tree combination, making it an alternative to a single belief tree with querying. Experiments on UCI datasets verify the effectiveness of the bagging approach. In various uncertain cases, the bagging method outperforms a single belief tree without querying, and is comparable in accuracy to a single tree with querying. © 2017 International Society of Information Fusion (ISIF).

  • 135.
    Markanovic, Michel
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Persson, Simeon
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Trusted memory acquisition using UEFI2014Student thesis
    Abstract [en]

    Context. For computer forensic investigations, the necessity of unmodified data content is of vital essence. The solution presented in this paper is based on a trusted chain of execution that ensures that only authorized software can run. In the study, the proposed application operates in a UEFI environment where it has direct access to physical memory, which can be extracted and stored on a secondary storage medium for further analysis. Objectives. The aim is to perform this task while being sheltered from the influence of a potentially contaminated operating system. Methods. Key components are identified and the foundation is established for a trusted environment in which the memory imaging tool can operate unhindered and produce a reliable result. Results. Three distinct states where trust can be determined have been identified, and a method for entering and traversing them is presented. Conclusions. Tools that do not follow the trusted model might be subjected to subversion; thus they might be considered inadequate when performing memory extraction for forensic purposes.

  • 136.
    Martell, Victor
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sandberg, Aron
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Performance Evaluation of A* Algorithms2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. There has been a lot of progress made in the field of pathfinding. One of the most used algorithms is A*, which over the years has seen many variations. A number of papers have been written about variations of A* and in what way they specifically improve on A*. However, few papers compare A* with several different variations of A* at once.

    Objectives. The objective of this thesis is to find out how Dijkstra's algorithm, IDA*, Theta* and HPA* compare against A* based on the variables computation time, number of opened nodes, path length and number of path nodes.

    Methods. To answer the question posed in Objectives, an experiment was set up in which all the algorithms were implemented and tested over a number of maps with varying attributes.

    Results. The experimental data is compiled in a table showing the result of the tested algorithms for computation time, number of opened nodes, path length and number of path nodes over a number of different maps as well as the average performance over all maps.

    Conclusions. A* is shown to perform well overall, with Dijkstra's algorithm trailing shortly behind in computation time and expanded nodes. Theta* finds the best path, with overall good computation time marred by a few spikes on large, open maps. HPA* performs poorly overall when fully computed, but has by far the best computation time and node expansion when partially pre-computed. IDA* finds the same paths as A* and Dijkstra's algorithm but has a notably worse computation time than the other algorithms and should generally be avoided on octile grid maps.
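    For reference, the baseline that all four variants modify is plain A*; a compact sketch over a 4-connected grid with a Manhattan heuristic (a simplification of the octile grids used in the experiment):

    ```python
    import heapq

    def a_star(grid, start, goal):
        """A* over a 4-connected grid; grid[y][x] is True when walkable.
        Heuristic: Manhattan distance (admissible for 4-connectivity)."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(h(start), 0, start, None)]
        came_from, g_cost = {}, {start: 0}
        while open_set:
            _, g, node, parent = heapq.heappop(open_set)
            if node in came_from:
                continue                      # already expanded more cheaply
            came_from[node] = parent
            if node == goal:                  # reconstruct the path
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            x, y = node
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nxt
                if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                    ng = g + 1
                    if ng < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = ng
                        heapq.heappush(open_set, (ng + h(nxt), ng, nxt, node))
        return None                           # no path exists
    ```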

  • 137.
    Månsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Perception of Colors in Games as it Applies to Good and Evil2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Color can be used to convey a lot of information, particularly when it comes to telling who is good and who is evil. The most common convention when displaying good and evil is blue for good and red for evil.

    Objectives. This study looks at which colors people automatically associate with good and evil respectively.

    Methods. Two methods are used in this paper: a survey in the form of a questionnaire, and statistical hypothesis testing performed on the data collected in the survey. The statistical hypothesis testing was done in the form of a chi-square test, which yields a chi-square value and a p-value.
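    A test of this kind is straightforward to reproduce with scipy; the counts below are invented for illustration and are not the survey's data:

    ```python
    from scipy.stats import chisquare

    # Hypothetical counts of how often each of two colours was rated "good".
    observed = [78, 52]               # e.g. colour A vs. colour B
    stat, p = chisquare(observed)     # null hypothesis: both equally likely
    print(f"chi-square = {stat:.2f}, p = {p:.4f}")
    if p < 0.05:
        print("statistically significant difference between the colours")
    ```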

    Results. The result of the survey was that most participants thought of green, white and blue as good colors, while black and red were thought of as evil. The statistical hypothesis testing revealed a statistically significant difference when comparing two colors in all but two cases: white vs. blue and orange vs. purple.

    Conclusions. The conclusions that can be drawn are that there are statistically significant differences in how colors are perceived as good or evil. The perceived convention is that a good character's color should be green, and that an evil character's color should be either red or black.

  • 138.
    Napieralla, Jonah
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparing Graphical Projection Methods at High Degrees of Field of View2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Graphical projection methods define how virtual 3D environments are depicted on 2D monitors. No projection method provides a flawless reproduction, and the look of the resulting projections varies considerably. Field of view is a parameter of these projection methods; it determines the breadth of vision of the virtual camera used in the projection process. Field of view is expressed in degrees, defining the angle from the left to the right extent of the projection, as seen from the camera.
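    The geometric difference between two of the tested projections can be stated compactly. For a point at angle $\theta$ off the view axis and focal length $f$, the standard image-plane radii are

    $$r_{\text{perspective}} = f\tan\theta, \qquad r_{\text{stereographic}} = 2f\tan\tfrac{\theta}{2}.$$

    As $\theta$ approaches 90°, $r_{\text{perspective}}$ grows without bound, which is why the Perspective projection stretches severely at high fields of view while the Stereographic radius stays bounded. (These are textbook projection formulas; the Panini projection is a more involved cylindrical construction and is omitted here.)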

    Objectives. The aim of this study was to investigate the perceived quality of high degrees of field of view, using different graphical projection methods. The Perspective, the Panini, and the Stereographic projection methods were evaluated at 110, 140, and 170 degrees of field of view.

    Methods. To evaluate the perceived quality of the three projection methods at varying degrees of field of view; a user study was conducted in which 24 participants rated 81 tests each. This study was held in a conference room where the participants sat undisturbed, and could experience the tests under consistent conditions. The tests took three different usage scenarios into account, presenting scenes in which the camera was still, where it moved, and where the participants could control it. Each test was rated separately, one at a time, using every combination of projection method and degree of field of view.

    Results. The perceived quality of each projection method dropped at an exponential rate, relative to the increase in the degree of field of view. The Perspective projection method was always rated the most favorably at 110 degrees of field of view, but unlike the other projections, it would be rated much more poorly at higher degrees. The Panini and the Stereographic projections received favorable ratings at up to 140-170 degrees, but the perceived quality of these projection methods varied significantly, depending on the usage scenario and the virtual environment displayed.

    Conclusions. The study concludes that the Perspective projection method is optimal for use at up to 110 degrees of field of view. At higher degrees of field of view, no consistently optimal choice remains, as the perceived quality of the Panini and the Stereographic projection method vary significantly, depending on the usage scenario. As such, the perceived quality becomes a function of the graphical projection method, the degree of field of view, the usage scenario, and the virtual environment displayed.

  • 139.
    Nasir, Ali Arshad
    et al.
    King Fahd University of Petroleum and Minerals, SAU.
    Tuan, Hoangduong
    Odessa I.I.Mechnikov National University, UKR.
    Duong, Trung Quang
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Fractional Time Exploitation for Serving IoT Users with Guaranteed QoS by 5G Spectrum2018In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 56, no 10, p. 128-133, article id 8493131Article in journal (Refereed)
    Abstract [en]

    It is generally understood that forthcoming 5G communication technologies such as full duplex (FD), massive multiple-input multiple-output (MIMO), non-orthogonal multiple access (NOMA), and simultaneous wireless information and power transfer (SWIPT) aim at the maximal use of communication spectrum to provide a new experience of service for users. FD provides simultaneous signal transmission and reception over the same frequency band. Massive MIMO uses massive numbers of antennas to provide high throughput connectivity for users. NOMA improves network throughput by allowing some users to access information intended for other users. SWIPT provides simultaneous information and power transfer. However, it is still very challenging to utilize these spectrum exploitation technologies to secure the needed quality of service for users in the age of the Internet of Things. In FD, the interference from signal transmission to signal reception, even after analog and digital self-interference cancellation, is considerable, which downgrades both transmission and reception throughput. To maintain the favored channel characteristics, massive MIMO is meant to serve only a few users per time unit. In NOMA, the users' throughput is improved by compromising communication privacy. Information and power transmission aim at conflicting targets that are difficult to achieve simultaneously with SWIPT. This article introduces a new technique, called the fractional-time approach, which ensures guaranteed and better transmission and reception throughput without the need for complex FD, enables serving a massive number of users in a massive MIMO system, provides guaranteed user throughput without the security compromise of NOMA, and delivers high volumes of both information and power transfer within a time unit. © 1979-2012 IEEE.

  • 140.
    Navarro, Diego
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Simplifying Game Mechanics: Gaze as an Implicit Interaction Method2017In: SIGGRAPH Asia 2017 Technical Briefs, SA 2017, ACM Digital Library, 2017, article id 132534Conference paper (Refereed)
  • 141.
    Nell, Henrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Quantifying the noise tolerance of the OCR engine Tesseract using a simulated environment2014Student thesis
    Abstract [en]

    Context. Optical Character Recognition (OCR), having a computer recognize text from an image, is not as intuitive as human recognition. Even small (to human eyes) degradations can thwart the OCR result. The problem is that random unknown degradations are unavoidable in a real-world setting.

    Objectives. The noise tolerance of Tesseract, a state-of-the-art OCR engine, is evaluated in relation to how well it handles salt and pepper noise, a type of image degradation. Noise tolerance is measured as the percentage of aberrant pixels when comparing two images (one with noise and the other without noise).

    Methods. A novel systematic approach for finding the noise tolerance of an OCR engine is presented. A simulated environment is developed, where the test parameters, called test cases (font, font size, text string), can be modified. The simulation program creates a text string image (white background, black text), degrades it iteratively using salt and pepper noise, and lets Tesseract perform OCR on it in each iteration. The iteration process is stopped when the comparison between the image text string and the OCR result of Tesseract mismatches.

    Results. Simulation results are given as the changed pixel percentage (noise tolerance) between the clean text string image and the image from the degradation iteration immediately before Tesseract failed to recognize all characters in it. The results include 14400 test cases: 4 fonts (Arial, Calibri, Courier and Georgia), 100 font sizes (1-100) and 36 different strings (4*100*36=14400), resulting in about 1.8 million OCR attempts performed by Tesseract.

    Conclusions. The noise tolerance depended on the test parameters. Font sizes smaller than 7 were not recognized at all, even without noise applied. The font size interval 13-22 was the peak performance interval, i.e. the font size interval that had the highest noise tolerance, except for the only monospaced font tested, Courier, which had lower noise tolerance in that interval. The noise tolerance trend for the font size interval 22-100 was that the noise tolerance decreased for larger font sizes. The noise tolerance of Tesseract as a whole, given the experiment results, was circa 6.21 %, i.e. if 6.21 % of the pixels in the image have changed, Tesseract can still recognize all text in the image.
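    The degrade-and-measure loop described above is easy to reproduce; a NumPy sketch of the two core pieces, with the actual Tesseract call (e.g. via a wrapper such as pytesseract) left out:

    ```python
    import numpy as np

    def salt_and_pepper(img, amount, rng):
        """Set a random fraction `amount` of pixels to pure black or white."""
        noisy = img.copy()
        mask = rng.random(img.shape) < amount
        noisy[mask] = rng.choice(np.array([0, 255], dtype=img.dtype), size=mask.sum())
        return noisy

    def changed_pixel_pct(clean, noisy):
        """The thesis' noise-tolerance metric: percentage of differing pixels."""
        return 100.0 * np.mean(clean != noisy)

    rng = np.random.default_rng(0)
    clean = np.full((32, 128), 255, dtype=np.uint8)  # stand-in for a text image
    noisy = salt_and_pepper(clean, 0.05, rng)
    print(f"{changed_pixel_pct(clean, noisy):.2f} % of pixels changed")
    ```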

  • 142.
    Nilsson, Lina
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Health.
    Eriksen, Sara
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Borg, Christel
    Linneaus Univ, SWE.
    The influence of social challenges when implementing information systems in a Swedish health-care organisation2016In: Journal of Nursing Management, ISSN 0966-0429, E-ISSN 1365-2834, Vol. 24, no 6, p. 789-797Article in journal (Refereed)
    Abstract [en]

    Aim. To describe and obtain a deeper understanding of social challenges and their influence on the implementation process when implementing information systems in a Swedish health-care organisation. Background. Despite positive effects when implementing information systems in health-care organisations, there are difficulties in the implementation process. Nurses' experiences of being neglected have been dismissed as reasons for setbacks in implementation. Methods. An institutional ethnography design was used. A deductive content analysis was made, influenced by empirically identified social challenges of power, professional identity and encounters. An abstraction was made of the analysis. Results. Nineteen nurses at macro, meso and micro levels were interviewed in focus groups. Organisational levels are lost in different ways: in how to control the reformation, in how to introduce information systems as reformation strategies, and in how to translate new tools and assumptions that do not fit traditional ways of working into the shaping of professional identities. Conclusion and implications for nurse management. Different foci may affect the reformation of health-care organisations and the implementation and knowledge processes. An implementation climate is needed where the system standards fit the values of the users. Nursing management needs to be visionary, engaged and working with risk factors in order to reform the hierarchical health-care organisation.

  • 143.
    Nilsson, Lina
    et al.
    Blekinge Institute of Technology, Faculty of Health Sciences, Department of Health.
    Eriksén, Sara
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Borg, Christel
    Blekinge Institute of Technology, Faculty of Health Sciences, Department of Health.
    Social Challenges When Implementing Information Systems in Everyday Work in a Nursing Context2014In: Computers, Informatics, Nursing, ISSN 1538-2931, E-ISSN 1538-9774, Vol. 32, no 9, p. 442-450Article in journal (Refereed)
    Abstract [en]

    Implementation of information systems in healthcare has become a lengthy process where healthcare staff (eg, nurses) are expected to put information into systems without getting the overall picture of the potential usefulness for their own work. The aim of this study was to explore social challenges when implementing information systems in everyday work in a nursing context. Moreover, this study aimed at putting perceived social challenges in a theoretical framework to address them more constructively when implementing information systems in healthcare. Influenced by institutional ethnography, the findings are based on interviews, observations, and written reflections. Power (changing the existing hierarchy, alienation), professional identity (calling on hold, expert becomes novice, changed routines), and encounter (ignorant introductions, preconceived notions) were the categories (subcategories) presented in the findings. Social Cognitive Theory, Diffusion of Innovations, organizational culture, and dramaturgical analysis are proposed to set up a theoretical framework. If social challenges are not considered and addressed in the implementation process, it will be affected by nurses' solidarity to existing power structures and their own professional identity. Thus, implementation of information systems affects more aspects of the organization than might have been intended. These aspects need to be taken into account in the implementation process.

  • 144.
    Nilsson, Robin Lindh
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Contact Sound Synthesis in Real-time Applications2014Student thesis
    Abstract [en]

    Synthesizing sounds which occur when physically-simulated objects collide in a virtual environment can give more dynamic and realistic sounds compared to pre-recorded sound effects. This real-time computation of sound samples can be computationally intense. In this study we investigate a synthesis algorithm operating in the frequency domain, previously shown to be more efficient than time domain synthesis, and propose a further optimization using multi-threading on the CPU. The multi-threaded synthesis algorithm was designed and implemented as part of a game being developed by Axolot Games. Measurements were done in three stress-testing cases to investigate how multi-threading improved the synthesis performance. Compared to our single-threaded approach, the synthesis speed was improved by 80% when using 8 threads, running on an i7 processor with hyper-threading enabled. We conclude that synthesis of contact sounds is viable for games and similar real-time applications when using the investigated optimization: 140,000 mode shapes were synthesized 30% faster than real-time, which is arguably far more than a user can distinguish.
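    The underlying contact-sound model is typically a bank of damped sinusoids, one per mode shape, and the modes are mutually independent, which is what makes the multi-threaded split natural. A time-domain sketch of that modal model (the study's optimized frequency-domain synthesis is not reproduced here):

    ```python
    import numpy as np

    def modal_synthesis(freqs, damps, amps, duration=1.0, sr=44100):
        """Sum of exponentially damped sinusoids, one per excited mode.
        Modes are independent, so they can be split across threads."""
        t = np.arange(int(duration * sr)) / sr
        out = np.zeros_like(t)
        for f, d, a in zip(freqs, damps, amps):
            out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
        return out / max(len(freqs), 1)   # crude normalization

    # A hypothetical small object with three modes:
    samples = modal_synthesis([440.0, 637.0, 1021.0], [6.0, 9.0, 14.0], [1.0, 0.6, 0.3])
    ```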

  • 145.
    Norlin, Albin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    En jämförelse mellan teckenbaserade och grafiska lösenord: Fokuserad på användarvänlighet och säkerhet2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

    Context. To avoid forgetting their passwords, users choose short passwords. That users choose short passwords is a security risk that needs to be addressed. Research indicates that it is easier for people to remember an image than a text. If graphical passwords in the form of an image were used instead of character-based passwords in the form of text, users could choose harder passwords in the form of complex password patterns while reducing the risk of forgetting them.

    Objectives. The project compares character-based passwords with graphical passwords of the DAS type in the areas of usability and security. For usability, the time it takes to register and to log in with a password of each type is compared, as is the number of successful versus failed logins made with each password type.

    For security, the time it takes to perform a successful brute-force attack against the password strings was compared.

    Methods. The first method used was a literature study of prior work in the area. The literature study was followed by the implementation of two programs, the first applying traditional character-based passwords and the second applying graphical passwords of the DAS type. The implemented programs were used in experiments with participants, which produced values to compare and analyse.

    Results. The experimental results gave an indication that the graphical passwords were better in both usability aspects. A similar indication was given for the security aspect, where the experiment for the graphical passwords also returned better values. The only value in favour of character-based passwords was that the participants of the experiment preferred them over the graphical passwords.

    Conclusions. There is an indication that graphical passwords can be more user-friendly and more secure than traditional character-based passwords for the aspects compared. More and larger experiments are needed before a conclusion can be drawn about which password type is best for usability and security.
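    The brute-force comparison ultimately reduces to keyspace size and guessing rate; a back-of-the-envelope sketch, where the encodings and the attack rate are assumptions for illustration rather than the thesis' measured values:

    ```python
    # Purely illustrative keyspace estimates (not the thesis' measurements).
    guesses_per_second = 1e9            # assumed attacker speed

    text_keyspace = 26 ** 8             # 8 lowercase letters
    # A DAS password is a sequence of pen strokes on a grid; as a crude
    # stand-in, model it as a sequence of 8 ordered cell visits on a 5x5 grid.
    das_keyspace = 25 ** 8

    for name, space in (("character-based", text_keyspace), ("DAS-like", das_keyspace)):
        hours = space / guesses_per_second / 3600
        print(f"{name}: ~{hours:.2f} h to exhaust the keyspace")
    ```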

  • 146.
    Nässén, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Motion controls in a first person game: A comparative user study using various input methods2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Context. Virtual reality is getting closer and closer to being realized as new technologies emerge. This will lead to new ways to experience interactive worlds such as games. In order to keep the highest immersion possible, new ways of interaction are needed. Objectives. In this thesis a control method using motion tracking devices such as the PlayStation Move and the Microsoft Kinect is examined as a method of interaction. This is then compared to the use of a gamepad in a prototype first person puzzle game without a virtual reality device. The aim is to discover how it affects the experience in terms of ease of use, immersion and fun factor, as well as how it affects the efficiency of the player when completing the same in-game tasks. With this information, the hope is to get an indication of how viable motion controls can be in a first person game and as a theoretical interaction method for virtual reality. Methods. To compare the control methods, user studies are conducted with eight participants who play the prototype game using the two control methods and complete test chambers where their effectiveness is recorded. They are then interviewed to learn what they thought about the control methods and the experience. Results. Results consisting of time and points from the test chambers and answers from the interviews are compiled and analyzed. Conclusions. Analyzing the results of the user studies, it is concluded that using motion controls rather than a traditional gamepad decreases the effectiveness of completing in-game tasks. There is an indication that motion controls increase the fun factor and immersion of the experience overall. The motion controls examined need some getting used to and higher precision in order to be as effective as a gamepad, but developing motion controlled games with the limitations in mind can give benefits such as higher immersion and fun factor.

  • 147.
    ODHIAMBO, PASCAL
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Linking Health Workers’ perceptions to design for state of the art mobile health information systems and support tools.2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Typical hospital setups comprise units such as clinics, inpatient wards, outpatient services, casualty services, operating theatres, laboratories, medical schools (for university hospitals) and outreach medical camps. Healthcare professionals are required to support these different units, hence the need to be constantly mobile in undertaking their duties. These duties require that they frequently consult colleagues, receive handover from previous duty staff or share information on previous work undertaken. Successful use and adoption of handheld devices such as PC tablets, PDAs and smartphones integrated with health information systems can minimize this physical mobility.

    Information sharing using M-health solutions in complex and diverse healthcare settings draw focus beyond the spatiality gains to the coordination of the teams, processes and shared artefacts in healthcare. CSCW research abounds with various concepts that can be useful in characterizing mobility and communication amongst collaborating health workers. Design for mobile health solutions, therefore, provides an opportunity to further ground theoretical frameworks from exemplary studies on health information systems.

    The overall objective of the study is to propose design suggestions that target successful information sharing in the deployment and use of M-health solutions. To achieve this objective, the thesis investigates and analyses factors influencing the use and adoption of M-health solutions.

    A qualitative literature review is used in the study to explore significant factors in the acceptance and use of health information systems. A questionnaire developed from these key factors is used to determine the perceptions of healthcare professionals on M-health solutions based on related literature and on a field study. Finally, the findings are discussed using concepts from CSCW literature namely, mobility, common information spaces, temporality and cognitive and coordinative artefacts.

    As a result, a conceptual model integrating constructs from the Technology Acceptance Model (TAM) and the IS Success Model was developed, which can be useful in investigating perceptions in the use of M-health solutions. Design suggestions were proposed for the development of future M-health solutions that aim to achieve successful information sharing amongst healthcare professionals.

  • 148.
    Olofsson, Mikael
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Direct3D 11 vs 12: A Performance Comparison Using Basic Geometry2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Computer rendered imagery, such as computer games, is a field under steady development. To render games, an application programming interface (API) is used to communicate with a graphics processing unit (GPU). Both the interfaces and the processing units are part of this steady development, pushing the limits of graphical rendering.

    Objectives. This thesis investigates whether the Direct3D 12 API provides higher rendering performance compared to its predecessor, Direct3D 11.

    Methods. The method used is an experiment, in which a benchmark was developed that renders basic shaded geometry using both APIs while measuring their performance. The focus was on testing API interaction and comparing Direct3D 11 against Direct3D 12.

    Results. Statistics gained from the benchmark suggest that in this experiment Direct3D 11 offered the best rendering performance in the majority of the cases tested, although Direct3D 12 had specific scenarios where it performed better.

    Conclusions. The benchmark gave results that contradict other studies, which could depend on the implementation, software or hardware used. In the tests, Direct3D 12 came closer to its Direct3D 11 counterpart when more cores were used; a platform with more processing cores available to execute in parallel could reveal whether Direct3D 12 can offer better performance in that experimental setting. In this study Direct3D 12 was implemented so as to imitate Direct3D 11; if the implementation were further aligned with Direct3D 12 recommendations, other results might be observed. Further study could be conducted to give a better evaluation of rendering performance.

  • 149.
    Olsson, Anna
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Evaluating Immersion in Video Games Through Graphical User Interfaces2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. The feeling of immersion in a video game is something most game developers try to achieve. Immersion makes the player feel like they are a part of the game they are playing, and causes them to lose track of time and space. Immersion can be achieved in many ways, one of them being through different kinds of Graphical User Interfaces. An integrated interface might help players to get more immersed.

    Objectives. The objective of this thesis is to find out whether an integrated or "diegetic" interface improves the feeling of immersion in players. To fulfil this objective, a prototype must be made and then tested with participants who can then answer questions about their level of immersion.

    Methods. The prototype was built using Unity, a free game engine. The participants of the study played two different levels with two different interfaces and then answered a number of questions in a questionnaire. The answers were then analysed using Student's t-test.
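    The analysis step can be reproduced with scipy's independent-samples t-test; the scores below are invented for illustration and are not the study's data:

    ```python
    from scipy.stats import ttest_ind

    # Hypothetical immersion scores (e.g. 1-7 Likert) for the two interfaces.
    diegetic     = [5, 6, 4, 5, 7, 5, 6, 4]
    non_diegetic = [5, 5, 4, 6, 6, 5, 5, 4]

    stat, p = ttest_ind(diegetic, non_diegetic)
    print(f"t = {stat:.2f}, p = {p:.3f}")   # p > 0.05 => no significant difference
    ```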

    Results. The results of the study showed no difference in terms of immersion between the two interfaces.

    Conclusions. These results suggest there is no improvement in immersion when using a diegetic interface compared to a non-diegetic one.

  • 150.
    OLUMUYIWA DELE, OLANIYAN
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    AN ANALYSIS OF FACTORS INFLUENCING THE SUCCESS OF SOCIAL NETWORKING WEBSITES2017Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis