Search results 1 - 50 of 206
  • 1.
    Advaita, Advaita
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Gali, Mani Meghala
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden.
    Outage Probability of MIMO Cognitive Cooperative Radio Networks with Multiple AF Relays Using Orthogonal Space-Time Block Codes, 2017. In: 2017 IEEE 13th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), IEEE, 2017, p. 84-89. Conference paper (Refereed)
    Abstract [en]

    In this paper, we analyze the outage probability of multiple-input multiple-output cognitive cooperative radio networks (CCRNs) with multiple opportunistic amplify-and-forward relays. The CCRN applies underlay spectrum access accounting for the interference power constraint of a primary network and utilizes orthogonal space-time block coding to transmit multiple data streams across a number of antennas over several time slots. As such, the system exploits both time and space diversity to improve the transmission reliability over Nakagami-m fading. The CCRN applies opportunistic relaying in which the relay offering the highest signal-to-noise ratio at the receiver is selected to forward the transmit signal. Furthermore, selection combining is adopted at the secondary receiver to process the signal from the direct and relaying transmissions. To evaluate system performance, we derive an expression for the outage probability which is valid for an arbitrary number of antennas at the source, relays, and receiver of the CCRN. Selected numerical results are provided using Mathematica for analysis and Matlab for simulations, to reveal the effect of network parameters on the outage probability of the system.
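    The closed-form expression derived in the paper is not reproduced in this listing, but the system model the abstract describes (opportunistic AF relay selection plus selection combining over Nakagami-m fading) can be sketched as a Monte Carlo estimate. All parameter values and function names below are illustrative, and the common min-of-the-two-hops approximation stands in for the exact AF end-to-end SNR.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nakagami_power_gain(m, omega, size):
        # The squared Nakagami-m envelope is Gamma(m, omega/m) distributed
        return rng.gamma(shape=m, scale=omega / m, size=size)

    def outage_probability(snr_db, gamma_th_db, n_relays=3, m=2.0, trials=200_000):
        snr = 10 ** (snr_db / 10)
        gamma_th = 10 ** (gamma_th_db / 10)
        # Direct source -> destination link
        g_sd = nakagami_power_gain(m, 1.0, trials)
        # Dual-hop AF links; end-to-end SNR approximated by the weaker hop
        g_sr = nakagami_power_gain(m, 1.0, (trials, n_relays))
        g_rd = nakagami_power_gain(m, 1.0, (trials, n_relays))
        relay_snr = snr * np.minimum(g_sr, g_rd)
        best_relay = relay_snr.max(axis=1)             # opportunistic relay selection
        combined = np.maximum(snr * g_sd, best_relay)  # selection combining
        return np.mean(combined < gamma_th)

    print(outage_probability(snr_db=10, gamma_th_db=5))
    ```

    Plotting this estimate against the transmit SNR or the number of relays would show how the outage probability falls as either grows, the kind of trend the paper's numerical results examine.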

  • 2.
    Ahlström, Eric
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Holmqvist, Lucas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparing Traditional Key Frame and Hybrid Animation, 2017. In: SCA '17 Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, ACM Digital Library, 2017, article id a20. Conference paper (Other academic)
    Abstract [en]

    In this research the authors explore a hybrid approach which uses the basic concept of key frame animation together with procedural animation to reduce the number of key frames needed for an animation clip. The two approaches are compared by conducting an experiment where the participating subjects were asked to rate them based on their visual appeal.
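    As a rough illustration of the hybrid idea described above (not the authors' implementation), a procedural layer can be added on top of sparsely keyed values; the key frames, joint angle and wobble parameters here are invented:

    ```python
    import math

    # Two hand-authored key frames: (time in seconds, joint angle in degrees)
    key_frames = [(0.0, 0.0), (1.0, 90.0)]

    def keyframe_pose(t):
        # Linear interpolation between the two key frames, clamped to [t0, t1]
        (t0, a0), (t1, a1) = key_frames
        u = min(max((t - t0) / (t1 - t0), 0.0), 1.0)
        return a0 + (a1 - a0) * u

    def hybrid_pose(t, wobble_deg=5.0, freq_hz=2.0):
        # A procedural layer adds secondary motion, so fewer key frames are needed
        return keyframe_pose(t) + wobble_deg * math.sin(2 * math.pi * freq_hz * t)

    print(hybrid_pose(0.5))
    ```

    In a production rig the procedural term would typically drive secondary joints (cloth, antennae, overshoot) rather than the keyed channel itself.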

  • 3.
    Akama-kisseh, Jerome
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    EXPLORING COMPUTERIZED TROUBLE TICKETING SYSTEM AND ITS BENEFITS IN VODAFONE GHANA, 2016. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Today more than ever, the computerized trouble ticketing system is becoming a booming information technology system that can make the difference in staying in business in a competitive global telecommunication arena.

    This quantitative exploratory survey used conveniently selected research subjects to explore the computerized trouble ticketing system and its inherent benefits at Vodafone Ghana Plc. A cross-section of vital data collected with the aid of structured questionnaires was analyzed using a descriptive statistics model.

    The study revealed that effective and efficient usage of computerized trouble ticketing systems benefits the company in terms of customer satisfaction, competitive advantage and business intelligence in the competitive telecom arena. Nevertheless, the smooth realization of these inherent benefits is constantly challenged by the complexity of managing the volumes of data generated, an intense era of competition, the high cost of trouble ticketing systems, as well as rapid technological obsolescence of computerized trouble ticketing applications in the telecommunication market.

    The study recommends the quick and effective adoption of a differentiation strategy, a cost leadership strategy and customer relationship management: customer-centric measures that can build sustainable long-term customer relationships and create value for the company as well as for the customers.

  • 4.
    Anderdahl, Johan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Darner, Alice
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Particle Systems Using 3D Vector Fields with OpenGL Compute Shaders, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

    Context. Particle systems and particle effects are used to simulate a realistic and appealing atmosphere in many virtual environments. However, they occupy a significant amount of computational resources. The demand for more advanced graphics increases with each generation, and likewise particle systems need to become increasingly more detailed. Objectives. This thesis proposes a texture-based 3D vector field particle system, computed on the Graphics Processing Unit, and compares it to an equation-based particle system. Methods. Several tests were conducted comparing different situations and parameters for the methods. All of the tests measured the computational time needed to execute the different methods. Results. We show that the texture-based method was effective in very specific situations where it was expected to outperform the equation-based one. Otherwise, the equation-based particle system is still the most efficient. Conclusions. Generally the equation-based method is preferred, except in very specific cases. The texture-based method is most efficient for static particle systems and when a huge number of forces is applied to a particle system. Texture-based vector fields are hardly useful otherwise.
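    A minimal sketch of the two particle-update strategies being compared, assuming a nearest-neighbour lookup into a grid "texture" and an invented closed-form swirl force; the thesis' actual implementation runs on OpenGL compute shaders, not NumPy:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # A coarse 3D vector field stored as a grid "texture" (8x8x8 cells, xyz per cell)
    field = rng.standard_normal((8, 8, 8, 3)).astype(np.float32)

    def sample_field(positions, grid=field, world_size=10.0):
        # Nearest-neighbour lookup into the grid, clamped to the volume bounds
        n = grid.shape[0]
        idx = np.clip((positions / world_size * n).astype(int), 0, n - 1)
        return grid[idx[:, 0], idx[:, 1], idx[:, 2]]

    def equation_force(positions):
        # Closed-form alternative: a simple swirl around the y-axis
        x, z = positions[:, 0], positions[:, 2]
        return np.stack([-z, np.zeros_like(x), x], axis=1)

    positions = rng.uniform(0, 10, (1000, 3)).astype(np.float32)
    velocities = np.zeros_like(positions)
    dt = 1.0 / 60.0

    # One texture-based integration step (swap in equation_force for the other method)
    velocities += sample_field(positions) * dt
    positions += velocities * dt
    print(positions.shape)
    ```

    The trade-off the thesis measures falls out of this structure: the texture lookup costs the same regardless of how many forces were baked into the grid, while the closed-form path must evaluate every force each step.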

  • 5.
    Andersen, Dennis
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Screen-Space Subsurface Scattering, A Real-time Implementation Using Direct3D 11.1 Rendering API, 2015. Independent thesis Basic level (degree of Bachelor), 180 HE credits. Student thesis
    Abstract [en]

    Context Subsurface scattering is the effect of light scattering within a material. Many materials on earth possess translucent properties. It is therefore an important factor to consider when trying to render realistic images. Historically the effect has been used in offline rendering with ray tracers, but it is now considered a real-time rendering technique based on approximations of previous models. Early real-time methods approximate the effect in object texture space, which does not scale well with real-time applications such as games. A relatively new approach makes it possible to apply the effect as a post-processing effect using GPGPU capabilities, making this approach compatible with most modern rendering pipelines.

    Objectives The aim of this thesis is to explore the possibilities of a dynamic real-time solution to subsurface scattering with a modern rendering API, utilizing GPGPU programming and modern data management combined with previous techniques.

    Methods The proposed subsurface scattering technique is implemented in a delimited real-time graphics engine using a modern rendering API to evaluate the impact on performance by conducting several experiments with specific properties.

    Results The results obtained hint that by using a flexible solution to represent materials, execution time lands at an acceptable rate, so the technique could be used in real time. The results show that the execution time grows nearly linearly with respect to the number of layers and the strength of the effect. Because the technique is performed in screen space, the performance scales with subsurface scattering screen coverage and screen resolution.

    Conclusions The technique could be used in real time and could trivially be integrated into most existing rendering pipelines. Further research and testing should be done in order to determine how the effect scales in a complex 3D game environment.
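    A toy, CPU-side sketch of the screen-space idea, assuming a separable Gaussian blur applied to a diffuse buffer only where a material mask marks translucent surfaces; the thesis' Direct3D 11.1 implementation is considerably more involved (depth-aware weights, multiple layers):

    ```python
    import numpy as np

    def gaussian_kernel(radius, sigma):
        x = np.arange(-radius, radius + 1)
        k = np.exp(-(x ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    def sss_blur(diffuse, mask, sigma=3.0, radius=6):
        # Separable horizontal + vertical blur, applied only where the mask marks
        # translucent material: a stand-in for a screen-space scattering pass
        k = gaussian_kernel(radius, sigma)
        blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, diffuse)
        blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
        return np.where(mask, blurred, diffuse)

    # A single bright pixel spreads into its neighbourhood, mimicking scattering
    diffuse = np.zeros((32, 32))
    diffuse[16, 16] = 1.0
    mask = np.ones((32, 32), dtype=bool)
    out = sss_blur(diffuse, mask)
    print(out.shape)
    ```

    Because the pass runs over the framebuffer, its cost depends only on how many screen pixels the mask covers, which is exactly the screen-coverage scaling the abstract reports.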

  • 6.
    Andersson, Anders Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Facial Feature Tracking and Head Pose Tracking as Input for Platform Games, 2016. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are defined as points distributed on a face in regards to certain facial features, such as eye corners and face contour. This opens up for using facial feature movements as a handsfree human-computer interaction technique. These alternatives to traditional input devices can give a more interesting gaming experience. They also open up for more intuitive controls and can possibly give greater access to computers and video game consoles for certain disabled users with difficulties using their arms and/or fingers.

    This research explores using facial feature tracking to control a character's movements in a platform game. The aim is to interpret facial feature tracker data and convert facial feature movements to game input controls. The facial feature input is compared with other handsfree input methods, as well as traditional keyboard input. The other handsfree input methods that are explored are head pose estimation and a hybrid between the facial feature and head pose estimation input. Head pose estimation is a method where the application extracts the angles in which the user's head is tilted. The hybrid input method utilises both head pose estimation and facial feature tracking.

    The input methods are evaluated by user performance and subjective ratings from voluntary participants playing a platform game using the input methods. Performance is measured by the time, the number of jumps and the number of turns it takes for a user to complete a platform level. Jumping is an essential part of platform games. To reach the goal, the player has to jump between platforms. An inefficient input method might make this a difficult task. Turning is the action of changing the direction of the player character from facing left to facing right or vice versa. This measurement is intended to pick up difficulties in controlling the character's movements. If the player makes many turns, it is an indication that it is difficult to use the input method to control the character movements efficiently.

    The results suggest that keyboard input is the most effective input method, while it is also the least entertaining of the input methods. There is no significant difference in performance between facial feature input and head pose input. The hybrid input version has the best results overall of the alternative input methods. The hybrid input method got significantly better performance results than the head pose input and facial feature input methods, while it got results that were of no statistically significant difference from the keyboard input method.

    Keywords: Computer Vision, Facial Feature Tracking, Head Pose Tracking, Game Control

  • 7.
    Andersson, Lukas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison of Anti-Aliasing in Motion, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Aliasing is a problem in every 3D game because the resolutions that monitors currently use are not high enough. Aliasing is when you look at an object in a 3D world and see jagged edges where it should be smooth. This can be reduced by a technique called anti-aliasing.

    Objectives. The objective of this study is to compare three different techniques, Fast Approximate Anti-Aliasing (FXAA), Subpixel Morphological Anti-Aliasing (SMAA) and Temporal Anti-Aliasing (TAA), in motion to see which is a good default for games.

    Methods. An experiment was run where 20 people participated and tested a real-time prototype which had a camera moving through a scene multiple times with different anti-aliasing techniques.

    Results. The results showed that TAA consistently performed best in the tests of blurry picture quality, aliasing and flickering. SMAA and FXAA were only comparable to TAA in the blur part of the test, falling behind in all the other parts.

    Conclusions. TAA is a great anti-aliasing technique to use for avoiding aliasing and flickering while in motion. Blur was expected to be a problem, but as the test shows, most people did not feel that blur was a problem for any of the techniques used.
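    The core of TAA that produces both the flicker suppression and the blur discussed above can be illustrated as an exponential moving average over frames; this is a simplified sketch without the reprojection or history clamping a real implementation needs:

    ```python
    import numpy as np

    def taa_resolve(history, current, alpha=0.1):
        # Blend a little of the current frame into the accumulated history:
        # flicker and jaggies average out, at the cost of some blur
        return (1.0 - alpha) * history + alpha * current

    rng = np.random.default_rng(2)
    history = np.zeros(100)
    # 100 pixels that flicker between 0 and 1 every frame (temporal aliasing)
    for frame in range(60):
        current = rng.integers(0, 2, 100).astype(float)
        history = taa_resolve(history, current)
    print(history.mean())
    ```

    After a few dozen frames each pixel settles near the temporal average of its inputs instead of flickering, which is exactly the stability (and the source of blur) the experiment measured.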

  • 8.
    Arvola Bjelkesten, Kim
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Feasibility of Point Grid Room First Structure Generation: A bottom-up approach, 2017. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Procedural generation becomes increasingly important for video games in an age where the scope of the content required demands both a lot of time and work. One of the fronts of this field is structure generation, where algorithms create models for the game developers to use. Objectives. This study aims to explore the feasibility of the bottom-up approach within the field of structure generation for video games. Methods. An algorithm using the bottom-up approach, PGRFSG, was developed, and a user study was used to validate the results. Each participant evaluated five structures, giving each a score based on whether it belonged in a video game. Results. The participants' evaluations show that among the structures generated were some that definitely belonged in a video game world. Two of the five structures received a high score, though for one structure that was deemed not to be the case. Conclusions. Based on the results presented, a conclusion can be made that the PGRFSG algorithm creates structures that belong in a video game world and that the bottom-up approach is a suitable one for structure generation.

  • 9. Astor, Philipp
    et al.
    Adam, Marc
    Jerčić, Petar
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Schaaff, Kristina
    Weinhardt, Christof
    Integrating biosignals into information systems: A NeuroIS tool for improving emotion regulation, 2013. In: Journal of Management Information Systems, ISSN 0742-1222, E-ISSN 1557-928X, Vol. 30, no 3, p. 247-277. Article in journal (Refereed)
    Abstract [en]

    Traders and investors are aware that emotional processes can have material consequences on their financial decision performance. However, typical learning approaches for debiasing fail to overcome emotionally driven financial dispositions, mostly because of subjects' limited capacity for self-monitoring. Our research aims at improving decision makers' performance by (1) boosting their awareness of their emotional state and (2) improving their skills for effective emotion regulation. To that end, we designed and implemented a serious game-based NeuroIS tool that continuously displays the player's individual emotional state, via biofeedback, and adapts the difficulty of the decision environment to this emotional state. The design artifact was then evaluated in two laboratory experiments. Taken together, our study demonstrates how information systems design science research can contribute to improving financial decision making by integrating physiological data into information technology artifacts. Moreover, we provide specific design guidelines for how biofeedback can be integrated into information systems.

  • 10.
    Axelsson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison of user accuracy and speed when performing 3D game target practice using a computer monitor and virtual reality headset, 2017. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Consumer grade Virtual Reality (VR)-headsets are on the rise, and with them comes an increasing number of digital games which support VR. How players perceive the gameplay and how well they perform at the games tasks can be key factors to designing new games.

    This master’s thesis aims to evaluate if a user can perform a game task, specifically a target practice, in less time and/or more accurately when using a VR-headset as opposed to a computer screen and mouse. To gather statistics and measure the differences, an experiment was conducted using a test application developed alongside this report. The experiment recorded accuracy scores and time taken in tests performed by 35 test participants using both a VR-headset and computer screen.

    The resulting data sets are presented in the results chapter of this report. A Kolmogorov-Smirnov Normality Test and Student’s paired samples t-test was performed on the data to establish its statistical significance. After analysis, the results are reviewed, discussed and conclusions are made.

    This study concludes that when performing the experiment, the use of a VR-headset decreased the user's accuracy and, to a lesser extent, also increased the time the user took to hit all targets. An argument was made that most users' longer previous experience with computer screen and mouse gave this method an unfair advantage. With equally long training, VR use might score similar results.
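    The statistical procedure named above can be sketched with synthetic numbers (the real measurements are in the thesis); SciPy's `kstest` and `ttest_rel` are assumed available:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Synthetic completion times in seconds for 35 participants, both conditions
    monitor = rng.normal(30.0, 4.0, 35)
    vr = monitor + rng.normal(2.0, 3.0, 35)   # paired: same participants, VR slower

    diff = vr - monitor
    # Normality check on the paired differences (Kolmogorov-Smirnov, as in the thesis)
    ks_stat, ks_p = stats.kstest((diff - diff.mean()) / diff.std(ddof=1), "norm")
    # Student's paired-samples t-test on the two conditions
    t_stat, t_p = stats.ttest_rel(vr, monitor)
    print(ks_p, t_p)
    ```

    A large KS p-value supports treating the differences as normal, which licenses the paired t-test; a small t-test p-value would indicate a genuine speed difference between the two input methods.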

  • 11.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Organic View of Prototyping in Information System Development, 2014. In: 2014 IEEE 17th International Conference on Computational Science and Engineering (CSE) / [ed] Liu, X; ElBaz, D; Hsu, CH; Kang, K; Chen, W, ChengDu: IEEE, 2014, article number 07023844, p. 1814-1818. Conference paper (Refereed)
    Abstract [en]

    This paper presents an organic view of prototyping for managing dynamic factors involved in the evolutionary design of information systems (IS). Those dynamic factors can be caused by, for example, continuing suggestions from users, changes in the technologies, and stepwise progress related to the learning of users and designers. Expanding evolutionary prototyping to ‘start small and grow’, the organic view of prototyping proposes two prerequisites for doing so, namely 1) a sustainable and adaptive ‘embryo’, an organic structure of the future system, and 2) an embedded learning and feedback management through which the actors of the system (users, designers, decision makers, administrators) can communicate with each other. An example of eHealth system design demonstrates how the prerequisites can be implemented.

  • 12.
    Bengtsson, Daniel
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Melin, Johan
    Constrained procedural floor plan generation for game environments, 2016. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background: Procedural content generation (PCG) has become an important subject as the demand for content in modern games has increased. Paradox Arctic is a game development studio that aims to be at the forefront of technological solutions and is therefore interested in furthering their knowledge in PCG. To this end, Paradox Arctic has expressed their interest in a collaborative effort to further explore the subject of procedural floor plan generation.

    Objective: The main goal of this work is to test whether a solution based on growth, subdivision or a combination thereof, can be used to procedurally generate believable and varied floor plans for game environments, while also conforming to predefined constraints.

    Method: A solution capable of generating floor plans with the use of growth, subdivision and a combination of both has been implemented and a survey testing the believability and variation of the generated layouts has been conducted.

    Results & Conclusions: While the results of the subdivision and combined solutions show that more work is necessary before the generated content can be considered believable, the growth based solution presents promising results in terms of believability when generating smaller to medium sized layouts. This believability does however come at the cost of variation.
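    A minimal sketch of the subdivision strategy mentioned above, assuming a binary-split scheme with a minimum room size; the thesis' constrained generator (and its growth-based counterpart) is more elaborate:

    ```python
    import random

    random.seed(4)

    def subdivide(room, min_size=3):
        # room = (x, y, w, h); recursively split until rooms are small enough
        x, y, w, h = room
        if w <= 2 * min_size and h <= 2 * min_size:
            return [room]
        if w >= h:
            # Split along the longer axis to avoid overly elongated rooms
            cut = random.randint(min_size, w - min_size)
            return (subdivide((x, y, cut, h), min_size)
                    + subdivide((x + cut, y, w - cut, h), min_size))
        cut = random.randint(min_size, h - min_size)
        return (subdivide((x, y, w, cut), min_size)
                + subdivide((x, y + cut, w, h - cut), min_size))

    rooms = subdivide((0, 0, 20, 12))
    print(len(rooms))
    ```

    Every generated room tiles the original footprint exactly, which is the property that makes subdivision easy to constrain but, as the survey found, prone to regular-looking layouts with little variation.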

  • 13.
    Berg, Wilhelm
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Terränggenerering och dess påverkan på spelupplevelse, 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [sv]

    Context. In game design, terrain is often an important aspect, especially in contexts where players actively interact with the terrain. Its shape and design can affect how the player perceives the game, both positively and negatively.

    Aim. This thesis describes work on terrain generation and on whether terrain can affect the player in an interactive medium, in order to gain a better understanding of the subject. Does terrain affect how players perceive in-game situations and the way they play? Can it influence whether they perceive the experience as negative or positive? What affects a player the most, and how?

    Methods. The thesis presents conclusions and methodology together with data collected from a practical test. The design of the game used for testing is also described. In the test, participants play a game that uses an algorithm to generate terrain. After the test, the players answer questions about it.

    Results. The testing yields answers that are used to reach certain conclusions.

    Conclusions. From the test results we conclude that terrain can indeed affect the player experience, and that it has the greatest effect when it actively influences how the player interacts with the game.

  • 14.
    Bergsten, John
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Öhman, Konrad
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Player Analysis in Computer Games Using Artificial Neural Networks, 2017. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Star Vault AB is a video game development company that has developed the video game Mortal Online. The company has stated that they believe that players new to the game repeatedly find themselves being lost in the game. The objective of this study is to evaluate whether or not an Artificial Neural Network can be used to evaluate when a player is lost in the game Mortal Online. This is done using the free open source library Fast Artificial Neural Network Library. People are invited to a data collection event where they play a tweaked version of the game to facilitate data collection. Players specify whether they are lost or not and the data collected is flagged accordingly. The collected data is then prepared with different parameters to be used when training multiple Artificial Neural Networks. When creating an Artificial Neural Network there exist several parameters which have an impact on its performance. Performance is defined as the balance of high prediction accuracy against low false positive rate. These parameters vary depending on the purpose of the Artificial Neural Network. A quantitative approach is followed where these parameters are varied to investigate which values result in the Artificial Neural Network that best identifies when a player is lost. The parameters are grouped into stages, where all combinations of parameter values within each stage are evaluated to reduce the number of Artificial Neural Networks which have to be trained, with the best-performing parameters of each stage being used in subsequent stages. The result is a set of parameter values that is as close to ideal as possible. These parameter values are then altered one at a time to verify that they are ideal. The results show that a set of parameters exists which can optimize the Artificial Neural Network model to identify when a player is lost, however not with the high performance that was hoped for. It is theorized that the ambiguity of the word "lost" and the complexity of the game are critical to the low performance.
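    The stage-wise parameter search described above can be illustrated with a stand-in model: plain logistic regression on synthetic data rather than a FANN network, with invented feature and parameter values. Each stage fixes its best value before the next is tuned:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Synthetic stand-in for the collected gameplay features, labelled lost / not lost
    X = rng.standard_normal((400, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    def train_eval(lr, epochs):
        # Logistic regression trained by gradient descent; returns accuracy
        w = np.zeros(4)
        for _ in range(epochs):
            p = 1 / (1 + np.exp(-(X @ w)))
            w -= lr * X.T @ (p - y) / len(y)
        return ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()

    # Stage-wise search: fix the best value of each stage before tuning the next,
    # mirroring how the thesis groups its ANN parameters into stages
    best_lr = max([0.01, 0.1, 1.0], key=lambda lr: train_eval(lr, 50))
    best_epochs = max([20, 50, 100], key=lambda e: train_eval(best_lr, e))
    print(best_lr, best_epochs)
    ```

    The stage-wise scheme trades optimality for cost: it evaluates far fewer combinations than a full grid, at the risk of missing interactions between parameters in different stages.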

  • 15.
    Blidkvist, Jesper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Westgren, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Re-texturing and compositing new material on pre-rendered media: Using DirectX and UV sampling, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context: This thesis investigates a new method for re-texturing and compositing new or additional material on specific pre-rendered images using various blend equations. This is done by sampling a number of render passes created alongside the original source material, most notably a UV pass for accurate texture positioning and different lighting passes to enhance the control over the final result. This will allow comparatively simple and cheap compositing without the overhead that other commercially available tools might add.

    Objectives: Render the necessary UV coordinates and lighting calculations from a 3D application to two separate textures. Sample said textures in DirectX and use the information to accurately light and position the additional dynamic material for blending with the pre-rendered media.

    Method: The thesis uses an implementation method in which quantitative data is gathered by comparing the resulting composited images, using two common image comparison methods, the Structured Similarity Index (SSIM) and Peak Signal to Noise Ratio (PSNR), against a Gold Standard render.

    Results: The results of this implementation indicate that both the perceived and measured similarity is close enough to prove the validity of this method.

    Conclusions: This thesis shows the possibility and practical use of DirectX as a tool capable of the most fundamental compositing operations. In its current state, the implementation is limited in terms of flexibility and functionality when compared to other proprietary compositing software packages, and some visual artefacts and quality issues are present. There are however no indications that these issues could not be solved with additional work.
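    A CPU-side sketch of the UV-pass trick described above, assuming nearest-neighbour sampling and a simple multiply blend with a single lighting pass; the thesis performs the equivalent lookups in DirectX shaders:

    ```python
    import numpy as np

    def retexture(uv_pass, light_pass, new_texture):
        # uv_pass: HxWx2 floats in [0,1]; look the new texture up through the UVs
        h, w = new_texture.shape[:2]
        u = np.clip((uv_pass[..., 0] * (w - 1)).astype(int), 0, w - 1)
        v = np.clip((uv_pass[..., 1] * (h - 1)).astype(int), 0, h - 1)
        sampled = new_texture[v, u]
        # Multiply blend with the pre-rendered lighting pass
        return sampled * light_pass[..., None]

    # Invented inputs: an identity-like UV pass, flat lighting, a checker texture
    uv = np.dstack(np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64)))
    light = np.full((64, 64), 0.8)
    checker = np.indices((8, 8)).sum(0) % 2
    texture = np.repeat(checker[..., None], 3, axis=2).astype(float)
    out = retexture(uv, light, texture)
    print(out.shape)
    ```

    Because the UV pass was rendered from the original scene geometry, any texture swapped in this way lands with correct perspective and occlusion, which is what makes the approach cheap compared to full re-rendering.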

  • 16.
    Bloom, Filip
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Competitive Coevolution for micromanagement in StarCraft: Brood War, 2017. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Interest in and research on neural networks and their capacity for finding solutions to nonlinear problems has increased greatly in recent years.

    Objectives. This thesis attempts to compare competitive coevolution to traditional neuroevolution in the game StarCraft: Brood War.

    Methods. Implementing and evolving AI-controlled players for the game StarCraft and evaluating their performance.

    Results. Fitness values and win rates against the default StarCraft AI and between the networks were gathered.

    Conclusions. The neural networks failed to improve under the given circumstances. The best networks performed on par with the default StarCraft AI.

  • 17.
    Boer, de, Wiebe Douwe
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Participatory Design Ideals, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The Swedish academic discipline Informatics has roots in the Scandinavian design approach Participatory Design (PD). PD's point of departure is to design ICT, and the new environment it becomes part of, together with the future users, driven by the ideal of bringing more democracy to the workplace. PD builds on the Action Research (AR) and industrial democracy tradition that started in Scandinavia in the 1960s, in which the powerful Scandinavian trade unions had a central role. The aim of the unions was to prepare the workers and to gain influence over the introduction of new technologies that changed, or were expected to change, the work and work environment of the workers. In the 1970s, when more computers emerged in the workplace, this led to the development of PD. An important difference from AR is that the aim of PD is to actually design new ICT and the new environment it becomes part of.

    During the UTOPIA project in the first half of the 1980s, much referred to in PD literature and led by project leader and PD pioneer Pelle Ehn, it was discovered that bringing the different expertise of designers/researchers and workers together in design-by-doing processes also results in more appropriate ICT.

     

    With ICT now ubiquitous, influencing most aspects of our lives inside and outside the workplace, and with trade unions playing a different role in (Scandinavian) society, the question is how PD should develop further. PD pioneer Morten Kyng (also a UTOPIA designer/researcher) proposes a framework for next PD practices in a discussion paper. The first element he mentions in the framework is ideals: the designer/researcher should, as a first step, consider what ideals to pursue as a person and for the project, and then consider how to discuss the goals of the project partners. Kyng offers no further suggestions on how to approach this.

    This design and research thesis aims to design and propose PD processes for arriving, at the beginning of a PD/design project, at shared ideals to pursue, based on a better understanding of the political and philosophical background of PD, including design as a discipline in its own right.

     

    For a better understanding of the political and philosophical roots of PD, and of design as a discipline in its own right, Pelle Ehn’s early (PD research) work, his (PD) influences, and supporting theories are explored, next to Kyng’s discussion paper (framework) and the reactions of his debate partners to it. It is found that politics, and what ideals to pursue in PD, are sensitive and (still) important subjects in PD, and arguably for design in general. In relation to this, related disciplines such as Computer Ethics and Value Sensitive Design, as well as more recently formulated ideals for PD and its relation to ethics, are also explored. As a result, a proposal for a redesigned framework for next PD practices is designed as a design artefact, in which the element of ideals is most elaborated.

    Next, the understanding of design as a discipline in its own right is deepened by exploring a selection of models and quotes from related (design) literature, which are reflected on in relation to PD and used as reminders in a design process, leading to a proposed model that tries to reframe the relation between design, practice and research.

     

    Finally, methods, processes and techniques from PD, design, AR and related literature that can contribute to design proposals for processes enabling the design of ideals with a PD approach are explored. These are used as reminders in design-by-doing processes, in which suggestions for techniques and processes to design ideals together with participants are tried out in real-life situations, reflected on, and iteratively developed further. To avoid framing as much as possible, (semi-)anonymity and silence appear to be important ingredients in these processes for stimulating the generation of idea(l)s as free as possible from bias and dominance patterns. An additional design artefact developed in this context is a template for an annotated portfolio used to describe and reflect on the different processes.

  • 18.
    Brodén, Alexander
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Pihl Bohlin, Gustav
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Towards Real-Time NavMesh Generation Using GPU Accelerated Scene Voxelization2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Producing NavMeshes for pathfinding in computer games is a time-consuming process. Recast and Detour is a pair of state-of-the-art libraries that allows automation of NavMesh generation. It builds on a technique called Scene Voxelization, where triangle geometry is converted to voxels in heightfields. The algorithm is expensive in terms of execution time. A fast voxelization algorithm could be useful in real-time applications where geometry is dynamic. In recent years, voxelization implementations on the GPU have been shown to outperform CPU implementations in certain configurations.

    Objectives. The objective of this thesis is to find a GPU-based alternative to Recast’s voxelization algorithm, and determine when the GPU-based solution is faster than the reference.

    Methods. This thesis proposes a GPU-based alternative to Recast’s voxelization algorithm, designed to be an interchangeable step in Recast’s pipeline, in a real-time application where geometry is dynamic. Experiments were conducted to show how accurately the algorithm generates heightfields, how fast the execution time is in certain configurations, and how the algorithm scales with different sets of input data.

    Results. The proposed algorithm, when run on an AMD Radeon RX 480 GPU, was shown to be both accurate and fast in certain configurations. At low voxelfield resolutions, it outperformed the reference algorithm on typical Recast reference models. The biggest performance gain was shown when the input contained large numbers of small triangles. The algorithm performs poorly when the input data has triangles that are big in relation to the size of the voxels, and an optional optimization was presented to address this issue. Another optimization was presented that further increases performance gain when many instances of the same mesh are voxelized.

    Conclusions. The objectives of the thesis were met. A fast, GPU-based algorithm for voxelization in Recast was presented, and conclusions about when it can outperform the reference algorithm were drawn. Possibilities for even greater performance gains were identified for future research.
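    The heightfield voxelization the thesis builds on can be illustrated with a deliberately naive CPU sketch. The function name and the single-top-height simplification below are ours: Recast's actual voxelizer clips triangles against cells conservatively and stores full min/max span lists per column, not one height.

    ```python
    import numpy as np

    def voxelize_heightfield(tris, res, cell):
        """Point-sample each triangle at XZ cell centers and keep the highest
        interpolated Y per column. A real voxelizer (e.g. Recast's) clips
        triangles to cells and records spans instead of a single height."""
        hf = np.full((res, res), -np.inf)
        for a, b, c in tris:
            v0 = (b[0] - a[0], b[2] - a[2])   # edge AB projected to XZ
            v1 = (c[0] - a[0], c[2] - a[2])   # edge AC projected to XZ
            den = v0[0] * v1[1] - v1[0] * v0[1]
            if abs(den) < 1e-12:              # degenerate in the XZ plane
                continue
            for i in range(res):
                for j in range(res):
                    p = ((i + 0.5) * cell - a[0], (j + 0.5) * cell - a[2])
                    # 2D barycentric coordinates of the cell center
                    u = (p[0] * v1[1] - v1[0] * p[1]) / den
                    v = (v0[0] * p[1] - p[0] * v0[1]) / den
                    if u >= 0 and v >= 0 and u + v <= 1:
                        y = a[1] + u * (b[1] - a[1]) + v * (c[1] - a[1])
                        hf[i, j] = max(hf[i, j], y)
        return hf

    # One flat triangle at height 5 covering a 4x4 grid of 1x1 cells
    hf = voxelize_heightfield([((0, 5, 0), (10, 5, 0), (0, 5, 10))], 4, 1.0)
    ```

    The per-cell inner loop is exactly the kind of embarrassingly parallel work the thesis moves to the GPU.
    
    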

    Download full text (pdf)
    fulltext
  • 19. Bulling, Andreas
    et al.
    Dachselt, Raimund
    Duchowski, Andrew T.
    Jacob, Robert J.
    Stellmach, Sophie
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Gaze Interaction in the Post-WIMP World2012Conference paper (Refereed)
    Abstract [en]

    With continuous progression away from desktop to post-WIMP applications, including multi-touch, gestural, or tangible interaction, there is high potential for eye gaze as a more natural human-computer interface in numerous contexts. Examples include attention-aware adaptations or the combination of gaze and hand gestures for interaction with distant displays. This SIG meeting provides a discussion venue for researchers and practitioners interested in gaze interaction in the post-WIMP era. We wish to draw attention to this emerging field and eventually formulate fundamental research questions. We will discuss the potential of gaze interaction for diverse application areas, interaction tasks, and multimodal user interface combinations. Our aims are to promote this research field, foster a larger research community, and establish the basis for a workshop at CHI 2013.

    Download full text (pdf)
    fulltext
  • 20.
    Che, X.
    et al.
    Sichuan University, China.
    Niu, Y.
    Sichuan University, China.
    Shui, B.
    Sichuan University, China.
    Fu, J.
    Sichuan University, China.
    Fei, G.
    Communication University of China, China.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zhang, Y.
    Sichuan University, China.
    A novel simulation framework based on information asymmetry to evaluate evacuation plan2015In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 31, no 6-8, p. 853-861Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a novel framework to simulate the crowd behavior under emergency situations in a confined space with multiple exits. In our work, we take the information asymmetry into consideration, which is used to model the different behaviors presented by pedestrians because of their different knowledge about the environment. We categorize the factors influencing the preferred velocity into two groups, the intrinsic and extrinsic factors, which are unified into a single space called influence space. At the same time, a finite state machine is employed to control the individual behavior. Different strategies are used to compute the preferred velocity in different states, so that our framework can produce the phenomena of decision change. Our experimental results prove that our framework can be employed to analyze the factors influencing the escape time, such as the number and location of exits, the density distribution of the crowd and so on. Thus it can be used to design and evaluate the evacuation plans. © 2015 Springer-Verlag Berlin Heidelberg
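    The interplay of intrinsic/extrinsic factors, finite state machine and decision change described above can be sketched as follows. The state names ("informed"/"herding"), transition rule and speed value are our illustrative reading of the information-asymmetry idea, not the authors' exact model.

    ```python
    import math

    class Pedestrian:
        """Illustrative finite state machine for one pedestrian: head for a
        remembered exit while "informed", follow the crowd while "herding"."""

        def __init__(self, pos, known_exit=None):
            self.pos = pos
            self.known_exit = known_exit                 # intrinsic factor
            self.state = "informed" if known_exit else "herding"

        def observe(self, visible_exit):
            # Extrinsic factor: spotting an exit triggers a decision change.
            if visible_exit is not None and self.state == "herding":
                self.known_exit, self.state = visible_exit, "informed"

        def preferred_velocity(self, crowd_mean_vel, speed=1.4):
            if self.state == "informed":                 # move toward the exit
                dx = self.known_exit[0] - self.pos[0]
                dy = self.known_exit[1] - self.pos[1]
                d = math.hypot(dx, dy) or 1.0
                return (speed * dx / d, speed * dy / d)
            return crowd_mean_vel                        # herding: follow others

    p = Pedestrian((0.0, 0.0))                 # knows no exit: herds
    v_herd = p.preferred_velocity((0.5, 0.0))
    p.observe((3.0, 4.0))                      # sees an exit: decision change
    v_goal = p.preferred_velocity((0.0, 0.0))
    ```
    
    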

  • 21.
    Chu, Thi My Chin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    On Capacity of Full-Duplex Cognitive Cooperative Radio Networks with Optimal Power Allocation2017In: 2017 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we examine a full-duplex transmission scheme for cognitive cooperative radio networks (CCRNs) to improve capacity. In this network, the secondary transmitter and secondary relay are allowed to utilize the licensed spectrum of the primary user by using underlay spectrum access. We assume that the CCRN is subject to the interference power constraint of the primary receiver and maximum transmit power limit of the secondary transmitter and secondary relay. Under these constraints, we propose an optimal power allocation policy for the secondary transmitter and the secondary relay based on average channel state information (CSI) to optimize capacity. Then, we derive an expression for the corresponding achievable capacity of the secondary network over Nakagami-m fading. Numerical results are provided for several scenarios to study the achievable capacity that can be offered by this full-duplex underlay CCRN using the proposed optimal power allocation scheme.

  • 22.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden..
    Capacity Analysis of Two-Tier Networks with MIMO Cognitive Small Cells in Nagakami-m Fading2017In: 2017 IEEE 13TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS (WIMOB), IEEE , 2017, p. 457-463Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider a two-tier cellular network consisting of a primary macro cell base station (PMBS) which is overlaid by cognitive small cell base stations (CSBSs) to achieve efficient spectrum utilization. The deployment of two-tier cellular networks can provide higher capacity for the system but also causes cross-tier, intra-tier, and inter-tier interference within the cellular networks. Thus, we employ transmit and receive beamforming in the considered two-tier cellular network to mitigate interference. We first design the receive beamforming vector for a primary user (PU) such that it cancels all inter-tier interference from other PUs. Then, the transmit beamforming vectors at the secondary users (SUs) are designed to null out the cross-tier interference to the PUs. Further, the receive beamforming vectors at the SUs are designed to mitigate the cross-tier interference from the PUs to the SUs. Finally, the transmit beamforming vector at the PMBS is designed to maximize the signal-to-interference-plus-noise ratio at the PUs. To quantify the performance of the system, we derive an expression for the channel capacity in the downlink from the CSBSs to the SUs. Numerical results are provided to reveal the effect of network parameters such as intra-tier interference distances, fading conditions, and number of antennas on the channel capacity of the SUs.

  • 23.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Downlink outage analysis for cognitive cooperative radio networks employing non-orthogonal multiple access2018In: 2018 IEEE 7th International Conference on Communications and Electronics, ICCE 2018, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 27-32Conference paper (Refereed)
    Abstract [en]

    In this paper, we employ power-domain non-orthogonal multiple access (NOMA) to simultaneously transmit signals to both a primary user (PU) and a secondary user (SU) of a cognitive cooperative radio network (CCRN). Higher priority is given to the PU over the SU by keeping the power allocation coefficients at the base station (BS) and relay (R) above a certain threshold. In this way, similar to the interference power limit imposed by the PU in a conventional underlay CCRN, the power allocation coefficients at the BS and R of the CCRN can be controlled to maintain a given outage performance. Analytical expressions of the cumulative distribution function of the end-to-end signal-to-interference-plus-noise ratios at the PU and SU are derived and then used to assess the outage probabilities of both users. Numerical results are presented to study the impact of system parameters on outage performance of the CCRN with power-domain NOMA. In addition, it is illustrated that increased downlink performance can be obtained by combining power-domain NOMA with CCRNs. © 2018 IEEE.

  • 24.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Non-orthogonal multiple access for DF cognitive cooperative radio networks2018In: IEEE International Conference on Communications Workshops, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 1-6Conference paper (Refereed)
    Abstract [en]

    In this paper, we study a power domain non-orthogonal multiple access (NOMA) scheme for cognitive cooperative radio networks (CCRNs). In the proposed scheme, a secondary transmitter communicates with two secondary users (SUs) by allocating transmit powers inversely proportional to the channel power gains on the links to the respective SUs. A decode-and-forward (DF) secondary relay is deployed which decodes the superimposed signals associated with the two SUs. Then, power domain NOMA is used to forward the signals from the relay to the two SUs based on the channel power gains on the corresponding two links. Mathematical expressions for the outage probability and ergodic capacity of each SU and the overall power domain NOMA CCRN are derived. Numerical results are provided to reveal the impact of the power allocation coefficients at the secondary transmitter and secondary relay, the interference power threshold at the primary receiver, and the normalized distances of the SUs on the outage probability and ergodic capacity of each SU and the whole NOMA system. © 2018 IEEE.

  • 25.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    On Secrecy Capacity of Full-Duplex Cognitive Cooperative Radio Networks2017In: 2017 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we analyze the secrecy capacity of a full-duplex underlay cognitive cooperative radio network (CCRN) in the presence of an eavesdropper and under the interference power constraint of a primary network. The full-duplex mode is used at the secondary relay to improve the spectrum efficiency which in turn leads to an improvement of the secrecy capacity of the full-duplex CCRN. We utilize an approximation-and-fitting method to convert the complicated expression of the signal-to-interference-plus-noise ratio into polynomial form which is then utilized to derive an expression for the secrecy capacity. Numerical results are provided to illustrate the effect of network parameters such as transmit power, interference power limit, self-interference parameters of the full-duplex mode, and distances among links on the secrecy capacity. To reveal the benefits of the full-duplex CCRN, we compare the secrecy capacity obtained when the secondary relay operates in full-duplex and half-duplex mode.

  • 26.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Optimal Power Allocation for Hybrid Cognitive Cooperative Radio Networks with Imperfect Spectrum Sensing2018In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 10365-10380Article in journal (Refereed)
    Abstract [en]

    In this paper, two optimal power allocation strategies for hybrid interweave-underlay cognitive cooperative radio networks (CCRNs) are proposed to maximize channel capacity and minimize outage probability. The proposed power allocation strategies are derived for the case of Rayleigh fading taking into account the impact of imperfect spectrum sensing on the performance of the hybrid CCRN. Based on the optimal power allocation strategies, the transmit powers of the secondary transmitter and secondary relay are adapted according to the fading conditions, the interference power constraint imposed by the primary network (PN), the interference from the PN to the hybrid CCRN, and the total transmit power limit of the hybrid CCRN. Numerical results are provided to illustrate the effect of the interference power constraint of the PN, arrival rate of the PN, imperfect spectrum sensing, and the transmit power constraint of the hybrid CCRN on channel capacity and outage probability. Finally, comparisons of the channel capacity and outage probability of underlay, overlay, and hybrid interweave-underlay CCRNs are presented to show the advantages of the hybrid spectrum access.

  • 27.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden..
    Outage Probability and Secrecy Capacity of a Non-orthogonal Multiple Access System2017In: 11th International Conference on Signal Processing and Communication Systems, ICSPCS, 2017 / [ed] Wysocki, TA Wysocki, BJ, IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we analyze the outage probability and secrecy capacity of a non-orthogonal multiple access (NOMA) system in the presence of an eavesdropper. In order to enhance spectral efficiency, a base station communicates with two users simultaneously in the same frequency band by superimposing the transmit signals to the users in the power domain. Specifically, the user with the worse channel conditions is allocated higher power such that it is able to directly decode its signal from the received superimposed signal. At the user with the better channel conditions, the interference due to NOMA is processed by successive interference cancelation. Given these system settings and accounting for decoding thresholds, we analyze the outage probability of the NOMA system over Rayleigh fading channels. Furthermore, based on the locations of the users and eavesdropper, the secrecy capacity is analyzed to assess the level of security provided to the legitimate users in the presence of an eavesdropper. Here, the decoding thresholds of legitimate users and eavesdropper are also included in the analysis of the secrecy capacity. Through numerical results, the effects of network parameters on system performance are assessed as well as the superiority of NOMA in terms of secrecy capacity over traditional orthogonal multiple access.
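    The power-domain superposition and SIC decoding described above can be sketched numerically. This is a noise-free baseband toy with unit channel gains and made-up power coefficients, not the paper's analytical model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    P = 1.0                    # normalized total transmit power (assumed)
    a_far, a_near = 0.8, 0.2   # power split: worse-channel user gets more power

    bits_far = rng.integers(0, 2, 200)
    bits_near = rng.integers(0, 2, 200)
    x_far = 2.0 * bits_far - 1.0    # BPSK symbols
    x_near = 2.0 * bits_near - 1.0

    # Superposition coding at the base station
    s = np.sqrt(a_far * P) * x_far + np.sqrt(a_near * P) * x_near

    # Far (worse-channel) user: decode its own symbol directly, treating
    # the weaker near-user component as noise
    far_hat = (s > 0).astype(int)

    # Near (better-channel) user: successive interference cancellation --
    # decode the far user's symbol, subtract its re-encoded contribution,
    # then decode its own symbol from the residual
    residual = s - np.sqrt(a_far * P) * np.sign(s)
    near_hat = (residual > 0).astype(int)
    ```

    With fading and noise added, the decoding thresholds the paper accounts for determine when either decoding step fails, which is what its outage analysis quantifies.
    
    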

  • 28.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden..
    Outage Probability of a Hybrid AF-DF Protocol for Two-Way Underlay Cognitive Cooperative Radio Networks2017In: 11th International Conference on Signal Processing and Communication Systems, ICSPCS 2017 / [ed] Wysocki, TA Wysocki, BJ, IEEE , 2017, p. 1-6Conference paper (Refereed)
    Abstract [en]

    In this paper, we study a hybrid amplify-and-forward (AF) and decode-and-forward (DF) scheme for two-way cognitive cooperative radio networks (CCRNs). The proposed scheme applies the AF scheme when the signal-to-interference-plus-noise ratio (SINR) at the relay is below a predefined threshold such that the relay cannot successfully decode the signal. On the other hand, when the SINR at the relay is greater than the predefined threshold, it decodes the signal and then forwards it to the destination, i.e. avoids noise and interference amplification at the relay. An analytical expression of the outage probability of the hybrid AF-DF two-way CCRN is derived based on the probability density function and cumulative distribution function of the SINR in AF and DF mode. Numerical results are provided to illustrate the influence of network parameters such as transmit power, interference power constraint of the primary network, fading conditions, and link distances on the outage probability. Finally, the numerical results show that the hybrid strategy is able to improve system performance significantly compared to conventional AF or DF relaying.

  • 29.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Performance of a Non-orthogonal Multiple Access System with Full-Duplex Relaying2018In: IEEE Communications Letters, ISSN 1089-7798, E-ISSN 1558-2558, Vol. 22, no 10, p. 2084-2087Article in journal (Refereed)
    Abstract [en]

    In this paper, we study a power-domain non-orthogonal multiple access (NOMA) system in which a base station (BS) superimposes the transmit signals to the users. To enhance spectral efficiency and link reliability for the far-distance user, a full-duplex (FD) relay assists the BS while the near-distance user is reached over the direct link. For this setting, we analyze outage probability and sum rate of the NOMA system over Nakagami-m fading with integer fading severity parameter m. Numerical results are provided for outage probability and sum rate to show the effect of system parameters on the performance of the FD NOMA system over Nakagami-m fading.

  • 30.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Performance Optimization for Hybrid Two-Way Cognitive Cooperative Radio Networks with Imperfect Spectrum Sensing2018In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 70582-70596Article in journal (Refereed)
    Abstract [en]

    In this paper, we consider a two-way cognitive cooperative radio network (TW-CCRN) with hybrid interweave-underlay spectrum access in the presence of imperfect spectrum sensing. Power allocation strategies are proposed that maximize the sum-rate and minimize the outage probability of the hybrid TW-CCRN. Specifically, based on the state of the primary network (PN), fading conditions, and system parameters, suitable power allocation strategies subject to the interference power constraint of the PN are derived for each transmission scenario of the hybrid TW-CCRN. Given the proposed power allocation strategies, we analyze the sum-rate and outage probability of the hybrid TW-CCRN over Rayleigh fading taking imperfect spectrum sensing into account. Numerical results are presented to illustrate the effect of the arrival rate, interference power threshold, transmit power of the PN, imperfect spectrum sensing, and maximum total transmit power on the sum-rate and outage probability of the hybrid TW-CCRN.

  • 31.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Symbol error rate and achievable rate of cognitive cooperative radio networks utilizing non-orthogonal multiple access2018In: 2018 IEEE 7th International Conference on Communications and Electronics, ICCE 2018, Institute of Electrical and Electronics Engineers Inc. , 2018, Vol. Code 141951, p. 33-38Conference paper (Refereed)
    Abstract [en]

    In this paper, we study the employment of power-domain non-orthogonal multiple access (NOMA) concepts for a cognitive cooperative radio network (CCRN) downlink system in order to allow a base station (BS) to simultaneously transmit signals to a primary user (PU) and a secondary user (SU). As such, the considered system falls into the field of cognitive radio inspired power-domain NOMA. In this scheme, the interference power constraint of the PU imposed to SUs in conventional underlay CCRNs is replaced by controlling the power allocation coefficients at the BS and relay. Specifically, expressions for the symbol error rates at the PU and SU for different modulation schemes as well as expressions for the achievable rates are derived. On this basis, the effect of system parameters such as total transmit power and power allocation coefficients on the performance of the CCRN with power-domain NOMA is numerically studied. These numerical results provide insights into selecting favorable operation modes of the CCRN employing power-domain NOMA. © 2018 IEEE.

  • 32.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Phan, H.
    Duy Tan University, VNM.
    MAC Protocol for Opportunistic Spectrum Access in Multi-Channel Cognitive Relay Networks2017In: IEEE Vehicular Technology Conference, Institute of Electrical and Electronics Engineers Inc. , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a medium access control (MAC) protocol for multi-channel cognitive cooperative radio networks (CCRNs). In this protocol, each secondary user (SU) senses for spectrum opportunities within M licensed bands of the primary users (PUs). To enhance the accuracy of spectrum sensing, we employ cooperative sequential spectrum sensing where SUs mutually exchange their sensing results. Moreover, the information obtained from cooperative spectrum sensing at the physical layer is integrated into the channel negotiation process at the MAC layer to alleviate the hidden terminal problem. Finally, the performance of the proposed MAC protocol in terms of aggregate throughput of the CCRNs is analyzed. Numerical results are provided to assess the impact of channel utilization by PUs, number of contending CCRNs, number of licensed bands, and false alarm probability of SUs on the aggregate throughput. © 2017 IEEE.

  • 33.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Analysis of Variance of Opinion Scores for MPEG-4 Scalable and Advanced Video Coding2018In: 2018 12TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS) / [ed] Wysocki, TA Wysocki, BJ, IEEE , 2018Conference paper (Refereed)
    Abstract [en]

    In this paper, we conduct an analysis of variance (ANOVA) on opinion scores for the MPEG-4 scalable video coding (SVC) and advanced video coding (AVC) standards. This work relies on a publicly available database providing opinion scores from subjective experiments for several scenarios such as different bit rates and resolutions. In particular, ANOVA is used for statistical hypothesis testing to compare two or more sets of opinion scores instead of being constrained to pairs of sets of opinion scores as would be the case for t-tests. As the ANOVA tests of the different scenarios are performed for mean opinion scores (MOS), box plots are also provided in order to assess the distribution of the opinion scores around the median. It is shown that the opinion scores given to the reference videos in SVC and AVC for different resolutions are statistically significantly similar regardless of the content. Further, for the opinion scores of the considered database, the ANOVA tests support the hypothesis that AVC generally outperforms SVC although the performance difference may be less pronounced for higher bit rates. This work also shows that additional insights on the results of subjective experiments can be obtained by extending the analysis of opinion scores beyond MOS to ANOVA tests and box plots.
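    The one-way ANOVA F statistic behind such a test is small enough to compute directly. The opinion scores below are invented placeholders, not values from the cited database; in practice `scipy.stats.f_oneway` computes the same statistic and additionally returns the p-value.

    ```python
    import numpy as np

    def one_way_anova_F(groups):
        """One-way ANOVA F statistic: between-group mean square over
        within-group mean square (dependency-free sketch, no p-value)."""
        groups = [np.asarray(g, dtype=float) for g in groups]
        all_scores = np.concatenate(groups)
        grand_mean = all_scores.mean()
        k, n = len(groups), all_scores.size
        ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    # Invented 5-point opinion scores for three hypothetical bit-rate conditions
    F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
    ```

    Unlike a pairwise t-test, a single F value covers all three conditions at once, which is the advantage the paper exploits.
    
    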

  • 34.
    Clementson, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Augustsson, John
    User Study of Quantized MIP Level Data In Normal Mapping Techniques2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The standard MIP mapping technique halves the resolution of textures for each level of the MIP chain. In this thesis, the bits per pixel (bpp) are reduced as well. Normal maps are generally used with MIP maps, and today's industry standard for these is usually 24 bpp. The reduction is simulated, as there is currently no support for the lower bpp in GPU hardware.

    Objectives: To render images of normal mapped objects with decreasing bpp for each level in a MIP chain and evaluate these against the standard MIP mapping technique using a subjective user study and an objective image comparison method.

    Methods: A custom software is implemented to render the images with quantized normal maps manually placed in a MIP chain. For the subjective experiment a 2AFC test is used, and the objective part consists of a PDIFF test for the images.

    Results: The results indicate that as the MIP level is increased and the bpp is lowered, users can increasingly see a difference.

    Conclusions: The results show that participants can see a difference as the bpp is reduced, which indicates normal mapping as not suitable for this method, however further study is required before this technique can be dismissed as an applicable method

    Download full text (pdf)
    fulltext
  • 35.
    Danielsson, Max
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sievert, Thomas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Viability of Feature Detection on Sony Xperia Z3 using OpenCL2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Embedded platform GPUs are reaching a level of performance comparable to desktop hardware. It therefore becomes interesting to apply Computer Vision techniques to modern smartphones. The platform holds different challenges, as energy use and heat generation can be an issue depending on load distribution on the device.

    Objectives. We evaluate the viability of a feature detector and descriptor pair on the Xperia Z3. Specifically, we evaluate the pair based on real-time execution, heat generation and performance.

    Methods. We implement the feature detector and descriptor pair Harris-Hessian/FREAK for GPU execution using OpenCL, focusing on embedded platforms. We then study the heat generation of the application and its execution time, and compare our method to two other methods, FAST/BRISK and ORB, to evaluate the vision performance.

    Results. Execution time data for the Xperia Z3 and a desktop GeForce GTX 660 is presented. Run-time temperature values for a run of nearly an hour are presented with correlating CPU and GPU activity. Images containing comparison data for BRISK, ORB and Harris-Hessian/FREAK are shown with performance data and discussion of notable aspects.

    Conclusion. Execution times on the Xperia Z3 are deemed insufficient for real-time applications, while desktop execution shows that there is future potential. Heat generation is not a problem for the implementation. Implementation improvements are discussed at great length for future work. Performance comparisons of Harris-Hessian/FREAK suggest that the solution is very vulnerable to rotation, but superior on scale-variant images. It generally appears suitable for near-duplicate comparisons, delivering a much greater number of keypoints. Finally, insight into OpenCL application development on Android is given.

    Download full text (pdf)
    fulltext
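The Harris detector underlying the Harris-Hessian/FREAK pair computes a corner response from the local structure tensor. A minimal pure-Python sketch of that response (plain Harris only; the thesis' combined detector and the FREAK descriptor are not reproduced here):

```python
def harris_response(img, k=0.04):
    """Harris corner response on a grayscale image given as nested lists,
    using central-difference gradients and a 3x3 structure-tensor window."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ix[y][x] = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2
            iy[y][x] = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2
    resp = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = sxy = syy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    sxy += gx * gy
                    syy += gy * gy
            # R = det(M) - k * trace(M)^2: positive at corners, negative on edges.
            resp[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return resp

# A bright square on a dark background: corners score high, edges negative.
img = [[1.0 if 3 <= x <= 8 and 3 <= y <= 8 else 0.0 for x in range(12)]
       for y in range(12)]
resp = harris_response(img)
```

Each pixel's response is independent of the others, which is what makes the detector a natural fit for the one-work-item-per-pixel OpenCL mapping the thesis targets.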
  • 36. Dittrich, Yvonne
    et al.
    Eriksén, Sara
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wessels, Bridgette
    Learning through Situated Innovation. Why the specific is crucial for Participatory Design Research2014In: Scandinavian Journal of Information Systems, ISSN 0905-0167 , Vol. 26, no 1Article in journal (Refereed)
    Abstract [en]

    Specific, situated Participatory Design (PD) practices have always been at the heart of Participatory Design research. The role of the very situatedness and specificity of PD practice for theory-building within PD research is, however, seldom discussed explicitly. In this article, we explore why and in which ways the specificity and situatedness of PD practices are crucial for PD research. We do so by developing the notion of PD as situated innovation based on a pragmatic epistemology. PD research aims at developing and continuously unfolding what PD can, might and should be. We show implications of such a pragmatic epistemology of PD for understanding and arguing for PD research approaches. These concepts are illustrated with reference to PD practices as experienced in PD research projects. Our epistemological argumentation supports the emphasis on exploring new PD practices and on learning and theorizing about PD from the specificities, in line with recent debate contributions.

  • 37.
    Edänge, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Implementation and Performance Evaluation of a Peer-to-Peer Chat System2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: Chat applications have been around since the beginning of the modern internet. Today, there are many different chat systems with various communication solutions, but only a few utilize the fully decentralized Peer-to-Peer concept.

    Objectives: In this report, we want to investigate whether a fully decentralized P2P concept is a suitable choice for chat applications. In order to investigate, a P2P architecture was selected and a simulation was implemented in Java. The simulation was used to make a performance evaluation in order to see if the P2P concept could meet the requirements of a chat application, and to identify problems and difficulties.

    Methods: Two main methods were used in this thesis. First, a qualitative design method was used to identify and discuss different possibilities of designing a distributed chat application. Second, a performance evaluation was conducted to verify that the selected and implemented mechanisms achieve their general performance capabilities, and to tune them towards the anticipated performance.

    Results: The simulation proved that a decentralized P2P system can scale and find resources in a network quite efficiently without the need of any centralized service. It also proved to be simpler for the user to use the P2P concept, as no special configurations are needed. However, the selected protocol (Chord) had problems with high rates of churn, which could cause problems in big chat environments. The P2P concept was also shown to be highly complex to implement.

    Conclusion: P2P technology is a more complex technology, but it gives the host a lower cost in terms of hardware and maintenance. It also makes the system more robust and fault-tolerant. As we have seen in this report, P2P can scale and find other resources efficiently without the need of a centralized service. However, it will consume more power for each user, which makes mobile devices bad peers.

    Download full text (pdf)
    fulltext
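Chord, the protocol evaluated above, locates a resource by hashing it onto an identifier ring and finding the first node clockwise of it. A minimal sketch of that lookup, with an illustrative ring size and key-naming scheme not taken from the thesis (real Chord also maintains finger tables for O(log n) routing):

```python
import hashlib

M = 2 ** 16  # identifier space size (2^m, with m = 16 in this sketch)

def chord_id(name: str) -> int:
    """Hash a node address or chat-room key onto the identifier ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

def successor(ring, key_id: int) -> int:
    """First node clockwise from key_id; wraps around the ring like Chord."""
    for node_id in ring:                # ring is kept sorted
        if node_id >= key_id:
            return node_id
    return ring[0]                      # wrap past the highest identifier

nodes = sorted(chord_id(f"peer-{i}") for i in range(8))
room = chord_id("chatroom:general")
owner = successor(nodes, room)          # the peer responsible for the room
```

The churn problem the thesis observes stems from this same structure: when nodes join or leave rapidly, successor pointers go stale and lookups can fail until stabilization catches up.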
  • 38.
    Ejdemyr, Niclas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Eye Tracking as an Additional Input Method in Video Games: Using Player Gaze to Improve Player Immersion and Performance2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Gaze based interaction in video games is still a developing field, and is mostly used as an off-line evaluation tool or a replacement for traditional input methods. This thesis will look closer at the prospect of using eye tracking as an additional input to be used alongside the traditional methods of input to improve the immersion and performance of the player.

    Objectives. To implement a gaze-based interaction method in a first-person adventure game in a way that improves player performance and immersion.

    Method. Using the Tobii REX eye tracker, 18 volunteers participated in an experiment. They played two versions of a game in a controlled environment. The versions had the same mechanics and game elements, but only one of them had eye tracking implemented. After the experiment the participants answered nine questions about which prototype they preferred.

    Results. All participants' scores were, in all cases but one, lower when using the eye tracking input method compared to the traditional one. The time it took for the participants to complete the game was longer for everybody. 16 out of 18 players also felt more immersed in the game while using eye tracking compared to playing with the traditional input method.

    Conclusions. The results from the experiments provided evidence that the interaction method designed for this thesis did not improve player performance. The results also showed that the interaction method did improve immersion for most players.

    Download full text (pdf)
    fulltext
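The 16-of-18 immersion preference reported above can be checked with an exact one-sided sign test against a 50/50 null hypothesis. This sketch is an illustration of that check, not the thesis' own analysis:

```python
from math import comb

def sign_test_p(successes: int, n: int) -> float:
    """One-sided exact binomial (sign) test against a 50/50 null:
    probability of seeing at least `successes` preferences by chance."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# 16 of 18 players preferred the eye-tracking version for immersion.
p = sign_test_p(16, 18)
print(f"p = {p:.6f}")   # well below any conventional significance level
```

With p well under 0.01, the immersion preference is very unlikely to be a coincidence of the sample, which supports the thesis' conclusion on immersion even though performance worsened.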
  • 39. Ekelin, Annelie
    et al.
    Eriksén, Sara
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Citizen-Driven Design: Leveraging Participatory Design of E-Government 2.0 Through Local and Global Collaborations.2014In: Case Studies in e-Government 2.0. Changing Citizen Relationships. / [ed] Boughzala, Imed; Janssen, Marijn; Assar, Saïd, Springer , 2014Chapter in book (Refereed)
    Abstract [en]

    The goal of this paper is to present how citizen-driven design of e-government can be promoted through trans-local cooperation. Our case study consists of the Augment project, which focuses on the design of a mobile service for co-creation of local accessibility. Our approach is action research based in the Scandinavian tradition of Participatory design. Experiences from this project highlight issues concerning how to reconfigure the basis for design of public services. In order to cultivate spaces for citizen-driven design and local innovation, we made iterative use of global collaborations. In the initial phase, influences from R&D cooperation with India provided new spaces for participatory design practices. In the next phase, a proof-of-concept process allowed for broader local stakeholder involvement. In the third phase, the service concept was shared and expanded with partner regions in Europe through exchange of Best Practices. Currently, we are moving towards phase four, the commercialization process. Beyond the iterative design of the mobile service itself, and what trans-local collaboration contributed in this context, we also discuss reconceptualization of innovation as incremental change. We argue that transnational collaboration can be deliberately made use of for leveraging incremental change on a local level and strengthening regional innovation systems and practices.

  • 40.
    Eliasson, André
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Franzén, Pontus
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Accelerating IISPH: A Parallel GPGPU Solution Using CUDA2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Simulating realistic fluid behavior in incompressible fluids for computer graphics has been pioneered with the implicit incompressible smoothed particle hydrodynamics (IISPH) solver. The algorithm converges faster than other incompressible SPH-solvers, but real-time performance (in the perspective of video games, 30 frames per second) is still an issue when the particle count increases.

    Objectives. This thesis aims at improving the performance of the IISPH-solver by proposing a parallel solution that runs on the GPU using CUDA. The solution should not compromise the physical accuracy of the original solution. Investigated aspects are execution time, memory usage and physical accuracy.

    Methods. The proposed implementation uses a fine-grained approach where each particle is calculated on a separate thread. It is compared to a sequential and a parallel OpenMP implementation running on the CPU.

    Results and Conclusions. It is shown that the parallel CUDA solution allows real-time performance for approximately 19 times as many particles as the sequential implementation. At approximately 175 000 particles the simulation runs at the limit of real-time performance; with more particles it is still considered interactive. The visual result of the proposed implementation deviated slightly from that of the CPU implementations.

    Download full text (pdf)
    fulltext
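The fine-grained "one particle per thread" mapping above parallelizes per-particle loops such as the SPH density summation. A sequential Python sketch of that step (plain SPH density with the 2D poly6 kernel; IISPH's implicit pressure solve iterates on top of this and is not reproduced here):

```python
from math import pi

def sph_densities(positions, h=1.0, mass=1.0):
    """Per-particle density with the poly6 kernel (2D positions here).
    In a CUDA version, each iteration of the outer loop is one GPU thread."""
    poly6 = 4.0 / (pi * h ** 8)          # 2D poly6 normalization constant
    out = []
    for xi, yi in positions:             # <- the parallelized dimension
        rho = 0.0
        for xj, yj in positions:         # naive all-pairs neighbor search
            r2 = (xi - xj) ** 2 + (yi - yj) ** 2
            if r2 < h * h:
                rho += mass * poly6 * (h * h - r2) ** 3
        out.append(rho)
    return out

# A particle in the middle of a small cluster is denser than one on the rim.
pts = [(0, 0), (0.5, 0), (-0.5, 0), (0, 0.5), (0, -0.5)]
rho = sph_densities(pts)
```

Real implementations replace the all-pairs inner loop with a spatial hash grid, which is one of the optimizations the scaling results above depend on.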
  • 41.
    Eliasson, Christopher
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Natural Language Generation for descriptive texts in interactive games2014Student thesis
    Abstract [en]

    Context. Game development is a costly process, and with today's advanced hardware customers are asking for more playable content, at higher quality. For many years this content has been provided procedurally for level creation, modeling, and animation. However, there are games that require content in other forms, such as executable quests that progress the game forward. Quests have been procedurally generated to some extent, but not in enough detail to be usable for game development without providing a handwritten description of the quest.

    Objectives. In this study we combine a procedural content generation structure for quests with a natural language generation approach to generate a descriptive summarized text for quests, and examine whether the resulting texts are viable as quest prototypes for use in game development.

    Methods. A number of articles in the area of natural language generation are used to determine an appropriate way of validating the generated texts produced in this study, which concludes that a user case study is appropriate to evaluate each text against a set of statements.

    Results. 30 texts were generated and evaluated from ten different quest structures, and the majority of the texts were found to be good enough to be used for game development purposes.

    Conclusions. We conclude that quests can be procedurally generated in more detail by incorporating natural language generation. However, the quest structure used for this study needs to be expanded in more detail at certain structure components in order to fully support an automated system in a flexible manner. Furthermore, since semantics and grammar are key components in the flow and usability of a text, a more sophisticated system needs to be implemented using more advanced natural language generation techniques.

    Download full text (pdf)
    FULLTEXT01
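A template-based surface realizer is one simple way to turn a quest structure into a descriptive text, as the study above does with NLG. A minimal sketch with hypothetical quest-structure fields (the thesis' actual structure components are not given in the abstract):

```python
import random

# Hypothetical quest-structure slots; illustrative only.
TEMPLATES = [
    "{giver} asks you to {action} {count} {target} in {place}.",
    "Travel to {place}, where {giver} needs you to {action} {count} {target}.",
]

def realize(quest: dict, rng: random.Random) -> str:
    """Surface realization: fill one template with quest-structure slots."""
    return rng.choice(TEMPLATES).format(**quest)

quest = {"giver": "the blacksmith", "action": "collect",
         "count": 5, "target": "iron ingots", "place": "the old mine"}
text = realize(quest, random.Random(0))
print(text)
```

The grammar problems the conclusions point to show up quickly in such a system (pluralization, verb agreement, article choice), which is why the thesis calls for more advanced NLG techniques than slot-filling.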
  • 42.
    Engman, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    HPA* Used With a Triangulation-Based Graph2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Context: Pathfinding is an important phase when it comes to AI. The AI needs to know how to get from one point to another when there are obstacles ahead. For that reason, different pathfinding algorithms have been created.

    Objective: In this paper a new pathfinding algorithm, THPA*, is described and compared to the more common algorithms A* and HPA*, on which THPA* is based.

    Methods: These algorithms are tested on an extensive array of maps with different paths, and the resulting execution times are compared against each other.

    Results: The results of those tests show that THPA* performs better in terms of execution time in the average case; however, it suffers from low-quality paths.

    Conclusions: This paper concludes that THPA* is a promising algorithm, albeit in need of more refinement to make up for its negative points.

    Download full text (pdf)
    FULLTEXT01
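HPA*-family algorithms such as THPA* run A* on an abstract graph built over the map. A minimal sketch of the underlying A* search on a 4-connected grid (the triangulation-based abstraction itself is not reproduced):

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid; cells with 1 are obstacles.
    HPA*-style variants run this same search on a coarser abstract graph."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, None)]
    came, cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:                  # already expanded with a better g
            continue
        came[cur] = parent
        if cur == goal:                  # reconstruct by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < cost.get(nxt, 1 << 30)):
                cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # must detour around the wall in row 1
```

Hierarchical variants trade path quality for speed by searching between cluster entrances first, which is exactly the trade-off the results above observe for THPA*.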
  • 43.
    Erik, Wikström
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Expected Damage of Projectile-Like Spell Effects in Games2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Many video games make use of particle effects to portray magic abilities known as spells. Different spells may have large variation in behaviour and colour. Aside from their different appearance, the spells often deal a different amount of damage.

    Objectives. The aim of this paper is to evaluate how velocity, scale, and direction, as well as the colours orange and blue, affect the expected damage of a projectile-like spell.

    Methods. A perceptual experiment with a 2AFC test was conducted, where participants compared various spells with different values of velocity, scale, direction, and colour. The participants were asked to select the spell that they expected to deal the most damage.

    Results. Scale had a larger impact on the expected damage of a spell than velocity. The largest and fastest spells with an added sine-based direction in the x-axis were expected to cause the most damage. However, the difference between these spells and the largest and fastest spells without the added direction was not found to be statistically significant. The orange spells were rated as more damage-causing in all cases compared to the blue spells. The difference between the blue and orange preference in two of these cases was, however, not large enough to be statistically significant.

    Conclusions. The results showed that the visual attributes of a particle-based spell affect its perceived damage, with scale having a greater impact than velocity and orange being the colour most often associated with higher damage. The effect of an added direction could not be evaluated, due to the results for the direction spells not being statistically significant.

    Download full text (pdf)
    BTH2018Wikström
  • 44.
    Eriksen, Sara
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Georgsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Hofflander, Malin
    Blekinge Institute of Technology, Faculty of Health Sciences, Department of Health.
    Nilsson, Lina
    Blekinge Institute of Technology, Faculty of Health Sciences, Department of Health. Blekinge Inst Technol, Karlskrona, Sweden..
    Lundberg, Jenny
    Lund Univ, Dept Design Sci, Lund, Sweden..
    Health in Hand: Putting mHealth Design in Context2014In: 2014 IEEE 2ND INTERNATIONAL WORKSHOP ON USABILITY AND ACCESSIBILITY FOCUSED REQUIREMENTS ENGINEERING (USARE), 2014, p. 36-39Conference paper (Refereed)
    Abstract [en]

    Wireless technologies, cloud computing and connectivity have enabled mobile services that extend the coverage of health services, resulting in a branch of eHealth now commonly referred to as mHealth. However, at least in Sweden, where the healthcare sector is heavily institutionalized and regulated, mHealth has so far mainly evolved in the form of applications for support of healthy life-style and self-management of chronic diseases, implemented outside of the firewalls of traditional healthcare delivery environments. In this paper we present an on-going Indo-Swedish research and development project in which we are putting mHealth design into context both from a patient's perspective and from the perspective of a healthcare team working within a professional healthcare organization. Our research approach is inspired by the Scandinavian tradition of Participatory Design of ICT and informed by studies of how to measure usability, user experience and impact of mHealth interventions. The involved research teams are multi-disciplinary, including researchers from engineering, computing and health sciences. The project includes, on the Swedish side, a partner from the public healthcare sector, three SMEs and an industrial partner who is currently providing Electronic Patient Record and other healthcare information system solutions and who is interested in developing mobile solutions for healthcare professionals. We are currently in the process of collaborative articulation and specification of problems, goals and requirements within the framework of the first Swedish case study of the project, focused on mobile support for patients with diabetes type 2 and their healthcare teams.

  • 45.
    Eriksén, Sara
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    The curse of the smart manager?: Digitalisation and the children of management science2016In: Scandinavian Journal of Information Systems, ISSN 0905-0167, Vol. 28, no 2, p. 76-77, article id 6Article in journal (Refereed)
    Abstract [en]

    In this commentary of Carsten Sørensen's keynote address and commentary, I argue that it may be the concept of the smart manager—so fundamental to management science—rather than the concept of the smart machine, which is still haunting IS research today. © Scandinavian Journal of Information Systems, 2016.

  • 46.
    Eriksén, Sara
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Lundberg, Jenny
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Georgsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Nilsson, Lina
    Blekinge Institute of Technology, Faculty of Health Sciences, Department of Health.
    Hofflander, Malin
    Blekinge Institute of Technology, Faculty of Health Sciences, Department of Health.
    Borg, Christel
    Blekinge Institute of Technology, Faculty of Health Sciences, Department of Health.
    Transforming Healthcare Delivery: ICT Design for Self-Care of Type 2 Diabetes2014Conference paper (Refereed)
    Abstract [en]

    In this position paper we present an on-going case study where the aim is to design and implement mobile technologies for self-care for patients with type 2 diabetes. The main issue we are addressing in this paper is how to bridge clinical and non-clinical settings when designing self-care technologies. Usability, User Experience and Participatory Design are central aspects of our research approach. For designing with and for patients in home settings and everyday life situations, this approach has so far not been problematic. However, when it comes to designing with and for user groups located within a large healthcare organization, in a highly institutionalized clinical setting, the situation is different. We have recently introduced the Health Usability Maturity Model (UMM) to our project partners as a potential tool for bringing usability and participatory design issues to the fore as strategic assets for transforming healthcare delivery with ICT.

  • 47.
    Falkenby, Jesper Hansson
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Physically-based fluid-particle system using DirectCompute for use in real-time games2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Context: Fluid-particle systems are seldom used in games; the apparent performance cost of simulating a fluid-particle system discourages developers from implementing such a system. The processing power delivered by a modern GPU enables the developer to implement complex particle systems such as fluid-particle systems. Writing efficient fluid-particle systems is key when striving for real-time fluid-particle simulations with good scalability.

    Objectives: This thesis ultimately tries to provide the reader with a well-performing and scalable fluid-particle system simulated in real-time using a great number of particles. The fluid-particle system implements two different fluid physics models for diversity and comparison purposes. The fluid-particle system is then measured for each fluid physics model, providing results on how the performance of a fluid-particle system scales with the increase of active particles in the simulation. Finally, a performance comparison of the particle scalability is made by completely excluding the fluid physics calculations and simulating the particles using only gravity as an affecting force, to demonstrate how taxing the fluid physics calculations are on the GPU.

    Methods: The fluid-particle system has been run using different simulation scenarios, where each scenario is defined by the number of active particles and the dimensions of the fluid-particle simulation space. The performance results from each scenario have then been saved and put into a collection of results for a given simulation space.

    Results: The results presented demonstrate how well the fluid-particle system actually scales when run on a modern GPU. The system reached over a million particles while still running at an acceptable frame rate, for both of the fluid physics models. The results also show that performance is greatly reduced by simulating the particle system as a fluid-particle one, instead of only running it with gravity applied.

    Conclusions: With the results presented, we are able to conclude that fluid-particle systems scale well with the number of active particles while being run on a modern GPU. There are many optimizations to be done to achieve a well-performing fluid-particle system; when developing fluid-particle systems, one should be wary of the many performance pitfalls that come with them.

    Download full text (pdf)
    FULLTEXT01
  • 48.
    Flöjt, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Exploiting temporal coherence in scene structure for incremental draw call recording in Vulkan2018Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Draw calls in interactive applications are often recorded on a per-frame basis, despite already residing in memory after recording in the previous one. At the same time, scenes tend to be structurally stable; what exists during one frame is likely to exist in the next ones as well.

    Objectives. By exploiting the observed temporal coherence in scene structures, this thesis aims to devise a solution to record draw calls incrementally. The purpose of such recording is to reuse what has been recorded previously instead of recording it anew. Two approaches to incremental recording are implemented and compared to regular naïve recording in terms of overhead time. One of them makes use of an extension to the Vulkan graphics application programming interface (API) to evaluate indirect pipeline changes.

    Methods. A simulation is used as the method of evaluation, using a simple scene where triangles are rendered in individual draw calls. Two sizes of the scene are used. One matches the upper end of draw call count in samples of modern games and the other is an exaggerated size to test viability for even larger ones. Graphics processing unit (GPU) time is measured along with total execution time to provide numbers on the overhead time caused by the different recording strategies.

    Results. When considering the frequency of incremental updates, the multi-draw indirect (MDI) strategy performs very well, outperforming the other strategies even when it updates 100% of its draws while the others update 0%. However, it scales poorly with an increasing number of pipeline switches, where the other incremental recording strategy performs best instead. In that case, MDI soon becomes more expensive than regular recording.

    Conclusions. It is shown that the incremental recording strategies have an observable reduction in overhead time, and may be worth considering. With few pipeline switches, MDI is a viable candidate for its performance and ease of implementation. A large ratio of pipeline switches may not be a realistic scenario, but in those cases the device generated commands (DGC) strategy is a better choice than MDI. Note that the DGC strategy does not perform true incremental recording because calls are still recorded by the GPU. Overhead margins are comparatively low in the smaller scene, but even in that case incremental recording could be beneficial because depending on the implementation, one could avoid traversing parts of data structures that remain unchanged.

    Download full text (pdf)
    fulltext
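The incremental idea above, reusing last frame's recordings unless an object changed, can be sketched API-agnostically as a dirty-flagged cache (illustrative only; the thesis records real Vulkan command buffers):

```python
class DrawCallCache:
    """Reuse previously recorded draw calls across frames; re-record an
    object only when it is new or has been marked dirty."""

    def __init__(self):
        self.recorded = {}      # object id -> recorded draw-call payload
        self.dirty = set()
        self.recordings = 0     # counts how many calls were (re)recorded

    def mark_dirty(self, obj_id):
        self.dirty.add(obj_id)

    def record_frame(self, scene):
        """scene: dict of object id -> draw parameters for this frame."""
        for obj_id, params in scene.items():
            if obj_id not in self.recorded or obj_id in self.dirty:
                self.recorded[obj_id] = ("draw", params)  # stand-in recording
                self.recordings += 1
        for gone in set(self.recorded) - set(scene):      # evict departed objects
            del self.recorded[gone]
        self.dirty.clear()
        return [self.recorded[obj_id] for obj_id in scene]

cache = DrawCallCache()
cache.record_frame({"a": 1, "b": 2})   # frame 1: records both objects
cache.record_frame({"a": 1, "b": 2})   # frame 2: structurally stable, reuses all
cache.mark_dirty("a")
cache.record_frame({"a": 3, "b": 2})   # frame 3: re-records only "a"
```

Structural stability is what makes this pay off: when most frames are like frame 2, the per-frame recording cost approaches zero, which is the overhead-time reduction the thesis measures.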
  • 49.
    Frank, Elias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Olsson, Niclas
    Procedural city generation using Perlin noise2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Procedural content generation is to algorithmically generate content. This has been used in games and is an important tool to create games with large amounts of content using fewer resources. This may allow small developers to create big worlds, which makes the investigation into this area interesting.

    Objectives. The procedural generation of cities using Perlin noise is explored. The goal is to find out if a procedurally generated city using Perlin noise is viable for use in games.

    Method. An implementation generating cities using Perlin noise has been created and a user study along with data collection tests the cities' viability in games.

    Result. The implementation meets all the technical requirements, such as performance and determinism. The user study shows that the cities created are perceived as viable in games.

    Conclusion. The cities generated with the implementation seem to be viable in games. The results show that the generated content is perceived as more viable than randomly generated cities. Furthermore, the generation speed is fast enough to be used in an online setting.

    Download full text (pdf)
    fulltext
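City generation from noise typically samples a lattice noise function per grid cell and thresholds the value into block types. A minimal sketch using value noise as a stand-in for Perlin noise (same lattice-and-fade structure, simpler to show), with illustrative thresholds:

```python
import math

def value_noise(x: float, y: float, seed: int = 0) -> float:
    """Smoothly interpolated lattice noise in [0, 1]. Gradient (Perlin)
    noise follows the same lattice-and-fade structure with dot products."""
    def lattice(ix, iy):
        n = ix * 374761393 + iy * 668265263 + seed * 144665
        n = (n ^ (n >> 13)) * 1274126177 & 0xFFFFFFFF
        return (n % 1024) / 1023.0
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    fade = lambda t: t * t * (3 - 2 * t)         # smoothstep fade curve
    lerp = lambda a, b, t: a + (b - a) * t
    top = lerp(lattice(x0, y0), lattice(x0 + 1, y0), fade(fx))
    bot = lerp(lattice(x0, y0 + 1), lattice(x0 + 1, y0 + 1), fade(fx))
    return lerp(top, bot, fade(fy))

def city_block(x: int, y: int, seed: int = 0) -> str:
    """Deterministically classify a grid cell by its noise value
    (thresholds here are illustrative, not from the thesis)."""
    n = value_noise(x / 8.0, y / 8.0, seed)
    if n < 0.35:
        return "park"
    return "tower" if n > 0.7 else "house"

city = [[city_block(x, y) for x in range(32)] for y in range(32)]
```

Because the noise is a pure function of coordinates and seed, the generator is deterministic, which is the property that makes it usable in an online setting: every client regenerates the same city from the same seed.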
  • 50.
    Frid Kastrati, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Hybrid Ray-Traced Reflections in Real-Time: in OpenGL 4.32015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Reaching photo realistic results when rendering 3D graphics in real-time is a hard computational task. Ray-tracing gives results close to this but is too expensive to be run at real-time frame rates. On the other hand rasterized methods such as deferred rendering are able to keep the tight time constraints with the support of modern hardware.

    Objectives. The basic objective is to merge deferred rendering and ray-tracing into one rasterized pipeline for dynamic scenes. In the thesis the proposed method is explained and compared to the methods it merges. Image quality, execution time and VRAM usage impact are investigated.

    Methods. The proposed method uses deferred rendering to render the result of the primary rays. Some pixels are marked, based on material properties for further rendering with ray-tracing. Only reflections are presented in the thesis but it has been proven that other global illumination effects can be implemented in the ray-tracing framework used.

    Results and Conclusions. The hybrid method is proved through experiments to be between 2.49 and 4.19 times faster than pure ray-tracing in the proposed pipeline. For smaller scenes it can be run at frame rates close to real-time, but for larger scenes such as the Crytek Sponza scene the real-time feeling is lost. However, interactivity is never lost. It is also proved that a simple adjustment to the original framework can save almost 2/3 of the memory spent on A-buffers. Image comparisons prove that the technique can compete with offline ray tracers in terms of image quality.

    Download full text (pdf)
    fulltext
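The hybrid control flow above, rasterize first and ray-trace only the pixels marked by material, can be sketched with a toy G-buffer (illustrative scalar colors and blend weights, not the thesis' actual pipeline):

```python
def shade(gbuffer, trace_reflection):
    """gbuffer: list of pixels as (base_color, reflective_flag).
    Only pixels whose material marked them reflective pay the tracing cost."""
    image, traced = [], 0
    for base_color, reflective in gbuffer:
        if reflective:
            traced += 1
            # Blend the deferred result with the ray-traced reflection.
            color = 0.5 * base_color + 0.5 * trace_reflection(base_color)
        else:
            color = base_color          # deferred-rendering result used as-is
        image.append(color)
    return image, traced

# Toy scene: 3 of 6 pixels are mirrors; only those invoke the tracer.
gbuffer = [(0.2, False), (0.9, True), (0.4, False),
           (0.6, True), (0.1, False), (0.8, True)]
image, traced = shade(gbuffer, lambda c: 1.0 - c)  # stand-in reflection
```

The speedup range reported above falls out of this structure: the fewer pixels the deferred pass marks as reflective, the smaller the fraction of the frame that pays ray-tracing cost.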