Search results 51 - 100 of 1681
  • 51.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Chalmers, SWE.
    On the long-term use of visual GUI testing in industrial practice: a case study. 2017. In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, no 6, p. 2937-2971. Article in journal (Refereed)
    Abstract [en]

    Visual GUI Testing (VGT) is a tool-driven technique for automated GUI-based testing that uses image recognition to interact with a system through its GUI, as it is shown to the user, and to assert the correctness of its behavior. The technique's applicability, e.g. defect-finding ability, and feasibility, e.g. time to positive return on investment, have been shown through empirical studies in industrial practice. However, there is a lack of studies that evaluate the usefulness and challenges associated with VGT when used long-term (years) in industrial practice. This paper evaluates how VGT was adopted and applied, and why it was abandoned, at the music streaming application development company Spotify after several years of use. A qualitative study with two workshops and interviews with five well-chosen employees was performed at the company, supported by a survey, and analyzed with a grounded theory approach to answer the study's three research questions. The interviews provide insights into the challenges, problems and limitations, but also benefits, that Spotify experienced during the adoption and use of VGT. However, due to the technique's drawbacks, VGT has been abandoned in favor of a new technique/framework, simply called the Test interface. The Test interface is considered more robust and flexible for Spotify's needs, but it has drawbacks of its own, including that it does not test the actual GUI as shown to the user, as VGT does. From the study's results it is concluded that VGT can be used long-term in industrial practice, but it requires organizational change as well as engineering best practices to be beneficial. Through synthesis of the study's results and results from previous work, a set of guidelines is presented that aims to aid practitioners in adopting and using VGT in industrial practice. However, due to the abandonment of the technique, future research is required to analyze in what types of projects the technique is, and is not, viable long-term. To this end, we also present Spotify's Test interface solution for automated GUI-based testing and conclude that it has its own benefits and drawbacks.
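The image-recognition core that the abstract attributes to VGT can be sketched in a few lines. This is an illustrative toy (exact matching on small integer "images"), not Spotify's tooling or any real VGT framework, which use fuzzy image matching on actual screenshots:

```python
# Toy Visual GUI Testing core: locate a known widget bitmap inside a
# screenshot, then interact at the matched position.

def locate(screenshot, template):
    """Return (row, col) of the top-left corner of the first exact match
    of `template` inside `screenshot`, or None if absent."""
    sh, sw = len(screenshot), len(screenshot[0])
    th, tw = len(template), len(template[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screenshot[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

def click_center(screenshot, template):
    """Simulate a VGT action: 'click' the center of the matched widget.
    Returns None when the widget is not visible (the test step fails)."""
    pos = locate(screenshot, template)
    if pos is None:
        return None
    r, c = pos
    return (r + len(template) // 2, c + len(template[0]) // 2)
```

Real VGT tools add similarity thresholds and retries on top of this lookup, which is where much of their robustness (and fragility) comes from.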

  • 52.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards a mapping of software technical debt onto testware. 2017. In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 404-411, article id 8051379. Conference paper (Refereed)
    Abstract [en]

    Technical Debt (TD) is a metaphor used to explain the negative impact that sub-optimal design decisions have on the long-term perspective of a software project. Although TD is acknowledged by both researchers and practitioners to have a strong negative impact on software development, its study in testware has so far been very limited. This is a gap in knowledge that is important to address due to the growing popularity of testware (scripted automated testing) in software development practice. In this paper we present a mapping analysis that connects 21 well-known, object-oriented software TD items to testware, establishing them as Testware Technical Debt (TTD) items. The analysis indicates that most software TD items are applicable or observable as TTD items, often in similar form and with roughly the same impact as for software artifacts (e.g. reducing the quality of the produced artifacts, lowering the effectiveness and efficiency of the development process whilst increasing costs). In the analysis, we also identify three types of connections between software TD and TTD items with varying levels of impact and criticality. Additionally, the study finds support for previous research results in which specific TTD items unique to testware were identified. Finally, the paper outlines several areas of future research into TTD. © 2017 IEEE.

  • 53.
    Alégroth, Emil
    et al.
    Chalmers, SWE.
    Gustafsson, Johan
    SAAB AB, SWE.
    Ivarsson, Henrik
    SAAB AB, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Replicating Rare Software Failures with Exploratory Visual GUI Testing. 2017. In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 34, no 5, p. 53-59, article id 8048660. Article in journal (Refereed)
    Abstract [en]

    Saab AB developed software that had a defect that manifested itself only after months of continuous system use. After years of customer failure reports, the defect still persisted, until Saab developed failure replication based on visual GUI testing. © 1984-2012 IEEE.

  • 54.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Karlsson, Arvid
    Cilbuper IT, Gothenburg, SWE.
    Radway, Alexander
    Techship Krokslatts Fabriker, SWE.
    Continuous Integration and Visual GUI Testing: Benefits and Drawbacks in Industrial Practice. 2018. In: Proceedings - 2018 IEEE 11th International Conference on Software Testing, Verification and Validation, ICST 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 172-181. Conference paper (Refereed)
    Abstract [en]

    Continuous integration (CI) is growing in industrial popularity, spurred on by market trends towards faster delivery and higher quality software. A key facilitator of CI is automated testing that should be executed, automatically, on several levels of system abstraction. However, many systems lack the interfaces required for automated testing. Others lack test automation coverage of the system under test's (SUT) graphical user interface (GUI) as it is shown to the user. One technique that shows promise in solving these challenges is Visual GUI Testing (VGT), which uses image recognition to stimulate and assert the SUT's behavior. Research has demonstrated the technique's applicability and feasibility in industry, but has provided only limited support, from an academic setting, for its applicability in a CI environment. In this paper we present an industrial design research study with the objective to help bridge the gap in knowledge regarding VGT's applicability in an industrial CI environment. Results, acquired from interviews, observations and quantitative analysis of 17,567 test executions collected over 16 weeks, show that VGT provides similar benefits to other automated test techniques for CI. However, several significant drawbacks, such as high costs, are also identified. The study concludes that, although VGT is applicable in an industrial CI environment, its severe challenges require more research and development before the technique becomes efficient in practice. © 2018 IEEE.

  • 55.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Matsuki, Shinsuke
    Veriserve Corporation, JPN.
    Vos, Tanja
    Open University of the Netherlands, NLD.
    Akemine, Kinji
    Nippon Telegraph and Telephone Corporation, JPN.
    Overview of the ICST International Software Testing Contest. 2017. In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, p. 550-551. Conference paper (Refereed)
    Abstract [en]

    In the software testing contest, practitioners and researchers are invited to pit their test approaches against similar approaches, to evaluate their pros and cons and determine which is perceived to be the best. The 2017 iteration of the contest focused on Graphical User Interface-driven testing, which was evaluated with the testing tool TESTONA. The winner of the competition was announced at the closing ceremony of the International Conference on Software Testing, Verification and Validation (ICST), 2017. © 2017 IEEE.

  • 56.
    Amaradri, Anand Srivatsav
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nutalapati, Swetha Bindu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Continuous Integration, Deployment and Testing in DevOps Environment. 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Owing to a multitude of factors like rapid changes in technology, market needs, and business competitiveness, software companies these days are facing pressure to deliver software rapidly and on a frequent basis. For frequent and faster delivery, companies should be lean and agile in all phases of the software development life cycle. An approach called DevOps, which is based on agile principles, has come into play. DevOps bridges the gap between development and operations teams and facilitates faster product delivery. The DevOps phenomenon has gained wide popularity in the past few years, and several companies are adopting DevOps to leverage its perceived benefits. However, organizations may face several challenges while adopting DevOps. There is a need to obtain a clear understanding of how DevOps functions in an organization.

    Objectives. The main aim of this study is to provide researchers and software practitioners with a clear understanding of how DevOps works in an organization. The objectives of the study are to identify the benefits of implementing DevOps in organizations where agile development is in practice, the challenges faced by organizations during DevOps adoption, the solutions/mitigation strategies to overcome the challenges, the DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Methods. A mixed methods approach, comprising both qualitative and quantitative research methods, is used to accomplish the research objectives. A Systematic Literature Review is conducted to identify the benefits and challenges of DevOps adoption and the DevOps practices. Interviews are conducted to further validate the SLR findings and to identify the solutions to overcome DevOps adoption challenges and the DevOps practices. The SLR and interview results are mapped, and a survey questionnaire is designed. The survey is conducted to validate the qualitative data and to identify further benefits and challenges of DevOps adoption, solutions to overcome the challenges, DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Results. 31 primary studies relevant to the research are identified for conducting the SLR. After analysing the primary studies, an initial list of the benefits and challenges of DevOps adoption, and the DevOps practices is obtained. Based on the SLR findings, a semi-structured interview questionnaire is designed, and interviews are conducted. The interview data is thematically coded, and a list of the benefits, challenges of DevOps adoption and solutions to overcome them, DevOps practices, and problems faced by DevOps teams is obtained. The survey responses are statistically analysed, and a final list of the benefits of adopting DevOps, the adoption challenges and solutions to overcome them, DevOps practices and problems faced by DevOps teams is obtained.

    Conclusions. Using the mixed methods approach, a final list of the benefits of adopting DevOps, DevOps adoption challenges, solutions to overcome the challenges, practices of DevOps, and the problems faced by DevOps teams during continuous integration, deployment and testing is obtained. The list is clearly elucidated in the document. The final list can aid researchers and software practitioners in obtaining a better understanding regarding the functioning and adoption of DevOps. Also, it has been observed that there is a need for more empirical research in this domain.

  • 57.
    Ambreen, T.
    et al.
    Int Islamic Univ, PAK.
    Ikram, N.
    Riphah Int Univ, PAK.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Niazi, M.
    King Fahd Univ Petr & Minerals, SAU.
    Empirical research in requirements engineering: trends and opportunities. 2018. In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 23, no 1, p. 63-95. Article in journal (Refereed)
    Abstract [en]

    Requirements engineering (RE), being a foundation of software development, has gained great recognition in the recent era of the prevailing software industry. A number of journals and conferences have published a great amount of RE research in terms of various tools, techniques, methods, and frameworks, with a variety of processes applicable in different software development domains. This plethora of empirical RE research needs to be synthesized to identify trends and future research directions. To represent the state of the art of requirements engineering, along with various trends and opportunities of empirical RE research, we conducted a systematic mapping study to synthesize the empirical work done in RE. We used four major databases (IEEE, ScienceDirect, SpringerLink and ACM) and identified 270 primary studies up to the year 2012. An analysis of the data extracted from the primary studies shows that empirical research work in RE has been on the increase since the year 2000. Requirements elicitation, with 22 % of the total studies, requirements analysis, with 19 %, and the RE process, with 17 %, are the major focus areas of empirical RE research. Non-functional requirements were found to be the most researched emerging area. The empirical work in the sub-area of requirements validation and verification is limited and shows a decreasing trend. The majority of the studies (50 %) used a case study research method, followed by experiments (28 %), whereas experience reports are few (6 %). A common trend in almost all RE sub-areas is the proposal of new interventions. The leading intervention types are guidelines, techniques and processes. Interest in empirical RE research is on the rise as a whole. However, the requirements validation and verification area, despite its recognized importance, currently lacks empirical research. Furthermore, requirements evolution and privacy requirements also have little empirical research. These RE sub-areas need the attention of researchers for more empirical research. At present, the focus of empirical RE research is more on proposing new interventions. In future, there is a need to replicate existing studies as well as to evaluate RE interventions in more real contexts and scenarios. Practitioners' involvement in empirical RE research needs to be increased so that they share their experiences of using different RE interventions and also inform us about the current requirements-related challenges and issues that they face in their work. © 2016 Springer-Verlag London

  • 58.
    Amiri, Mohammad Reza Shams
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rohani, Sarmad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Automated Camera Placement using Hybrid Particle Swarm Optimization. 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. Automatic placement of surveillance cameras' 3D models in an arbitrary floor plan containing obstacles is a challenging task. The problem becomes more complex when different types of region of interest (RoI) and minimum resolution are considered. An automatic camera placement decision support system (ACP-DSS) integrated into a 3D CAD environment could assist surveillance system designers in finding good camera settings under multiple constraints. Objectives. In this study we designed and implemented two subsystems: a camera toolset in SketchUp (CTSS) and a decision support system using an enhanced Particle Swarm Optimization (PSO) algorithm (HPSO-DSS). The objective for the proposed algorithm was to have good computational performance in order to quickly generate a solution to the automatic camera placement (ACP) problem. The new algorithm benefited from aspects of other heuristics, such as hill-climbing and greedy algorithms, as well as a number of new enhancements. Methods. Both CTSS and ACP-DSS were designed and constructed using the information technology (IT) research framework. A state-of-the-art evolutionary optimization method, Hybrid PSO (HPSO), implemented to solve the ACP problem, was the core of our decision support system. Results. CTSS was evaluated by some of its potential users, who employed it and then answered a survey; the evaluation confirmed an outstanding level of satisfaction among the respondents. Various aspects of the HPSO algorithm were compared to two other algorithms (PSO and a Genetic Algorithm), all implemented to solve our ACP problem. Conclusions. The HPSO algorithm provided an efficient mechanism to solve the ACP problem in a timely manner. The integration of ACP-DSS into CTSS might aid surveillance designers to more easily and adequately plan and validate the design of their security systems. The quality of CTSS, as well as the solutions offered by ACP-DSS, was confirmed by a number of field experts.
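The PSO core underlying HPSO-DSS can be illustrated with a generic swarm loop. This is a hedged sketch of plain PSO minimizing a stand-in objective; the thesis's hybrid adds hill-climbing and greedy-style enhancements that are not reproduced here, and the objective would be a camera-coverage measure rather than a toy function:

```python
import random

# Generic particle swarm optimization: each particle tracks its personal best,
# and the swarm shares a global best; velocities blend inertia with pulls
# toward both bests.
def pso(objective, dim=2, swarm=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

A hybrid variant of the kind the thesis describes would, for example, run a local hill-climbing pass on the global best between swarm iterations.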

  • 59.
    Amjad, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Malhi, Rohail Khan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Burhan, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    DIFFERENTIAL CODE SHIFTED REFERENCE IMPULSE-BASED COOPERATIVE UWB COMMUNICATION SYSTEM. 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Cooperative Impulse Radio Ultra-Wideband (IR-UWB) communication is a radio technology popular for short range communication systems, as it enables single-antenna mobiles in a multi-user environment to share their antennas by creating a virtual MIMO system to achieve transmit diversity. In order to improve cooperative IR-UWB system performance, we use Differential Code Shifted Reference (DCSR). Simulations are used to compute the Bit Error Rate (BER) of DCSR in a cooperative IR-UWB system using different numbers of decode-and-forward relays while changing the distance between the source and destination nodes. The results suggest that, compared to a Code Shifted Reference (CSR) cooperative IR-UWB communication system, the DCSR cooperative IR-UWB communication system performs better in terms of BER, power efficiency and channel capacity. The simulations are performed for both non-line-of-sight (NLOS) and line-of-sight (LOS) conditions, and the results confirm that the system has better performance under LOS channel conditions. The simulation results also show that performance improves as the number of relay nodes is increased to a sufficiently large number.

  • 60.
    Ammar, Doreid
    et al.
    Norwegian Univ Sci & Technol, NOR.
    De Moor, Katrien
    Norwegian Univ Sci & Technol, NOR.
    Xie, Min
    Next Generat Serv, Telenor Res, NOR.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Heegaard, Poul
    Norwegian Univ Sci & Technol, NOR.
    Video QoE Killer and Performance Statistics in WebRTC-based Video Communication. 2016. Conference paper (Refereed)
    Abstract [en]

    In this paper, we investigate session-related performance statistics of a Web-based Real-Time Communication (WebRTC) application called appear.in. We explore the characteristics of these statistics and how they may relate to users' Quality of Experience (QoE). More concretely, we have run a series of two-party tests according to different test scenarios, and collected real-time session statistics by means of Google Chrome's WebRTC-internals tool. Despite the fact that the Chrome statistics have a number of limitations, our observations indicate that they are useful for QoE research when these limitations are known and carefully handled in post-processing analysis. The results from our initial tests show that a combination of performance indicators measured at the sender's and receiver's ends may help to identify severe video freezes (an important QoE killer) in the context of WebRTC-based video communication. The performance indicators used in this paper are significant drops in data rate, non-zero packet loss ratios, non-zero PLI values, and non-zero bucket delay.
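The combination of indicators named at the end of the abstract can be expressed as a simple post-processing filter over per-interval session statistics. The field names, the 50 % drop threshold, and the OR-combination below are our assumptions for illustration, not the paper's calibrated rules:

```python
# Flag intervals of a WebRTC session whose statistics suggest a severe video
# freeze: a significant drop in data rate relative to the previous interval,
# or any non-zero packet loss, PLI count, or bucket delay.
def freeze_suspects(samples, drop_ratio=0.5):
    """samples: list of dicts with keys 'bitrate' (bits/s), 'packet_loss',
    'pli', 'bucket_delay'. Returns the indices of suspect intervals."""
    suspects = []
    for i, s in enumerate(samples):
        rate_drop = (i > 0 and samples[i - 1]['bitrate'] > 0
                     and s['bitrate'] < drop_ratio * samples[i - 1]['bitrate'])
        if (rate_drop or s['packet_loss'] > 0
                or s['pli'] > 0 or s['bucket_delay'] > 0):
            suspects.append(i)
    return suspects
```

In practice, such flags would be cross-checked against user-visible freezes before being treated as QoE killers.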

  • 61.
    AMUJALA, NARAYANA KAILASH
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    SANKI, JOHN KENNEDY
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Video Quality of Experience through Emulated Mobile Channels. 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Over the past few years, Internet traffic has increased sharply, and most of this traffic is video traffic. The latest Cisco forecast estimates that, by 2017, online video will be a highly adopted service with a large customer base. As networks become increasingly ubiquitous, applications are turning equally intelligent. A typical video communication chain involves transmission of encoded raw video frames with subsequent decoding at the receiver side. One such intelligent codec that is gaining large research attention is H.264/SVC, which can adapt dynamically to end device configurations and network conditions. With such bandwidth-hungry video communications running over lossy mobile networks, it is extremely important to quantify end user acceptability. This work primarily investigates problems at the player user interface level as compared to physical layer disturbances. We chose inter-frame time at the application layer to quantify the user experience (player UI) for varying lower layer metrics such as noise and link power, with illustrative demonstrator cases. The results show that extreme noise and low link level settings have an adverse effect on user experience in the temporal dimension: the videos are affected by frequent jumps and freezes.

  • 62.
    ananth, Indirajith Vijai
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Study on Assessing QoE of 3DTV Using Subjective Methods. 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The ever increasing popularity of and enormous growth in the 3D movie industry is the stimulating phenomenon behind the penetration of 3D services into home entertainment systems. Providing a third dimension gives an intense visual experience to viewers. Being a new field, there are several ongoing research efforts to measure the end user's viewing experience. Research groups including 3D TV manufacturers, service providers and standards organizations are interested in improving user experience. Recent research in 3D video quality measurement has revealed uncertain issues as well as more well-known results. Measuring perceptual stereoscopic video quality by subjective testing can provide practical results. This thesis studies and investigates three different rating scales (Video Quality, Visual Discomfort and Sense of Presence) and compares them by subjective testing, combined with two viewing distances at 3H and 5H, where H is the height of the display screen. This thesis work shows that a single rating scale produces the same result as three different scales, and that viewing distance has little or no impact on the Quality of Experience (QoE) of 3DTV for 3H and 5H distances for symmetric coding impairments.

  • 63.
    Anderberg, Ted
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Rosén, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Follow the Raven: A Study of Audio Diegesis within a Game’s Narrative. 2017. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Virtual Reality is one of the next big things in gaming; more and more games delivering an immersive VR experience are popping up. Words such as immersion and presence have quickly become buzzwords that are often used to describe a VR game or experience. This interactive simulation of reality is literally turning people's heads. The crowd pleaser, the ability to look around in 360 degrees, is however casting a shadow on the aural aspect. This study focused on this problem in relation to audio narrative. We examined which differences we could identify between a purely diegetic audio narrative and one utilizing a mix of diegetic and non-diegetic sound, and how to grab the player's attention and guide them to places in order for them to progress in the story. By spatializing audio using HRTF, we tested this dilemma through a game comparison with the help of soundscapes by R. Murray Schafer and the auditory hierarchy by David Sonnenschein, as well as inspiration from Actor-Network Theory. In our game comparison we found that while the synthesized, non-diegetic sound ensured that the sound grabs the player's attention, the risk of breaking the player's immersion also increases.

  • 64.
    Anderdahl, Johan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Darner, Alice
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Particle Systems Using 3D Vector Fields with OpenGL Compute Shaders. 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

    Context. Particle systems and particle effects are used to simulate a realistic and appealing atmosphere in many virtual environments. However, they occupy a significant amount of computational resources. The demand for more advanced graphics increases with each generation, and particle systems likewise need to become increasingly more detailed. Objectives. This thesis proposes a texture-based 3D vector field particle system, computed on the Graphics Processing Unit, and compares it to an equation-based particle system. Methods. Several tests were conducted comparing different situations and parameters for the methods. All of the tests measured the computational time needed to execute the different methods. Results. We show that the texture-based method was effective in very specific situations, where it was expected to outperform the equation-based method. Otherwise, the equation-based particle system is still the most efficient. Conclusions. Generally the equation-based method is preferred, except in very specific cases. The texture-based method is most efficient for static particle systems and when a huge number of forces is applied to a particle system. Texture-based vector fields are hardly useful otherwise.
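The two force-lookup strategies compared in the thesis can be contrasted in a small CPU sketch (the thesis implements them as OpenGL compute shaders; the field function, grid layout, and integration step here are illustrative assumptions, not the thesis code):

```python
# Equation-based lookup: evaluate an analytic force function per particle.
def equation_field(p):
    # example field: pull toward the x-axis plus a constant upward "wind"
    x, y, z = p
    return (-0.1 * x, 1.0, 0.0)

# Texture-based lookup: sample a precomputed 3D grid of force vectors with
# trilinear interpolation (the CPU analogue of a 3D texture fetch).
def sample_field(grid, p):
    """grid[x][y][z] -> (fx, fy, fz); p is clamped to the grid interior."""
    n = len(grid)
    def clamp(v):
        return max(0.0, min(v, n - 1.000001))
    x, y, z = (clamp(c) for c in p)
    x0, y0, z0 = int(x), int(y), int(z)
    tx, ty, tz = x - x0, y - y0, z - z0
    out = []
    for axis in range(3):
        acc = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((tx if dx else 1 - tx) *
                         (ty if dy else 1 - ty) *
                         (tz if dz else 1 - tz))
                    acc += w * grid[x0 + dx][y0 + dy][z0 + dz][axis]
        out.append(acc)
    return tuple(out)

def euler_step(p, v, force, dt=0.1):
    """One explicit Euler integration step for a particle."""
    v = tuple(vi + fi * dt for vi, fi in zip(v, force))
    p = tuple(pi + vi * dt for pi, vi in zip(p, v))
    return p, v
```

The trade-off the thesis measures follows directly: the grid sample costs the same no matter how many forces were baked into it, while the analytic version must evaluate every force per particle per frame.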

  • 65.
    Andersen, Dennis
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Screen-Space Subsurface Scattering, A Real-time Implementation Using Direct3D 11.1 Rendering API. 2015. Independent thesis Basic level (degree of Bachelor), 180 HE credits. Student thesis
    Abstract [en]

    Context. Subsurface scattering is the effect of light scattering within a material. Many materials on earth possess translucent properties. It is therefore an important factor to consider when trying to render realistic images. Historically the effect has been used in offline rendering with ray tracers, but it is now considered a real-time rendering technique, based on approximations of previous models. Early real-time methods approximate the effect in object texture space, which does not scale well in real-time applications such as games. A relatively new approach makes it possible to apply the effect as a post-processing effect using GPGPU capabilities, making this approach compatible with most modern rendering pipelines.

    Objectives. The aim of this thesis is to explore the possibilities of a dynamic real-time solution to subsurface scattering, using a modern rendering API to utilize GPGPU programming and modern data management, combined with previous techniques.

    Methods. The proposed subsurface scattering technique is implemented in a delimited real-time graphics engine using a modern rendering API, and the impact on performance is evaluated by conducting several experiments with specific properties.

    Results. The results obtained suggest that, by using a flexible solution to represent materials, execution time lands at an acceptable rate and the technique could be used in real time. The results show that the execution time grows nearly linearly with the number of layers and the strength of the effect. Because the technique is performed in screen space, the performance scales with subsurface scattering screen coverage and screen resolution.

    Conclusions. The technique could be used in real time and could trivially be integrated into most existing rendering pipelines. Further research and testing should be done in order to determine how the effect scales in a complex 3D game environment.
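The screen-space idea can be reduced to a post-process blur whose kernel approximates a diffusion profile, which also explains why cost grows with the number of layers. This is a simplified 1D CPU sketch of the general technique, not the thesis's Direct3D 11.1 implementation; the layer sigmas and weights are invented for illustration:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel of the given radius."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def sss_blur_row(row, layers):
    """Blur one row of luminance values with a weighted sum of Gaussian
    layers, e.g. layers = [(sigma, weight), ...], approximating a
    subsurface diffusion profile. Cost is linear in the number of layers."""
    total_w = sum(w for _, w in layers)
    out = [0.0] * len(row)
    for sigma, w in layers:
        radius = max(1, int(3 * sigma))
        k = gaussian_kernel(sigma, radius)
        for i in range(len(row)):
            acc = 0.0
            for j, kv in enumerate(k):
                idx = min(max(i + j - radius, 0), len(row) - 1)  # clamp edges
                acc += kv * row[idx]
            out[i] += (w / total_w) * acc
    return out
```

A real screen-space pass would run this separably (horizontal then vertical) over the lit frame and modulate the blur width per pixel by depth and material.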

  • 66.
    Andersson, Anders Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Facial Feature Tracking and Head Pose Tracking as Input for Platform Games. 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are defined as points distributed on a face in regards to certain facial features, such as eye corners and face contour. This opens up for using facial feature movements as a handsfree human-computer interaction technique. These alternatives to traditional input devices can give a more interesting gaming experience. They also open up for more intuitive controls and can possibly give greater access to computers and video game consoles for certain disabled users with difficulties using their arms and/or fingers.

    This research explores using facial feature tracking to control a character's movements in a platform game. The aim is to interpret facial feature tracker data and convert facial feature movements into game input controls. The facial feature input is compared with other hands-free input methods, as well as traditional keyboard input. The other hands-free input methods explored are head pose estimation and a hybrid between the facial feature and head pose estimation input. Head pose estimation is a method where the application extracts the angles at which the user's head is tilted. The hybrid input method utilises both head pose estimation and facial feature tracking.

    The input methods are evaluated by user performance and subjective ratings from voluntary participants playing a platform game using the input methods. Performance is measured by the time, the number of jumps and the number of turns it takes for a user to complete a platform level. Jumping is an essential part of platform games: to reach the goal, the player has to jump between platforms, and an inefficient input method might make this a difficult task. Turning is the action of changing the direction of the player character from facing left to facing right or vice versa. This measurement is intended to pick up difficulties in controlling the character's movements: if the player makes many turns, it is an indication that it is difficult to use the input method to control the character's movements efficiently.

    The results suggest that keyboard input is the most effective input method, while it is also the least entertaining of the input methods. There is no significant difference in performance between facial feature input and head pose input. The hybrid input method has the best results overall of the alternative input methods: it achieved significantly better performance results than the head pose and facial feature input methods, while its results were not statistically significantly different from those of the keyboard input method.

    Keywords: Computer Vision, Facial Feature Tracking, Head Pose Tracking, Game Control
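    The kind of mapping the abstract describes, from continuous tracker output to discrete game commands, can be sketched as below. This is our own illustrative example, not code from the thesis: the function name, the use of yaw/pitch angles, and the dead-zone thresholds are all assumptions.

    ```go
    // Hypothetical sketch of mapping head-pose angles from a tracker
    // to platform-game commands. Thresholds and names are illustrative,
    // not taken from the thesis implementation.
    package main

    import "fmt"

    // Command is a discrete game input derived from tracking data.
    type Command int

    const (
    	Idle Command = iota
    	MoveLeft
    	MoveRight
    	Jump
    )

    // fromHeadPose maps yaw (left/right head turn, degrees) and pitch
    // (up/down tilt, degrees) to a command using dead-zone thresholds,
    // so small involuntary head movements do not move the character.
    func fromHeadPose(yaw, pitch float64) Command {
    	const yawDeadZone, pitchJump = 10.0, 15.0
    	switch {
    	case pitch > pitchJump:
    		return Jump
    	case yaw < -yawDeadZone:
    		return MoveLeft
    	case yaw > yawDeadZone:
    		return MoveRight
    	default:
    		return Idle
    	}
    }

    func main() {
    	fmt.Println(fromHeadPose(-20, 0) == MoveLeft) // true
    	fmt.Println(fromHeadPose(3, 2) == Idle)       // true
    	fmt.Println(fromHeadPose(0, 25) == Jump)      // true
    }
    ```

    A hybrid input in the abstract's sense would combine such a head-pose mapping with separate facial-landmark signals (e.g. mouth opening) feeding the same command type.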

  • 67.
    Andersson, David
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nilsson, Eric
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Internet of Things: A survey about knowledge and thoughts2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
  • 68.
    Andersson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Silhouette-based Level of Detail: A comparison of real-time performance and image space metrics2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. The geometric complexity of objects in games and other real-time applications is a crucial aspect concerning the performance of the application. Such applications usually redraw the screen between 30 and 60 times per second, sometimes even more often, which can be a hard task in an environment with a high number of geometrically complex objects. The concept called Level of Detail, often abbreviated LoD, aims to alleviate the load on the hardware by introducing methods and techniques to minimize the amount of geometry while still maintaining the same, or a very similar, result.

    Objectives. This study will compare four commonly used techniques, namely Static LoD, Unpopping LoD, Curved PN Triangles, and Phong Tessellation. Phong Tessellation is silhouette-based, and since the silhouette is considered one of the most important properties, the main aim is to determine how it performs compared to the other three techniques.

    Methods. The four techniques are implemented in a real-time application using the modern rendering API Direct3D 11. Data will be gathered from this application to use in several experiments in the context of both performance and image space metrics.

    Conclusions. This study has shown that all of the techniques work in real time, but with varying results. From the experiments it can be concluded that the best technique to use is Unpopping LoD: it has good performance and provides a good visual result with the least average amount of popping of the compared techniques. The dynamic techniques are not suitable as a substitute for Unpopping LoD, but further research could be conducted to examine how they can be used together, and how the objects themselves can be designed with the dynamic techniques in mind.

  • 69.
    Andersson, Linda
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluation of HMI Development for Embedded System Control2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Context: Interface development is increasing in complexity, and applications with many functionalities that are reliable, understandable and easy to use have to be developed. To be able to compete, the time-to-market has to be short and the development cost effective. The development process is important and there are many aspects of it that can be improved. The needs of the development and the knowledge among the developers are key factors. Here code reuse, standardization and the usability of the development tool play an important role and could have a lot of positive impact on the development process and the quality of the final product.

    Objectives: A framework for describing important properties of HMI development tools is presented. A representative selection of two development tools is made and described, and based on the experiences from the case study their applicability is mapped to the evaluation framework.

    Methods: Interviews were conducted with HMI developers to get information from the field. Following that, a case study of the two development tools was made to highlight the pros and cons of each tool.

    Results: The properties presented in the evaluation framework are that the toolkit should be open to multiple platforms, accessible to the developer, support custom templates, require no extensive coding knowledge and be reusable. The evaluated frameworks show that it is hard to meet all the demands.

    Conclusions: Finding a well-suited development toolkit is not an easy task. The choice should be made depending on the needs of the HMI applications and the available development resources.

  • 70.
    Andersson, Lukas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison of Anti-Aliasing in Motion2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Aliasing is a problem that every 3D game has, because current monitor resolutions are not high enough. Aliasing is when you look at an object in a 3D world and see that it has jagged edges where it should be smooth. It can be reduced by a technique called anti-aliasing.

    Objectives. The objective of this study is to compare three different techniques, Fast Approximate Anti-Aliasing (FXAA), Subpixel Morphological Anti-Aliasing (SMAA) and Temporal Anti-Aliasing (TAA), in motion, to see which is a good default for games.

    Methods. An experiment was run where 20 people participated and tested a real-time prototype which had a camera moving through a scene multiple times with different anti-aliasing techniques.

    Results. The results showed that TAA consistently performed best in the tests of blurry picture quality, aliasing and flickering. SMAA and FXAA were only comparable to TAA in the blur part of the test and fell behind in all the other parts.

    Conclusions. TAA is a great anti-aliasing technique to use for avoiding aliasing and flickering while in motion. Blur was thought to be a problem, but as the test shows, most people did not feel that blur was a problem for any of the techniques used.

  • 71.
    Andersson, Marcus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nilsson, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improving Integrity Assurances of Log Entries From the Perspective of Intermittently Disconnected Devices2014Student thesis
    Abstract [en]

    It is common today in large corporate environments for system administrators to employ centralized systems for log collection and analysis. The log data can come from any device, from smart-phones to large-scale server clusters. During an investigation of a system failure or suspected intrusion these logs may contain vital information. However, the trustworthiness of this log data must be confirmed. The objective of this thesis is to evaluate the state of the art and provide practical solutions and suggestions in the field of secure logging. In this thesis we focus on solutions that do not require a persistent connection to a central log management system. To this end a prototype logging framework was developed, including client, server and verification applications. The client employs different techniques for signing log entries. The focus of this thesis is to evaluate each signing technique from both a security and a performance perspective. This thesis evaluates "Traditional RSA-signing", "Traditional Hash-chains", "Itkis-Reyzin's asymmetric FSS scheme" and "RSA signing and tick-stamping with TPM", the latter being a novel technique developed by us. In our evaluations we recognized the inability of the evaluated techniques to detect so-called 'truncation attacks', therefore a truncation detection module was also developed, which can be used independently of and side-by-side with any signing technique. In this thesis we conclude that our novel Trusted Platform Module technique has the most to offer in terms of log security, however it does introduce a hardware dependency on the TPM. We have also shown that the truncation detection technique can be used to assure an external verifier of the number of log entries that have at least passed through the log client software.
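    The "Traditional Hash-chains" scheme the abstract evaluates can be sketched as follows. This is a minimal illustration of the general idea, not the thesis code: each entry stores a digest covering the previous digest, so altering or reordering entries breaks every later link. The type and function names are our own.

    ```go
    // Minimal hash-chain log sketch: SHA-256 links each entry to its
    // predecessor so in-place tampering is detectable on verification.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    )

    // Entry is a log line plus the chained digest protecting it.
    type Entry struct {
    	Msg  string
    	Hash string // SHA-256(previous hash || message)
    }

    // link computes the chained hash for msg given the previous digest.
    func link(prev, msg string) string {
    	sum := sha256.Sum256([]byte(prev + msg))
    	return hex.EncodeToString(sum[:])
    }

    // Append adds a message to the chain.
    func Append(entries []Entry, msg string) []Entry {
    	prev := ""
    	if len(entries) > 0 {
    		prev = entries[len(entries)-1].Hash
    	}
    	return append(entries, Entry{Msg: msg, Hash: link(prev, msg)})
    }

    // Verify recomputes every link; it fails if any entry was altered.
    // Note that, like the schemes discussed in the abstract, a plain
    // chain does not by itself detect truncation (silently dropping
    // the newest entries), which is why a separate detection module
    // is needed.
    func Verify(entries []Entry) bool {
    	prev := ""
    	for _, e := range entries {
    		if link(prev, e.Msg) != e.Hash {
    			return false
    		}
    		prev = e.Hash
    	}
    	return true
    }

    func main() {
    	var entries []Entry
    	entries = Append(entries, "user alice logged in")
    	entries = Append(entries, "config changed")
    	fmt.Println(Verify(entries)) // true
    	entries[0].Msg = "tampered"
    	fmt.Println(Verify(entries)) // false
    }
    ```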

  • 72.
    Andersson, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Analys av Arbetsmiljöverkets tillämpning av enkätverktyget NOSACQ-502013Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    In 2013, the Swedish Work Environment Authority (Arbetsmiljöverket) introduced a new web-based survey tool intended to provide a measure of the safety culture of companies and organisations. The survey was based on earlier research that produced a questionnaire, NOSACQ-50, for precisely this purpose. Notably, Arbetsmiljöverket's version is shortened: some statements were removed and some were rewritten. This is what the analysis examines: how are the results affected by the changes made by Arbetsmiljöverket? The analysis investigates the possible margin of error in two ways. First, a theoretical margin of error is calculated, showing how much the results can differ. Then, the results from the survey are analysed using the same variables as were established in the theoretical analysis. It turns out that Arbetsmiljöverket's version of the survey can give rise to a margin of error of close to 0.8175 points. This margin is surprisingly large, even though it is based on a very unlikely situation. The next part of the analysis shows that the survey has a margin of error of <0.00 points, which means that the result is not affected to any great extent. This yields an interesting final result: a large margin of error is demonstrated in theory, but in practice it is close to non-existent. How the result should be interpreted is not entirely clear. There are a number of sources of error that must be taken into account, such as the low number of participants in the survey. The analysis is also largely based on subjective assessments, which reduces the credibility of the results. The author has therefore concluded that there is an evident difference in results between the analysed objects in theory. However, there is not enough data to establish any difference in practice. Nor is it possible to determine whether the theoretical analysis and its results are correct, only that the difference is there.

  • 73.
    Andersson, Tobias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Brenden, Christoffer
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Parallelism in Go and Java: A Comparison of Performance Using Matrix Multiplication2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis makes a comparison between the performance of Go and Java using parallelized implementations of the Classic Matrix Multiplication Algorithm (CMMA). The comparison attempts to only use features for parallelization, goroutines for Go and threads for Java, while keeping other parts of the code as generic and comparable as possible to accurately measure the performance during parallelization.

    In this report we ask the question of how programming languages compare in terms of multithreaded performance. In high-performance systems, such as those designed for mathematical calculations or servers meant to handle requests from millions of users, multithreading and by extension performance are vital. We would like to find out if and how much the choice of programming language could benefit these systems in terms of parallelism and multithreading.

    Another motivation is to analyze techniques and programming languages that have emerged that hide the complexity of handling multithreading and concurrency from the user, letting the user specify keywords or commands from which the language takes over and creates and manages the thread scheduling on its own. The Go language is one such example. Is this new technology an improvement over developers coding threads themselves, or is the technology not quite there yet?

    To these ends, experiments were done with multithreaded matrix multiplication, implemented using goroutines for Go and threads for Java, and performed with sets of 4096x4096 matrices. Background programs were limited, and each set of calculations was run multiple times to get average values for each calculation, which were then compared to one another.

    Results from the study showed that Go had ~32-35% better performance than Java between 1 and 4 threads, with the difference diminishing to ~2-5% at 8 to 16 threads. The difference, however, was believed to be mostly unrelated to parallelization, as both languages maintained near identical performance scaling as the number of threads increased, until the scaling flatlined for both languages at 8 threads and up. Java did continue to gain a slight increase going from 4 to 8 threads, but this was believed to be due to inefficient resource utilization on Java's part, or due to Java having better utilization of hyper-threading than Go.

    In conclusion, Go was found to be considerably faster than Java when going from the main thread up to 4 threads. At 8 threads and onward, Java and Go performed roughly equally. Regarding performance differences between numbers of threads within the languages themselves, no noticeable performance increase or decrease was found when creating 1 thread versus running the matrix multiplication directly on the main thread, for either of the two languages. Coding multithreading in Go was found to be easier than in Java while providing greater or equal performance: Go just requires the 'go' keyword, while Java requires thread creation and management. This would put Go in favor for those trying to avoid the complexity of multithreading while also seeking its benefits.
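    The goroutine-based approach the abstract describes can be sketched as below: the rows of the result matrix are split across goroutines and a `sync.WaitGroup` waits for all of them. This is an illustrative sketch of the technique, not the thesis code; the function names and the row-chunking strategy are our own assumptions.

    ```go
    // Classic matrix multiplication parallelized with goroutines by
    // splitting the result's rows across workers.
    package main

    import (
    	"fmt"
    	"sync"
    )

    // mulRows computes rows [lo, hi) of c = a * b.
    func mulRows(a, b, c [][]float64, lo, hi int, wg *sync.WaitGroup) {
    	defer wg.Done()
    	n, k := len(b[0]), len(b)
    	for i := lo; i < hi; i++ {
    		for j := 0; j < n; j++ {
    			sum := 0.0
    			for x := 0; x < k; x++ {
    				sum += a[i][x] * b[x][j]
    			}
    			c[i][j] = sum
    		}
    	}
    }

    // mulParallel splits the row range evenly over `workers` goroutines
    // (the 'go' keyword mentioned in the abstract) and waits for all.
    func mulParallel(a, b [][]float64, workers int) [][]float64 {
    	rows := len(a)
    	c := make([][]float64, rows)
    	for i := range c {
    		c[i] = make([]float64, len(b[0]))
    	}
    	var wg sync.WaitGroup
    	chunk := (rows + workers - 1) / workers
    	for lo := 0; lo < rows; lo += chunk {
    		hi := lo + chunk
    		if hi > rows {
    			hi = rows
    		}
    		wg.Add(1)
    		go mulRows(a, b, c, lo, hi, &wg)
    	}
    	wg.Wait()
    	return c
    }

    func main() {
    	a := [][]float64{{1, 2}, {3, 4}}
    	b := [][]float64{{5, 6}, {7, 8}}
    	fmt.Println(mulParallel(a, b, 2)) // [[19 22] [43 50]]
    }
    ```

    A Java version would need an explicit `Thread` (or executor) per row chunk plus a `join` on each, which is the management overhead the thesis contrasts with Go's single keyword.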

  • 74.
    Andrej, Sekáč
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance evaluation based on data from code reviews2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process.

    Objectives. This thesis uses machine learning methods for an approximation of a review expert's performance evaluation function. Due to the limited size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on the performance. In this research we propose features and also analyse their relevance to development performance evaluation.

    Methods. This thesis uses Radial Basis Function networks as the regression algorithm for the performance evaluation approximation and Metric Based Regularisation as the semi-supervised learning method. For the analysis of feature set and goodness of fit we use statistical tools with manual analysis.

    Results. The semi-supervised learning method achieved a similar accuracy to supervised versions of the algorithm. The feature analysis showed that there is a significant negative correlation between the performance evaluation and three other features. A manual verification of learned models on unlabelled data achieved 73.68% accuracy.

    Conclusions. We have not managed to prove that the used semi-supervised learning method performs better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to the change size, and the amount of code lines modified in later parts of development are relevant to the performance evaluation task with high probability. The achieved accuracy of models, close to 75%, leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in the performance evaluation approximation.

  • 75.
    Andresen, Mario
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Johnsson, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Analysmetod för designade ljudbilder: Skapandet av nya förhållningssätt för ljuddesign2015Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    The purpose of this bachelor thesis was to produce an analysis method for the deconstruction of sound design in film. The goal of the analysis method was to create an approach for sound designers that makes it easier to learn from sound work and, through practice, possibly contribute to a more advanced way of thinking about what sound can do in a film production.

    The method is intended as a support for aspiring sound designers who have learned the technical skills but struggle with the creative part, where the problems do not have equally concrete solutions. The method is also intended as a supplement for more experienced sound designers, for practising their skills or improving their own process.

    We think the method reached those goals, but it also turned out to be more flexible than that. By applying the method before working on our production, in which we created the sound design for a film clip, the design process became much easier to get through. We therefore believe that a method like ours can be an important part of bringing sound into a film production earlier.

  • 76.
    Andrén, Emma
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Davidsson, Sara
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Ego formatae: En studie i designprocessens agentiella perspektiv och påverkan på dess designer2019Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    In this study we examine how a design process can be viewed from an agential perspective. The agential perspective was developed by Karen Barad, who in her research in quantum physics began to investigate how different phenomena could take different forms, such as how light could take both particle and wave form (2003). By applying this perspective to how reality and situations are perceived from different directions, a broader picture is given of how reality-making comes about. We therefore wanted to examine this in relation to the design process: how reality is shaped if one views the design process from its own perspective instead of the designer's, and how the designer is shaped by the process. We have chosen to analyse the factors time, information management, process, design perspective, psychophysiology and media-technical installation art. We have also chosen to study Chion's (1994) concept of the audiovisual contract, which has been a foundation for our chosen design, through agential realism.

    Our design is built on intra-action, which explains the meeting between the different factors that influence a situation (agencies), and the audiovisual contract, which explains the interplay between sound, image and spectator, where we have chosen to start from media-technical installation art. With these as a starting point, a design was created in the form of an interactive projection mapping, which can be described as a projection in several layers interplaying with a soundscape.

    During the course of the process we have found more examples of how we as designers and individuals have been shaped and have changed various stances towards both the process and our reality, which we have reasoned about and discussed. The choice of our study is grounded in providing yet another knowledge base for what happens during a design process, and in raising awareness of perspectives that can ease the workflow. What actually happens to us in a design process, and how is the design affected?

  • 77.
    Andén, Calle
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Moyle, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Artem ex Machina: En undersökning av emergence som fenomen och som metod vid skapandet av posthumanistisk konst2017Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    This bachelor thesis is an investigation of the concept of emergence and how it can be used in the creation of digital interactive art. We examine the relationship between the designer and the computer, and how the user can interact with both of these to contribute to and shape the creation.

    To demonstrate this, we have created a simulation intended to mimic early human behaviour at a high level: the emergence of civilisations, interaction between groups of people, and the exploitation of natural resources. We discuss the ethical and political consequences that follow from creating such a simulation, and what kind of interaction we promote in our design.

  • 78.
    Angelova, Milena
    et al.
    Technical University of Sofia-branch Plovdiv, BUL.
    Vishnu Manasa, Devagiri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Linde, Peter
    Blekinge Institute of Technology, The Library.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Expertise Recommender System Based on Data from an Institutional Repository (DiVA)2018In: Proceedings of the 22nd edition of the International Conference on ELectronic PUBlishing, 2018Conference paper (Refereed)
    Abstract [en]

    Finding experts in academia is an important practical problem, e.g. recruiting reviewers for reviewing conference, journal or project submissions, partner matching for research proposals, finding relevant M.Sc. or Ph.D. supervisors, etc. In this work, we discuss an expertise recommender system that is built on data extracted from the Blekinge Institute of Technology (BTH) instance of the institutional repository system DiVA (Digital Scientific Archive). DiVA is a publication and archiving platform for research publications and student essays used by 46 publicly funded universities and authorities in Sweden and the rest of the Nordic countries (www.diva-portal.org). The DiVA classification system is based on the Swedish Higher Education Authority (UKÄ) and Statistics Sweden's (SCB) three-level classification system. Using the classification terms associated with student M.Sc. and B.Sc. theses published in the DiVA platform, we have developed a prototype system which can be used to identify and recommend subject thesis supervisors in academia.
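    The core matching step such a system needs, ranking candidate supervisors by the overlap between a query's classification terms and the terms of theses each supervisor has handled, can be sketched with Jaccard set similarity. This is our own illustration of one plausible approach, not the authors' prototype; all names and the similarity choice are assumptions.

    ```go
    // Rank supervisor profiles by Jaccard similarity between their
    // accumulated thesis classification terms and a query's terms.
    package main

    import (
    	"fmt"
    	"sort"
    )

    // set builds a term set from a list of classification terms.
    func set(terms ...string) map[string]bool {
    	s := make(map[string]bool)
    	for _, t := range terms {
    		s[t] = true
    	}
    	return s
    }

    // jaccard returns |a ∩ b| / |a ∪ b| for two term sets.
    func jaccard(a, b map[string]bool) float64 {
    	inter, union := 0, len(b)
    	for t := range a {
    		if b[t] {
    			inter++
    		} else {
    			union++
    		}
    	}
    	if union == 0 {
    		return 0
    	}
    	return float64(inter) / float64(union)
    }

    // recommend returns supervisor names sorted by descending
    // similarity to the query terms (ties broken alphabetically).
    func recommend(profiles map[string]map[string]bool, query map[string]bool) []string {
    	names := make([]string, 0, len(profiles))
    	for name := range profiles {
    		names = append(names, name)
    	}
    	sort.Slice(names, func(i, j int) bool {
    		si := jaccard(profiles[names[i]], query)
    		sj := jaccard(profiles[names[j]], query)
    		if si != sj {
    			return si > sj
    		}
    		return names[i] < names[j]
    	})
    	return names
    }

    func main() {
    	profiles := map[string]map[string]bool{
    		"A": set("machine learning", "data mining"),
    		"B": set("software testing", "empirical methods"),
    	}
    	fmt.Println(recommend(profiles, set("machine learning", "classification"))) // [A B]
    }
    ```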

  • 79.
    Angelova, Milena
    et al.
    Technical University of sofia, BUL.
    Vishnu Manasa, Devagiri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Linde, Peter
    Blekinge Institute of Technology, The Library.
    Lavesson, Niklas
    An Expertise Recommender System based on Data from an Institutional Repository (DiVA)2019In: Connecting the Knowledge Common from Projects to sustainable Infrastructure: The 22nd International conference on Electronic Publishing - Revised Selected Papers / [ed] Leslie Chan, Pierre Mounier, OpenEdition Press , 2019, p. 135-149Chapter in book (Refereed)
    Abstract [en]

    Finding experts in academia is an important practical problem, e.g. recruiting reviewers for reviewing conference, journal or project submissions, partner matching for research proposals, finding relevant M.Sc. or Ph.D. supervisors, etc. In this work, we discuss an expertise recommender system that is built on data extracted from the Blekinge Institute of Technology (BTH) instance of the institutional repository system DiVA (Digital Scientific Archive). DiVA is a publication and archiving platform for research publications and student essays used by 46 publicly funded universities and authorities in Sweden and the rest of the Nordic countries (www.diva-portal.org). The DiVA classification system is based on the Swedish Higher Education Authority (UKÄ) and Statistics Sweden's (SCB) three-level classification system. Using the classification terms associated with student M.Sc. and B.Sc. theses published in the DiVA platform, we have developed a prototype system which can be used to identify and recommend subject thesis supervisors in academia.

  • 80.
    Annavarjula, Vaishnavi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Computer-Vision Based Retinal Image Analysis for Diagnosis and Treatment2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context- Vision is one of the five elementary physiological senses. Vision is enabled via the eye, a very delicate sense organ which is highly susceptible to damage that results in loss of vision. The damage comes in the form of injuries or diseases such as diabetic retinopathy and glaucoma. While it is not possible to predict accidents, predicting the onset of disease in its earliest stages is highly attainable. Owing to the leaps in imaging technology, it is also possible to provide near-instant diagnosis by utilizing computer vision and image processing capabilities.

    Objectives- In this thesis, an algorithm is proposed and implemented to classify images of the retina as healthy or as one of two classes of unhealthy images, i.e. diabetic retinopathy and glaucoma, thus aiding diagnosis. Additionally, the algorithm is studied to investigate which image transformation is more feasible to implement within the scope of this algorithm and which region of the retina helps in accurate diagnosis.

    Methods- An experiment has been designed to facilitate the development of the algorithm. The algorithm is developed in such a way that it can accept all the values of a dataset concurrently and perform both the domain transforms independent of each other.

    Results- It is found that blood vessels help best in predicting disease associations, with the classifier giving an accuracy of 0.93 and a Cohen's kappa score of 0.90. Frequency-transformed images also presented a prediction accuracy of 0.93 on blood vessel images and 0.87 on optic disk images.

    Conclusions- It is concluded that blood vessels from the fundus images after frequency transformation give the highest accuracy for the developed algorithm when it uses a bag of visual words and an image category classifier model.

    Keywords-Image Processing, Machine Learning, Medical Imaging

  • 81.
    Ansari, Yousuf Hameed
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Siddiqui, Sohaib Ahmed
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Quality Assessment for HEVC Encoded Videos: Study of Transmission and Encoding Errors2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    There is a demand for video quality measurements in modern video applications, specifically in wireless and mobile communication. In real-time video streaming it is experienced that the quality of video becomes low due to different factors such as encoder and transmission errors. HEVC/H.265 is considered one of the most promising codecs for the compression of ultra-high definition videos. In this research, full-reference video quality assessment is performed. The raw-format reference videos have been taken from the Texas database to make the test video data set. The videos are encoded in HEVC format using the HM9 reference software. Encoding errors have been introduced during the encoding process by adjusting the QP values. To introduce packet loss into the video, a real-time environment has been created: videos are sent from one system to another over the UDP protocol using the NETCAT software, and packet loss is induced at different packet loss ratios using the NETEM software. After the compilation of the video data set, two kinds of analysis have been performed to assess the video quality. Subjective analysis has been carried out with different human subjects. Objective analysis has been achieved by applying five quality metrics: PSNR, SSIM, UIQI, VFI and VSNR. The objective measurement scores are compared with the subjective ones, and in the end results are deduced using classical correlation methods.

  • 82.
    Antman, Benjamin
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Cybernetic Social Space: A Theoretical Comparison of Mediating Spaces in Digital Culture2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    This essay presents a theoretical comparison of the intricate social production in digital and real spaces, proposing a model for the non-technical exploration of the social production of spaces relating to human digital technology. The ‘social space’ proposed by Henri Lefebvre (1974) - responsible for producing material space - and the holistic model of ‘cybernetic space’ proposed by Ananda Mitra and Rae Lynn Schwartz (2001) - responsible for supporting the production of real and digital spaces - are argued as collaboratively producing cybernetic social spaces, serving as the definition of a unified model for the production of spaces in contemporary society. The digital spaces are argued as being a similar analogue to classical ‘social space’. Two native cybernetic spaces are presented and discussed, argued as being responsible for the transitive production of digital and real spaces as they survey and situate the production of cybernetic social space. Finally, two case studies exemplifying the aesthetics and politics of cybernetic space are presented, analyzed and discussed in accordance with the proposed model of cybernetic social space.

  • 83.
    Anwar, Mahwish
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Digitalization in Container Terminal Logistics: A Literature Review2019In: 27th Annual Conference of International Association of Maritime Economists (IAME), Athens, Greece, 2019, p. 1-25, article id 141Conference paper (Refereed)
    Abstract [en]

    Many terminals located in large ports, such as the Port of Rotterdam, the Port of Singapore and the Port of Hamburg, employ various emerging digital technologies to handle containers and information. Technologies deemed attractive by large ports include Artificial Intelligence (AI), Cloud Computing, Blockchain and the Internet of Things (IoT). The objective of this paper is to review the state of the art of scientific literature on digital technologies that facilitate operations management for container terminal logistics. The studies are synthesized in the form of a classification matrix and analyzed. The primary studies consisted of 57 papers, selected out of an initial pool of over 2100 findings. Over 94% of the identified publications focused on AI, while 29% exploited IoT and Cloud Computing technologies combined. Research on Blockchain within the context of container terminals was nonexistent. The majority of the publications used numerical experiments and simulation for validation. A large share of the scientific literature was dedicated to resource management and the scheduling of intra-logistics equipment/vessels, berths or container storage in the yard. The results of the literature survey indicate that various research gaps exist. A discussion and analysis of the review is presented, which could benefit stakeholders of small and medium-sized container terminals.

  • 84.
    Anwar, Mahwish
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    The feasibility of Blockchain solutions in the maritime industry2019Conference proceedings (editor) (Other academic)
    Abstract [en]

    Purpose / Value

    The concept of Blockchain technology in supply chain management is well discussed, yet inadequately theorized in terms of its applicability, especially within the maritime industry, which forms a fundamental node of the entire supply chain network. Moreover, the assumptions associated with the technology have not been openly articulated, leading to unclear ideas about its applicability.

    Design/methodology/approach

    The research is divided into two stages. This paper (Stage One) uses an enhanced literature review for data collection in order to gauge the properties of Blockchain technology, and to understand and map those characteristics onto the Bill of Lading process within the maritime industry. In Stage Two, an online questionnaire is conducted to assess the feasibility of Blockchain technology for different maritime use cases.

    Findings

    The data, collected and analysed partly from a deliverable in the Connect2SmallPort Project and partly from other literature, suggest that Blockchain can be an enabler for improving the maritime supply chain. The use case presented in this paper highlights the practicality of the technology. It was identified that Blockchain possesses characteristics suitable to mitigate the risks and issues pertaining to the paper-based Bill of Lading process.

    Research limitations

    The study will mature further after the execution of Stage Two. By the end of both stages, a framework for Blockchain adoption with a focus on the maritime industry will be proposed.

    Practical implications

    The proposed outcome indicates the practicality of the technology, which could be beneficial for port stakeholders that wish to use Blockchain in processing Bills of Lading or contracts.

    Social implications

    The study may influence decision makers to consider the benefits of using Blockchain technology, thereby creating opportunities for the maritime industry to leverage the technology with government support.

  • 85.
    ANWAR, WALEED
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Quality Characteristics Tested For Mobile Application Development: Literature Review and Empirical Survey2015Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Smartphone use is increasing day by day, as there is a large number of app users. Due to this heavy use of apps, the testing of mobile applications should be done correctly and flawlessly to ensure the effectiveness of mobile applications.

  • 86.
    Aouachria, Moufida
    et al.
    Universite du Quebec a Montreal, CAN.
    Leshob, Abderrahmane
    Universite du Quebec a Montreal, CAN.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ghomari, Abdessamed Réda
    Ecole nationale superieure d'Informatique, DZA.
    Hadaya, Pierre
    Universite du Quebec a Montreal, CAN.
    Business Process Integration: How to Achieve Interoperability through Process Patterns2017In: Proceedings - 14th IEEE International Conference on E-Business Engineering, ICEBE 2017 - Including 13th Workshop on Service-Oriented Applications, Integration and Collaboration, SOAIC 207, Institute of Electrical and Electronics Engineers Inc. , 2017, p. 109-117Conference paper (Refereed)
    Abstract [en]

    Business process integration (BPI) is a crucial technique for supporting inter-organizational business interoperability. BPI allows the automation of business processes and the integration of systems across numerous organizations. The integration of organizations' process models is one of the most frequently addressed and used approaches to achieve BPI. However, this model integration is complex and requires that designers have extensive experience, in particular when organizations' business processes are incompatible. This paper considers the issue of modeling cross-organization processes out of a collection of organizations' private process models. To this end, we propose six adaptation patterns to resolve incompatibilities when combining organizations' processes. Each pattern is formalized with a workflow net. © 2017 IEEE.

  • 87.
    Aoun, Peter
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Berg, Nils
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Moving an on-screen cursor with the Emotiv Insight EEG headset: An evaluation through case studies2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Today smartphones are everywhere and they ease the lives of millions of people every day. However, there are people who, for various reasons, are unable to receive the benefits of these devices because they cannot interact with a smartphone in the intended way: using their hands.

    In this thesis we investigate an alternative method for interacting with a smartphone; using a commercially available electroencephalography (EEG) headset. EEG is a technique for measuring and recording brain activity, often through the use of sensors placed along the scalp of the user.

    We developed a prototype of a brain-computer interface (BCI) for use with android and the Emotiv Insight commercial EEG headset. The prototype allows the user to control an on-screen cursor in one dimension within an android application using the Emotiv Insight.

    We performed three case studies with one participant in each. The participants had no prior experience with EEG headsets or BCIs. We had them train to use the Emotiv Insight with our BCI prototype. After the training was completed they performed a series of tests in order to measure their ability to control an on-screen cursor in one dimension. Finally the participants filled out a questionnaire regarding their subjective experiences of using the Emotiv Insight.

    These case studies showed the inadequacies of the Emotiv Insight. All three participants had issues with training and using the headset. These issues are reflected in our tests, where 44 out of 45 attempts at moving the cursor to a specific area resulted in a failure. All participants also reported fatigue and headaches during the case studies. We also concluded that the Emotiv Insight provides a poor user experience because of fatigue in longer sessions and the amount of work needed to train the headset.

  • 88.
    APPELQVIST, ALBIN
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    TIMALM, DANIEL
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Ljuddesign i andrapersonsperspektivet2017Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    With this bachelor thesis we want to explore how the sound design of game worlds can be used and adapted to create sound design in the second-person perspective. Since the second-person perspective appears primarily in games, we want to form an understanding of what the soundscape can be like from the second-person perspective in audiovisual media. With the help of Michael Pauen's criteria for the first-, second- and third-person perspectives, we build an understanding of the philosophical aspects of the different perspectives. Pauen's criteria for the second-person perspective are based on the distinction and understanding between one's own consciousness and the other person's consciousness. Other important concepts used in our study are syncresis, our ability to connect sound to image, and diegesis, which determines which elements exist within a film's own world. The concepts of diffraction and situated knowledge have helped us use the knowledge we already possess, focus on all aspects of the sound design, and explore different viewpoints on how the second-person perspective can be portrayed. This knowledge is then applied to techniques such as sound recording, sound mapping, foley and mixing. Our experiments with recording and sound design for different views have resulted in an audiovisual work that shows what the second-person perspective can look like and what the soundscape could be. What we have found is that the shift between the protagonist and the second person in the narrative must be clearly conveyed in the audiovisual medium, and that it would have been clearer for the viewer to recognize the second-person perspective if it had been portrayed in longer narrative arcs. As explorers, we have become more aware of how the second-person perspective can be portrayed.

  • 89.
    Ardito, Luca
    et al.
    Politecnico di Torino, ITA.
    Coppola, Riccardo
    Politecnico di Torino, ITA.
    Torchiano, Marco
    Politecnico di Torino, ITA.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards automated translation between generations of GUI-based tests for mobile devices2018In: Companion Proceedings for the ISSTA/ECOOP 2018 Workshops, Association for Computing Machinery, Inc , 2018, p. 46-53Conference paper (Refereed)
    Abstract [en]

    Market demands for faster delivery and higher software quality are progressively becoming more stringent. A key hindrance for software companies to meet such demands is how to test the software, due to the intrinsic costs of development, maintenance and evolution of testware, especially since testware should be defined, and aligned, with all layers of the system under test (SUT), including all graphical user interface (GUI) abstraction levels. These levels can be tested with different generations of GUI-based test approaches, where 2nd generation, or Layout-based, tests leverage GUI properties and 3rd generation, or Visual, tests make use of image recognition. The two approaches provide different benefits and drawbacks and are seldom used together because of the aforementioned costs, despite growing academic evidence of their complementary benefits. In this work we propose a proof of concept of a novel two-step translation approach for Android GUI testing that we aim to implement, where a translator first creates a technology-independent script with the actions and elements of the GUI, and then translates it to a script with the syntax chosen by the user. The approach enables users to translate Layout-based to Visual scripts and vice versa, to gain the benefits (e.g. robustness, speed and ability to emulate the user) of both generations, whilst minimizing the drawbacks (e.g. development and maintenance costs). We outline our approach from a technical perspective, discuss some of the key challenges with the realization of our approach, evaluate the feasibility and the advantages provided by our approach on an open-source Android application, and discuss the potential industrial impact of this work. © 2018 ACM.

  • 90.
    Arlock, Jonatan
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Digitala Hantverk: Kunskapstraditioner i informationssamhället2016Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
  • 91.
    Armon, Negin
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Illustrerad narrativ2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    The art of telling stories with images and illustrations has been around for thousands of years. This bachelor thesis explores the subject of illustrated narrative and describes various aspects and principles that are crucial in order to convey a message using illustrations. Today, illustrated narrative is especially associated with comic books, picture books and graphic novels that tell a story in the form of a series of images. In the theoretical part of the work I first go through the history of illustrated narrative and then present its modern principles. Moreover, I report on my research on the communicative value of images and the interplay between image and text. In order to transform my theoretical understanding and research into practical form, in the production section of this work I present an illustrated book. In doing so, I explain the creative process of creating illustrated stories as well as the important principles for creating balance and harmony in illustrations.

  • 92.
    Arnesson, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Codename one and PhoneGap, a performance comparison2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Creating smartphone applications for more than one operating system requires knowledge of several programming languages, more code maintenance, higher development costs and longer development time. Cross-platform tools (CPTs) exist to make this easier, but using a CPT can decrease the performance of the application. Applications with low performance are more likely to be uninstalled, which makes developers lose income. There are four main CPT approaches: hybrid, interpreter, web and cross-compiler. Each has different advantages and disadvantages. This study examines the performance difference between two CPTs, Codename One and PhoneGap. The performance measurements CPU load, memory usage, energy consumption, execution time and application size are used to compare the CPTs. Whether cross-compilers have better performance than other CPT approaches is also investigated. An experiment is conducted in which three applications are created with native Android, Codename One and PhoneGap, and performance measurements are made. A literature study of research from IEEE and Engineering Village is conducted on the different CPT approaches. PhoneGap performed best, with the shortest execution time, the least energy consumption and the least CPU usage, while Codename One had the smallest application size and the least memory usage. The available research on CPT performance is scarce and not well executed. The difference between PhoneGap and Codename One is not large, except for writing to SQLite.

  • 93.
    Arnesson, Andreas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lewenhagen, Kenneth
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparison and Prediction of Temporal Hotspot Maps2018Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. To aid law enforcement agencies when coordinating and planning their efforts to prevent crime, there is a need to investigate methods used in such areas. With the help of crime analysis methods, law enforcement agencies are more efficient and pro-active in their work. One analysis method is temporal hotspot maps. The temporal hotspot map is often represented as a matrix with a certain resolution, such as hours and days, if the aim is to show occurrences per hour in correlation to weekday. This thesis includes a software prototype that allows for the comparison, visualization and prediction of temporal data.

    Objectives. This thesis explores whether multiprocessing can be utilized to improve execution time for the following two temporal analysis methods, Aoristic and Getis-Ord*. Furthermore, to what extent two temporal hotspot maps can be compared and visualized is researched. Additionally, it was investigated whether a naive method could be used to predict temporal hotspot maps accurately. Lastly, this thesis explores how different software packaging methods compare with respect to certain aspects defined in this thesis.

    Methods. An experiment was performed to answer whether multiprocessing could improve the execution time of Getis-Ord* or Aoristic. To explore how hotspot maps can be compared, a case study was carried out. Another experiment was used to answer whether a naive forecasting method can be used to predict temporal hotspot maps. Lastly, a theoretical analysis was executed to extract how different packaging methods work in relation to the defined aspects.

    Results. For both Getis-Ord* and Aoristic, the sequential implementations achieved the shortest execution time. The Jaccard measure calculated the similarity most accurately. The naive forecasting method created proved not adequate, and a more advanced method is preferred. Forecasting Swedish burglaries with the three previous months produced a mean of only 12.1% overlap between hotspots. The Python package method accumulated the highest score of the investigated packaging methods.

    Conclusions. The results showed that multiprocessing in the language Python is not beneficial to use for Aoristic and Getis-Ord* due to the high level of overhead. Further, the naive forecasting method did not prove practically useful in predicting temporal hotspot maps.
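    The Jaccard measure used above to compare hotspot maps can be sketched in a few lines. This is an illustrative sketch only: the thresholding rule for deciding which matrix cells count as "hot" is an assumption, as the thesis abstract does not specify it.

    ```python
    import numpy as np

    def jaccard_overlap(map_a: np.ndarray, map_b: np.ndarray, threshold: float) -> float:
        """Jaccard index between the hotspot cells (values >= threshold) of two
        temporal hotspot matrices, e.g. 24 hours x 7 weekdays."""
        hot_a = map_a >= threshold
        hot_b = map_b >= threshold
        union = np.logical_or(hot_a, hot_b).sum()
        if union == 0:
            return 1.0  # neither map has hotspots: treat as identical
        return float(np.logical_and(hot_a, hot_b).sum() / union)

    # Two tiny 2x2 "hotspot maps": one shared hot cell, two non-shared -> 1/3 overlap.
    a = np.array([[5, 0], [3, 1]])
    b = np.array([[5, 2], [0, 1]])
    print(round(jaccard_overlap(a, b, threshold=2), 3))  # 0.333
    ```

    A reported mean overlap of 12.1% would correspond to this index averaging about 0.121 over the forecast/actual map pairs.
    
    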

  • 94.
    Arredal, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Eye Tracking’s Impact on Player Performance and Experience in a 2D Space Shooter Video Game.2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Although a growing market, most of the commercially available games today that feature eye tracking support are rendered in a 3D perspective. Games rendered in 2D have seen little support for eye trackers from developers. By comparing the differences in player performance and experience between an eye tracker and a computer mouse when playing a classic 2D genre, the space shooter, this thesis aims to make an argument for the implementation of eye tracking in 2D video games.

    Objectives. Create a 2D space shooter video game where movement is handled through a keyboard but the input method for aiming alternates between a computer mouse and an eye tracker.

    Methods. Using a Tobii EyeX eye tracker, an experiment was conducted with fifteen participants. To measure their performance, three variables were used: accuracy, completion time and collisions. The participants played two modes of a 2D space shooter video game in a controlled environment. Depending on which mode was played, the input method for aiming was either an eye tracker or a computer mouse. The movement was handled using a keyboard in both modes. When the modes had been completed, a questionnaire was presented in which the participants rated their experience of playing the game with each input method.

    Results. The computer mouse had better performance in two out of three performance variables. On average the computer mouse had better accuracy and completion time but more collisions. However, the data gathered from the questionnaire show that the participants on average had a better experience when playing with an eye tracker.

    Conclusions. The results from the experiment show better performance for participants using the computer mouse, but participants felt more immersed with the eye tracker, giving it a better score in all experience categories. With these results, this study hopes to encourage developers to implement eye tracking as an interaction method for 2D video games. However, future work is necessary to determine whether the experience and performance increase or decrease as the playtime gets longer.

  • 95.
    Arvola Bjelkesten, Kim
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Feasibility of Point Grid Room First Structure Generation: A bottom-up approach2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Procedural generation becomes increasingly important for video games in an age where the scope of the content required demands both a lot of time and work. One of the fronts of this field is structure generation, where algorithms create models for game developers to use.

    Objectives. This study aims to explore the feasibility of the bottom-up approach within the field of structure generation for video games.

    Methods. An algorithm using the bottom-up approach, PGRFSG, is developed, and a user study is used to validate the results. Each participant evaluates five structures, giving each a score based on whether it belongs in a video game.

    Results. The participants' evaluations show that among the structures generated were some that definitely belonged in a video game world. Two of the five structures got a high score, though for one structure that was deemed not to be the case.

    Conclusions. Based on the results presented, it can be concluded that the PGRFSG algorithm creates structures that belong in a video game world and that the bottom-up approach is a suitable one for structure generation.

  • 96.
    Asif, Sajjad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating Web Size Metrics for Early Web Cost Estimation2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Web engineering is a new research field which utilizes engineering principles to produce quality web applications. Web applications have become more complex with the passage of time, and it is quite difficult to analyze web metrics for estimation due to the wide range of web applications. Correct estimates of web development effort play a very important role in the success of large-scale web development projects.

    Objectives. In this study I investigated the size metrics and cost drivers used by web companies for early web cost estimation. I also aim to validate the results through industrial interviews and a web quote form, designed on the basis of the most frequently occurring metrics identified when analyzing different companies. Secondly, this research revisits previous work by Mendes (a senior researcher and contributor in this research area) to validate whether early web cost estimation trends are the same or have changed. The ultimate goal is to help companies in web cost estimation.

    Methods. The first research question is answered by conducting an online survey of 212 web companies and examining their web predictor forms (quote forms). All companies included in the survey used web forms to give quotes on web development projects based on gathered size and cost measures. The second research question is answered by identifying the most frequently occurring size metrics from the results of Survey 1. The list of size metrics is validated by two methods: (i) industrial interviews conducted with 15 web companies to validate the results of the first survey, and (ii) a quote form designed using the validated results from the industrial interviews and sent to web companies around the world to seek data on real web projects. The data gathered from web projects are analyzed using a CBR tool, and the results are validated against the industrial interview results along with Survey 1. The final results are compared with the earlier research to answer the third research question, whether the size metrics have changed. All research findings are contributed to the Tukutuku research benchmark project.

    Results. “Number of pages/features” and “responsive implementation” are the top web size metrics for early web cost estimation.

    Conclusions. This research investigated metrics which can be used for early web cost estimation at the early stage of web application development. This is the stage where the application is not yet built and only the requirements are being collected while an expected cost estimate is produced. A list of new metric variables that can be added to the Tukutuku project is presented.

  • 97.
    Asklund, Ulf
    et al.
    Lund University, SWE.
    Höst, Martin
    Lund University, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Monitoring Effect of Architectural Changes2016In: Software Quality.: The Future of Systems- and Software Development / [ed] Winkler, Dietmar, Biffl, Stefan, Bergsmann, Johannes, 2016, p. 97-108Conference paper (Refereed)
    Abstract [en]

    A common situation is that an initial architecture was sufficient in the initial phases of a project, but when the size and complexity of the product increase, the architecture must be changed. In this paper, experiences are presented from changing an architecture into independent units, providing basic reuse of main functionality although giving higher priority to independence than to reuse. An objective was also to introduce metrics in order to monitor the architectural changes. The change was studied in a case study through weekly meetings with the team, collected metrics, and questionnaires. The new architecture was well received by the development team, who found it to be less fragile. Concerning the metrics for monitoring, it was concluded that a high abstraction level was useful for the purpose.

  • 98.
    Askwall, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Utvärderingsmetod Säkerhetskultur: Ett första steg i en valideringsprocess2013Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    Today, companies invest a great deal of money in securing their physical and logical assets with the help of technical protection mechanisms. However, all security in some way depends on the judgement and knowledge of the individual. How can an organization determine that it can trust the individual's judgement and knowledge? How can it determine whether it has a good security culture? By evaluating the security culture, organizations can obtain a broader basis for their risk management work and a better ability to handle threats to the organization's assets. The existing research in the field of security culture disagrees both on what constitutes a good security culture and, above all, on how the culture should be evaluated. This research effort is thus an attempt to develop an intuitive evaluation method that organizations can use to evaluate their security culture. The evaluation method resembles a gap analysis in which an organization's desired culture is established and data is collected through a questionnaire survey. The data is compiled and used to create an index for the current culture in comparison with the desired culture. In this initial attempt, the reliability of the survey is tested using Cronbach's alpha, and the validity is tested through a form of confirmatory factor analysis. The results show how an index representing an organization's security culture is created. Good reliability of the evaluation method can be demonstrated, and the author finds good arguments for the usefulness of such a method in proactive security work. However, circumstances have made it very difficult to demonstrate good validity in this initial study.

  • 99. Astor, Philipp
    et al.
    Adam, Marc
    Jerčić, Petar
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Schaaff, Kristina
    Weinhardt, Christof
    Integrating biosignals into information systems: A NeuroIS tool for improving emotion regulation2013In: Journal of Management Information Systems, ISSN 0742-1222, E-ISSN 1557-928X, Vol. 30, no 3, p. 247-277Article in journal (Refereed)
    Abstract [en]

    Traders and investors are aware that emotional processes can have material consequences for their financial decision performance. However, typical learning approaches for debiasing fail to overcome emotionally driven financial dispositions, mostly because of subjects' limited capacity for self-monitoring. Our research aims at improving decision makers' performance by (1) raising their awareness of their emotional state and (2) improving their skills for effective emotion regulation. To that end, we designed and implemented a serious-game-based NeuroIS tool that continuously displays the player's individual emotional state via biofeedback and adapts the difficulty of the decision environment to this emotional state. The design artifact was then evaluated in two laboratory experiments. Taken together, our study demonstrates how information systems design science research can contribute to improving financial decision making by integrating physiological data into information technology artifacts. Moreover, we provide specific design guidelines for how biofeedback can be integrated into information systems.

  • 100.
    Atchukatla, Mahammad suhail
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Algorithms for efficient VM placement in data centers: Cloud Based Design and Performance Analysis2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Content: Recent trends show that cloud computing adoption is continuously increasing in every organization. Demand for cloud datacenters has therefore grown tremendously over time, significantly increasing datacenter resource utilization. In this thesis work, research was carried out on optimizing energy consumption by packing virtual machines in the datacenter. The CloudSim simulator was used to evaluate bin-packing algorithms, and the OpenStack cloud computing environment was chosen as the platform for the practical implementation.

     

    Objectives: In this research, our objectives are as follows:

    • Perform simulation of algorithms in CloudSim simulator.
    • Estimate and compare the energy consumption of different packing algorithms.
    • Design an OpenStack testbed to implement the Bin packing algorithm.

     

    Methods:

    We use the CloudSim simulator to estimate the energy consumption of the First fit, First fit decreasing, Best fit, and Enhanced best-fit algorithms, and design a heuristic model, implemented in the OpenStack environment, for optimizing the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in the OpenStack environment.
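One of the placement heuristics compared above, first fit decreasing, can be sketched for a single resource dimension (CPU demand) and uniform host capacity. This is a simplified illustration of the general technique, not the thesis' implementation:

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """First-fit-decreasing bin packing for VM placement.

    vm_demands:    dict of VM name -> CPU demand
    host_capacity: capacity of each (identical) physical host
    Returns (placement: VM -> host index, number of hosts opened).
    """
    hosts = []       # remaining free capacity per opened host
    placement = {}
    # Sort VMs largest-first, then place each on the first host it fits
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:
            hosts.append(host_capacity - demand)  # open a new host
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

demands = {"vm1": 4, "vm2": 8, "vm3": 1, "vm4": 4, "vm5": 2, "vm6": 1}
plan, n_hosts = first_fit_decreasing(demands, host_capacity=10)
```

Fewer opened hosts translates directly into lower energy consumption, since unused hosts can be powered down; first fit and best fit differ only in the inner host-selection rule.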

     

    Results:

    In most cases, the enhanced best-fit algorithm gives the best results. Results were obtained both from the default OpenStack VM placement algorithm and from the heuristic algorithm developed in this work. Comparing them indicates that the total energy consumption of the data center is reduced without affecting potential service level agreements.

     

    Conclusions:

    The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and to minimize the energy consumption of the physical machines by shutting down the unused ones. The results indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.
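The shutdown step described in the conclusions can be sketched as a small helper that finds hosts left empty after consolidation. The names here are hypothetical; the thesis' actual wrapper drives the OpenStack (Nova) APIs for migration and power management:

```python
def shutdown_candidates(placement, hosts):
    """Hosts with no VMs assigned after consolidation can be powered off.

    placement: dict of VM name -> host name
    hosts:     list of all physical host names
    Returns the sorted list of idle hosts (shutdown candidates).
    Hypothetical helper; the real wrapper calls the OpenStack APIs.
    """
    used = {host for host in placement.values()}
    return sorted(h for h in hosts if h not in used)

# Two VMs consolidated onto one host out of three; the other two are idle
plan = {"vm1": "compute-0", "vm2": "compute-0"}
idle = shutdown_candidates(plan, ["compute-0", "compute-1", "compute-2"])
```

Powering the idle hosts back on when demand rises would be the symmetric step, which this sketch omits.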
