  • 151.
    Onyeche, Ikechukwu
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    EVALUATING THE INFLUENCE OF INFORMATION TECHNOLOGY ON 3D PRINTING FOR PRODUCT DEVELOPMENT IN LAGOS, NIGERIA (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Today, many organizations in the global market frequently scan and assess their environment, as consumers' tastes and preferences have become the sole target. A product that consumers preferred yesterday may unexpectedly lose value or be perceived as stale the next day, owing to the variety, quality options and advances in information technology offered by other firms. 3D printing technology describes a family of digital manufacturing technologies that produce component parts layer by layer by successively adding material. 3D printing consists of three core phases: modelling, printing and finishing of the product. In the modelling phase, 3D printing technology can offer additional improvements depending on the manufacturing method employed.

    Objectives: This research seeks to critically evaluate the influence of information technology on 3D printing for product development in Lagos, Nigeria. Variables such as product design tools, decision support systems and file transfer protocols were categorized into three phases, namely discovery, development and commercialization, as the major sub-constructs measuring the independent variable (information technology) against the dependent variable (3D printing and product development).

    Methods: The research was purely quantitative: questionnaires were used as the main evaluation instrument under a descriptive, cross-sectional research design (survey method). The population consisted of seven selected printing press companies in Lagos, Nigeria. The survey was used to generate responses related to the research questions and objectives.

    Results: The results of the study reveal that product design tools in the discovery and development phases, the decision support system in the development and commercialization phases, and the file transfer protocols in the discovery, development and commercialization phases have a significant influence on 3D printing for product development. It was also found that product design tools in the commercialization phase and the decision support system in the discovery phase do not have a significant influence on 3D printing for product development.

    Conclusions: The study concludes that product design tools in the discovery and development phases, the decision support system in the development and commercialization phases, and the file transfer protocols in all three phases have a significant influence on 3D printing for product development. The study therefore recommends further innovation and the standardisation of loan-application packages, which would act as a decision support system for employees when preparing consistent loan applications, credit analyses and the like. Finally, further research could draw on other related textbooks and journals.

  • 152.
    Orsvärn, Lukas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Automatic spotlight distribution for indirect illumination (2014). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    Context. Indirect illumination – the light contribution from bounce light in an environment – is an important effect when creating realistic images. Historically it has been approximated very poorly by applying a constant ambient term. This approximation is unacceptable if the goal is to create realistic results, as bouncing light contributes a great deal of light in the real world. Objectives. This thesis proposes a technique that uses a reflective shadow map to place and configure spotlights in an environment to approximate global illumination. Methods. The proposed spotlight distribution technique is implemented in a limited real-time graphics engine, and the results are compared to a naive spotlight distribution method. Results. The image produced by the proposed technique has lower quality than the comparison in our test scene. Conclusions. The technique could be used in its current state for applications where the view can be controlled by the developer, such as 3D side-scrolling games, or as a tool to generate editable indirect illumination. Further research is needed to make it more broadly viable.
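
    The abstract above describes distributing spotlights from a reflective shadow map (RSM) but does not include the algorithm. The Python sketch below illustrates one plausible reading, under stated assumptions: sample RSM texels with probability proportional to their stored flux and turn each sample into a spotlight at the texel position, aimed along the stored normal. The sampling scheme and the intensity split are illustrative choices, not the thesis' method.

        # Assumed sketch: place spotlights from reflective shadow map (RSM) texels.
        # Texels are sampled proportionally to their flux; each sampled texel becomes
        # a spotlight at its world position, aimed along its stored normal. This is
        # an illustration of the general idea, not the thesis' published algorithm.
        import numpy as np

        def place_spotlights(positions, normals, flux, count=16, seed=0):
            """positions, normals: (N, 3) arrays; flux: (N,) flux stored per texel."""
            rng = np.random.default_rng(seed)
            p = flux / flux.sum()                    # brighter texels are picked more often
            idx = rng.choice(len(flux), size=count, replace=False, p=p)
            share = flux.sum() / count               # each light carries an equal share of the flux
            return [{"position": positions[i],
                     "direction": normals[i],        # bounce light leaves along the normal
                     "intensity": share} for i in idx]

        # Toy RSM: 1024 texels on a floor plane facing up, random flux values.
        rng = np.random.default_rng(1)
        pos = np.column_stack([rng.uniform(-5, 5, 1024),
                               np.zeros(1024),
                               rng.uniform(-5, 5, 1024)])
        nrm = np.tile(np.array([0.0, 1.0, 0.0]), (1024, 1))
        flx = rng.uniform(0.0, 1.0, 1024)
        print(len(place_spotlights(pos, nrm, flx)), "spotlights placed")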

  • 153.
    Roosvall, Oscar
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Procedural Terrain Generation Using Ray Marching (2016). Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
  • 154. Peltonen, L.-M.
    et al.
    Alhuwail, D.
    Ali, S.
    Badger, M.K.
    Eler, G.J.
    Georgsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Islam, T.
    Jeon, E.
    Jung, H.
    Kuo, C.-H.
    Lewis, A.
    Pruinelli, L.
    Ronquillo, C.
    Sarmiento, R.F.
    Sommer, J.
    Tayaben, J.L.
    Topaz, M.
    Current trends in nursing informatics: Results of an international survey (2016). In: Studies in Health Technology and Informatics / [ed] Sermeus W., Weber P., Procter P.M., IOS Press, 2016, Vol. 225, p. 938-939. Conference paper (Refereed).
    Abstract [en]

    Nursing informatics (NI) can help provide effective and safe healthcare. This study aimed to describe current research trends in NI. In the summer of 2015, the IMIA-NI Students Working Group created and distributed an international online survey of current NI trends. A total of 402 responses were submitted from 44 countries. We identified the top five NI research areas: standardized terminologies, mobile health, clinical decision support, patient safety and big data research. NI research funding was considered difficult to acquire by the respondents. Overall, current NI research on education, clinical practice, administration and theory is still scarce, with theory being the least common. Further research is needed to explain the impact of these trends and the needs of clinical practice. © 2016 IMIA and IOS Press.

  • 155. Peltonen, L.-M.
    et al.
    Topaz, M.
    Ronquillo, C.
    Pruinelli, L.
    Sarmiento, R.F.
    Badger, M.K.
    Ali, S.
    Lewis, A.
    Georgsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Jeon, E.
    Tayaben, J.L.
    Kuo, C.-H.
    Islam, T.
    Sommer, J.
    Jung, H.
    Eler, G.J.
    Alhuwail, D.
    Nursing informatics research priorities for the future: Recommendations from an international survey (2016). In: NURSING INFORMATICS 2016: EHEALTH FOR ALL: EVERY LEVEL COLLABORATION - FROM PROJECT TO REALIZATION, IOS Press, 2016, Vol. 225, p. 222-226. Conference paper (Refereed).
    Abstract [en]

    We present one part of the results of an international survey exploring current and future nursing informatics (NI) research trends. The study was conducted by the International Medical Informatics Association Nursing Informatics Special Interest Group (IMIA-NISIG) Student Working Group. Based on findings from this cross-sectional study, we identified future NI research priorities. We used a snowball sampling technique to reach respondents from academia and practice. Data were collected between August and September 2015. Altogether, 373 responses from 44 countries were analyzed. The identified top ten NI trends were big data science, standardized terminologies (clinical evaluation/implementation), education and competencies, clinical decision support, mobile health, usability, patient safety, data exchange and interoperability, patient engagement, and clinical quality measures. Acknowledging these research priorities can enhance successful future development of NI to better support clinicians and promote health internationally. © 2016 IMIA and IOS Press.

  • 156.
    Peng, Cong
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Using Tag based Semantic Annotation to Empower Client and REST Service Interaction (2018). In: COMPLEXIS 2018 - Proceedings of the 3rd International Conference on Complexity, Future Information Systems and Risk, SciTePress, 2018, p. 64-71. Conference paper (Refereed).
    Abstract [en]

    Utilizing Web services is becoming increasingly labour-intensive as the Web grows rapidly. Semantically annotated service descriptions can support more automated handling of tasks such as service discovery, invocation and composition. However, the adoption of existing Semantic Web Services solutions is hindered by their complexity and the expertise they demand. In this paper we propose a lightweight and non-intrusive method to enrich REST Web service resources with semantic annotations, supporting more autonomous Web service utilization and generic client-service interaction. This is achieved by turning the service description into a semantic resource graph represented in RDF, using tag-based semantic annotations and a small vocabulary. The method is implemented with the popular OpenAPI service description format and illustrated with a simple use case example.
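
    The abstract above outlines the idea of turning a tag-annotated OpenAPI description into an RDF resource graph, but does not give the vocabulary or the mapping. The Python sketch below shows the general mechanism with rdflib; the x-semantic-tag extension field, the ex: namespace and the property names are invented for illustration and are not taken from the paper.

        # Minimal sketch: map tag-annotated OpenAPI operations to RDF triples.
        # The "x-semantic-tag" extension, the ex: namespace and the property names
        # are invented for illustration; the paper's actual vocabulary is not
        # given in the abstract.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/service#")

        openapi_fragment = {
            "paths": {
                "/patients/{id}/weight": {
                    "get": {
                        "operationId": "getWeight",
                        "summary": "Latest body weight of a patient",
                        "x-semantic-tag": "BodyWeight",   # hypothetical annotation
                    }
                }
            }
        }

        def openapi_to_graph(spec):
            """Turn each annotated operation into a small RDF resource description."""
            g = Graph()
            g.bind("ex", EX)
            for path, operations in spec["paths"].items():
                for method, op in operations.items():
                    node = URIRef(EX[op["operationId"]])
                    g.add((node, RDF.type, EX.Operation))
                    g.add((node, EX.httpMethod, Literal(method.upper())))
                    g.add((node, EX.path, Literal(path)))
                    g.add((node, RDFS.comment, Literal(op.get("summary", ""))))
                    tag = op.get("x-semantic-tag")
                    if tag:
                        # The tag links the operation to a domain concept.
                        g.add((node, EX.aboutConcept, EX[tag]))
            return g

        print(openapi_to_graph(openapi_fragment).serialize(format="turtle"))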

  • 157.
    Peng, Cong
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Ontological Approach to Integrate Health Resources from Different Categories of Services (2018). In: HEALTHINFO 2018, The Third International Conference on Informatics and Assistive Technologies for Health-Care, Medical Support and Wellbeing, International Academy, Research and Industry Association (IARIA), 2018, p. 48-54. Conference paper (Refereed).
    Abstract [en]

    Effective and convenient self-management of health requires the collaborative utilization of health data from different services provided by healthcare providers, consumer-facing products and even open data on the Web. Although health data interoperability standards, including Fast Healthcare Interoperability Resources (FHIR), have been developed and promoted, it is unrealistic to expect all the different categories of services to adopt them in the near future. The objective of this study is to apply Semantic Web technologies to integrate health data from heterogeneously built services. We present a Web Ontology Language (OWL) based ontology that jointly models health data from services implementing the FHIR standard, ordinary Web services and Linked Data. It operates on the resource integration layer of the presented layered integration architecture. An example use case demonstrates how this method integrates health data into a linked semantic health resource graph using the proposed ontology.

  • 158.
    Peng, Cong
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Fuzzy Matching of OpenAPI Described REST Services (2018). In: Procedia Computer Science, Elsevier, 2018, Vol. 126, p. 1313-1322. Conference paper (Refereed).
    Abstract [en]

    The vast number of Web services raises the problem of discovering desired services for composition and orchestration. Syntactic service matching methods based on classical set theory have difficulty capturing imprecise information. Therefore, an approximate service matching approach based on fuzzy control is explored in this paper. A service description matching model for the OpenAPI specification, the most widely used standard for describing de facto REST Web services, is proposed to realize fuzzy service matching with the fuzzy inference method developed by Mamdani and Assilian. An evaluation shows that the fuzzy service matching approach performs slightly better than the weighted approach in the studied setting.
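
    The abstract above names Mamdani-style fuzzy inference as the matching mechanism but does not spell out the membership functions or the rule base. The sketch below is a generic two-input Mamdani inference (triangular memberships, min/max rules, centroid defuzzification) over invented similarity scores; it illustrates the inference style only, not the paper's actual matching model.

        # Generic Mamdani-style fuzzy inference over two similarity scores
        # (name similarity and parameter similarity). The membership functions
        # and rules are invented for illustration, not taken from the paper.
        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with feet at a, c and peak at b."""
            return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                         (c - x) / (c - b + 1e-9)), 0.0)

        def match_degree(name_sim, param_sim):
            """Crisp 'match degree' in [0, 1] for one candidate service."""
            universe = np.linspace(0.0, 1.0, 201)          # output universe

            # Fuzzify the crisp inputs into "low" and "high" memberships.
            name_low, name_high = tri(name_sim, 0, 0, 0.6), tri(name_sim, 0.4, 1, 1)
            param_low, param_high = tri(param_sim, 0, 0, 0.6), tri(param_sim, 0.4, 1, 1)

            # Two rules: (low OR low) -> poor match, (high AND high) -> good match.
            poor = np.minimum(max(name_low, param_low), tri(universe, 0.0, 0.0, 0.5))
            good = np.minimum(min(name_high, param_high), tri(universe, 0.5, 1.0, 1.0))
            aggregated = np.maximum(poor, good)            # max aggregation

            # Centroid defuzzification.
            return float((aggregated * universe).sum() / (aggregated.sum() + 1e-9))

        print(round(match_degree(0.8, 0.7), 3))   # consistent evidence scores high
        print(round(match_degree(0.2, 0.9), 3))   # mixed evidence scores lower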

  • 159.
    Peng, Cong
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Linking Health Web Services as Resource Graph by Semantic REST Resource Tagging (2018). In: Procedia Computer Science / [ed] Shakshuki E., Yasar A., Elsevier, 2018, Vol. 141, p. 319-326. Conference paper (Refereed).
    Abstract [en]

    Various health Web services host a huge amount of health data about patients. The heterogeneity of these services hinders the collaborative utilization of the data, which could provide valuable support for the self-management of chronic diseases. The combination of REST Web services and Semantic Web technologies has proven to be a viable approach to addressing this problem. This paper proposes a method to add semantic annotations to REST Web services. The service descriptions and the resource representations with semantic annotations can be transformed into a resource graph. It integrates health data from different services and can link to health-domain ontologies and Linked Open Health Data to support health management and imaginative applications. The feasibility of our method is demonstrated by realizing it with an OpenAPI service description and a JSON-LD representation in an example use case.

  • 160.
    Peng, Cong
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Flexible System Architecture of PHR to Support Sharing Health Data for Chronic Disease Self-Management (2016). In: Global Telemedicine and eHealth Updates: Knowledge Resources Vol. 9, 2016 / [ed] M Jordanova; F Lievens, International Society for Telemedicine & eHealth, 2016, Vol. 9, p. 11-15. Conference paper (Refereed).
    Abstract [en]

    Health data sharing can help patients self-manage challenging chronic diseases outside the hospital. The patient-controlled electronic Personal Health Record (PHR), as a tool that manages comprehensive health data, is a natural entry point for sharing health data with multiple parties for long-term mutual benefit.

    However, sharing health data from a PHR remains challenging. Sharing has to be considered holistically, together with key issues such as privacy, compatibility and evolution. A PHR system should be flexible enough to aggregate a patient's health data from various sources so that the record stays comprehensive and up to date, to share different categories and levels of health data for various uses, and to embed emerging access control mechanisms that ensure privacy and security in different scenarios.

    Therefore, the flexibility of the system architecture with respect to integrating existing and future diversifications is crucial for a PHR's practical long-term usability. Drawing on the reviewed literature and experience from a previous study, this paper discusses the necessity of such a flexible PHR system architecture and offers some advice on possible solutions in the aspects mentioned above.

  • 161.
    Petersson Forsberg, Lena
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Spatial Planning.
    Eriksen, Sara
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Could mixed AC/DC power systems support more sustainable communities? (2016). In: 2016 1st International Conference on Sustainable Green Buildings and Communities, SGBC 2016, Institute of Electrical and Electronics Engineers Inc., 2016. Conference paper (Refereed).
    Abstract [en]

    The main contribution of this paper is to challenge the current understanding in Swedish spatial planning of power systems being, inherently, alternating current (AC) systems. We propose two areas for piloting local direct current (DC) systems as a way of introducing the concept of mixed AC/DC power systems with the aim of supporting more (self-)sustainable local communities. One is local recreational areas, so called 'green spaces' in urban planning. The other is eHealth, when it involves spatial planning and building, or rebuilding existing living space, for supporting healthcare provision in the patient's own home. The paper discusses how mixed AC/DC power systems might be introduced into the current planning discourse and practice in Sweden as part of the necessary reconceptualization of what sustainability means in spatial planning. The paper is based on the authors' previous and on-going research, in spatial planning and eHealth respectively, and has been inspired by the on-going research and development at IIT-M on robust and affordable local DC solutions for Indian households. It is an early-stage exploratory paper based on a recently initiated interdisciplinary dialogue between computer scientists and spatial planning researchers at Blekinge Institute of Technology, Sweden, about green infrastructuring for a more sustainable society. © 2016 IEEE.

  • 162.
    Petersson, Stefan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improving image quality by SSIM based increase of run-length zeros in GPGPU JPEG encoding (2014). In: Conference Record of the Asilomar Conference on Signals Systems and Computers, IEEE Computer Society, 2014, p. 1714-1718. Conference paper (Refereed).
    Abstract [en]

    JPEG encoding is a common technique to compress images. However, since JPEG is a lossy compression scheme, certain artifacts may occur in the compressed image. These artifacts typically occur in high-frequency or detailed areas of the image. This paper proposes an algorithm based on the SSIM metric to improve the experienced quality of JPEG encoded images. The algorithm improves the quality in detailed areas by up to 1.29 dB while reducing the quality in less detailed areas of the image, thereby increasing the overall experienced quality without increasing the image data size. Further, the algorithm can also be used to decrease the file size (by up to 43%) while preserving the experienced image quality. Finally, an efficient GPU implementation is presented. © 2014 IEEE.
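
    The algorithm summarised above is driven by the SSIM metric. As a reference point, a minimal single-window SSIM computation is sketched below in Python; applying it per 8x8 JPEG block is an assumption on our part, and the constants are the commonly used SSIM defaults rather than values taken from the paper.

        # Minimal SSIM between two equally sized grayscale blocks (numpy only).
        # The constants C1, C2 assume 8-bit pixel values; evaluating the metric
        # per 8x8 JPEG block is an assumption, the paper's exact windowing is
        # not given in the abstract.
        import numpy as np

        def ssim(x, y, data_range=255.0):
            c1 = (0.01 * data_range) ** 2
            c2 = (0.03 * data_range) ** 2
            x = x.astype(np.float64)
            y = y.astype(np.float64)
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            return ((2 * mx * my + c1) * (2 * cov + c2)) / \
                   ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

        # Example: SSIM of a block against a coarsely quantised copy of itself.
        rng = np.random.default_rng(0)
        block = rng.integers(0, 256, size=(8, 8))
        degraded = (block // 16) * 16
        print(round(ssim(block, degraded), 4))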

  • 163.
    Petersson, Stefan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Color demosaicing using structural instability (2016). In: Proceedings - 2016 IEEE International Symposium on Multimedia, ISM 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 541-544. Conference paper (Refereed).
    Abstract [en]

    This paper introduces a new metric for approximating structural instability in Bayer image data. We show that the metric can be used to identify and classify the validity of color correlation in local image regions. The metric is used to improve the interpolation performance of an existing state-of-the-art single-pass linear demosaicing algorithm, with virtually no impact on GPGPU computational complexity and performance. Using four different image sets, the modification is shown to outperform the original method in terms of visual quality, with an average increase in PSNR of 0.7 dB in the red, 1.5 dB in the green and 0.6 dB in the blue channel. Because of fewer high-frequency artifacts, the average output data size also decreases by 2.5%. © 2016 IEEE.
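
    The structural-instability metric itself is not defined in the abstract above and is therefore not reproduced here; the sketch below only shows the kind of baseline step such a metric would steer, a single-pass bilinear demosaic of an RGGB Bayer mosaic, under the assumption that this is representative of the linear interpolation being improved.

        # Bilinear demosaicing of an RGGB Bayer mosaic (numpy + scipy). Only the
        # baseline interpolation is shown; the structural-instability metric the
        # paper uses to steer interpolation is not reproduced here.
        import numpy as np
        from scipy.signal import convolve2d

        def bilinear_demosaic(bayer):
            h, w = bayer.shape
            r_mask = np.zeros((h, w))
            r_mask[0::2, 0::2] = 1            # RGGB layout assumed
            b_mask = np.zeros((h, w))
            b_mask[1::2, 1::2] = 1
            g_mask = 1 - r_mask - b_mask

            k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
            k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

            r = convolve2d(bayer * r_mask, k_rb, mode="same")
            g = convolve2d(bayer * g_mask, k_g, mode="same")
            b = convolve2d(bayer * b_mask, k_rb, mode="same")
            return np.dstack([r, g, b])

        # Sanity check: a flat grey mosaic reconstructs exactly away from borders.
        flat = np.full((8, 8), 128.0)
        rgb = bilinear_demosaic(flat)
        print(np.allclose(rgb[2:-2, 2:-2], 128.0))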

  • 164.
    Petrini, Alexander
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Forslin, Henrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Evaluation of Player Performance with a Brain Computer Interface and Eye Tracking Control in an Entertainment Game Application (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
  • 165.
    Pettersson, Erik
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Social Interaction with Real-time Facial Motion Capture (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. Social interaction between player avatars is one of the fundamentals of online multiplayer games, where text and voice chat are used as the standard methods of communication. To express emotions, players have the option to use emotes, which are text-based commands that play animations on the player's avatar.

    Objectives. This study investigates whether real-time facial capture is preferred and perceived as more realistic, compared to typing emote commands to play facial animations, when expressing emotions to other players.

    Methods. A user study with 24 participants, comparing two methods of social interaction between players, was conducted in a private conference room. In the experiment, each participant performed facial expressions with both methods: one by typing emote commands, the other by performing the expressions in real time with their own calibrated face, captured by a web camera. Each participant performed and ranked the realism of each facial expression in a survey for each method. A final survey then determined which method each participant performed better with and which method was preferred the most.

    Results. The results showed that there was a difference in realism between facial expressions within both methods, with happiness being the most realistic. Disgust and sadness, however, were poorly rated when expressed with the face. There was no difference in realism between the methods, nor in which method participants preferred the most. There was, however, a difference in each participant's performance with the two methods, with typing emote commands giving higher performance.

    Conclusions. The results show that there is no difference in realism or preference between typing emote commands to play facial animations and performing facial expressions in real time with the face. There was, however, a difference in performance between the methods, with facial expressions performed with the face scoring lowest. This confirms that real-time facial capture technology needs improvements to fully recognize and track the features of the human face.

  • 166.
    Pettersson, Erik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    A Perceptual Evaluation of Social Interaction with Emotes and Real-time Facial Motion Capture (2017). In: MIG'17: PROCEEDINGS OF THE TENTH INTERNATIONAL CONFERENCE ON MOTION IN GAMES / [ed] Spencer, SN, ACM Publications, 2017. Conference paper (Refereed).
    Abstract [en]

    Social interaction between players is an important feature in online games, where text and voice chat are standard ways to communicate. To express emotions, players can type emotes, which are text-based commands that play animations on the player avatar. This paper presents a perceptual evaluation which investigates whether expressing emotions with the face instead, in real time with a web camera, is perceived as more realistic and is preferred in comparison to typing emote-based text commands. A user study with 24 participants was conducted in which the two methods of expressing emotions described above were evaluated. For both methods the participants ranked the realism of facial expressions, which were based on the seven universal emotions described by the American psychologist Paul Ekman: happiness, anger, fear, sadness, disgust, surprise and contempt. The participants also ranked their perceived efficiency when performing the two methods and selected the method they preferred. A significant difference was shown when analyzing the ranked realism of the facial expressions. Happiness was perceived as the most realistic in both methods, while disgust and sadness were poorly rated when performed with the face. One conclusion of the perceptual evaluation was that realism and preference showed no significant differences between the methods. However, participants performed better when typing emotes. Real-time facial capture technology also needs improvements to obtain better recognition and tracking of facial features in the human face.

  • 167.
    Poreddy, Mahathi
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden..
    On Outage Probability of Cooperative Cognitive Radio Networks Over k-mu Shadowed Fading (2017). In: 2017 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), IEEE, 2017. Conference paper (Refereed).
    Abstract [en]

    In this paper, we study the outage probability of a multiple relay cooperative cognitive radio network over kappa-mu shadowed fading channels where the relays use the decode-and-forward (DF) protocol. The probability density function and cumulative distribution function of the total signal-to-noise ratio at the secondary receiver are derived for the case of selection combining. In particular, approximating the channels by Nakagami-m fading results in an analytical expression for the outage probability that can be expressed in terms of well-known functions. This approximation provides a good match when the parameters of the kappa-mu shadowed fading channels translate to integer values of the fading severity parameter m. Numerical results are provided to illustrate the impact of system parameters on the outage probability for kappa-mu shadowed fading channels.
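
    The closed-form expressions mentioned in the abstract above are not reproduced here. As an illustrative counterpart, the Python sketch below estimates the outage probability of a decode-and-forward relay network with selection combining by Monte Carlo simulation, under the Nakagami-m approximation the abstract mentions; all link parameters are invented example values.

        # Monte Carlo outage probability for a decode-and-forward (DF) multi-relay
        # link with selection combining, under Nakagami-m fading (channel power
        # gains are Gamma distributed with unit mean). The parameters below are
        # invented examples, not values from the paper.
        import numpy as np

        def outage_probability(num_relays=3, m=2.0, mean_snr_db=10.0,
                               threshold_db=5.0, trials=200_000, seed=1):
            rng = np.random.default_rng(seed)
            mean_snr = 10 ** (mean_snr_db / 10)
            gamma_th = 10 ** (threshold_db / 10)

            # Nakagami-m fading: power gain ~ Gamma(shape=m, scale=1/m).
            sr = mean_snr * rng.gamma(m, 1 / m, size=(trials, num_relays))  # source -> relay
            rd = mean_snr * rng.gamma(m, 1 / m, size=(trials, num_relays))  # relay -> destination

            decoded = sr >= gamma_th                     # a relay forwards only if it decodes
            rd_effective = np.where(decoded, rd, 0.0)

            best = rd_effective.max(axis=1)              # selection combining at the receiver
            return float((best < gamma_th).mean())

        print(outage_probability())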

  • 168.
    Rahm, Marcus
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Forward plus rendering performance using the GPU vs CPU multi-threading: A comparative study of culling process in Forward plus (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. The rendering techniques in games aim to shade the scene at as high a quality as possible while being as efficient as possible. More advanced tools, such as compute shaders, have been developed and allow the shading process to be sped up more efficiently. One rendering technique that makes use of this is Forward plus rendering, which uses a compute shader to perform a culling pass over all the lights. However, not all computers can make use of compute shaders.

    Objectives. The aims of this thesis are to investigate the performance of using the CPU to perform the light culling required by the Forward plus rendering technique and to compare it to the performance of a GPU implementation. A further aim is to explore whether the CPU can be an alternative solution for the light culling in the Forward plus rendering technique.

    Methods. The standard Forward plus is implemented using a compute shader. Forward plus is then implemented using a multi-threaded CPU to perform the light culling. Both versions of Forward plus are evaluated by sampling the frames per second during tests with specific properties.

    Results. The results show that there is a difference in performance between the CPU and GPU implementations of Forward plus. The difference is fairly significant: with 256 lights rendered, the GPU implementation delivers 126% more frames per second than the CPU implementation. However, the results also show that the performance of the CPU implementation of Forward plus is viable, as it stays above 30 frames per second with fewer than 2048 lights in the scene. It also outperforms basic Forward rendering.

    Conclusions. This thesis shows that a multi-threaded CPU can be used to cull lights for Forward plus rendering and that it is a viable choice over basic Forward rendering. With 64 lights, the CPU implementation delivers 133% more frames per second than basic Forward rendering.
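
    The thesis compares GPU (compute shader) and multi-threaded CPU light culling, but the abstract contains no code. The Python sketch below shows the core per-tile culling idea with a thread pool; culling 2D screen-space circles against 16x16 pixel tiles is a deliberate simplification of the per-tile frustum tests a real Forward plus implementation performs.

        # Simplified Forward-plus-style light culling on the CPU: split the screen
        # into 16x16 pixel tiles and record, per tile, which lights overlap it.
        # Real implementations cull 3D light volumes against per-tile view frusta;
        # screen-space circles are used here purely for illustration.
        from concurrent.futures import ThreadPoolExecutor
        from dataclasses import dataclass
        import random

        TILE = 16
        WIDTH, HEIGHT = 1280, 720

        @dataclass
        class Light:
            x: float        # projected screen-space centre
            y: float
            radius: float   # projected screen-space radius

        def cull_tile(args):
            tx, ty, lights = args
            x0, y0 = tx * TILE, ty * TILE
            x1, y1 = x0 + TILE, y0 + TILE
            visible = []
            for i, light in enumerate(lights):
                # Closest point of the tile to the light centre.
                cx = min(max(light.x, x0), x1)
                cy = min(max(light.y, y0), y1)
                if (light.x - cx) ** 2 + (light.y - cy) ** 2 <= light.radius ** 2:
                    visible.append(i)
            return (tx, ty), visible

        def cull_lights(lights, workers=8):
            tiles = [(tx, ty, lights)
                     for ty in range(HEIGHT // TILE)
                     for tx in range(WIDTH // TILE)]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return dict(pool.map(cull_tile, tiles))

        random.seed(0)
        lights = [Light(random.uniform(0, WIDTH), random.uniform(0, HEIGHT),
                        random.uniform(20, 120)) for _ in range(256)]
        per_tile = cull_lights(lights)
        print(round(sum(len(v) for v in per_tile.values()) / len(per_tile), 1),
              "lights per tile on average")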

  • 169. Roberts, Kirk
    et al.
    Boland, Mary Regina
    Pruinelli, Lisiane
    Dcruz, Jina
    Berry, Andrew
    Georgsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Hazen, Rebecca
    Sarmiento, Raymond F
    Backonja, Uba
    Yu, Kun-Hsing
    Jiang, Yun
    Brennan, Patricia Flatley
    Biomedical informatics advancing the national health agenda: the AMIA 2015 year-in-review in clinical and consumer informatics (2017). In: JAMIA Journal of the American Medical Informatics Association, ISSN 1067-5027, E-ISSN 1527-974X, Vol. E1, p. E185-E190. Article in journal (Refereed).
    Abstract [en]

    The field of biomedical informatics experienced a productive 2015 in terms of research. In order to highlight the accomplishments of that research, elicit trends, and identify shortcomings at a macro level, a 19-person team conducted an extensive review of the literature in clinical and consumer informatics. The result of this process included a year-in-review presentation at the American Medical Informatics Association Annual Symposium and a written report (see supplemental data). Key findings are detailed in the report and summarized here. This article organizes the clinical and consumer health informatics research from 2015 under 3 themes: the electronic health record (EHR), the learning health system (LHS), and consumer engagement. Key findings include the following: (1) There are significant advances in establishing policies for EHR feature implementation, but increased interoperability is necessary for these to gain traction. (2) Decision support systems improve practice behaviors, but evidence of their impact on clinical outcomes is still lacking. (3) Progress in natural language processing (NLP) suggests that we are approaching but have not yet achieved truly interactive NLP systems. (4) Prediction models are becoming more robust but remain hampered by the lack of interoperable clinical data records. (5) Consumers can and will use mobile applications for improved engagement, yet EHR integration remains elusive.

  • 170.
    Sandhu, Momin Jamil
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Harman/Becker Automotive Systems GmbH, Karlsbad, Germany.
    On Sequence Design for Integrated Radar and Communication Systems (2017). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    The motivation of having a joint radar and communication system on a single hardware is driven by space, military, and commercial applications. However, designing sequences that can simultaneously support radar and communication functionalities is one of the major hurdles in the practical implementation of these systems. In order to facilitate a simultaneous use of sequences for both radar and communication systems, a flexible sequence design is needed.

    The objective of this dissertation is to address the sequence design problem for integrated radar and communication systems. The sequence design for these systems requires a trade-off between different performance measures, such as correlation characteristics, integrated sidelobe ratio, peak-to-sidelobe ratio and ambiguity function. The problem of finding a trade-off between various performance measures is solved by employing meta-heuristic algorithms.

    This dissertation is divided into an introduction and three research parts based on peer-reviewed publications. The introduction provides background on binary and polyphase sequences, their use in radar and communication systems, sequence design requirements for integrated radar and communication systems, and application of meta-heuristic optimization algorithms to find optimal sets of sequences for these systems.

    In Part I-A, the performance of conventional polyphase pulse compression sequences is compared with Oppermann sequences. In Part I-B, weighted pulse trains with the elements of Oppermann sequences serving as complex-valued weights are utilized for the design of integrated radar and communication systems. In Part I-C, an analytical expression for the cross-ambiguity function of weighted pulse trains with Oppermann sequences is derived. Several properties of the related auto-ambiguity and cross-ambiguity functions are derived in Part I-D. In Part II, the potential of meta-heuristic algorithms for finding optimal parameter values of Oppermann sequences for radar, communications, and integrated radar and communication systems is studied. In Part III-A, a meta-heuristic algorithm mimicking the breeding behavior of cuckoos is used to locate more than one solution for multimodal problems, and its performance is evaluated in additive white Gaussian noise (AWGN). It is shown that the Cuckoo search algorithm can successfully locate multiple solutions, in both noise-free and AWGN conditions, with a relatively high degree of accuracy. In Part III-B, the cross-ambiguity function synthesization problem is addressed. A meta-heuristic algorithm based on the echolocation of bats is used to design a pair of sequences that minimizes the integrated square error between the desired cross-ambiguity function and a synthesized cross-ambiguity function.
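
    Two of the performance measures named in the summary above, the integrated sidelobe ratio and the peak-to-sidelobe ratio, follow directly from a sequence's aperiodic autocorrelation. The Python sketch below computes both for a simple quadratic-phase polyphase sequence; the Oppermann parameterisation used in the thesis is not reproduced, so the chirp-like sequence here is only a stand-in.

        # Integrated sidelobe level (ISL) and peak sidelobe level (PSL) from the
        # aperiodic autocorrelation of a polyphase sequence. A quadratic-phase
        # (chirp-like) sequence is used as a stand-in; the Oppermann
        # parameterisation from the thesis is not reproduced here.
        import numpy as np

        def aperiodic_autocorr(seq):
            n = len(seq)
            full = np.correlate(seq, seq, mode="full")   # lags -(n-1) .. n-1
            return full[n - 1:]                          # keep lags 0 .. n-1

        def sidelobe_metrics(seq):
            r = np.abs(aperiodic_autocorr(seq))
            mainlobe, sidelobes = r[0], r[1:]
            isl_db = 10 * np.log10(np.sum(sidelobes ** 2) / mainlobe ** 2)
            psl_db = 20 * np.log10(np.max(sidelobes) / mainlobe)
            return isl_db, psl_db

        N = 64
        n = np.arange(N)
        chirp = np.exp(1j * np.pi * n ** 2 / N)          # quadratic-phase sequence
        isl, psl = sidelobe_metrics(chirp)
        print(f"ISL = {isl:.2f} dB, PSL = {psl:.2f} dB")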

  • 171.
    Santos, Beatriz Sousa
    et al.
    Universidade de Aveiro, PRT.
    Dischler, Jean Michel
    Universite de Strasbourg, FRA.
    Adzhiev, Valery
    Bournemouth University, GBR.
    Anderson, Eike Falk
    Bournemouth University, GBR.
    Ferko, Andrej
    Comenius University, SVK.
    Fryazinov, Oleg
    Bournemouth University, GBR.
    Ilčík, Martin
    Technische Universitat Wien, AUT.
    Ilčíková, Ivana
    Comenius University, SVK.
    Slavik, Pavel
    Ceske vysoke uceni technicke v Praze, CZE.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Svobodova, Liba
    Ceske vysoke uceni technicke v Praze, CZE.
    Wimmer, Michael
    Technische Universitat Wien, AUT.
    Zara, Jiri
    Ceske vysoke uceni technicke v Praze, CZE.
    Distinctive Approaches to Computer Graphics Education (2018). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 7, no 1, p. 403-412. Article in journal (Refereed).
    Abstract [en]

    This paper presents the latest advances and research in Computer Graphics education in a nutshell. It is concerned with topics that were presented at the Education Track of the Eurographics Conference held in Lisbon in 2016. We describe works corresponding to approaches to Computer Graphics education that are unconventional in some way and attempt to tackle unsolved problems and challenges regarding the role of arts in computer graphics education, the role of research-oriented activities in undergraduate education and the interaction among different areas of Computer Graphics, as well as their application to courses or extra-curricular activities. We present related works addressing these topics and report experiences, successes and issues in implementing the approaches. © 2017 The Eurographics Association and John Wiley & Sons Ltd.

  • 172.
    Seidler, Patrick
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Investigating user participation in a participatory design inspired system development process: Insights and outcomes from a study in law enforcement (2015). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In an environment of increased demand and competition, organisations are under pressure to provide better and more efficient services. With cuts to funding, public organisations in particular need to redevelop and innovate business processes, and information technology plays an important part in that process. In order to understand an IT system's role in an organisation, it has been indicated that new system developments must consider individuals and human factors, organisational factors and society. Based on studies of the success and acceptance of innovative IT systems, it is now widely accepted that active participation and involvement of users is essential during system development and deployment. The means of participation that lead to success in a large system development process are, however, not well understood.

    In the first paper, we present a theoretical outline of a participatory and socio-technical system development process and describe how we empirically investigated the emergent processes of user participation during a case study that puts this development process into practice. An analytical framework was deployed to gain a greater understanding of how exactly user participation is embedded in project conditions and in the participants' relational structure, and what the consequences are for the project. For the investigated project, genuine participation in different forms was identified as an important factor for project success, with socio-technical design methods strongly supporting user participation during the initial and later project stages. We further show how the limited options for facilitating user participation during the more technical phases, and the sensitivity of the participants' relational structure, demonstrate the need for more research on how, when and where socio-technical design can be integrated with system development processes in order to guarantee project success.

    In a second paper, we present the outcomes of the case study's design project phase for the intelligence analyst user group and a vision of a criminal network analysis system. This vision is the result of successful user participation and involvement and is seen as a contribution to further research in law enforcement and criminal network analysis.

  • 173.
    Siverland, Susanne
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Where do you save most money on refactoring? (2014). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    A mature code base of 1,300,000 LOC has been examined over a period of 20 months. This paper investigates whether churn is a significant factor in finding refactoring candidates. In addition, it looks at the variables Lines of Code (LOC), Technical Debt (TD), duplicated lines and complexity to find out whether any of these indicators can inform a coder as to what to refactor. The result is that churn is the strongest of the studied variables, followed by LOC and TD.
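
    The abstract above does not state how churn was measured. A common proxy, and the one assumed in the Python sketch below, is the number of added plus deleted lines per file over the studied period, extracted from git log --numstat; the thesis may have measured churn differently.

        # Rank files by churn (added + deleted lines) over a period, using
        # `git log --numstat`. This is one common definition of churn, assumed
        # here for illustration; the thesis may have measured it differently.
        import subprocess
        from collections import Counter

        def churn_per_file(repo_path=".", since="20 months ago"):
            out = subprocess.run(
                ["git", "-C", repo_path, "log", f"--since={since}",
                 "--numstat", "--format="],
                capture_output=True, text=True, check=True).stdout
            churn = Counter()
            for line in out.splitlines():
                parts = line.split("\t")
                # numstat lines look like "<added>\t<deleted>\t<path>";
                # binary files report "-" and are skipped.
                if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                    churn[parts[2]] += int(parts[0]) + int(parts[1])
            return churn

        if __name__ == "__main__":
            for path, lines in churn_per_file().most_common(10):
                print(f"{lines:8d}  {path}")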

  • 174.
    Sjöblom, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Investigating Gaze Attraction to Bottom-Up Visual Features for Visual Aids in Games (2016). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. Video games usually have visual aids guiding the players in 3D environments. Designers need to know which visual feature is the most effective in attracting a player's gaze and which features players prefer as visual aids.

    Objectives. This study investigates which feature of the bottom-up visual attention process attracts the gaze fastest.

    Methods. Using the Tobii T60 eye tracking system, a user study with 32 participants was conducted in a controlled environment. An experiment was created where each participant looked at a slideshow of 18 pictures with 8 objects in each picture. One object per picture had a bottom-up visual feature applied that made it stand out as different. Video games often have a goal or a task, and to connect the experiment to video games a goal was set: to find the object with the visual feature applied. The eye tracker measured the gaze while the participant was trying to find the object. A survey was also conducted to determine which visual feature the players preferred.

    Results. The results showed that colour was the visual feature with the shortest time to attract attention. It was closely followed by intensity, motion and a pulsating highlight. Small size had the longest attraction time. The results also showed that the visual feature preferred by the players as a visual aid was intensity, and the least preferred was orientation.

    Conclusions. The results show that visual features with contrast changes in the texture seem to draw attention faster than changes to the object itself, with colour the fastest. These features were also the most preferred as visual aids by the players, with intensity the most preferred. If this study were done on a larger scale within a 3D environment, the experiment could show promise in helping designers make decisions regarding visual aids in video games.

  • 175.
    Soares, Joao
    et al.
    Ericsson Res, SWE.
    Wuhib, Fetahi
    Ericsson Res, SWE.
    Yadhav, Vinay
    Ericsson Res, SWE.
    Han, Xin
    Ericsson Res, SWE.
    Joseph, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Re-designing Cloud Platforms for Massive Scale using a P2P Architecture (2017). In: 2017 9TH IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING TECHNOLOGY AND SCIENCE (CLOUDCOM), IEEE, 2017, p. 57-64. Conference paper (Refereed).
    Abstract [en]

    Cloud platforms need to scale with the number of resources and users they manage, while maintaining the needed performance levels with respect to service parameters such as application deployment time, service availability and response time. With the increase in capacity of today's data centers and distributed cloud deployment scenarios such as edge computing, the scalability requirements of cloud management software have become paramount. Most commercially available cloud management software is centralized from a software architecture point of view, and this often limits the number of resources the software can manage while maintaining an acceptable performance level. In this paper, we present a generic design based on a peer-to-peer (P2P) architecture for scaling cloud management software. We demonstrate how this design can be realized by applying it to the OpenStack cloud management platform. Our implementation alters neither the core functionality of OpenStack nor the way users interact with OpenStack. Performance evaluation results of the implementation show that, when used in a large system under high load, the solution greatly improves VM boot-up times and VM startup success rates.

  • 176.
    Sosa, Gabriella
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Enhancing user experience when displaying 3D models and animation information on mobile platforms: an augmented reality approach (2015). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. Augmented Reality (AR) is a technique that adds varied, supplementary information to a real environment. Its compatibility with smartphones makes AR suitable for location-based, social, advertising and education-oriented applications.

    Objectives. This study explores whether AR is a suitable method of information visualization that can enhance User Experience (UX) compared to more traditional methods. The information this project focuses on is 3D model and animation information.

    Methods. This work uses a comparative experiment in which subjects test and evaluate two prototypes, one consisting of static rendered images and a video, the other an interactive mobile AR application.

    Results. Results were gathered with the Attrakdiff™ User Experience questionnaire and an interview.

    Conclusions. The experiment showed that there is a possibility to enhance user experience when visualizing 3D model and animation information with the help of mobile AR applications.

  • 177.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Automated Traffic Time Series Prediction (2018). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Intelligent transportation systems (ITS) are becoming more and more effective. Robust and accurate short-term traffic prediction plays a key role in modern ITS and demands continuous improvement. Benefiting from better data collection and storage strategies, a huge amount of traffic data is archived which can be used for this purpose especially by using machine learning.

    For the data preprocessing stage, despite the amount of data available, missing data records and their messy labels are two problems that prevent many prediction algorithms in ITS from working effectively and smoothly. For the prediction stage, though there are many prediction algorithms, higher accuracy and more automated procedures are needed.

    Considering both preprocessing and prediction studies, one widely used algorithm is k-nearest neighbours (kNN), which has shown high accuracy and efficiency. However, general kNN is designed for matrices rather than time series, so it does not exploit time series characteristics. Choosing the right parameter values for kNN is also problematic due to dynamic traffic characteristics. This thesis analyses kNN based algorithms and improves the prediction accuracy with better parameter handling using time series characteristics.

    Specifically, for the data preprocessing stage, this work introduces gap-sensitive windowed kNN (GSW-kNN) imputation. Besides, a Mahalanobis distance-based algorithm is improved to support correcting and complementing label information. Later, several automated and dynamic procedures are proposed and different strategies for making use of data and parameters are also compared.

    Two real-world datasets are used to conduct experiments in different papers. The results show that GSW-kNN imputation is on average 34% more accurate than benchmarking methods, and it remains robust even when the missing ratio increases to 90%. The Mahalanobis distance-based models efficiently correct and complement label information, which is then used to fairly compare the performance of algorithms. The proposed dynamic procedure (DP) performs better on average than manually adjusted kNN and other benchmarking methods in terms of accuracy. Better still, weighted parameter tuples (WPT) give more accurate results than any human-tuned parameters, which cannot be achieved manually in practice. The experiments indicate that the relations among parameters are compound and that the flow-aware strategy performs better than the time-aware one. It is therefore suggested to consider all parameter strategies simultaneously as ensemble strategies, especially by including the window in flow-aware strategies.

    In summary, this thesis improves the accuracy and automation level of short-term traffic prediction with proposed high-speed algorithms.

  • 178.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    How to Learn Research Ethics Regarding External Validity (2017). Report (Other academic).
    Abstract [en]

    There are some overlaps between external validity and research ethics. For example, how should raw data be handled so that they can be processed by algorithms and methods while ensuring the handling does not constitute misconduct? Or, to what extent should data be shared to allow readers to replicate experiments while keeping sensitive data confidential? To understand these problems, this work first presents several alternative methods and then uses a combined systematic process to analyse several cases. One problem often has more than one solution, and the candidates should be considered carefully to select a suitable one, if any. The combined process works well and should be considered when analysing research ethics problems.

  • 179.
    Sun, Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Toward Automatic Data-Driven Traffic Time Series Prediction (2017). In: 5th Swedish Workshop on Data Science, 2017, Vol. 12, article id 12. Conference paper (Refereed).
    Abstract [en]

    Short-term traffic prediction on freeways has been an active research subject in the past several decades. Various algorithms covering a broad range of topics regarding performance, data requirements and efficiency have been proposed. However, the implementation of machine learning based algorithms in traffic management centres is still limited. Two main reasons for this situation are, the data is messy or missing, and the parameter tuning requires experienced engineers.

    The main objective of this thesis was to develop a procedure that can improve the performance and automation level of short-term traffic prediction.

    Missing data is a problem that prevents many prediction algorithms in ITS from working effectively. Much work has been done to impute those missing data. Among different imputation methods, k-nearest neighbours (kNN) has shown excellent accuracy and efficiency. However, the general kNN is designed for matrices rather than time series, so it does not exploit time series characteristics such as windows and weights that are gap-sensitive. We introduce gap-sensitive windowed kNN (GSW-kNN) imputation for time series. The results show that GSW-kNN is 34% more accurate than benchmarking methods, and it remains robust even when the missing ratio increases to 90%.

    The lack of accurate accident information (labels) is another problem that prevents a huge amount of traffic data from being fully used. We improve a Mahalanobis distance based algorithm to handle differential data, estimate flow fluctuations and detect accidents, and use it to support correcting and complementing accident information. The outlier detection algorithm provides accurate suggestions for accident occurrence time, duration and direction. We also develop a system with an interactive user interface to realize this procedure. There are three contributions to data handling. Firstly, we propose to use multi-metric traffic data instead of a single metric for traffic outlier detection. Secondly, we present a practical method to organise traffic data and to evaluate the organisation for Mahalanobis distance. Thirdly, we describe a general method to modify Mahalanobis distance algorithms to be updatable.

    For automatic parameter tuning, the experiments show that the flow-aware strategy performs better than the time-aware one. Thus, we use all parameter strategies simultaneously as ensemble strategies especially by including window in flow-aware strategies.

    Based on the above studies, we have developed online-oriented and offline-oriented algorithms for real-time traffic forecasting. The online, automatically tuned version performs close to the optimum of manual tuning. The offline version achieves performance that cannot be reached with manual tuning. It is also 3.05% better than XGB and 11.7% better than traditional SARIMA.

  • 180.
    Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Cheng, Wei
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Correcting and complementing freeway traffic accident data using Mahalanobis distance based outlier detection (2017). In: Technical Gazette, ISSN 1330-3651, E-ISSN 1848-6339, Vol. 24, no 5, p. 1597-1607. Article in journal (Refereed).
    Abstract [en]

    A huge amount of traffic data is archived and can be used in data mining, especially supervised learning. However, it is not being fully used due to a lack of accurate accident information (labels). In this study, we improve a Mahalanobis distance based algorithm to handle differential data, estimate flow fluctuations and detect accidents, and use it to support correcting and complementing accident information. The outlier detection algorithm provides accurate suggestions for accident occurrence time, duration and direction. We also develop a system with an interactive user interface to realize this procedure. There are three contributions to data handling. Firstly, we propose to use multi-metric traffic data instead of a single metric for traffic outlier detection. Secondly, we present a practical method to organise traffic data and to evaluate the organisation for Mahalanobis distance. Thirdly, we describe a general method to modify Mahalanobis distance algorithms to be updatable. © 2017, Strojarski Facultet. All rights reserved.
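
    The abstract above describes Mahalanobis-distance based outlier detection over multi-metric traffic data without including the algorithm. The Python sketch below shows the basic distance-based flagging step on an invented two-metric feature vector (flow change, speed change); the updatable variant and the threshold choice described in the paper are not reproduced.

        # Flag observations whose Mahalanobis distance from the bulk of the data
        # is large. The two metrics (flow change, speed change) and the threshold
        # are illustrative choices; the paper's updatable variant is not shown.
        import numpy as np

        def mahalanobis_outliers(X, threshold=3.5):
            mean = X.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
            diff = X - mean
            d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
            return d > threshold

        rng = np.random.default_rng(3)
        normal = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.5], [1.5, 2.0]], size=500)
        incidents = np.array([[-15.0, -10.0], [-18.0, -12.0]])   # sharp flow/speed drops
        X = np.vstack([normal, incidents])
        flags = mahalanobis_outliers(X)
        print(int(flags.sum()), "observations flagged; last two flagged:", bool(flags[-2:].all()))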

  • 181.
    Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Cheng, Wei
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Bai, Guohua
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Short-Term Traffic Forecasting Using Self-Adjusting k-Nearest Neighbours (2018). In: IET Intelligent Transport Systems, ISSN 1751-956X, E-ISSN 1751-9578, Vol. 12, no 1, p. 41-48. Article in journal (Refereed).
    Abstract [en]

    Short-term traffic forecasting is becoming more important in intelligent transportation systems. The k-nearest neighbours (kNN) method is widely used for short-term traffic forecasting. However, the self-adjustment of kNN parameters has been a problem due to dynamic traffic characteristics. This paper proposes a fully automatic dynamic procedure kNN (DP-kNN) that makes the kNN parameters self-adjustable and robust without predefined models or training. We used real-world data with more than one year of traffic records to conduct experiments. The results show that DP-kNN performs better on average than manually adjusted kNN and other benchmarking methods with regard to accuracy. This study also discusses the difference between holiday and workday traffic prediction as well as the use of neighbour distance measurements.
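
    The DP-kNN procedure itself is not included in the abstract above. The Python sketch below is the fixed-parameter kNN forecasting baseline that such a procedure makes self-adjusting: the k historical windows most similar to the current one are found and their successors averaged. The window length and k used here are arbitrary choices, not values from the paper.

        # Minimal kNN forecaster for a univariate traffic series: find the k
        # historical windows most similar to the most recent one and average
        # their successor values. The window length and k are fixed here; the
        # paper's DP-kNN adjusts such parameters automatically.
        import numpy as np

        def knn_forecast(series, window=12, k=5):
            target = series[-window:]
            limit = len(series) - 2 * window          # candidates must have a known successor
            candidates = np.array([series[i:i + window] for i in range(limit)])
            successors = np.array([series[i + window] for i in range(limit)])
            dists = np.linalg.norm(candidates - target, axis=1)
            nearest = np.argsort(dists)[:k]
            return float(successors[nearest].mean())

        # Toy example: a noisy daily-periodic flow profile sampled every 5 minutes.
        rng = np.random.default_rng(42)
        t = np.arange(288 * 30)                        # 30 days of 5-minute samples
        flow = 200 + 150 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 10, t.size)
        print(round(knn_forecast(flow), 1), "predicted flow for the next interval")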

  • 182.
    Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Liyao, Ma
    University of Jinan, CHN.
    Wei, Cheng
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wei, Wen
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Prashant, Goswami
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Guohua, Bai
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Improved k-Nearest Neighbours Method for Traffic Time Series Imputation2017Conference paper (Refereed)
    Abstract [en]

    Intelligent transportation systems (ITS) are becoming more and more effective, benefiting from big data. Despite this, missing data is a problem that prevents many prediction algorithms in ITS from working effectively. Much work has been done to impute those missing data. Among different imputation methods, k-nearest neighbours (kNN) has shown excellent accuracy and efficiency. However, the general kNN is designed for matrices rather than time series, so it does not make use of time series characteristics such as windows and gap-sensitive weights. This work introduces gap-sensitive windowed kNN (GSW-kNN) imputation for time series. The results show that GSW-kNN is 34% more accurate than benchmarking methods, and it is still robust even if the missing ratio increases to 90%.
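
    The following sketch only illustrates the general window-based kNN imputation idea named in the abstract: the observed values surrounding a gap are matched against fully observed historical segments of the same shape, and the gap is filled with the average of the best matches. The published GSW-kNN additionally uses gap-sensitive weights, which are not reproduced here; names and defaults are assumptions.

```python
import numpy as np

def knn_impute_gap(series, gap_start, gap_len, window=12, k=5):
    """Fill a run of NaNs using the k historical segments whose surrounding
    context best matches the context around the gap. Simplified sketch of
    windowed kNN imputation, not the authors' exact GSW-kNN."""
    series = np.asarray(series, dtype=float)
    lo, hi = gap_start - window, gap_start + gap_len + window
    pattern = series[lo:hi]                     # context + gap (NaNs in the middle)
    observed = ~np.isnan(pattern)
    matches = []
    for start in range(len(series) - (hi - lo)):
        segment = series[start:start + (hi - lo)]
        if np.isnan(segment).any():
            continue                            # only learn from complete segments
        dist = np.linalg.norm(segment[observed] - pattern[observed])
        matches.append((dist, segment[window:window + gap_len]))
    matches.sort(key=lambda item: item[0])
    fills = np.mean([seg for _, seg in matches[:k]], axis=0)
    out = series.copy()
    out[gap_start:gap_start + gap_len] = fills
    return out
```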

  • 183.
    Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wei, Cheng
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Liyao, Ma
    University of Jinan, CHN.
    Prashant, Goswami
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Anomaly-Aware Traffic Prediction Based on Automated Conditional Information Fusion2018In: Proceedings of 21st International Conference on Information Fusion, IEEE conference proceedings, 2018Conference paper (Refereed)
    Abstract [en]

    Reliable and accurate short-term traffic prediction plays a key role in modern intelligent transportation systems (ITS) for achieving efficient traffic management and accident detection. Previous work has investigated this topic but lacks study on automated anomaly detection and conditional information fusion for ensemble methods. This work aims to improve prediction accuracy by fusing information considering different traffic conditions in ensemble methods. In addition to conditional information fusion, a day-week decomposition (DWD) method is introduced for preprocessing before anomaly detection. A k-nearest neighbours (kNN) based ensemble method is used as an example. Real-world data are used to test the proposed method with stratified ten-fold cross validation. The results show that the proposed method with incident labels improves predictions by up to 15.3% and the DWD-enhanced anomaly detection improves predictions by up to 8.96%. Conditional information fusion improves ensemble prediction methods, especially for incident traffic. The proposed method works well with enhanced detections and the procedure is fully automated. The accurate predictions lead to more robust traffic control and routing systems.
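
    The abstract does not define the day-week decomposition beyond its name. One plausible reading, shown below purely as an assumed illustration, is to remove a typical (day-of-week, time-of-day) profile from the flow series so that incident-related deviations stand out before anomaly detection. The median profile and the 3-sigma rule in the example are placeholders, not the paper's method.

```python
import numpy as np
import pandas as pd

def day_week_decompose(flow: pd.Series) -> pd.Series:
    """Subtract the median value for each (day-of-week, minute-of-day) slot,
    leaving residuals in which unusual traffic stands out more clearly.
    An assumed interpretation of 'day-week decomposition'."""
    dow = flow.index.dayofweek
    tod = flow.index.hour * 60 + flow.index.minute
    profile = flow.groupby([dow, tod]).transform("median")
    return flow - profile

# Residuals far from zero are candidate anomalies
idx = pd.date_range("2018-01-01", periods=7 * 288, freq="5min")
flow = pd.Series(300 + 150 * np.sin(2 * np.pi * np.arange(idx.size) / 288), index=idx)
residual = day_week_decompose(flow)
anomalies = residual[np.abs(residual) > 3 * residual.std()]
```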

  • 184.
    Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wei, Cheng
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Prashant, Goswami
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Guohua, Bai
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Overview of Parameter and Data Strategies for K-Nearest Neighbours Based Short-Term Traffic Prediction2017In: ACM International Conference Proceeding Series Volume Part F133326, Association for Computing Machinery (ACM), 2017, p. 68-74Conference paper (Refereed)
    Abstract [en]

    Modern intelligent transportation systems (ITS) require reliable and accurate short-term traffic prediction. One widely used method to predict traffic is k-nearest neighbours (kNN). Though many studies have tried to improve kNN with parameter strategies and data strategies, there is no comprehensive analysis of those strategies. This paper aims to analyse kNN strategies and guide future work to select the right strategy to improve prediction accuracy. Firstly, we examine the relations among three kNN parameters: the number of nearest neighbours (k), the search step length (d) and the window size (v). We also analyse the prediction step ahead (m), which is not a parameter but a user requirement and configuration. The analyses indicate that the relations among the parameters are compound, especially when traffic flow states are considered. The results show that the strategy of using v leads to an outstanding accuracy improvement. We then compare different data strategies, such as flow-aware and time-aware ones, together with ensemble strategies. The experiments show that the flow-aware strategy performs better than the time-aware one. Thus, we suggest considering all parameter strategies simultaneously as ensemble strategies, especially by including v in flow-aware strategies.
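
    As a quick reference for the four quantities named in the abstract, the snippet below collects them in one place. The field names mirror the abstract's symbols; the default values and the one-line readings of d and v in the comments are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class KnnConfig:
    """Quantities analysed in the paper; values here are illustrative only."""
    k: int = 20   # number of nearest neighbours averaged into the prediction
    d: int = 3    # search step length (one reading: stride used when scanning history)
    v: int = 6    # window size: how many recent observations form the query pattern
    m: int = 1    # prediction step ahead: a user requirement, not a tuned parameter
```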

  • 185.
    Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wei, Cheng
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Prashant, Goswami
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Guohua, Bai
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Flow-Aware WPT k-Nearest Neighbours Regression for Short-Term Traffic Prediction2017In: Proceedings - IEEE Symposium on Computers and Communications, Institute of Electrical and Electronics Engineers (IEEE), 2017, Vol. 07, p. 48-53, article id 8024503Conference paper (Refereed)
    Abstract [en]

    Robust and accurate traffic prediction is critical in modern intelligent transportation systems (ITS). One widely used method for short-term traffic prediction is k-nearest neighbours (kNN). However, choosing the right parameter values for kNN is problematic. Although many studies have investigated this problem, they did not consider all parameters of kNN at the same time. This paper aims to improve kNN prediction accuracy by tuning all parameters simultaneously with respect to dynamic traffic characteristics. We propose weighted parameter tuples (WPT) to calculate a weighted average dynamically according to flow rate. Comprehensive experiments are conducted on one year of real-world data. The results show that flow-aware WPT kNN performs better than manually tuned kNN as well as benchmark methods such as extreme gradient boosting (XGB) and seasonal autoregressive integrated moving average (SARIMA). Thus, it is recommended to use dynamic parameters regarding traffic flow and to consider all parameters at the same time.
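
    A rough sketch of the weighted-parameter-tuples idea follows: run the predictor with several parameter tuples and combine their outputs with weights that depend on the current flow rate. The weighting function, the example tuples and the reuse of the hypothetical `knn_forecast` sketch from entry 181 above are all assumptions; the published WPT weights are not reproduced.

```python
import numpy as np

def wpt_predict(series, tuples, flow_rate, predictor):
    """Combine predictions from several (k, window, preferred_flow) tuples,
    weighting each tuple by how closely the current flow rate matches the flow
    the tuple is (hypothetically) tuned for. Placeholder weighting scheme."""
    preds, weights = [], []
    for k, window, preferred_flow in tuples:
        preds.append(predictor(series, window=window, k=k))
        weights.append(1.0 / (1.0 + abs(flow_rate - preferred_flow)))
    return float(np.average(preds, weights=weights))

# Example: three tuples assumed to suit low, medium and high flow respectively
tuples = [(30, 12, 150.0), (20, 6, 400.0), (10, 4, 700.0)]
# prediction = wpt_predict(flow, tuples, flow_rate=flow[-6:].mean(), predictor=knn_forecast)
```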

  • 186.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    A Visualisation in Games Course Curriculum2016In: Eurographics (Education Papers) / [ed] Eurographics, 2016Conference paper (Refereed)
  • 187.
    Sundstedt, Veronica
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Navarro, Diego
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Mautner, Julian
    Stillalive Studios, SWE.
    Possibilities and challenges with eye tracking in video games and virtual reality applications2016In: SA 2016 - SIGGRAPH ASIA 2016 Courses, Association for Computing Machinery (ACM), 2016, article id a17Conference paper (Refereed)
    Abstract [en]

    Due to an increase in affordable, reliable and non-intrusive eye trackers, the technology has recently been used by the video game industry. This course offers participants the opportunity to get an update on research and developments in gaze-based interaction techniques in combination with other sensors. The course consists of three parts: (1) a review of eye tracking analysis and interaction in video games and virtual reality applications, (2) possibilities and challenges with gaze-based interaction, and (3) lessons learned from developing a commercial video game application using eye tracking along with alternative virtual reality technologies. This course is relevant for everyone who is interested in developing games that use eye tracking as an interaction device. The content is suitable for beginners or experienced delegates who want to learn more about the state of the art and future possibilities in eye tracking combined with other sensors as interaction devices. We believe that games and virtual reality applications have just started to incorporate these new techniques and further research and developments are needed in order to evaluate novel ways to enhance gameplay.

  • 188.
    Svensson, Karin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Urvalsbaserad evalueringsfunktion2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

    Context. Evaluation functions are an important part of artificial intelligence. An evaluation function assesses a game state and is difficult and time-consuming to develop. For this thesis, a new technique was developed that can be applied to an evaluation function to create a selection-based evaluation function.

    Aims and Objectives. In this thesis, a new technique for improving evaluation functions for Kalaha is evaluated. The technique is based on evaluating the game state both as it is and by sampling possible future moves. The success of the technique is measured in the number of wins against other evaluation functions.

    Method. This work is based on an implementation method in which quantitative data is collected for analysis. The game and the artificial intelligences were developed in C++ using Microsoft Visual Studio 12.

    Results. The outcomes of matches between evaluation functions and selection-based evaluation functions were compiled into tables.

    Conclusions. The selection-based technique was successful in the matches and is therefore considered a successful improvement of the evaluation functions used.
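
    The thesis itself is written in C++; the Python sketch below only illustrates the stated idea of blending a static evaluation of the current state with an evaluation obtained by sampling possible future moves. The function names, the number of samples, the depth and the blend factor are all assumed placeholders, not the thesis implementation.

```python
import random

def selection_based_eval(state, static_eval, legal_moves, apply_move,
                         samples=8, depth=2, mix=0.5):
    """Score a game state as a blend of its static evaluation and the average
    static evaluation of states reached by sampling random future moves."""
    base = static_eval(state)
    sampled = []
    for _ in range(samples):
        s = state
        for _ in range(depth):
            moves = legal_moves(s)
            if not moves:
                break
            s = apply_move(s, random.choice(moves))
        sampled.append(static_eval(s))
    lookahead = sum(sampled) / len(sampled) if sampled else base
    return (1 - mix) * base + mix * lookahead
```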

  • 189.
    Swing, Oskar
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Using Gyroscope Technology to implement a Leaning Technique for Game Interaction2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Smartphones contain advanced sensors called microelectromechanical systems (MEMS). By connecting a smartphone to a computer these sensors can be used to test new interaction techniques for games. Objective. This study aims to investigate an interaction technique implemented with a gyroscope that utilises the leaning of a user’s torso and compare it in terms of precision and enjoyment to using a joystick. Method. The custom interaction technique was implemented by using the gyroscope of a Samsung Galaxy S6 Edge attached to the torso of the user. The joystick technique was implemented using the left joystick of an Xbox One controller. A user study was conducted and 19 people participated by playing a custom-made obstacle course game that tested the precision of the interaction techniques. After testing each technique participants took part in a survey consisting of questions regarding their enjoyment using the technique. Result. The results showed that the leaning technique was not as precise as the joystick implementation. The participants found the leaning technique to be more fun to use and also more immersive compared to the joystick implementation. The leaning technique was however also more uncomfortable and difficult to use, and players felt less competent in their ability to control the player character with it. Conclusion. The performance difference might have been due to the lack of familiarity with the leaning technique compared to the joystick implementation. The leaning technique was more difficult to use and more uncomfortable than the joystick method. However, the leaning technique was also more fun to use and more immersive. This offers up the opportunity to keep exploring possibilities with this technique.
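
    A minimal sketch of the general mapping such a technique needs: converting the torso-mounted device's pitch and roll angles into a normalised 2D movement vector with a dead zone. The axis conventions, dead-zone size and maximum lean angle below are assumptions for illustration, not values from the thesis.

```python
def lean_to_movement(pitch_deg, roll_deg, dead_zone=5.0, max_lean=25.0):
    """Map torso lean angles (e.g. from a phone's orientation sensor) to a
    movement vector in [-1, 1]^2. Angles inside the dead zone give no movement."""
    def axis(angle):
        if abs(angle) < dead_zone:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        magnitude = min(abs(angle) - dead_zone, max_lean - dead_zone)
        return sign * magnitude / (max_lean - dead_zone)
    return axis(roll_deg), axis(pitch_deg)   # (strafe, forward/backward)

# Example: leaning 15 degrees forward, 2 degrees to the side
print(lean_to_movement(pitch_deg=15.0, roll_deg=2.0))   # -> (0.0, 0.5)
```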

  • 190.
    Thunström, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Passive gaze-contingent techniques relation to system latency2014Student thesis
    Abstract [en]

    Interactive 3D computer graphics requires a lot of computational resources to render a high quality frame. Typically, the rendering process naïvely assumes that the whole frame is perceived by the user in uniform detail. This is often not true; details are primarily perceived within 2° horizontal eccentricity from the point of gaze. Adjusting the quality of a frame based on visual acuity can increase rendering performance by a factor of five to six at the resolution 1920x1080 without sacrificing perceived quality (Guenter et al., 2012a). Doing so without the user being aware of the manipulation requires a highly sophisticated system with low system latency, able to update the display fast enough. The current study aims to answer what system latency is required to support passive gaze-contingent techniques that require close to real-time gaze data. A unique experiment design was developed exposing test subjects to different system latencies by varying eye-tracker and monitor frequency. The outcome of the current study with 20 participants indicates that a configuration with an estimated worst-case system latency of 60 ms is capable of hiding the manipulation from 55% of the participants. When the worst-case system latency was lowered to 42 ms, 95% of the participants reported that they could not detect any change. The study concludes that the configuration with an estimated worst-case system latency of 42 ms is able to support passive gaze-contingent techniques.

  • 191. Topaz, M.
    et al.
    Ronquillo, C.
    Peltonen, L.-M.
    Pruinelli, L.
    Sarmiento, R.F.
    Badger, M.K.
    Ali, S.
    Lewis, A.
    Georgsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Jeon, E.
    Tayaben, J.L.
    Kuo, C.-H.
    Islam, T.
    Sommer, J.
    Jung, H.
    Eler, G.J.
    Alhuwail, D.
    Advancing nursing informatics in the next decade: Recommendations from an international survey2016In: Studies in Health Technology and Informatics, IOS Press, 2016, Vol. 225, p. 123-127Conference paper (Refereed)
    Abstract [en]

    In the summer of 2015, the International Medical Informatics Association Nursing Informatics Special Interest Group (IMIA NISIG) Student Working Group developed and distributed an international survey of current and future trends in nursing informatics. The survey was developed based on current literature on nursing informatics trends and translated into six languages. Respondents were from 31 different countries in Asia, Africa, North and Central America, South America, Europe, and Australia. This paper presents the results of responses to the survey question: "What should be done (at a country or organizational level) to advance nursing informatics in the next 5-10 years?" (n responders=272). Using thematic qualitative analysis, responses were grouped into five key themes: 1) Education and training; 2) Research; 3) Practice; 4) Visibility; and 5) Collaboration and integration. We also provide actionable recommendations for advancing nursing informatics in the next decade. © 2016 IMIA and IOS Press.

  • 192.
    Topaz, Maxim
    et al.
    Harvard Medical School, USA.
    Ronquillo, Charlene
    The University of British Columbia, CAN.
    Peltonen, Laura Maria
    Turun yliopisto, FIN.
    Pruinelli, Lisiane
    University of Minnesota Twin Cities, USA.
    Sarmiento, Raymond Francis
    National Institute for Occupational Safety and Health, USA.
    Badger, Martha
    University of Wisconsin Milwaukee, USA.
    Ali, Samira
    Grand Canyon University, USA.
    Lewis, Adrienne
    Independent Researcher, CAN.
    Georgsson, Mattias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Jeon, Eunjoo
    Seoul National University, KOR.
    Tayaben, Jude
    Benguet State University, PHL.
    Kuo, Chiuhsiang
    Tzu Chi University of Science and Technology, TWN.
    Islam, Tasneem
    Deakin University, AUS.
    Sommer, Janine
    Instituto Universitario del Hospital Italiano de Buenos Aires, ARG.
    Jung, Hyunggu
    University of Washington, USA.
    Eler, Gabrielle
    Federal Institute of Paraná, BRA.
    Alhuwail, Dari
    University of Maryland, USA.
    Lee, Yingli
    National Yang-Ming University, TWN.
    Nurse Informaticians Report Low Satisfaction and Multi-level Concerns with Electronic Health Records: Results from an International Survey2016In: Advances in Printing and Media Technology, ISSN 0892-2284, E-ISSN 1942-597X, Vol. 2016, p. 2016-2025Article in journal (Refereed)
    Abstract [en]

    This study presents a qualitative content analysis of nurses' satisfaction and issues with current electronic health record (EHR) systems, as reflected in one of the largest international surveys of nursing informatics. Study participants from 45 countries (n=469) ranked their satisfaction with the current state of nursing functionality in EHRs as relatively low. Two-thirds of the participants (n=283) provided disconcerting comments when explaining their low satisfaction rankings. More than one half of the comments identified issues at the system level (e.g., poor system usability; non-integrated systems and poor interoperability; lack of standards; and limited functionality/missing components), followed by user-task issues (e.g., failure of systems to meet nursing clinical needs; non nursing-specific systems) and environment issues (e.g., low prevalence of EHRs; lack of user training). The study results call for the attention of international stakeholders (educators, managers, policy makers) to improve the current issues with EHRs from a nursing perspective.

  • 193.
    Torabi, Peyman
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Skeletal Animation Optimization Using Mesh Shaders2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. In this thesis a novel method of skinning a mesh utilizing Nvidia’s Turing Mesh Shader pipeline is presented. Skinning a mesh is often performed with a Vertex Shader or a Compute Shader. By leveraging the strengths of the new pipeline it may be possible to further optimize the skinning process and increase performance, especially for more complex meshes.

    Objectives. The aim is to determine if the novel method is a suitable replacement for existing skinning implementations. The key metrics being studied are the total GPU frame time of the novel implementation in relation to the rest, and its total memory usage.

    Methods. Beyond the pre-existing implementations such as Vertex Shader skinning and Compute Shader skinning, two new methods using Mesh Shaders are implemented. The first implementation is a naive method that simply divides the mesh into meshlets and skins each meshlet in isolation. The proposed novel common influences method instead takes the skinning data, such as the joint influences of each vertex, into account when generating meshlets. The intention is to produce meshlets where all vertices are influenced by the same joints, allowing information to be moved from a per-vertex basis to a per-meshlet basis, which means fewer fetches in the shader at run-time and potentially better performance.

    Results. The results indicate that utilizing Mesh Shaders results in approximately identical performance compared to Vertex Shader skinning (which was observed to be the fastest of the previous implementations), with the novel implementation being marginally slower due to the increased number of meshlets generated. Mesh Shading has the potential to be faster if optimizations unique to the new shaders are employed. Despite producing more meshlets, the novel implementation is not significantly slower and is faster at processing individual meshlets compared to the naive approach. The novel Common Influences implementation spends between 15-22% less time processing each meshlet at run-time compared to the naive solution.

    Conclusions. Ultimately the unique capabilities of Mesh Shaders allow for potential performance increases to be had. The proposed novel Common Influences method shows promise due to it being faster on a per meshlet basis, but more work must be done in order to reduce the number of meshlets generated. The Mesh Shading pipeline is as of writing very new and there is a lot of potential for future work to further enhance and optimize the work presented in this thesis. More work must be done in order to make the meshlet generation more efficient so that the run-time workload is reduced as much as possible.
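
    The thesis performs this preprocessing for the GPU Mesh Shader pipeline; the CPU-side Python sketch below only illustrates the grouping idea described above: collect triangles so that every triangle in a meshlet references the same joint set, letting joint indices be stored once per meshlet instead of per vertex. The 124-triangle limit mirrors typical mesh-shader meshlet sizes but, like the data layout, is an assumption rather than the thesis code.

```python
from collections import defaultdict

def build_common_influence_meshlets(triangles, vertex_joints, max_tris=124):
    """Group triangles into meshlets whose vertices share one joint set."""
    buckets = defaultdict(list)
    for tri in triangles:                       # tri = (i0, i1, i2) vertex indices
        joints = frozenset(j for v in tri for j in vertex_joints[v])
        buckets[joints].append(tri)
    meshlets = []
    for joints, tris in buckets.items():
        for start in range(0, len(tris), max_tris):
            meshlets.append({"joints": sorted(joints),
                             "triangles": tris[start:start + max_tris]})
    return meshlets

# Example: two triangles whose vertices are all influenced by joints {0, 1}
tris = [(0, 1, 2), (2, 1, 3)]
vertex_joints = {0: [0, 1], 1: [0, 1], 2: [1], 3: [0, 1]}
print(len(build_common_influence_meshlets(tris, vertex_joints)))  # -> 1 meshlet
```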

  • 194.
    Tran, Dang Ninh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    On Moderately Significant Bit Data Hiding Techniques for High-Definition Images2018In: International Conference on Advanced Technologies for Communications, IEEE Computer Society , 2018, p. 47-52Conference paper (Refereed)
    Abstract [en]

    In this paper, we study moderately significant bit data hiding techniques for high-definition (HD) images. In contrast to least significant bit data hiding, we explore the potential of HD images to engage higher order bits for increasing the capacity of hiding secret images. In particular, the number of secret images embedded in moderately significant bits of a given cover image is successively increased to study the impact of this data hiding approach on image fidelity and image quality in the context of HD images. A comprehensive performance assessment is conducted on the stego-image carrying the secret images in terms of peak-signal-to-noise ratio (PSNR) and structural similarity (SSIM) index. It is shown that HD images indeed offer the potential of hiding several images within certain image fidelity and objective perceptual image quality constraints of the resulting stego-HD images. © 2018 IEEE.
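
    To make the basic mechanism concrete, here is a minimal sketch of writing a binary secret into a single chosen (moderately significant) bit plane of an 8-bit cover image and measuring the resulting PSNR. The paper's actual embedding scheme, the number of secret images and the exact bit assignments are not reproduced; plane index and image sizes are illustrative assumptions.

```python
import numpy as np

def embed_bitplane(cover, secret_bits, plane=3):
    """Write a binary secret (0/1 array, same shape as cover) into bit `plane`
    of an 8-bit cover image. plane=0 is the LSB; plane=3 is a moderately
    significant bit."""
    cover = cover.astype(np.uint8)
    cleared = cover & ~np.uint8(1 << plane)          # zero the target bit plane
    return cleared | (secret_bits.astype(np.uint8) << plane)

def extract_bitplane(stego, plane=3):
    return (stego >> plane) & 1

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)   # HD-sized example
secret = np.random.randint(0, 2, cover.shape, dtype=np.uint8)
stego = embed_bitplane(cover, secret, plane=3)
assert np.array_equal(extract_bitplane(stego, 3), secret)
print("PSNR:", psnr(cover, stego))
```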

  • 195.
    Tran, Dang Ninh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    On the Positioning of Moderately Significant Bit Data Hiding in High-Definition Images2018In: 2018 12TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS) / [ed] Wysocki, TA Wysocki, BJ, IEEE , 2018Conference paper (Refereed)
    Abstract [en]

    Mobile services have seen a shift from voice services toward visual stimuli-based services ranging from mobile imaging over mobile gaming to upcoming mobile extended reality applications. In this paper, given the increased resolutions of the related mobile multimedia formats, we propose and examine positioning strategies for light-weight data hiding of secret images in moderately significant bits of high-definition (HD) cover images. Apart from linear and random positioning, visual attention mechanisms of the human visual system are addressed by separately utilizing either the background or center of the HD cover image for data hiding. A performance assessment of these positioning strategies is conducted in terms of the peak-signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and visual information fidelity (VIF). It is shown that HD cover images indeed can carry more than a single secret image until noticeable quality loss is observed. Further, it is revealed that linear positioning of secret images in the whole, background, or center of HD cover images outperforms random positioning. As for the utilized performance metrics, it has been observed that PSNR cannot differentiate among the positioning strategies as it measures pixel-by-pixel differences and hence removes the impact of the position of the hidden data. As for the two quality metrics, compared to the SSIM index, VIF is found to differentiate more strongly among the performance of the considered positioning strategies.

  • 196. Turner, Jayson
    et al.
    Velloso, Eduardo
    Gellersen, Hans
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    EyePlay: Applications for Gaze in Games2014Conference paper (Refereed)
    Abstract [en]

    What new challenges does the combination of games and eye-tracking present? The EyePlay workshop brings together researchers and industry specialists from the fields of eye-tracking and games to address this question. Eye-tracking has been investigated extensively in a variety of domains in human-computer interaction, but little attention has been given to its application for gaming. As eye-tracking technology is now an affordable commodity, its appeal as a sensing technology for games is set to become the driving force for novel methods of player-computer interaction and games evaluation. This workshop presents a forum for eye-based gaming research, with a focus on identifying the opportunities that eye-tracking brings to games design and research, on plotting the landscape of the work in this area, and on formalising a research agenda for EyePlay as a field. Possible topics include, but are not limited to, novel interaction techniques and game mechanics, usability and evaluation, accessibility, learning, and serious games contexts.

  • 197.
    Tännström, Ulf Nilsson
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    GPGPU separation of opaque and transparent mesh polygons2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Context: By doing a depth-prepass in a tiled forward renderer, pixels can be prevented from being shaded more than once. More aggressive culling of lights that might contribute to tiles can also be performed. In order to produce artifact-free rendering, only meshes containing fully opaque polygons can be included in the depth-prepass. This limits the benefit of the depth-prepass for scenes containing large, mostly opaque, meshes that have some portions of transparency in them. Objectives: The objective of this thesis was to classify the polygons of a mesh as either opaque or transparent using the GPU, and then to separate the polygons into two different vertex buffers depending on the classification. This allows all opaque polygons in a scene to be used in the depth-prepass, potentially increasing render performance. Methods: An implementation was performed using OpenCL, which was then used to measure the time it took to separate the polygons in meshes of different complexity. The polygon separation times were then compared to the time it took to load the meshes into the game. What effect the polygon separation had on rendering times was also investigated. Results: The results showed that polygon separation times were highly dependent on the number of polygons and the texture resolution. It took roughly 350 ms to separate a mesh with 100k polygons and a 2048x2048 texture, while the same mesh with a 1024x1024 texture took a quarter of the time. In the test scene used, the rendering times differed only slightly. Conclusions: Whether the polygon separation should be performed when loading the mesh or when exporting it depends on the game. For games with a lower geometrical and textural detail level it may be feasible to separate the polygons each time the mesh is loaded, but for most games it would be recommended to perform it once when exporting the mesh.
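
    The thesis performs the classification with an OpenCL kernel; the sketch below is a CPU-side Python illustration of the same idea: a triangle counts as opaque only if the alpha texels it covers are fully opaque. The sampling scheme (random interior points rather than exact texel coverage), the threshold and the data layout are simplifying assumptions.

```python
import numpy as np

def classify_triangles(uv, triangles, alpha, samples=16, threshold=255):
    """Split triangles into opaque and transparent lists by sampling the alpha
    channel over each triangle's UV area. CPU sketch only."""
    h, w = alpha.shape
    # barycentric sample points spread over the triangle interior
    rng = np.random.default_rng(0)
    r1, r2 = rng.random(samples), rng.random(samples)
    flip = r1 + r2 > 1
    r1[flip], r2[flip] = 1 - r1[flip], 1 - r2[flip]
    bary = np.stack([1 - r1 - r2, r1, r2], axis=1)            # (samples, 3)

    opaque, transparent = [], []
    for tri in triangles:
        pts = bary @ uv[list(tri)]                            # (samples, 2) in UV space
        xs = np.clip((pts[:, 0] * (w - 1)).astype(int), 0, w - 1)
        ys = np.clip((pts[:, 1] * (h - 1)).astype(int), 0, h - 1)
        (opaque if np.all(alpha[ys, xs] >= threshold) else transparent).append(tri)
    return opaque, transparent
```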

  • 198.
    Törn, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison Between Two Different Screen Space Ambient Occlusion Techniques2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. In this project a comparison between two screen space ambient occlusion techniques is presented. The techniques are Scalable AO (SAO) and Multiresolution SSAO (MSSAO), since both use mipmaps to accelerate their calculations.

    Objectives. The aim is to see how large the difference is between the results of these two techniques and a golden reference, which is an object space ray traced texture created with mental ray in Maya, and how long the computation takes.

    Methods. The comparisons between the AO textures that these techniques produce and the golden references are performed using Structural Similarity Index (SSIM) and Perceptual Image Difference (PDIFF).

    Results. At the lowest resolution, both techniques execute in about the same time on average, except that SAO with the shortest distance is faster. The only effect caused by the shorter distance, in this case, is that more samples are taken in higher resolution mipmap levels than when longer distances are used. MSSAO achieved a better SSIM value, meaning that MSSAO is more similar to the golden reference than SAO. As the resolution increases, the SSIM values of the two techniques become more similar, with SAO getting a better value and MSSAO getting slightly worse, while the execution time for MSSAO increases more than for SAO.

    Conclusions. It is concluded that MSSAO is better than SAO at lower resolutions while SAO is better at higher resolutions. I would recommend that SAO be used for indoor scenes where there are not many small geometry parts close to each other that should occlude each other. MSSAO should be used for outdoor scenes with a lot of vegetation, which have many small geometry parts close to each other that should occlude. At higher resolutions, MSSAO takes longer computational time compared with SAO, while at lower resolutions the computational time is similar.
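
    A minimal example of the comparison step described in the methods: computing SSIM between a technique's AO output and the golden reference, here using scikit-image. The file names are placeholders, and PDIFF (the second metric used) is a separate tool not shown here.

```python
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity

def compare_to_reference(ao_path, reference_path):
    """Return the SSIM between an AO buffer and the ray traced golden reference."""
    ao = imread(ao_path, as_gray=True).astype(np.float64)
    ref = imread(reference_path, as_gray=True).astype(np.float64)
    return structural_similarity(ao, ref, data_range=ref.max() - ref.min())

# score = compare_to_reference("mssao_1080p.png", "golden_reference_1080p.png")
```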

  • 199.
    Vestman, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Asynchronous Event Communication Technique for Soft Real-Time GPGPU Applications2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context Interactive GPGPU applications require low response-time feedback from events such as user input in order to provide a positive user experience. Communication of these events must be performed asynchronously so as not to cause significant performance penalties.

    Objectives In this study the usage of CPU/GPU shared virtual memory to perform asynchronous communication is explored. Previous studies have shown that shared virtual memory can increase computational performance compared to other types of memory.

    Methods A communication technique that aimed to utilize the performance increasing properties of shared virtual memory was developed and implemented. The implemented technique was then compared to an implementation using explicitly transferred memory in an experiment measuring the performance of the various stages involved in the technique.

    Results The results from the experiment revealed that utilizing shared virtual memory for performing asynchronous communication was in general slightly slower than, or comparable to, using explicitly transferred memory. In some cases, where the memory access pattern was right, utilization of shared virtual memory led to a 50% reduction in execution time compared to explicitly transferred memory.

    Conclusions A conclusion that shared virtual memory can be utilized for performing asynchronous communication was reached. It was also concluded that by utilizing shared virtual memory a performance increase can be achieved over explicitly transferred memory. In addition it was concluded that careful consideration of data size and access pattern is required to utilize the performance increasing properties of shared virtual memory.

  • 200.
    Wen, Wei
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Khatibi, Siamak
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Virtual deformable image sensors: Towards to a general framework for image sensors with flexible grids and forms2018In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no 6, article id 1856Article in journal (Refereed)
    Abstract [en]

    Our vision system has a combination of different sensor arrangements, from hexagonal to elliptical ones. Inspired by this variation in the type of arrangements, we propose a general framework by which it becomes feasible to create virtual deformable sensor arrangements. In the framework, for a certain sensor arrangement, a configuration of three optional variables is used, which includes the structure of the arrangement, the pixel form and the gap factor. We show that the histogram of gradient orientations of a certain sensor arrangement has a specific distribution (called ANCHOR), which is obtained by using at least two generated images of the configuration. The results showed that ANCHORs change their patterns with changes to the arrangement structure. In this relation, pixel size changes have a 10-fold greater impact on ANCHORs than gap factor changes. A set of 23 images, randomly chosen from a database of 1805 images, is used in the evaluation, where each image generates twenty-five different images based on the sensor configuration. The robustness of the ANCHOR properties is verified by computing ANCHORs for a total of 575 images with different sensor configurations. We believe that by using the framework and ANCHOR it becomes feasible to plan a sensor arrangement in relation to a specific application and its requirements, where the sensor arrangement can even be planned as a combination of different ANCHORs. © 2018 by the authors. Licensee MDPI, Basel, Switzerland.
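
    The sketch below shows only the generic building block the ANCHOR descriptor is stated to rest on, a histogram of gradient orientations of an image, not the paper's full ANCHOR pipeline or its sensor-arrangement simulation. The magnitude weighting and bin count are common defaults assumed here for illustration.

```python
import numpy as np

def gradient_orientation_histogram(image, bins=36):
    """Normalised histogram of gradient orientations, weighted by gradient
    magnitude. A generic sketch of the quantity ANCHOR is built from."""
    gy, gx = np.gradient(image.astype(float))
    angles = np.arctan2(gy, gx)                       # orientation in (-pi, pi]
    magnitude = np.hypot(gx, gy)
    hist, edges = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                               weights=magnitude)
    return hist / (hist.sum() + 1e-12), edges
```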
