2501 - 2550 of 2550
  • 2501. Ygge, Fredrik
    et al.
    Akkermans, Hans
    Power Load Management as a Computational Market (1996). Conference paper (Refereed)
    Abstract [en]

    Power load management enables energy utilities to reduce peak loads and thereby save money. Due to the large number of different loads, power load management is a complicated optimization problem. We present a new decentralized approach to this problem by modeling direct load management as a computational market. Our simulation results demonstrate that our approach is very efficient, with a superlinear rate of convergence to equilibrium and excellent scalability, requiring few iterations even when the number of agents is on the order of one thousand. A framework for the analysis of this and similar problems is given, which shows how nonlinear optimization and numerical mathematics can be exploited to characterize, compare, and tailor problem-solving strategies in market-oriented programming.
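The market-oriented idea summarized in the abstract above can be sketched as a simple price-adjustment (tatonnement) loop: an auctioneer raises the price while aggregate demand exceeds supply. This is an illustrative sketch only; the quadratic agent utilities, function names, and step size are assumptions, not the authors' actual algorithm.

```python
# Illustrative computational market for load allocation: each agent's
# demand comes from maximizing an assumed quadratic utility, and an
# auctioneer adjusts one price until demand matches available supply.

def demand(price, a, b):
    """One agent's optimal load from maximizing a*q - 0.5*b*q**2 - price*q."""
    return max((a - price) / b, 0.0)

def clear_market(agents, supply, price=0.0, step=0.01, tol=1e-6):
    """Walrasian price adjustment: raise price while demand exceeds supply."""
    for _ in range(100000):
        total = sum(demand(price, a, b) for a, b in agents)
        excess = total - supply
        if abs(excess) < tol:
            break
        price += step * excess  # price rises when demand exceeds supply
    return price, [demand(price, a, b) for a, b in agents]

agents = [(10.0, 1.0), (8.0, 2.0), (6.0, 1.5)]   # assumed (a, b) per agent
price, allocation = clear_market(agents, supply=10.0)
```

The decentralized flavor of the approach is that each agent only reports its own demand at the announced price; no central solver sees the utilities.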

  • 2502. Ygge, Fredrik
    et al.
    Akkermans, Hans
    Power Load Management as a Computational Market (1996). Report (Other academic)
    Abstract [en]

    Power load management enables energy utilities to reduce peak loads and thereby save money. Due to the large number of different loads, power load management is a complicated optimization problem. We present a new decentralized approach to this problem by modeling direct load management as a computational market. Our simulation results demonstrate that our approach is very efficient, with a superlinear rate of convergence to equilibrium and excellent scalability, requiring few iterations even when the number of agents is on the order of one thousand. A framework for the analysis of this and similar problems is given, which shows how nonlinear optimization and numerical mathematics can be exploited to characterize, compare, and tailor problem-solving strategies in market-oriented programming.

  • 2503. Ygge, Fredrik
    et al.
    Akkermans, Hans
    Smart Software as Customer Assistant in Large-Scale Distributed Load Management (1997). Report (Other academic)
    Abstract [en]

    Advanced information systems present a key enabling technology for innovative customer-oriented services by the utility industry. Real-time and two-way electronic information exchange with the customer over the power grid and other media is now possible. This provides the baseline for a host of new customer services, provided proper advantage is taken of a variety of recent advances in information technology. In this context, we discuss (a) how to engineer knowledge into systems and services, giving rise to smart software and intelligent systems, and (b) how to exploit this practically for novel ways to achieve distributed load management, dealing with thousands of devices simultaneously. A special characteristic of our concept is that the two-way information exchange for load-balancing purposes is based on market mechanisms similar to an auction. This auction is carried out by small smart software programs in devices (such as radiators) that represent and assist the customer. Field experiments with this intelligent and distributed new approach to power load management are currently being performed.

  • 2504. Ygge, Fredrik
    et al.
    Akkermans, Hans
    Andersson, Arne
    A Multi-Commodity Market Approach to Power Load Management (1998). Report (Other academic)
    Abstract [en]

    Power load management is the concept of controlling the loads at the demand side in order to run energy systems more efficiently. Energy systems are inherently highly distributed and contain a large number of loads, up to some millions. This implies that computationally and conceptually attractive methods are required for this application. In this paper we give two novel theorems for how to decompose general resource allocation problems into markets. We also introduce a novel multi-commodity market design to meet the demands of power load management. The approach is demonstrated to lead to very high quality allocations, and to have a number of advantages compared to current methods.

  • 2505. Ygge, Fredrik
    et al.
    Astor, Eric
    Interacting Intelligent Software Agents in Demand Management (1995). Conference paper (Refereed)
    Abstract [en]

    Even though distributed computing and two-way communication with the customer are becoming a reality for many energy distribution companies, there is still a need to develop methodologies for more efficient energy management. In this paper we discuss current approaches to demand management, and then present ideas from other areas applied to energy management. We introduce concepts such as computational markets and software agents in this context. In addition, methods entirely based on distributed problem solving to address the computationally hard problems of resource allocation with a vast number of clients are described. We also discuss how these methods can be used to perform cost/benefit analysis of demand management.

  • 2506. Ygge, Fredrik
    et al.
    Astor, Eric
    Interacting Intelligent Software Agents in Distribution Management (1995). Report (Other academic)
    Abstract [en]

    Even though distributed computing and two-way communication with the customer are becoming a reality for many energy distribution companies, there is still a need to develop methodologies for more efficient energy management. In this paper we discuss current approaches to demand management, and then present ideas from other areas applied to energy management. We introduce concepts such as computational markets and software agents in this context. In addition, methods entirely based on distributed problem solving to address the computationally hard problems of resource allocation with a vast number of clients are described. We also discuss how these methods can be used to perform cost/benefit analysis of demand management.

  • 2507. Ygge, Fredrik
    et al.
    Gustavsson, Rune
    Akkermans, Hans
    HOMEBOTS: Intelligent Agents for Decentralized Load Management (1996). Conference paper (Refereed)
  • 2508.
    Yousefi, Parisa
    et al.
    Blekinge Institute of Technology, School of Computing.
    Yousefi, Pegah
    Blekinge Institute of Technology, School of Computing.
    Cost Justifying Usability: a case-study at Ericsson (2011). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    In this study we investigate the level of usability, the usability issues, and the gaps concerning usability activities and the potential users in a part of the charging system products at Ericsson. We also try to identify the cost-benefit factors usability brings to this project, in order to attempt 'justifying the cost of usability for this particular product'.

  • 2509.
    Yousuf, Kamran
    Blekinge Institute of Technology, School of Computing.
    Time controlled network traffic shaper (2010). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Network performance metrics such as delay variations and packet loss influence the performance of the network. As a consequence, the performance of applications on the network is also affected, as most of the networked applications existing today are very sensitive to the network performance. Therefore it is of utmost importance to test the intensity of such network-level disturbances on the performance of applications. A network traffic shaper/emulator shapes the network traffic in terms of these performance metrics to test such applications in a controlled environment. Most of the traffic shapers existing today give an instantaneous step transition in delay and packet loss on the network. In this work, we present a time-controlled network traffic shaper, a tool that facilitates testing and experimentation of network traffic through emulation. It focuses on the time-variant behavior of the traffic shaper. A linear transition of delay and packet loss that varies with respect to time may fit real network scenarios much better than an instantaneous step transition in delay and packet loss. This work illustrates the emulation capabilities of the time-controlled network traffic shaper and presents its design architecture. Several approaches to the task are analyzed, and one of them is followed to develop the desired architecture of the shaper. The shaper is implemented in a small scenario and is tested to see whether the desired output is achieved or not. The shortfalls in the design of the shaper are also discussed. Results are presented that show the output from the shaper in graphical form. Although the current implementation of the shaper does not provide linear or exponential output, this can be achieved by implementing a configuration setting comprised of small transition values that vary with respect to very small step sizes of time, e.g. transitions on milliseconds or microseconds. The current implementation of the shaper configuration provides the output with a transition of one millisecond on every next second.
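The configuration idea described above, approximating a linear delay transition by many small step transitions, can be sketched as follows; the function name and the values are illustrative assumptions, not the shaper's actual interface.

```python
# Sketch: a linear delay ramp approximated by small discrete steps.
# Smaller step_s values approach the continuous linear transition.

def delay_schedule(start_ms, end_ms, duration_s, step_s=0.001):
    """Return (time_s, delay_ms) pairs approximating a linear transition."""
    steps = round(duration_s / step_s)
    return [(i * step_s, start_ms + (end_ms - start_ms) * i / steps)
            for i in range(steps + 1)]

# Ramp the emulated delay from 10 ms to 60 ms over one second,
# in 0.1 s steps (11 schedule points).
sched = delay_schedule(start_ms=10, end_ms=60, duration_s=1.0, step_s=0.1)
```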

  • 2510.
    Yusuf, Adewale
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Challenges associated with effective task execution in a Virtual Learning Environment: A case study of Graduate Students of a University (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Context: In recent years, more and more people have started showing an increasing interest in distance or web-based education. Some of the reasons for this are the improvement in information and communication technology, as well as advancement in computer networking infrastructures. However, although computer technology has played an important role for the development of distance learning management systems, the underlying goal of such systems is the delivery of competitive and qualitative education via the distance learning environment. There have been a number of research studies and investigations in the field of Computer supported collaborative learning. This particular study is focused on the challenges associated with task execution in a distance learning environment as perceived by graduate students at a university.

    Objectives: The main focus or rationale behind this study is to investigate the importance of computer mediated communication tools in a virtual learning environment, as well as the problems facing the teachers or facilitators in their attempt to help learners (students) in the process of task execution, and towards achieving the learning goals in a web-based learning system.

    Methods: The author has adopted a qualitative case study approach. Questionnaires were sent out to some of the graduate students of BTH who participated in the online course under investigation, “Work integrated e-learning”, and some of these students were interviewed as well. Interviews were also conducted with two professors of Informatics, active researchers in distributed or e-learning, at a university in Sweden with many years of experience in providing distance learning education. The empirical material was then analyzed, using cultural historical activity theory (CHAT) as a theoretical framework.

    Results: The results indicate that more communication and collaborative interaction is needed in the context of the studied e-learning management system. The students expected the provision of more video communication through the learning platform. Furthermore, the results show that the learning in the studied web-based environment is centered on the students. 

    Conclusions: The author concludes that in order to diminish the gap that exists between face-to-face learning/teaching and an e-learning environment, there is a need for the designers and facilitators of the e-learning management system to make this platform more interactive. Additionally, the author concludes that the concept of Open start free pace (OSFP) or strict deadlines may need to be introduced into distance learning education in order to solve the challenges facing the teachers and facilitators.  

     

  • 2511.
    Yılmazer, Şafak Enes
    Blekinge Institute of Technology, School of Engineering.
    Integrated Coverage Measurement and Analysis System for Outdoor Coverage WLAN (2011). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Daily usage of Wireless Local Area Networks (WLAN) in business life for specific purposes has become much more critical than before, since it is sometimes crucial to have wireless connectivity and seamless roaming around the working environment. In this thesis, the steps required in order to design and implement a large scale outdoor IEEE 802.11g WLAN will be shown. This WLAN project has been deployed in the north of Sweden, and the target coverage was an open area consisting of a deep pit mine, connecting roads, workshops, offices, dumps and storage areas. All telecommunications equipment used in this project is from the manufacturer Cisco, using a centralized solution. The special purpose of this project is to collect and analyze a series of coverage measurement data and correlate this data to predict the coverage area. Linux bash scripting and Gnuplot have been used to analyze the coverage data. Finally, the WRAP spectrum management and radio planning software has been used in modeling and designing the whole network.

  • 2512. Zackrisson, Johan
    et al.
    Svahnberg, Charlie
    OpenLabs Security laboratory: The online security experiment platform (2008). Conference paper (Refereed)
    Abstract [en]

    For experiments to be reproducible, it is important to have a known and controlled environment. This requires isolation from the surroundings. For security experiments, e.g. with hostile software, this is even more important as the experiment can affect the environment in adverse ways. In a normal campus laboratory, isolation can be achieved by network separation. For an online environment, where remote control is essential, separation and isolation are still needed, and therefore the security implications must be considered. In this paper, a way to enable remote experiments is described, where users are given full control over the computer installation. By automating the install procedure and dynamically creating isolated experiment networks, remote users are provided with the tools needed to do experiments in a reproducible and secure environment.

  • 2513. Zackrisson, Johan
    et al.
    Svahnberg, Charlie
    OpenLabs Security Laboratory: The Online Security Experiment Platform (2008). In: International Journal of Online Engineering, ISSN 1868-1646, E-ISSN 1861-2121, Vol. 4, no special issue: REV2008, p. 63-68. Article in journal (Refereed)
    Abstract [en]

    For experiments to be reproducible, it is important to have a known and controlled environment. This requires isolation from the surroundings. For security experiments, e.g. with hostile software, this is even more important as the experiment can affect the environment in adverse ways. In a normal campus laboratory, isolation can be achieved by network separation. For an online environment, where remote control is essential, separation and isolation are still needed, and therefore the security implications must be considered. In this paper, a way to enable remote experiments is described, where users are given full control over the computer installation. By automating the install procedure and dynamically creating isolated experiment networks, remote users are provided with the tools needed to do experiments in a reproducible and secure environment.

  • 2514.
    Zahda, Showayb
    Blekinge Institute of Technology, School of Computing.
    Obsolete Software Requirements (2011). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Context. Requirements changes are unavoidable in any software project. Requirements change over time as software projects progress and the involved stakeholders (mainly customers) and developers gain a better understanding of the final product. Additionally, time and budget constraints prevent implementing all candidate requirements and force project management to select a subset of requirements that are prioritized as more important than the others to be implemented. As a result, some requirements become cancelled and deleted during the elicitation and specification phase, while other requirements are considered not important during the prioritization phase. A common scenario in this situation is to leave the excluded requirements to be considered in the next release. Constantly leaving the excluded requirements for the next release may simply render them obsolete.

  • 2515.
    Zalasinski, Marcin
    et al.
    Czestochowa University of Technology, POL.
    Cpalka, Krzysztof
    Czestochowa University of Technology, POL.
    Rakus-Andersson, Elisabeth
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    An Idea of the Dynamic Signature Verification Based on a Hybrid Approach (2016). In: Artificial Intelligence and Soft Computing LNAI 9693: Proceedings of the 15th International Conference, ICAISC 2016 / [ed] Leszek Rutkowski et al., Berlin Heidelberg: Springer, 2016, Vol. II, p. 232-246. Conference paper (Refereed)
    Abstract [en]

    Dynamic signature verification is a very interesting biometric issue. It is difficult to realize because signatures of the user are characterized by relatively high intra-class and low inter-class variability. However, this method of identity verification is commonly socially acceptable, which is a big advantage of the dynamic signature biometric attribute. In this paper, we propose a new hybrid algorithm for dynamic signature verification based on a global and regional approach. We present the simulation results of the proposed method for the BioSecure DS2 database, distributed by the BioSecure Association.

  • 2516.
    Zeb, Falak
    et al.
    Blekinge Institute of Technology, School of Computing.
    Naseem, Sajid
    Blekinge Institute of Technology, School of Computing.
    Guidelines for the Deployment of Biometrics Technology in Blekinge Health Care System with the Focus on Human Perceptions and Cost Factor (2010). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Biometrics Technology is an authentication technology that identifies individuals from their physical and behavioral characteristics. Despite the fact that biometrics technology provides robust authentication and enhanced security, it has not yet been implemented in many parts of the world due to certain issues, i.e. human perceptions of the biometrics technology and the cost factor involved in its deployment. As biometrics technology involves identity management of individuals, the human perceptions of it, i.e. privacy concerns, security concerns and user acceptance, play a very important role in its deployment. Therefore the human perceptions and the cost factor need to be considered before any deployment of biometrics technology. The aim of this thesis work is to study and analyze how the people's perceptions and the cost factor can be addressed for the deployment of biometrics technology in the Blekinge health care system. A literature study, interviews and a survey were performed by the authors to identify and understand the human perceptions and the cost factor. Based on these, solutions in the form of guidelines to the issues involved in deploying biometrics technology in the Blekinge health care system, Sweden, are given.

  • 2517.
    Zeeshan, Ahmed
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Integration of Variants Handling in M-System-NT (2006). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    This Master thesis proposes a solution to manage the variabilities of software product line applications. The objective of the research is to support software decision makers in handling the additional software complexity introduced by product line architectures. In order to fulfill this objective, an approach to analyze, visualize, and measure product line specific characteristics of C/C++ source code is proposed. The approach is validated in an empirical experiment using an open source software system. For that purpose the approach is first implemented into M-System-NT, an existing software measurement tool developed at the Fraunhofer Institute for Experimental Software Engineering. The target of this master thesis research is to perform static analysis of C/C++ source code, measure traditional and product line measures, identify the correlation between the measures, and indicate fault proneness.

  • 2518.
    Zeeshan Iqbal, Syed Muhammad
    et al.
    Blekinge Institute of Technology, School of Computing.
    Grahn, Håkan
    Blekinge Institute of Technology, School of Computing.
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, School of Computing.
    A Comparative Evaluation of Re-scheduling Strategies for Train Dispatching during Disturbances (2012). Conference paper (Refereed)
    Abstract [en]

    Railway traffic disturbances occur and train dispatchers make re-scheduling decisions in order to reduce the delays. Good re-scheduling strategies are required to support the dispatchers in reducing the delays. We propose and evaluate re-scheduling strategies based on: (i) earliest start time, (ii) earliest track release time, (iii) smallest buffer time, and (iv) shortest section runtime. A comparative evaluation is done for a busy part of the Swedish railway network. Our results indicate that the strategies based on earliest start time and earliest track release time have the best average performance.
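The four strategies listed in the abstract above can be sketched as key functions that order the trains competing for a conflicting track section. The `Train` fields and the example values are illustrative assumptions, not the paper's model.

```python
# Sketch: each re-scheduling strategy is a key function deciding which
# conflicting train enters the disputed track section first.

from dataclasses import dataclass

@dataclass
class Train:
    name: str
    start_time: int        # earliest time the train can enter the section
    release_time: int      # time at which the train would free the section
    buffer_time: int       # slack remaining in its timetable
    section_runtime: int   # time needed to traverse the section

STRATEGIES = {
    "earliest_start": lambda t: t.start_time,
    "earliest_release": lambda t: t.release_time,
    "smallest_buffer": lambda t: t.buffer_time,
    "shortest_runtime": lambda t: t.section_runtime,
}

def dispatch_order(trains, strategy):
    """Order conflicting trains according to the chosen strategy."""
    return [t.name for t in sorted(trains, key=STRATEGIES[strategy])]

trains = [
    Train("A", start_time=3, release_time=9, buffer_time=2, section_runtime=4),
    Train("B", start_time=1, release_time=12, buffer_time=5, section_runtime=3),
]
order = dispatch_order(trains, "earliest_start")
```

Comparing strategies then amounts to replaying the same disturbance scenario with each key function and measuring the resulting delays.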

  • 2519.
    Zeid Baker, Mousa
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Generation of Synthetic Images with Generative Adversarial Networks (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Machine Learning is a fast-growing area that revolutionizes computer programs by providing systems with the ability to automatically learn and improve from experience. In most cases, the training process begins with extracting patterns from data. The data is a key factor for machine learning algorithms; without data the algorithms will not work. Thus, having sufficient and relevant data is crucial for the performance.

    In this thesis, the researcher tackles the problem of not having a sufficient dataset, in terms of the number of training examples, for an image classification task. The idea is to use Generative Adversarial Networks to generate synthetic images similar to the ground truth, and in this way expand a dataset. Two types of experiments were conducted: the first was used to fine-tune a Deep Convolutional Generative Adversarial Network for a specific dataset, while the second experiment was used to analyze how synthetic data examples affect the accuracy of a Convolutional Neural Network in a classification task. Three well known datasets were used in the first experiment, namely MNIST, Fashion-MNIST and Flower photos, while two datasets were used in the second experiment: MNIST and Fashion-MNIST.

    The results of the generated images of MNIST and Fashion-MNIST had good overall quality. Some classes had clear visual errors while others were indistinguishable from ground truth examples. When it comes to the Flower photos, the generated images suffered from poor visual quality. One can easily tell the synthetic images from the real ones. One reason for the bad performance is due to the large quantity of noise in the Flower photos dataset. This made it difficult for the model to spot the important features of the flowers.

    The results from the second experiment show that the accuracy does not increase when the two datasets, MNIST and Fashion-MNIST, are expanded with synthetic images. This is not because the generated images had bad visual quality, but because the accuracy turned out to not be highly dependent on the number of training examples.

    It can be concluded that Deep Convolutional Generative Adversarial Networks are capable of generating synthetic images similar to the ground truth and thus can be used to expand a dataset. However, this approach does not completely solve the initial problem of not having adequate datasets because Deep Convolutional Generative Adversarial Networks may themselves require, depending on the dataset, a large quantity of training examples.
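The dataset-expansion idea evaluated above can be sketched as follows, with a stub generator standing in for a trained DCGAN; all names and shapes are illustrative assumptions, not the thesis implementation.

```python
# Sketch: synthetic examples from a generator are appended to the real
# training set to expand a dataset for one class.

import random

def stub_generator(n, shape=(28, 28)):
    """Stand-in for a trained DCGAN: returns n synthetic 'images'."""
    return [[[random.random() for _ in range(shape[1])]
             for _ in range(shape[0])] for _ in range(n)]

def expand_dataset(real_images, real_labels, label, n_synthetic):
    """Append n_synthetic generated examples of one class to the dataset."""
    synthetic = stub_generator(n_synthetic)
    return real_images + synthetic, real_labels + [label] * n_synthetic

# One real 28x28 image of class 3, expanded with five synthetic ones.
images, labels = expand_dataset([[[0.0] * 28] * 28], [3], label=3, n_synthetic=5)
```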

  • 2520.
    Zepernick, Hans-Jürgen
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Fiedler, Markus
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Pettersson, Mats
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Quality of Experience Based Cross-Layer Design of Mobile Video Systems (2008). Conference paper (Refereed)
    Abstract [en]

    This paper introduces and discusses quality of experience based cross-layer design of mobile video systems as a means of providing technologies for jointly analyzing, adapting, and optimizing system quality. The many benefits of our novel approaches over traditional concepts range from efficient video processing techniques, through advanced real-time scheduling algorithms, to networking and service level management techniques. This will lead to better service quality and resource utilization in mobile video systems.

  • 2521.
    Zhang, Ge
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Denial of Service on SIP VoIP infrastructures using DNS flooding (2007). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    A simple yet effective Denial of Service (DoS) attack on SIP servers is to flood the server with requests addressed at irresolvable domain names. In this paper we evaluate different possibilities to mitigate these effects and show that over-provisioning is not sufficient to handle such attacks. As a more effective approach we present a solution called the DNS Attack Detection and Prevention (DADP) scheme based on the usage of a non-blocking DNS cache. Based on various measurements conducted over the Internet we investigate the efficiency of the DADP scheme and compare its performance with different caching strategies applied.
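The non-blocking cache idea behind the DADP scheme can be sketched as follows: a lookup never blocks a SIP worker on DNS, and unresolved names are queued for a background resolver. The class and all names here are illustrative assumptions, not the paper's implementation.

```python
# Sketch: a non-blocking DNS cache. Cache hits are answered immediately;
# misses are queued for background resolution while the caller defers or
# rejects the request, so floods of irresolvable names cannot tie up the
# server's worker threads.

import queue

class NonBlockingDnsCache:
    def __init__(self):
        self.cache = {}                 # name -> resolved address
        self.pending = queue.Queue()    # names awaiting background resolution

    def lookup(self, name):
        """Return a cached answer immediately; never block on DNS."""
        if name in self.cache:
            return ("hit", self.cache[name])
        self.pending.put(name)          # hand off to the background resolver
        return ("miss", None)           # caller defers/rejects the request

    def resolver_step(self, resolve):
        """Background worker: resolve one pending name and cache the result."""
        name = self.pending.get()
        self.cache[name] = resolve(name)

dns = NonBlockingDnsCache()
status, _ = dns.lookup("sip.example.com")     # first query: miss, queued
dns.resolver_step(lambda n: "192.0.2.1")      # background resolution (stubbed)
status2, addr = dns.lookup("sip.example.com") # now served from the cache
```

In a real deployment the background resolver would run in its own thread(s), so a flood of irresolvable names only fills the queue rather than blocking request processing.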

  • 2522. Zhang, Peng
    IMIS Platform Evaluation Report (2005). Report (Other academic)
    Abstract [en]

    After a pre-study in the years 2001-2002 financed by VINNOVA, a prototype based on the Activity Theory model was constructed to demonstrate the usability of mobile IT in mobile healthcare. The prototype was developed in PHP on Apache and deployed on a Microsoft Windows Server 2000 computer, on which MS SQL Server 2000 was also installed for the database. This prototype is considered a throwaway one, since it was developed mainly to demonstrate a basic architecture for the healthcare databases. As the IMIS project becomes more specific, there is a need to build a new prototype, which is considered to be an evolutionary prototype, i.e. a prototype to be successively developed into the final system. To this purpose, this report evaluates different development platforms for the new IMIS construction. It covers two development platforms, namely Microsoft .Net and Java from Sun, which are the most widely applied in software development.

  • 2523. Zhang, Peng
    Knowledge Integrated Agent Technology with CommonKADS (2004). Conference paper (Refereed)
    Abstract [en]

    This paper is organized around three questions: (1) is knowledge a necessary entity in agent systems? (2) What is the problem for individual knowledge-intensive agents to cooperate? (3) What is a possible methodology for designing and implementing a knowledge-intensive agent? Some researchers consider knowledge to be an entity that only emerges during the process when agents coordinate, not an individually possessed one. Other researchers consider knowledge a starting point, a given entity that is part of the notion of an intelligent agent, and focus on knowledge acquisition, inference and communication. The paper first discusses this topic from the angle of Activity Theory. We then discuss the ontology sharing problems in Multi-Agent Systems (MAS), based on the Distributed Collective Memory. Finally we introduce a methodology to build a knowledge-intensive agent system.

  • 2524. Zhang, Peng
    Multi-agent Systems in Diabetic Health Care (2005). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis discusses how Multi-agent Systems (MAS) should be designed in the context of diabetic health care. Three fields are touched: computer science, socio-psychology and systems science. Agent Technology is the core technology in the research. Theories from socio-psychology and systems science are applied to facilitate the discussion about computer agents. As the integration of socio-psychology and systems science, Activity Systems Theory is introduced to give a synthesized description of MAS. Laws and models are introduced that benefit both individual agents and agent communities. Cybernetics from systems science and knowledge engineering from computer science are introduced to approach the design and implementation of the individual agent architecture. A computer agent is considered intelligent if it is capable of reactivity, proactivity and social activity. Reactivity and proactivity can be realized through a cybernetic approach. Social activity is much more complex, since it involves MAS coordination. In this thesis, I discuss it from the perspectives of socio-psychology. The hierarchy and motivation thinking from Activity Systems Theory is introduced to the MAS coordination. To behave intelligently, computer agents should work with knowledge. Knowledge is considered a run-time property of a group of agents (MAS). During the MAS coordination, agents generate new information through exchanging the information they have. A knowledge component is needed in the agent's architecture for the knowledge-related tasks. In my research, I adopt the CommonKADS methodology for the design and implementation of the agent's knowledge component. The contribution of this research is twofold. First, MAS coordination is described from the perspective of socio-psychology. According to Activity Systems Theory, MAS is hierarchically organized and driven by motivation. This thesis introduces a motivation-driven mechanism for MAS coordination. Second, the research project Integrated Mobile Information Systems for health care (IMIS) indicates that diabetic health care can be improved by introducing agent-based services to the care-providers and care-receivers. IMIS agents are designed with capabilities of information sharing, organization coordination and task delegation. To perform these tasks, the IMIS agents interact with each other based on the coordination mechanism discussed above.

  • 2525.
    Zhang, Peng
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Multi-Agent Systems Supported Collaboration in Diabetic Healthcare2008Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis proposes a holistic and hierarchical architecture for Multi-agent System (MAS) design, in order to resolve the collaboration problem in the diabetic healthcare system. A diabetic healthcare system is a complex and social system in that it involves many actors and interrelations. Collaborations among various healthcare actors are vital to the quality of diabetic healthcare. The collaboration problem is manifested by problems of accessibility and interoperability. To support collaboration in such a complex and social system, the MAS must have corresponding social entities and relationships. Therefore, it is assumed that theories explaining social activity can be applied to the design of MAS. Activity Theory, specifically its holistic triangle model from Engeström and its hierarchy thinking, provides theoretical support for the design of the individual agent architecture and the MAS coordination mechanism. It is argued that the holistic and hierarchical aspects should be designed into a MAS when it is applied in a healthcare setting. The diabetic healthcare system is analyzed on three levels based on hierarchy thinking. The collaboration problem is analyzed and resolved via MAS coordination. Based on the holistic activity model in Activity Theory, Müller’s Vertical Layered Architecture is re-conceptualized in the Control Unit and Knowledge Base design. It is also argued that autonomy, adaptivity and persona deserve special focus when designing the interaction between an agent system and human users. This study has firstly identified some important social aspects and the technical feasibility of embedding those identified social aspects in agent architecture design. Secondly, a MAS was developed to illustrate how to apply the proposed architecture to resolve the collaboration problem in the diabetic healthcare system.
We have designed and implemented an agent system – IMAS (Integrated Multi-agent System) – to validate the research questions and contributions. The IMAS system provides real-time monitoring, diabetic healthcare management and decision support to the diabetic healthcare actors. A user assessment has been conducted to validate that the quality of the current diabetic healthcare system can be improved with the introduction of IMAS.

  • 2526. Zhang, Peng
    et al.
    Bai, Guohua
    A Cybernetic Architecture of Practical Reasoning Agent2003Conference paper (Refereed)
    Abstract [en]

    During the last ten years, agent technology has been widely discussed in various research areas. An agent is a computer system that is situated in some environment, and that is capable of autonomous actions in this environment in order to meet its design objectives. There are at least two kinds of reasoning methods applied in constructing an agent, namely practical reasoning and theoretical reasoning. Practical reasoning is directed towards actions – the process of figuring out what to do by weighing different options for action against the agent’s desires and beliefs – while theoretical reasoning is directed towards beliefs. In this paper, we focus on practical reasoning. A widely used BDI model for practical reasoning agents is introduced, on which our Cybernetic-BDI architecture is based. ‘Intelligence’ and ‘autonomy’ are perhaps the most important aspects of an agent system. Attempts to model the intelligent behaviors of an agent, especially a practical reasoning agent, have been made in computer science, psychology, sociology, and many other areas. Cybernetics provides a concrete mechanism for this purpose, namely through ‘feedback’, ‘feedforward’, and ‘sociocybernetics’. We first discuss the intelligent behaviors of agent systems in terms of reactivity, proactivity, and social ability based on the cybernetic concepts of feedback, feedforward, and sociocybernetics. Then, based on the Belief-Desire-Intention (BDI) model and cybernetic principles, we build up our Cybernetic-BDI architecture. With pseudocode we validate the architecture for its practical implementation and its fulfillment of the required intelligent behaviors. Finally, a scenario of a healthcare agent for diabetes patients is provided to show how the agent works according to the Cybernetic-BDI architecture.
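The practical-reasoning step described in the abstract can be illustrated with a minimal, hypothetical BDI loop in Python. All names, the threshold, and the diabetes example below are invented for illustration; this is a sketch of the general BDI idea, not the paper's Cybernetic-BDI architecture or its pseudocode.

```python
# Minimal, hypothetical sketch of a practical-reasoning (BDI) step:
# revise beliefs from a percept (cybernetic feedback), then deliberate
# by weighing options for action against the agent's desires.

def deliberate(beliefs, desires):
    """Pick the first desire whose precondition holds given current beliefs."""
    for desire, precondition in desires:
        if precondition(beliefs):
            return desire
    return None

def bdi_step(beliefs, desires, percept):
    beliefs = {**beliefs, **percept}          # belief revision (feedback)
    intention = deliberate(beliefs, desires)  # intention selection
    return beliefs, intention

# Hypothetical diabetes-care scenario: alert when glucose leaves a safe band.
desires = [
    ("raise_alarm", lambda b: b.get("glucose", 5.0) > 10.0),
    ("log_reading", lambda b: True),          # default intention
]
beliefs, intention = bdi_step({}, desires, {"glucose": 11.2})
print(intention)  # → raise_alarm
```

A feedforward component would additionally predict future percepts before deliberating; it is omitted here to keep the sketch small.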

  • 2527.
    Zhang, Peng
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Bai, Guohua
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    An Activity Systems Theory Approach to Agent Technology2005In: International Journal of Knowledge and Systems Sciences, ISSN 1349-7030, Vol. 2, no 1, p. 60-65Article in journal (Refereed)
    Abstract [en]

    In the last decade, Activity Theory has been discussed extensively in Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW). Activity Theory has been used both theoretically, as an analytical method, and practically, as a development framework for Information Systems. Meanwhile, there is a new trend in Artificial Intelligence: researchers find that results from Activity Theory studies may contribute socio-psychological aspects, especially to Agent Technology. In our E-health research, we apply Activity Theory both theoretically and practically to Agent Technology. To facilitate this research, General Systems Theory is chosen to integrate Activity Theory and Agent Technology. On the one hand, we consider Activity Theory a specific subject of General Systems Theory; on the other hand, General Systems Theory contributes systemic perspectives to Agent Technology. As an integration, we introduce Activity Systems Theory as an extension of Activity Theory, and then apply it to the discussion of Agent Technology. The paper starts with a discussion of the systemic perspectives of Activity Theory. We then introduce Activity Systems Theory as an integration of systems science and Activity Theory. Three Activity Systems Theory principles are then applied to the discussion of Agent Technology. In the end, we describe how we apply Activity Systems Theory in an E-health application.

  • 2528.
    Zhang, Peng
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Bai, Guohua
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Systemic Thinking in Multi-agent Systems Coordination – Applied in Diabetic Health Care2006Conference paper (Refereed)
    Abstract [en]

    Computer agents are considered a technology that may support human beings with automatic functionalities in a social environment. This paper describes an approach to applying agents to diabetic health care. A good health care agent is considered to be one able to keep a good balance between individual flexibility and team control. A systemic approach is hereby proposed as a complement to the current approaches. Multi-agent Systems (MAS) coordination is considered on three levels: collaboration, coordination and communication. Finally, an agent-based computer system – Integrated Mobile Information Systems (IMIS) – is discussed based on the systemic approach.

  • 2529.
    Zhang, Peng
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Bai, Guohua
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Carlsson, Bengt
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Johansson, Stefan J.
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Applying Multi-agent Systems Coordination to the Diabetic Healthcare Collaboration2008Conference paper (Refereed)
    Abstract [en]

    Diabetic healthcare is characterized by the collaboration problem, which is manifested by problems of accessibility and interoperability. To improve this situation, we propose a Multi-agent Systems approach. The interactions among the diabetic healthcare actors are categorized on three levels: collaboration, coordination, and communication. Agents are designed to work on the coordination and communication levels, and to support the collaboration among human actors. This paper presents the project Integrated Mobile Information Systems for diabetic healthcare (IMIS) to demonstrate how to apply Multi-agent Systems coordination to the collaboration among healthcare actors.

  • 2530. Zhang, Peng
    et al.
    Carlsson, Bengt
    Johansson, Stefan J.
    Enhance Collaboration in Diabetic Healthcare for Children using Multi-agent Systems2008Conference paper (Refereed)
    Abstract [en]

    We developed a multi-agent platform as a complement to the existing healthcare system in a children’s diabetic healthcare setting. It resolves problems related to the difficulty of collaboration between the stakeholders of the problem domain. In addition, it gives us an opportunity to support the stakeholders’ decision making using Multi-agent Systems. The collaboration situation is believed to be improved by agent-based services such as diabetes monitoring and alarms, scheduling, and task delegation.

  • 2531.
    Zhang, Yiran
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Liu, Xiaohui
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Design of Eco-Smart Homes For Elderly Independent Living2015Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    The aging of the world population has increased dramatically during the past century. The rapid increase of the elderly population is putting a heavy strain on healthcare and social welfare. Living conditions and service provision for elderly people have thus become an increasingly hot topic worldwide. In this paper, we address this problem by presenting a conceptual model of an integrated and personalized system for an eco-smart home for elderly independent living. This approach was inspired by an on-going European project, INNOVAGE, in which researchers at Blekinge Institute of Technology are currently participating, and which focuses on regional knowledge clusters for promoting eco-smart homes for elderly independent living. Contrasting the social situation of the elderly in China and Europe, we have chosen to focus on a solution for a Swedish context, which takes technical, environmental, social and human-computer interaction aspects into consideration in the design of eco-smart homes for elderly people in Sweden. Three studies have been carried out in order to clarify and explore the main issues at stake. A literature review gave an overview of on-going research and the current state of the art concerning smart homes. The literature review, along with an interview with an expert on solar energy, also gave insights into the additional design challenges introduced when focusing specifically on eco-smart building solutions. In order to explore and gain a better understanding of the perceived needs and requests of the target group, i.e. the elderly population, we carried out interviews with three experts in healthcare and homecare for the elderly, and also carried out interviews among the elderly in Karlskrona and interviews and a web survey among the elderly in China.
As a way of addressing the design challenges of integrating a multitude of diverse, complicated technical systems in a home environment, while at the same time highlighting the need for comprehensive personalized service provision for elderly people, we designed a conceptual model – an exemplar – of an eco-smart home for elderly independent living. The eco-smart home exemplar aims to inspire interdisciplinary and multi-stakeholder discussions around innovative design and development of environmentally friendly, comfortable, safe and supportive living for the elderly in the future. Finally, we evaluated the model in two workshops with elderly people in two different towns in Blekinge.

  • 2532. Zhao, Haifeng
    et al.
    Kallander, William
    Gbedema, Tometi
    Johnson, Henric
    Wu, Felix
    Read what you trust: An open wiki model enhanced by social context2011Conference paper (Refereed)
    Abstract [en]

    Wiki systems, such as Wikipedia, provide a multitude of opportunities for large-scale online knowledge collaboration. Despite Wikipedia's successes with the open editing model, dissenting voices give rise to unreliable content due to conflicts amongst contributors. From our perspective, the conflict issue results from presenting the same knowledge to all readers, without regard for the importance of the underlying social context, which both reveals the bias of contributors and influences the knowledge perception of readers. Motivated by the insufficiency of the existing knowledge presentation model for Wiki systems, this paper presents TrustWiki, a new Wiki model which leverages social context, including social background and relationship information, to present readers with personalized and credible knowledge. Our experiments show that, with reliable social-context information, TrustWiki can efficiently assign readers to a compatible editor community and present credible knowledge derived from that community. Although this new Wiki model focuses on reinforcing the neutrality policy of Wikipedia, it also casts light on other content-reliability problems in Wiki systems, such as vandalism and minority-opinion suppression.

  • 2533. Zhao, Haifeng
    et al.
    Kallander, William
    Johnson, Henric
    Blekinge Institute of Technology, School of Computing.
    Wu, Felix
    Blekinge Institute of Technology, School of Computing.
    SmartWiki: A reliable and conflict-refrained Wiki model based on reader differentiation and social context analysis2013In: Knowledge-Based Systems, ISSN 0950-7051, E-ISSN 1872-7409, Vol. 47, p. 53-64Article in journal (Refereed)
    Abstract [en]

    Wiki systems, such as Wikipedia, provide a multitude of opportunities for large-scale online knowledge collaboration. Despite Wikipedia's successes with the open editing model, dissenting voices give rise to unreliable content due to conflicts amongst contributors. Controversial articles frequently modified by dissenting editors hardly present reliable knowledge, and overheated controversial articles may be locked by Wikipedia administrators, who might leave their own bias on the topic. This could undermine both the neutrality and freedom policies of Wikipedia. As Richard Rorty suggested, "Take Care of Freedom and Truth Will Take Care of Itself" [1], so we present a new open Wiki model in this paper, called TrustWiki, which brings readers closer to reliable information while allowing editors to contribute freely. From our perspective, the conflict issue results from presenting the same knowledge to all readers, without regard for the differences among readers or the underlying social context, which both cause the bias of contributors and affect the knowledge perception of readers. TrustWiki differentiates two types of readers: "value adherents", who prefer compatible viewpoints, and "truth diggers", who crave the truth. It provides two different knowledge representation models to cater for both types of readers. Social context, including social background and relationship information, is embedded in both knowledge representations to present readers with personalized and credible knowledge. To our knowledge, this is the first paper on knowledge representation combining both psychological acceptance and truth revelation to meet the needs of different readers. Although this new Wiki model focuses on reducing conflicts and reinforcing the neutrality policy of Wikipedia, it also casts light on other content-reliability problems in Wiki systems, such as vandalism and minority-opinion suppression.

  • 2534. Zhi, Tao
    et al.
    Zhang, Xia-Jun
    Zhao, He-Ming
    Kulesza, Wlodek J.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Noise reduction in whisper speech based on the auditory masking model2010Conference paper (Refereed)
    Abstract [en]

    This paper addresses the issue of whispered-speech enhancement. Starting from the multi-band spectral subtraction method, which introduces musical residual noise, the proposed approach performs parametric subtraction according to the WSS (Whispered Sensitive Scale) method, which is tailored to whispered-speech processing, and an auditory masking model. The algorithm is characterized by a trade-off mechanism between the amount of whispered-speech distortion, the noise reduction, and the level of musical residual noise, which are determined by appropriately adjusting the subtraction parameters. Compared with traditional subtractive-type algorithms, the proposed method results in a significant reduction of musical residual noise. Finally, objective and subjective evaluations are performed, illustrating the improvements over traditional subtractive-type algorithms.
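As background, the trade-off between noise reduction, speech distortion and musical residual noise in subtractive-type methods can be sketched with plain power spectral subtraction. The over-subtraction factor and spectral floor below are illustrative stand-ins for the paper's WSS- and masking-derived parameters, and the toy spectra are invented:

```python
# Minimal sketch of parametric (power) spectral subtraction on one frame.
# alpha: over-subtraction factor (more noise reduction, more distortion);
# beta:  spectral floor (masks musical residual noise). Values are illustrative.

def spectral_subtract(noisy_power, noise_power, alpha=2.0, beta=0.01):
    """Subtract a scaled noise power estimate, keeping a spectral floor."""
    enhanced = []
    for p_noisy, p_noise in zip(noisy_power, noise_power):
        p = p_noisy - alpha * p_noise
        floor = beta * p_noisy          # keep a fraction of the noisy spectrum
        enhanced.append(max(p, floor))
    return enhanced

# Toy power spectra for a single analysis frame (arbitrary units).
noisy = [4.0, 9.0, 1.0, 16.0]
noise = [1.0, 1.0, 1.0, 1.0]
print(spectral_subtract(noisy, noise))  # → [2.0, 7.0, 0.01, 14.0]
```

Raising alpha removes more noise but distorts the speech; raising beta suppresses the isolated spectral peaks perceived as musical noise.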

  • 2535. Zhou, Kaibo
    et al.
    Liu, Gang
    Yao, Yong
    Popescu, Adrian
    A Mobility Management Solution for Simultaneous Mobility with mSCTP2011Conference paper (Refereed)
    Abstract [en]

    This paper is about the mobile Stream Control Transmission Protocol (mSCTP) and the problems related to simultaneous mobility. Simultaneous mobility occurs when both endpoints of a communication session are mobile and move at about the same time. mSCTP works well in the case of non-simultaneous mobility, where the SCTP association is established between a mobile endpoint and a stationary one. In the case of simultaneous mobility, however, the probability of a broken association may become high, because both endpoints may lose address-binding updates. This is a consequence of the fact that the targeted addresses become unreachable. In this paper, we suggest a solution based on Host Name Address (HNA), together with a pro-active Name Server, an Address Handling Function (AHF) and a Simultaneous Mobility Detection Function (SMDF), to eliminate this problem. Our preliminary results show that, in terms of packet loss rate for a low-rate stream, the performance of our solution is as good as in the non-simultaneous mobility situation. On the other hand, the drawback is that some modifications to the current standard mSCTP are needed.

  • 2536.
    Zieba, Maciej
    Blekinge Institute of Technology, School of Computing.
    Multistage neural networks for pattern recognition2009Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    In this work the concept of multistage neural networks is presented. The possibility of using this type of structure for pattern recognition is discussed and examined on a chosen problem from the field. The experimental results are compared with those of other possible methods for the problem.

  • 2537. Zinner, T.
    et al.
    Hossfeld, T.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Liers, F.
    Volkert, T.
    Kohndoker, R.
    Schatz, R.
    Requirement driven prospects for realizing user-centric network orchestration2015In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 74, no 2, p. 413-437Article in journal (Refereed)
    Abstract [en]

    The Internet's infrastructure shows severe limitations when an optimal end-user experience for multimedia applications is to be achieved in a resource-efficient way. In order to realize truly user-centric networking, an information exchange between applications and networks is required. To this end, network-application interfaces need to be deployed that enable a better mediation of application data through the Internet. For smart multimedia applications and services, the application and the network should directly communicate with each other and exchange information in order to ensure an optimal Quality of Experience (QoE). In this article, we follow a use-case-driven approach towards user-centric network orchestration. We derive user, application, and network requirements for three complementary use cases: HD live TV streaming, video-on-demand streaming, and user authentication with high security and privacy demands, as typically required for paid multimedia services. We provide practical guidelines for achieving an optimal QoE efficiently in the context of these use cases. Based on these results, we demonstrate how to overcome one of the main limitations of today's Internet by introducing the major steps required for user-centric network orchestration. Finally, we show conceptual prospects for realizing these steps by discussing a possible implementation with an inter-network architecture based on functional blocks.

  • 2538. Zinner, Thomas
    et al.
    Hossfeld, Tobias
    Minhas, Tahir Nawaz
    Fiedler, Markus
    Controlled vs. Uncontrolled Degradations of QoE: The Provisioning-Delivery Hysteresis in Case of Video2010Conference paper (Refereed)
    Abstract [en]

    This paper applies the recently proposed provisioning-delivery hysteresis for Quality of Experience (QoE) to the case of video. The study is based on evaluations using the Structural Similarity Metric (SSIM) for versions of a video differing in resolution on the one hand, and suffering from different packet loss ratios on the other. Upon translation of the SSIM into MOS, the QoE plotted versus the effective throughput shows the predicted behaviour: a controlled quality and throughput reduction leads to a better user-perceived quality than a quality degradation due to packet loss. The results clearly quantify the necessity of controlling quality, instead of "getting hit" in an uncontrolled way.

  • 2539.
    Zou, Ming
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Industrial Decision Support System with Assistance of 3D Game Engine2015Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context. Industrial Decision Support Systems (DSS) have traditionally relied on a 2D approach to visualize scenarios. For abstract information, such as the chronological sequence of tasks or a data trend, this provides a good visualization; for concrete information, such as location and spatial relationships, 2D visualizations are too abstract. Techniques from game design, 3D modeling, virtual reality (VR) and animation provide much inspiration for developing DSS tools for industrial applications. Objectives. The aim of our research was to develop a unique prototype for data visualization in wind power systems, and to compare it with traditional ones. The product combined 3D VR, 2D graphics, user navigation, and Human-Machine Interaction (HMI). It was developed with a game engine, Unity3D. The study explored how much usability can be improved when applying gamified 3D approaches in industrial monitoring and control systems. Methods. The research methods included a literature review, analysis of commercial examples, development, and evaluation. In the evaluation phase, System Usability Scale (SUS) tests were performed with two independent groups, and the results were analyzed with a statistical method, the t-test. Results. The evaluation results showed that an interface developed with 3D virtual reality can provide better usability (including learnability) than a traditional 2D industrial interface in a wind power system; the difference between them is significant. Conclusions. The study indicates that, compared with traditional 2D interfaces, the gamified 3D approach in industrial DSS can provide users with more comprehensive information visualization, better usability and better learnability. It also gives more effective interactions to enhance the user experience.
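The SUS questionnaire used in the evaluation has a standard scoring rule: ten items are rated 1-5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5, giving a score in [0, 100]. The response sheets below are hypothetical; only the scoring formula is standard:

```python
# Standard SUS (System Usability Scale) scoring for one respondent.
# Ten items rated 1-5; odd items contribute (rating - 1), even items
# contribute (5 - rating); the total is scaled by 2.5 onto [0, 100].

def sus_score(ratings):
    assert len(ratings) == 10
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical best-possible response sheet.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Per-respondent scores computed this way are what the two groups' t-test would then compare.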

  • 2540.
    Álvarez, Carlos García
    Blekinge Institute of Technology, School of Computing.
    Overcoming the Limitations of Agile Software Development and Software Architecture2013Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context. Agile Software Development has provided a new concept of software development based on adaptation to change, quick decisions, little high-level design and frequent deliveries. However, this approach ignores the value that Software Architecture provides in the long term for maintaining the speed of delivering working software, which may have catastrophic consequences in the long term. Objectives. In this study, the combination of these two philosophies of software development is investigated: firstly, the concept of Software Architecture in agile projects; then, the major challenges faced concerning Software Architecture in agile projects, the practices and approaches that can be used to overcome these challenges, and the effects that these practices may have on the project. Methods. The research methodologies used in this study are a Systematic Literature Review, for gathering as many of the contributions available in the literature on this subject as possible, and semi-structured interviews with agile practitioners, in order to obtain empirical knowledge on the problem and support or refute the SLR findings. Results. The results of the thesis are a unified description of the concept of Software Architecture in agile projects, together with a collection of challenges found in agile projects, practices that overcome them, and a list of observed effects. Considering the most frequently followed practices/approaches and the empirical support, a discussion is provided on how to combine Software Architecture and agile projects. Conclusions. The main conclusion is that there is no definitive solution to this question, due to the relevance of the context (team, project, customer, etc.), which recommends evaluating each situation before deciding on the best way to proceed. However, there are common trends in the practices recommended for integrating these two concepts.
Finally, more empirical work on the issue is required; conducting controlled experiments that quantify the success or failure of the implemented practices would be most helpful in order to create a body of knowledge that enables the application of certain practices under certain conditions.

  • 2541. Åberg, Hampus
    Subimage matching in historical documents using SIFT keypoints and clustering2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: In this thesis, subimage matching in historical handwritten documents using SIFT (Scale-Invariant Feature Transform) keypoints was tested. SIFT features are invariant to scale and rotation and have gained a lot of interest in the research community. The historical documents used in this thesis originate from the 16th century and onwards. The following steps were executed: binarization, word segmentation, feature identification and clustering. The binarization step converts the images into binary images. The word segmentation separates the different words into individual subimages. In the feature identification step, SIFT keypoints were found and descriptors were computed. The last step was to cluster the images based on the distances between the sets of image features identified. Objectives: The main objectives are to find a good configuration for the binarization step, implement a good word segmentation, identify image features and, lastly, cluster the images based on their similarity. The contents of subimages are matched to each other rather than trying to predict what the content of a subimage is, simply because the data that has been used is unlabeled. Methods: Implementation was the main methodology used, combined with experimentation. Measurements were taken throughout the development, and the accuracy of the word segmentation and the clustering is measured. Results: The word segmentation achieved an average accuracy of 89% correct segmentation, which is comparable to other word segmentation results. The clustering, however, matched 0% correctly. Conclusions: The conclusion drawn from this study is that SIFT keypoints are not well suited for this type of problem, which involves a lot of handwritten text. The descriptors were not discriminative enough, and different keypoints were found in different images of the same handwritten text, which led to the bad clustering results.
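The distance computation underlying the clustering step can be sketched with a toy average nearest-neighbour distance between two descriptor sets. The two-dimensional "descriptors" below are invented (real SIFT descriptors are 128-dimensional), and nothing here reproduces the thesis implementation:

```python
# Toy sketch of deriving an image-to-image distance from keypoint
# descriptors: for each descriptor in A, take the Euclidean distance to
# its nearest neighbour in B, then average. Illustrative only.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def image_distance(descs_a, descs_b):
    """Average nearest-neighbour distance from A's descriptors to B's."""
    return sum(min(euclidean(d, e) for e in descs_b) for d in descs_a) / len(descs_a)

# Two "word images" with identical toy descriptors, one clearly different.
word1 = [(0.0, 1.0), (1.0, 0.0)]
word2 = [(0.0, 1.0), (1.0, 0.0)]
word3 = [(9.0, 9.0), (8.0, 8.0)]
print(image_distance(word1, word2))        # → 0.0
print(image_distance(word1, word3) > 1.0)  # → True
```

A clustering algorithm would then group images whose pairwise distances fall below some threshold; the 0% result reported above indicates these distances were not discriminative for handwriting.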

  • 2542. Ådahl, Kerstin
    et al.
    Gustavsson, Rune
    Innovative Health Care Channels: Towards Declarative Electronic Decision Support Systems Focusing on Patient Security.2009Conference paper (Refereed)
    Abstract [en]

    Models supporting the empowerment of health care teams and patients are introduced and exemplified.

  • 2543.
    Åkesson, Gustav
    et al.
    Blekinge Institute of Technology, School of Computing.
    Rantzow, Pontus
    Blekinge Institute of Technology, School of Computing.
    Performance evaluation of multithreading in a Diameter Credit Control Application2010Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Moore's law states that the amount of computational power available at a given cost doubles every 18 months, and indeed, for the past 20 years there has been tremendous development in microprocessors. In recent years, however, Moore's law has become a subject of debate: to manage heat issues, processor manufacturers have begun favoring multicore processors, which means parallel computation has become necessary to fully utilize the hardware. Software therefore has to be written with multiprocessing in mind to take full advantage of the hardware, and writing parallel software introduces a whole new set of problems. At the same time, the demands on telecommunication systems have increased, and multiprocessor servers have become a necessity to meet them. One application that must fully utilize such hardware is the Diameter Credit Control Application (DCCA). The DCCA uses the Diameter networking protocol, and its purpose is to provide a framework for real-time charging: for instance, granting or denying a user's request for a specific network activity and accounting for the actual use of that network resource.

    This thesis investigates whether it is possible to develop a Diameter Credit Control Application that achieves linear scaling, and which pitfalls exist when developing a scalable DCCA server. The assumption is based on the observation that the DCCA server's connections have little to nothing in common (i.e. little or no synchronization), so introducing more processors should give linear scaling. To investigate this, a prototype was developed, and continuous performance analysis was conducted alongside its development to see what affected performance and server scalability in a multiprocessor DCCA environment.

    As the results show, quite a few factors besides synchronization and independent connections affected the scalability of the DCCA prototype, which did not always achieve linear scaling. However, even where scaling was not linear, certain design decisions gave considerable performance increases when more processors were introduced.
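The scaling assumption described above (independent connections need no synchronization) can be sketched as follows. The session state, request format, and credit amounts are illustrative inventions for the sketch, not the actual thesis prototype:

```python
from concurrent.futures import ThreadPoolExecutor

# Each credit-control session is fully independent: its balance is
# touched only by the thread handling that connection, so no locks
# are needed. This is the core of the linear-scaling assumption.
# Field names and amounts are illustrative, not the thesis prototype.

def handle_session(requests):
    """Process one connection's credit-control requests in isolation."""
    balance = 100
    granted = []
    for amount in requests:
        if balance >= amount:      # grant the request and reserve credit
            balance -= amount
            granted.append(amount)
    return sum(granted)

def serve(sessions, workers):
    # One task per connection; tasks never share mutable state,
    # so adding workers should (ideally) scale throughput linearly.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle_session, sessions))

if __name__ == "__main__":
    sessions = [[30, 30, 30, 30], [120], [10] * 5]
    print(serve(sessions, workers=4))  # [90, 0, 50]
```

As the thesis results indicate, shared-nothing sessions are necessary but not sufficient for linear scaling; factors outside the application logic (scheduling, memory bandwidth, I/O) still intervene.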

  • 2544.
    Åleskog, Christoffer
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ljungberg Fayyazuddin, Salomon
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Comparing node-sorting algorithms for multi-goal pathfinding with obstacles, 2019, Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Background. Pathfinding plays a big role in both digital games and robotics and is used in many different ways. One of them is multi-goal pathfinding (MGPF), which is used to calculate paths from a start position to a destination with the condition that the resulting path goes through a series of goals on the way to the destination. Research on this topic is sparse, and when the complexity is increased by introducing obstacles to the scenario, only a few articles in the field relate to the problem.

    Objectives. The objective of this thesis is to conduct an experiment comparing four algorithms for solving the MGPF problem on six different maps with obstacles, and then to analyze and draw conclusions on which of the algorithms is best suited for the MGPF problem. The first is the traditional Nearest Neighbor algorithm, the second is a variation on the Greedy Search algorithm, and the third and fourth are variations on the Nearest Neighbor algorithm.

    Methods. To reach the objectives, all four algorithms are tested fifty times on six different maps of varying sizes and obstacle layouts.

    Results. The data from the experiment is compiled in graphs for all the different maps, with the time to calculate a path and the path lengths as the metrics. The averages of all the metrics are put in tables to visualize the differences between the results for the four algorithms.

    Conclusions. The dynamic version of the Nearest Neighbor algorithm gives the best result when both metrics are taken into account. Otherwise, the common Nearest Neighbor algorithm gives the best results with respect to the time taken to calculate the paths, and the Greedy Search algorithm creates the shortest paths of all the tested algorithms.
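The greedy nearest-neighbor goal ordering at the core of the compared algorithms can be sketched as follows. The coordinates and the use of straight-line distance (rather than an obstacle-aware path cost) are illustrative simplifications:

```python
import math

def nearest_neighbor_order(start, goals, destination):
    """Visit the closest unvisited goal at each step, then the destination.
    Straight-line distance stands in for the obstacle-aware path cost
    the thesis algorithms would use on a real map."""
    order, current, remaining = [], start, list(goals)
    while remaining:
        nxt = min(remaining, key=lambda g: math.dist(current, g))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    order.append(destination)
    return order

if __name__ == "__main__":
    path = nearest_neighbor_order((0, 0), [(5, 5), (1, 0), (2, 2)], (6, 6))
    print(path)  # [(1, 0), (2, 2), (5, 5), (6, 6)]
```

Because the choice at each step is locally greedy, the resulting goal order is not guaranteed to be globally shortest, which is why the thesis compares it against Greedy Search and dynamic variants.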

  • 2545.
    Årsköld, Martin
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Processoperatörens mobilitet - teknikstöd för mobil larmhantering [The process operator's mobility: technology support for mobile alarm handling], 2002, Independent thesis Advanced level (degree of Master (One Year)), Student thesis
    Abstract [sv] (translated to English)

    The process industry has the opportunity to step into a new development phase in which technology will play a major role. As technology increasingly supports mobility, process operators gain the possibility to monitor and control the production process from any location. In this report the author presents his empirical study at a high-technology factory. The study focuses on the significance of the process operators' mobility and how it manifests itself in their work. The study shows that mobility is part of the operators' professional practice at the factory and essential for controlling the production process. Based on the study, suggestions are given for technology that can support this mobility by enabling mobile alarm handling.

  • 2546.
    Åström, Fredrik
    Blekinge Institute of Technology, School of Computing.
    Neural Network on Compute Shader: Running and Training a Neural Network using GPGPU, 2011, Independent thesis Basic level (degree of Bachelor), Student thesis
    Abstract [en]

    In this thesis I look into how one can train and run an artificial neural network using Compute Shader, and what kind of performance can be expected. An artificial neural network is a computational model inspired by biological neural networks, e.g. a brain. The expected performance was determined by creating an implementation that uses Compute Shader and then comparing it to the FANN library, a fast artificial neural network library written in C. The conclusion is that you can improve performance by training an artificial neural network on the compute shader as long as you are using non-trivial datasets and neural network configurations.
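The parallelism that makes a compute shader attractive here can be seen in a single dense layer: each output neuron's weighted sum is independent of the others, so each could map to one GPU thread. A pure-Python sketch, where the weights and the tanh activation are illustrative and not the thesis implementation:

```python
import math

def layer_forward(inputs, weights, biases):
    """One dense layer: each output neuron is computed independently,
    so on a GPU each could be one compute-shader thread.
    Weights, biases, and the tanh activation are illustrative."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

if __name__ == "__main__":
    inputs = [1.0, -1.0]
    weights = [[0.5, 0.5], [1.0, -1.0]]   # 2 neurons, 2 inputs each
    biases = [0.0, 0.0]
    print(layer_forward(inputs, weights, biases))
```

The per-neuron independence also explains the thesis conclusion: only non-trivial layer sizes give enough parallel work to outweigh the overhead of dispatching to the GPU.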

  • 2547.
    Örtegren, Kevin
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Clustered Shading: Assigning arbitrarily shaped convex light volumes using conservative rasterization, 2015, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Context. In this thesis, a GPU-based light culling technique performed with conservative rasterization is presented. Accurate lighting calculations are expensive in real-time applications and the number of lights used in a typical virtual scene increases as real-time applications become more advanced. Performing light culling prior to shading a scene has in recent years become a vital part of any high-end rendering pipeline. Existing light culling techniques suffer from a variety of problems which clustered shading tries to address.

    Objectives. The main goal of this thesis is to explore the use of the rasterizer to efficiently assign convex light shapes to clusters. Being able to accurately represent and assign light volumes to clusters is a key objective in this thesis.

    Methods. This method is designed for real-time applications that use large numbers of dynamic and arbitrarily shaped convex lights. By using conservative rasterization to assign convex light volumes to a 3D cluster structure, a more suitable light volume approximation can be used. This thesis implements a novel light culling technique in DirectX 12 by taking advantage of the hardware conservative rasterization provided by the latest consumer grade Nvidia GPUs. Experiments are conducted to prove the efficiency of the implementation, and comparisons with AMD's Forward+ tiled light culling are provided to relate the implementation to existing techniques.

    Results. Analysis of the algorithm shows that most problems with existing light culling techniques are addressed; the light assignment is of high quality and allows for easy integration of new convex light types. Assigning the lights and shading the CryTek Sponza scene with 2000 point lights and 2000 spot lights takes 2.92 ms on a GTX 970.

    Conclusions. The main goal of the thesis has been reached to the extent that all identified problems with current light culling techniques have been solved, at the cost of using more memory. The technique is novel, and a substantial amount of future work is outlined; further research would strengthen the validity of the implementation.
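The assignment step can be illustrated on the CPU with a sphere light and a uniform 3D cluster grid. This coarse bounding-box test is only a stand-in for the thesis's conservative-rasterization assignment on the GPU, and the grid dimensions and light are invented for the example:

```python
def clusters_for_sphere(center, radius, grid_dims, cell_size):
    """Return the (x, y, z) cluster cells overlapped by the sphere's
    axis-aligned bounding box; a coarse CPU stand-in for the GPU
    conservative-rasterization assignment described in the thesis."""
    lo = [max(0, int((c - radius) // cell_size)) for c in center]
    hi = [min(d - 1, int((c + radius) // cell_size))
          for c, d in zip(center, grid_dims)]
    return [(x, y, z)
            for x in range(lo[0], hi[0] + 1)
            for y in range(lo[1], hi[1] + 1)
            for z in range(lo[2], hi[2] + 1)]

if __name__ == "__main__":
    # A small sphere inside one cell of a 4x4x4 grid of unit cells.
    print(clusters_for_sphere((1.5, 1.5, 1.5), 0.4, (4, 4, 4), 1.0))
    # [(1, 1, 1)]
```

A bounding-box test over-assigns lights to clusters the sphere only nears at the corners; conservatively rasterizing the actual convex volume, as the thesis does, gives a tighter assignment per cluster.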

  • 2548.
    Östlund, Louise
    Information in use: In- and outsourcing aspects of digital services, 2007, Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is founded on the global growth of the service sector and its significance for society as a whole and for the individual human being. In the last decade, technology has changed the way services are created, developed and delivered in remarkable ways. The focus of the thesis is technology in interplay with humans and organisations, and the socio-economic-technical systems in which digital services play a central role. Challenges addressed by the thesis include requirement analysis, trustworthy systems, in- and outsourcing aspects, and the proper understanding of information and its use in real-world applications. With this in mind, the thesis presents a configurable methodology whose purpose is to quality-assure service-oriented workflows found in socio-economic-technical systems. Important building blocks for this are information types and service-supported workflows. Our case study is of a call centre-based business called AKC (Apotekets kundcentrum), which constitutes a part of the Cooperation of Swedish Pharmacies (Apoteket AB). One of their main services offered to Swedish citizens is the handling of incoming questions concerning pharmaceutical issues. We analysed the interactive voice response system at AKC as a starting point for our investigations, and we suggest a more flexible solution. We regard a socio-economic-technical system as an information ecology, which puts the focus on human activities supported by technology. Within these information ecologies, we have found that a Service Oriented Architecture (SOA) can provide the flexible support needed in an environment focused on services. Input from information ecologies and SOA also enables a structured way of managing in- and outsourcing issues. We have also found that if we apply SOA together with our way of modelling a Service Level Agreement (SLA), we can coordinate high-level requirements and support-system requirements.

    A central insight in this work is the importance of regarding a socio-economic-technical system as an information ecology in combination with in- and outsourcing issues. This view will prevent a company from being drained of its core competences and core services in an outsourcing situation, which is further discussed in the thesis. By using our combination of SOA and SLA we can also divide service bundles into separate services and apply economic aspects to them. This enables us to analyse which services are profitable while at the same time meeting important requirements on information quality. As a result, we propose a set of guidelines that represent our approach towards developing quality-assured systems. We also present two main types of validation for service-oriented workflows: validation of requirement engineering and validation of business processes.

  • 2549.
    Özcan, Mehmet Batuhan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Iro, Gabriel
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    PARAVIRTUALIZATION IMPLEMENTATION IN UBUNTU WITH XEN HYPERVISOR, 2014, Independent thesis Basic level (degree of Bachelor), Student thesis
    Abstract [en]

    Recent trends in technology show a growing need for efficiency, cost reduction, scalability, reduced disposal of outdated electronic components, and reduced health effects from our daily use of electronics. Companies manufacture their products with these needs in mind, and virtualization is one important aspect of this: the need to share resources, use less workspace, and reduce the cost of purchase and manufacturing can all be met with virtualization techniques. For some people, setting up a computer to run different virtual machines at the same time can be difficult, especially without prior basic knowledge of working in a terminal environment, and hiring skilled personnel to do the job can be expensive. The motivation for this thesis is to help people with little or no basic knowledge set up a virtual machine running the Ubuntu operating system on the Xen hypervisor.

  • 2550.
    Özgür, Turhan
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Comparison of Microsoft DSL Tools and Eclipse Modeling Frameworks for Domain-Specific Modeling in the context of Model-Driven Development, 2007, Independent thesis Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    Today industry recognizes that automating software development leads to increased productivity, maintainability and higher quality. Model-Driven Development (MDD) aims to replace manual software development methods with automated methods that use Domain-Specific Languages (DSLs) to express domain concepts effectively. The main actors in the software industry, Microsoft and IBM, have recognized the need to provide technologies and tools for building DSLs to support MDD. On the one hand, Microsoft is building DSL Tools, integrated in Visual Studio 2005; on the other hand, IBM is contributing to the development of the Eclipse Modeling Frameworks (EMF/GEF/GMF). Both tools aim to make the development and deployment of DSLs easier, and software practitioners seek guidelines on how to adopt them. In this thesis, the author presents the current state of the art in MDD standards and Domain-Specific Modeling (DSM). Furthermore, the author surveys the current tools for DSM and compares Microsoft DSL Tools and the Eclipse EMF/GEF/GMF Frameworks based on a set of evaluation criteria. For the purpose of comparison, the author developed two DSL designers, one with each DSM tool. Based on the experiences gained in developing these DSL designers, the author prepared guidelines on how to adopt these tools in existing development environments, as well as their advantages and drawbacks.
