301 - 350 of 2867
  • 301. Butt, Naveed R.
    et al.
    Nilsson, Mikael
    Jakobsson, Andreas
    Nordberg, Markus
    Pettersson, Anna
    Wallin, Sara
    Östmark, Henric
    Classification of Raman Spectra to Detect Hidden Explosives. 2011. In: IEEE Geoscience and Remote Sensing Letters, ISSN 1545-598X, Vol. 8, no. 3, p. 517-521. Article in journal (Refereed)
    Abstract [en]

    Raman spectroscopy is a laser-based vibrational technique that can provide spectral signatures unique to a multitude of compounds. The technique is gaining widespread interest as a method for detecting hidden explosives due to its sensitivity and ease of use. In this letter, we present a computationally efficient classification scheme for accurate standoff identification of several common explosives using visible-range Raman spectroscopy. Using real measurements, we evaluate and modify a recent correlation-based approach to classify Raman spectra from various harmful and commonplace substances. The results show that the proposed approach can, at a distance of 30 m, or more, successfully classify measured Raman spectra from several explosive substances, including nitromethane, trinitrotoluene, dinitrotoluene, hydrogen peroxide, triacetone triperoxide, and ammonium nitrate.
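The correlation-based identification described in this abstract can be illustrated with a minimal sketch: a measured spectrum is compared against a library of reference spectra and assigned to the best-correlating entry. The toy spectra, library names and simple argmax rule below are hypothetical stand-ins, not the authors' actual classification scheme.

```python
import numpy as np

def classify_spectrum(measured, library):
    """Return the library entry whose reference spectrum has the
    highest Pearson correlation with the measured spectrum."""
    best_name, best_corr = None, -1.0
    m = (measured - measured.mean()) / measured.std()
    for name, ref in library.items():
        r = (ref - ref.mean()) / ref.std()
        corr = float(np.dot(m, r)) / len(m)  # Pearson correlation
        if corr > best_corr:
            best_name, best_corr = name, corr
    return best_name, best_corr

# Toy spectra on a common wavenumber grid (purely illustrative).
grid = np.linspace(0, 1, 200)
tnt_like = np.exp(-((grid - 0.3) ** 2) / 0.002)
an_like = np.exp(-((grid - 0.7) ** 2) / 0.002)
library = {"TNT": tnt_like, "ammonium nitrate": an_like}

# A noisy measurement of the TNT-like spectrum is still matched correctly.
noisy = tnt_like + 0.05 * np.random.default_rng(0).normal(size=grid.size)
name, corr = classify_spectrum(noisy, library)
```

A real system would add a rejection threshold so that spectra correlating poorly with every library entry are reported as unknown.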

  • 302.
    Byanyuma, Mastidia
    et al.
    Nelson Mandela African Institution of Science and Technology, TZA.
    Zaipuna, Yonah
    Nelson Mandela African Institution of Science and Technology, TZA.
    Simba, Fatuma
    University of Dar es Salaam, TZA.
    Trojer, Lena
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Utilization of Broadband Connectivity in Rural and Urban-Underserved Areas: The case of Selected Areas in Arusha-Tanzania. 2018. In: International Journal of Computing and Digital Systems, E-ISSN 2210-142X, Vol. 7, no. 2, p. 75-83. Article in journal (Refereed)
    Abstract [en]

    Utilization is a key aspect in the management of any societal resource, not only when the resource is scarce but in all cases, so that optimum benefits accrue to everyone in society. Internet bandwidth, a scarce commodity especially in rural areas, is hardly available where needed at the same cost and quality, for various reasons. Tanzania, taken as a case study, is among the countries that have invested heavily in international, national and metro backbone networks, yet there are still areas with inadequate Internet access services or none at all, implying a significant utilization problem. In this paper we present, as a case study, the status of broadband connectivity in selected rural areas of Arusha, Tanzania, and use that status to make recommendations for optimized utilization of the installed capacity.

  • 303. Börjesson, Per Ola
    et al.
    Eriksson, Håkan
    Gustavsson, Jan-Olof
    Hagerman, Bo
    Ödling, Per
    A Novel Receiver Structure Visualizing Relations between Receivers in Systems with either Co-Channel or. 1997. Report (Other academic)
  • 304. Börjesson, Per Ola
    et al.
    Eriksson, Per
    Signal Processing at the Luleå University of Technology and at the Karlskrona/Ronneby University College. 1991. Conference paper (Refereed)
  • 305. Calderón González, Julian
    et al.
    Carmona Salazar, Òscar Daniel
    Image Enhancement with Matlab Algorithms. 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
  • 306. Carlsson, Patrik
    Multi-Timescale Modelling of Ethernet Traffic. 2003. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Ethernet is one of the most common link layer technologies, used in local area networks, wireless networks and wide area networks. There is however a lack of traffic models for Ethernet that are usable in performance analysis. In this thesis we describe an Ethernet traffic model. The model aims at matching multiple moments of the bit rate at several timescales. To match the model parameters to measured traffic, four methods have been developed and tested on real traffic traces. Once a model has been created, it can be used directly in a fluid flow performance analysis. Our results show that, as the number of sources present on an Ethernet link grows, the model becomes better and less complex.
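The notion of moments of the bit rate at several timescales can be illustrated by aggregating a per-slot trace at increasing window sizes and computing the moments of each aggregated series. The synthetic Poisson trace below is a stand-in for measured Ethernet traffic; this is only the moment computation, not the thesis' four parameter-matching methods.

```python
import numpy as np

def bitrate_moments(bits_per_slot, scales):
    """For each timescale (in base slots), aggregate the trace into
    non-overlapping windows and return (mean, variance) of the
    resulting average bit rate per window."""
    out = {}
    base = np.asarray(bits_per_slot, dtype=float)
    for m in scales:
        n = (len(base) // m) * m          # drop the incomplete tail window
        agg = base[:n].reshape(-1, m).sum(axis=1) / m
        out[m] = (agg.mean(), agg.var())
    return out

rng = np.random.default_rng(1)
trace = rng.poisson(1000, size=10_000)    # synthetic per-slot bit counts
moments = bitrate_moments(trace, [1, 10, 100])
# The mean rate is invariant under aggregation; the variance shrinks
# with the timescale, and how fast it shrinks is what a multi-timescale
# model tries to reproduce.
```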

  • 307. Carlsson, Patrik
    et al.
    Constantinescu, Doru
    Popescu, Adrian
    Fiedler, Markus
    Nilsson, Arne A.
    Delay Performance in IP Routers. 2004. Conference paper (Refereed)
    Abstract [en]

    The main goals of the paper are towards an understanding of the delay process in best-effort Internet for both non-congested and congested networks. A dedicated measurement system is reported for delay measurements in IP routers, which follows the specifications of IETF RFC 2679. The system uses both passive measurements and active probing. Dedicated application-layer software is used to generate UDP traffic with TCP-like characteristics. Pareto traffic models are used to generate self-similar traffic in the link. The reported results are in the form of several important statistics regarding the processing delay of a router, router delay for a single data flow, router delay for several data flows, as well as end-to-end delay for a chain of routers. We confirm results reported earlier that the delay in IP routers is generally influenced by traffic characteristics, link conditions and, to some extent, details in hardware implementation and different IOS releases. The delay in IP routers usually shows heavy-tailed characteristics. It may also occasionally show extreme values, which are due to improper functioning of the routers.
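Summarizing one-way delays in the spirit of RFC 2679 can be sketched as follows: pair send and receive timestamps into delay singletons, then report the mean and a high quantile that exposes the occasional extreme values the paper mentions. The timestamps below are synthetic, not data from the paper's measurement system.

```python
import statistics

def delay_stats(send_ts, recv_ts):
    """One-way delays from matched send/receive timestamps (the
    RFC 2679 one-way-delay singletons), summarized as the mean and
    an empirical 99th-percentile quantile."""
    delays = sorted(r - s for s, r in zip(send_ts, recv_ts))
    p99 = delays[int(0.99 * (len(delays) - 1))]
    return statistics.mean(delays), p99

# Synthetic timestamps: a 2 ms baseline delay with a few 10 ms spikes,
# mimicking occasional extreme values in an otherwise quiet router.
send = [i * 0.01 for i in range(100)]
recv = [s + (0.010 if i >= 95 else 0.002) for i, s in enumerate(send)]
mean_d, p99 = delay_stats(send, recv)
```

With heavy-tailed delays the mean alone is misleading, which is why quantiles are the more informative summary here.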

  • 308. Carlsson, Patrik
    et al.
    Ekberg, Anders
    Fiedler, Markus
    On an Implementation of a Distributed Passive Measurement Infrastructure. 2003. Other (Other academic)
    Abstract [en]

    Having access to relevant, up-to-date measurement data is a key issue for network analysis in order to allow for efficient Internet performance monitoring and management. New applications keep appearing; user and protocol behavior keep evolving; traffic mixes and characteristics are continuously changing, which implies that traffic traces that are some years old may no longer reflect reality. In order to give a holistic view of what is going on in the network, passive measurements have to be carried out at different places simultaneously. Other challenges relate to the simultaneous use of one specific measurement point at a certain location for different measurement processes, and to the continuously ongoing measurements needed for capturing long-term traffic behaviors. On this background, this paper proposes a passive measurement infrastructure for a campus backbone, consisting of distributed coordinated measurement points, collected in measurement areas, measurement administration and data management. … The framework is generic with regard to the capturing equipment, ranging from simple PCAP-based devices to high-end DAG cards and dedicated ASICs, in order to promote a large deployment of measurement points. This structure allows for an efficient use of passive monitoring equipment in order to supply researchers and network managers with up-to-date and relevant data.

  • 309. Carlsson, Patrik
    et al.
    Fiedler, Markus
    Multifractal Products of Stochastic Processes: Fluid Flow Analysis. 2000. Conference paper (Refereed)
    Abstract [en]

    The consideration of multifractal properties in network traffic has become a well-known issue in network performance evaluation. We analyze the performance of a fluid flow buffer fed by multifractal traffic as described by Norros, Mannersalo and Riedi [1]. We describe specific steps in the fluid flow analysis, both for finite and infinite buffer sizes, and point out how to overcome numerical problems. We discuss performance results in the form of waiting time quantiles and loss probabilities, which help to estimate whether a traffic concentrator constitutes a bottleneck or not.

  • 310. Carlsson, Patrik
    et al.
    Fiedler, Markus
    Nilsson, Arne A.
    Matching Multi-Fractal Process Parameters Against Real Data Traffic. 2002. Conference paper (Refereed)
    Abstract [en]

    Recent analyses of real data/Internet traffic indicate that data traffic exhibits long-range dependence as well as self-similar or multi-fractal properties. By using mathematical models of Internet traffic that share these properties we can perform analytical studies of network traffic. This gives us an opportunity to analyse potential bottlenecks and estimate delays in the networks. Processes with multi-fractal properties can be modeled by multiplying the output of Markov Modulated Rate Processes (MMRP) [1], each defined by four parameters. The MMRP are easily used in stochastic fluid flow modeling. This model is also suited for the analysis of other traffic types, e.g. VoIP, and thus it allows for the integration of different traffic types, i.e. time-sensitive voice traffic with best-effort data traffic. Using this model we can calculate performance parameters for each individual stream that enters the system/model. In this paper we show how to construct a multi-fractal process from MMRP sub-processes that is matched to measured data.

  • 311. Carlsson, Patrik
    et al.
    Fiedler, Markus
    Nilsson, Arne A.
    Modelling of Ethernet Traffic on Multiple Timescales. 2004. Conference paper (Refereed)
    Abstract [en]

    Ethernet is one of the most common link layer technologies, used in local area networks, wireless networks and wide area networks. There is however a lack of traffic models for Ethernet that are usable in performance analysis. In this paper we use such a model. The model is based on matching multiple moments of the bit rate at several timescales. In order to match the model parameters to measured traffic, five methods have been developed. We use these to model three different links: the BCpOct89 Bellcore trace, an Internet access link and an ADSL link. Our results show that, as the number of sources present on an Ethernet link grows, the model becomes better and less complex.

  • 312. Carlsson, Patrik
    et al.
    Fiedler, Markus
    Tutschku, Kurt
    Chevul, Stefan
    Nilsson, Arne A.
    Obtaining Reliable Bit Rate Measurements in SNMP-Managed Networks. 2002. Conference paper (Refereed)
    Abstract [en]

    The Simple Network Management Protocol, SNMP, is the most widespread standard for Internet management. As SNMP stacks are available on most equipment, this protocol has to be considered when it comes to performance management, traffic engineering and network control. However, especially when using the predominant version 1, SNMPv1, special care has to be taken to avoid erroneous results when calculating bit rates. In this work, we evaluate six off-the-shelf network components. We demonstrate that bit rate measurements can be completely misleading if the sample intervals that are used are either too large or too small. We present solutions and work-arounds for these problems. The devices are evaluated with regard to their updating and response behavior.
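One concrete reason why too-large sample intervals mislead is the 32-bit octet counter used by SNMPv1/v2 interface MIBs, which wraps roughly every 344 seconds on a 100 Mbit/s link. A wrap-aware rate computation can be sketched as follows; the helper function is illustrative and not taken from the paper.

```python
def bitrate_from_counters(octets_t0, octets_t1, interval_s, counter_bits=32):
    """Bit rate from two successive SNMP ifInOctets samples.  The
    modulo arithmetic corrects at most ONE counter wrap, which is why
    the sample interval must stay well below the wrap period
    (2**32 octets / 100 Mbit/s is about 344 s)."""
    modulus = 1 << counter_bits
    delta = (octets_t1 - octets_t0) % modulus  # non-negative even after a wrap
    return delta * 8 / interval_s

# The counter wrapped between the two samples, yet the rate is sane:
rate = bitrate_from_counters(4_294_960_000, 10_000, 10.0)
```

If two or more wraps occur within one interval the result is silently wrong, so no amount of post-processing can rescue an interval chosen too large.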

  • 313.
    Carlsson, Viktor
    et al.
    Blekinge Institute of Technology, School of Management.
    Lindskog, Magnus
    Blekinge Institute of Technology, School of Management.
    Kritiska Prestations Indikatorer (KPI): Hur väl fungerar KPI:er som verksamhetsstyrning inom den producerande industrin? 2012. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

    The purpose of this thesis is to examine how key performance indicators (KPIs) are being used as performance management tools within the production industry. Specifically, the purpose is to investigate how the KPIs within the telecommunications industry drive reusability regarding production test systems. This thesis will also highlight today's cost models, suggest improvements to the KPIs, and discuss what new ones could be introduced. A case study at Ericsson AB has been performed. The study has an abductive approach because it is based both on current theories and hypotheses and on empirical data. Personal interviews were also conducted at HWDSS, Ericsson AB. The conclusion of this case study is that the management control measurements used today to some extent create incentives for modular design with high reusability. However, there is room for improvement, both in order to more clearly show the positive effects of high reusability and to more directly steer the activity toward this goal. It is our belief that better monitoring of the proportion of previous modules/components being used in new designs, together with a target value for this proportion, clearly would promote modular design with high reusability. It is also important to continuously measure the enablers for the goal and not just follow the implementation of identified key activities. The KPIs, PIs and cost models of the business must always be up to date and point in the direction of the strategy laid out by the business.

  • 314.
    Carmona, Manuel Bejarano
    Blekinge Institute of Technology, School of Engineering.
    A simple and low cost platform to perform Power Analysis Attacks. 2012. Student thesis
    Abstract [en]

    Power Analysis Attacks use the fact that power consumption in modern microprocessors and cryptographic devices depends on the instructions executed on them and so varies with time. This leakage is mainly used to deduce cryptographic keys as well as algorithms by direct observation of power traces. Power Analysis is a recent field of study that has been developed over the last decade. Since then, the techniques used have evolved into more complex forms that sometimes require a variety of skills, which makes the subject difficult to start with. Nowadays it is challenging to tackle the problem without expensive equipment; what is more, the off-the-shelf solutions for performing Power Analysis Attacks are rare and expensive. This thesis aims to provide a low-cost and open platform as an entry point to Power Analysis for a price under 10 USD. Besides that, it is designed to be able to perform Simple Power Analysis and Differential Power Analysis attacks on an 8-bit microcontroller, including the software needed to automate the process of taking the measurements. Finally, the platform can be extended to cover a wide range of microcontrollers, microprocessors and cryptographic devices by simple insertion in a breadboard, which makes it the perfect device for newcomers to the field.

  • 315.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    Spindox S.p.A, ITA.
    Auto-scaling of Containers: The Impact of Relative and Absolute Metrics. 2017. In: 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems, FAS*W 2017 / [ed] IEEE, IEEE, 2017, p. 207-214, article id 8064125. Conference paper (Refereed)
    Abstract [en]

    Today, the cloud industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of resource management at run-time. This paper addresses the problem of selecting the most appropriate performance metrics to activate auto-scaling actions. Specifically, we investigate the use of relative and absolute metrics. Results demonstrate that, for CPU-intensive workloads, the use of absolute metrics enables more accurate scaling decisions. We propose and evaluate the performance of a new auto-scaling algorithm that could reduce the response time by a factor of between 0.5 and 0.66 compared to the current Kubernetes horizontal auto-scaling algorithm.
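The proportional rule that such metrics feed into has the shape of Kubernetes' documented horizontal autoscaler formula, desired = ceil(current × currentMetric / targetMetric). The sketch below applies that shape to an absolute CPU metric (millicores); the function name and figures are illustrative, not the paper's proposed algorithm.

```python
import math

def desired_replicas(current_replicas, used_millicores_total,
                     target_millicores_per_pod):
    """Proportional scaling on an absolute CPU metric, in the shape
    of the Kubernetes horizontal pod autoscaler rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    avg_per_pod = used_millicores_total / current_replicas
    return max(1, math.ceil(current_replicas * avg_per_pod
                            / target_millicores_per_pod))

# 4 pods using 3200m in total (800m each) against a 500m-per-pod target:
n = desired_replicas(4, 3200, 500)
```

With a relative metric the same rule would divide by each pod's (possibly heterogeneous) CPU limit instead of an absolute target, which is one source of the inaccuracy the paper investigates.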

  • 316.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    University of Rome, ITA.
    Measuring Docker Performance: What a Mess!!! 2017. In: ICPE 2017 - Companion of the 2017 ACM/SPEC International Conference on Performance Engineering, ACM, 2017, p. 11-16. Conference paper (Refereed)
    Abstract [en]

    Today, a new technology is going to change the way platforms for the internet of services are designed and managed. This technology is called the container (e.g. Docker and LXC). The internet of services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of resource management at run-time, for example: auto-scaling, optimal deployment and monitoring. Specifically, monitoring of container-based systems is at the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.

  • 317.
    Castro, Manuel
    et al.
    Spanish University for Distance Education (UNED), ESP.
    Nilsson, Kristian
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Pozzo, Maria Isabelle
    Rosario Institute of Educational Sciences Research (IRICE), ARG.
    Garcia-Lore, Felix
    Spanish University for Distance Education (UNED), ESP.
    Fernandez, Ricardo Martin
    Universidad Tecnologica Nacional, ARG.
    Workshop. Teaching practices with VISIR remote lab: Technical, educational and research fundamentals from the PILAR Project. 2019. In: EDUNINE 2019 - 3rd IEEE World Engineering Education Conference: Modern Educational Paradigms for Computer and Engineering Career, Proceedings, Institute of Electrical and Electronics Engineers Inc. Conference paper (Refereed)
    Abstract [en]

    Remote laboratories are the result of a social movement which promotes accessible educational resources anywhere and anytime through the Internet in order to foster lifelong learning and support online/distance education. A remote laboratory is a real laboratory using real equipment, on which measurements are made through real instruments and which is controllable remotely. The VISIR (Virtual Instrument Systems In Reality) remote laboratory is a state-of-the-art system for online wiring and measuring of electronic circuits. The PILAR (Platform Integration of Laboratories based on the Architecture of visiR) Erasmus Plus project aims at the federation of five of the existing VISIR nodes, in order to share analog electronics experiments and strengthen the capacity and resources of each partner, as well as to provide other educational institutions with access to a VISIR remote lab through the PILAR consortium. This workshop will allow the attendees to interact with the VISIR remote lab and to be introduced to the PILAR framework and joining policies, as well as to the benefits of remote lab federation both for VISIR system owners and consumers. © 2019 IEEE.

  • 318.
    Cavallin, Fritjof
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Pettersson, Timmie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time View-dependent Triangulation of Infinite Ray Cast Terrain. 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and does not generate a mesh representation, which may be useful in certain use cases.

    Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time without making use of any preprocessed data, and compare the performance and visual quality of the implementation with that of a ray marched solution.

    Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm.

    Results. In all tests performed, the proposed method performs better in terms of frame rate than a ray marched version. The visual similarity between the two methods depends highly on the quality setting of the triangulation.

    Conclusions. The proposed method can perform better than a ray marched version, but is more reliant on CPU processing, and can suffer from visual popping artifacts as the terrain is refined.
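The ray marched baseline described in the Background section can be sketched as fixed-step sampling along the ray until a sample falls below the height field. The terrain function, ray and step size below are illustrative stand-ins, not the thesis' implementation (which would run per pixel on the GPU with adaptive stepping).

```python
import math

def terrain_height(x, z):
    # Hypothetical procedural height field (stand-in for real terrain).
    return 0.2 * math.sin(x) * math.cos(z)

def ray_march(origin, direction, step=0.05, max_dist=50.0):
    """Fixed-step ray marching: walk along the ray and report the
    distance at which a sample point first drops below the height
    field, or None if nothing is hit within max_dist."""
    t = 0.0
    while t < max_dist:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        if y < terrain_height(x, z):
            return t
        t += step
    return None

# A ray starting above the terrain and pointing downwards hits it:
hit = ray_march((1.0, 1.0, 0.0), (0.0, -0.5, 0.866))
```

Because every pixel repeats this loop every frame and no mesh is produced, an explicit triangulation can amortize the cost, which is the trade-off the thesis evaluates.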

  • 319.
    Cedergren, Joakim
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Assisted GPS for Location Based Services. 2005. Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    The mobile operators are seeking opportunities to create differentiation and increase profit. One powerful way is to provide personalized mobile services. A good example of personalisation is by location. Services based on position are called Location Based Services - LBS. To realise LBS, some sort of positioning method is needed. The two most common positioning methods today are the Global Positioning System - GPS - and network based positioning. GPS is not fully suited for LBS because you need an additional handset to receive the satellite signals. With network positioning, however, you only need a mobile phone, but on the other hand the accuracy is far less, between 100 metres and several kilometres. What technology would be a good positioning technology for location based services? Could A-GPS be such a technology? A-GPS is a positioning system which uses the same satellites as GPS, but besides that it also uses a reference network. The reference network tracks the receiver and the satellites. It also performs some of the heavy calculations that the handsets do in the GPS system. That makes A-GPS receivers less power consuming and better suited to being implemented in mobile phones. Furthermore, A-GPS receivers are more sensitive, meaning that they can more easily receive signals indoors, for example. The question is whether A-GPS technology holds its promises. Does A-GPS really work well in mobile phones? Is the accuracy and availability as good as the theory says, and is it possible to implement one's own, well working, location based service on an A-GPS mobile phone?

  • 320.
    Chadalapaka, Gayatri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. BTH.
    Performance Assessment of Spectrum Sharing Systems: with Service Differentiation. 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 321.
    Chakraborty, Joyraj
    et al.
    Blekinge Institute of Technology, School of Engineering.
    Jampana, Venkata Krishna chaithanya varma.
    Blekinge Institute of Technology, School of Engineering.
    ANFIS BASED OPPURTUNISTIC POWER CONTROL FOR COGNITIVE RADIO IN SPECTRUM SHARING. 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Cognitive radio is an intelligent technology that helps in resolving the issue of spectrum scarcity. In a spectrum sharing network, where a secondary user can communicate simultaneously with the primary user in the same frequency band, one of the challenges in cognitive radio is to obtain a balance between two conflicting goals: to minimize the interference to the primary users and to improve the performance of the secondary user. In our thesis we have considered a primary link and a secondary link (cognitive link) in a fading channel. To improve the performance of the secondary user while maintaining the Quality of Service (QoS) of the primary user, we considered varying the transmit power of the cognitive user. Efficient utilization of power in any system helps in improving the performance of that system. For this we proposed an ANFIS based opportunistic power control strategy with the primary user's SNR and the primary user's interference channel gain as inputs. By using a fuzzy inference system, the QoS of the primary user is adhered to and there is no need for a complex feedback channel from the primary receiver. The simulation results of the proposed strategy show better performance than the one without power control. Initially we considered a propagation environment without path loss and then extended our concept to a propagation environment with path loss, where we considered the relative distance between the links as one of the input parameters.

  • 322.
    Chakraborty, Joyraj
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    J.V.K.C., Varma
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Erman, Maria
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    ANFIS based Opportunistic power control for cognitive radio in spectrum sharing. 2013. Conference paper (Refereed)
    Abstract [en]

    Cognitive radio is an intelligent technology that helps in resolving the issue of spectrum scarcity. In a spectrum sharing network, where a secondary user can communicate simultaneously with the primary user in the same frequency band, one of the challenges is to obtain a balance between two conflicting goals: to minimize the interference to the primary users and to improve the performance of the secondary user. In our paper we have considered a primary link and a secondary link (cognitive link) in a fading channel. To improve the performance of the secondary user while maintaining the Quality of Service (QoS) of the primary user, we considered varying the transmit power of the cognitive user. For this we proposed an ANFIS based opportunistic power control strategy with the primary user's SNR and the primary user's interference channel gain as inputs. By using a fuzzy inference system, the QoS of the primary user is adhered to and there is no need for a complex feedback channel from the primary receiver. The simulation results of the proposed strategy show better performance than the one without power control.

  • 323.
    Chalamalasetty, Kalyani
    Blekinge Institute of Technology, School of Computing.
    Architecture for IMS Security to Mobile: Focusing on Artificial Immune System and Mobile Agents Integration. 2009. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The IP Multimedia Subsystem (IMS) is an open IP based service infrastructure that enables an easy deployment of new rich multimedia services mixing voice and data. The IMS is an overlay network on top of IP that uses SIP as the primary signaling mechanism. As an emerging technology, the SIP standard will certainly be the target of Denial of Service (DoS) attacks and consequently IMS will also inherit this problem. The objective of the proposed architecture for IMS is to examine the potential attacks and security threats to the IP Multimedia Subsystem (IMS) and explore the security solutions developed by 3GPP. This research work incorporates the ideas of the immune system and a multi-agent architecture that is capable of detecting, identifying and recovering from an attack. The proposed architecture protects the IMS core components, i.e. the P-CSCF (Proxy-Call Session Control Function), I-CSCF (Interrogating-Call Session Control Function), S-CSCF (Serving Call Session Control Function) and HSS (Home Subscriber Server), from external and internal threats like eavesdropping, SQL injection and denial-of-service (DoS) attacks. On the first level, i.e. with the CPU under normal load, all incoming and outgoing messages are investigated to detect and prevent SQL injection. The second level considers Denial of Service (DoS) attacks, when the CPU load exceeds a threshold limit. The proposed architecture is designed and evaluated by using an approach called the Architecture Tradeoff Analysis Method (ATAM). The results obtained confirm the consistency of the architecture.

  • 324.
    Chandrasekaran, Hasvitha
    Blekinge Institute of Technology. Robert Bosch.
    Simulation of Electromagnetic Properties of a Transponder Antenna Using FEKO: To characterize the dependency of energy transfer and reception properties of the transponder antenna. 2018. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Prior to RFID technology, bar code systems were used for applications like boarding passes, supermarkets, healthcare and hospital settings, etc. RFID was introduced in 1973, but because of its high cost it came into practical use only after 2000 [1], until which barcodes were used. The success of RFID resulted in its use being extended beyond scientific applications, and it also became common in civilian applications. The utilization of Radio Frequency Identification solutions is reaching its peak and plays a vital role in the 4th Industrial Revolution.

    Though RFID technology has had a remarkable impact on various industries, from tagging retail items to technologies for optimized processes (like Industry 4.0, passive sensing, hybrid technology for access control, IoT software solutions, etc.), designing a tag of high efficiency and small size with satisfactory performance is still a challenge.

    In this work an effort has been made to describe the characteristic theory behind the existing common design structures embedded in passive UHF RFID tags. The deployment of passive UHF RFID tags for different applications like manufacturing, logistics, asset management and development processes in various industries requires an extended knowledge about the characteristics of every design structure embedded within the tag. Knowledge about the tag designs might help engineers to use the correct tag for the right application. The main responsibility for creating a robust RFID technology causing no failure is in the hands of transponder antenna designers and manufacturers. Thus, this master thesis is presented to support the work of RFID transponder designers and help them to design extremely robust tags in a short period, either by modifying existing tags or by inserting new structures per the application specifications. The UHF RFID transponder antenna design features explained in this work are based on two parameters: the transmission efficiency and the gain of the RFID system. Samples of existing commercial tag inlays are designed using the powerful simulation tool FEKO. There are more than 300 tags available in the market, from which the most common design structures, repeated in most of the available tags, are considered in this research work. The length of the meandered dipole antenna structure, the curved edges of the tag design, the length of the impedance match loop and other related structures are discussed with the simulated models using CAD FEKO and POST FEKO.

  • 325. Chandu, Chiranjeevi
    Region of Interest Aware and Impairment Based Image Quality Assessment2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 326.
    Chapala, Usha Kiran
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Peteti, Sridhar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Continuous Video Quality of Experience Modelling using Machine Learning Model Trees1996Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Adaptive video streaming is perpetually influenced by unpredictable network conditions, which cause playback interruptions like stalling, rebuffering and video bit-rate fluctuations. This leads to potential degradation of the end-user Quality of Experience (QoE) and may make users churn from the service. Video QoE modelling that precisely predicts the end user's QoE under these unstable conditions is therefore urgently needed. The service provider requires a root-cause analysis of these degradations. Such sudden changes in trend are not visible from monitoring data from the underlying network service, so it is challenging to detect the change and model the instantaneous QoE. For this modelling, continuous-time QoE ratings are considered rather than the overall end QoE rating per video. To reduce the risk of user churn, network providers should give the best quality to the users.

    In this thesis, we propose QoE modelling to analyze how user reactions change over time using machine learning models. The machine learning models are used to predict the QoE ratings and the patterns of change in the ratings. We test the model on a publicly available video quality dataset which contains subjective user QoE ratings for network distortions. The M5P model tree algorithm is used for the prediction of user ratings over time. The M5P model yields mathematical equations and thereby leads to further insights. The results of the algorithm show that the model tree is a good approach for predicting continuous QoE and for detecting change points in the ratings, and indicate to what extent these algorithms can be used to estimate changes. The analysis of the model provides valuable insights into the exponential transitions between different levels of predicted ratings. The outcome explains the user behaviour: when the quality decreases, the user ratings decrease faster than they increase when the quality improves over time. The earlier work on exponential transitions of instantaneous QoE over time is supported by the model tree with respect to the user reaction to sudden changes such as video freezes.
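M5P itself is a Weka algorithm (model trees with linear models in the leaves) with no scikit-learn counterpart; as a rough stand-in, the sketch below fits a plain regression tree to synthetic (bitrate, stalling) features to illustrate predicting continuous QoE ratings. All data and parameter choices are assumptions for illustration, not the thesis dataset.

```python
# Simplified stand-in for an M5P model tree: a regression tree predicting a
# continuous QoE rating (1..5) from streaming features. Synthetic data only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
bitrate = rng.uniform(0.5, 8.0, 200)   # Mbps
stall = rng.uniform(0.0, 3.0, 200)     # seconds of stalling in the window
X = np.column_stack([bitrate, stall])
# Toy ground truth: rating rises with bitrate, drops sharply with stalling.
y = np.clip(1 + 0.5 * bitrate - 1.2 * stall + rng.normal(0, 0.2, 200), 1, 5)

model = DecisionTreeRegressor(max_depth=4).fit(X, y)
# High bitrate / no stalling should score higher than low bitrate / stalls.
print(model.predict([[7.5, 0.0], [0.6, 2.5]]))
```

A real M5P tree would additionally fit a linear regression in each leaf, which is what gives the interpretable equations the abstract mentions.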

  • 327.
    Chatlapalle, S S Sampurna Akhila
    Blekinge Institute of Technology.
    Generic Deployment Tools for Telecom Apps in Cloud2018Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
  • 328.
    Chaudhry, Fazal-e-Abbas
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Speaker Separation Investigation2007Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    This report describes two important investigations which formed part of an overall project aimed at separating overlapping speech signals. The first investigation uses chirp signals to measure the acoustic transfer functions which would typically be found in the speaker separation project. It explains the behaviour of chirps in acoustic environments, which can further be used to determine room reverberation, besides their relevance to measuring the transfer functions in conjunction with speaker separation. The chirps used in this part are logarithmic and linear chirps. They have different lengths and are analysed in two different acoustic environments. The major findings are obtained from a comparative analysis of the different chirps in terms of their cross-correlations, spectrograms and power spectrum magnitudes. The second investigation deals with using an automatic speech recognition (ASR) system to test the performance of the speaker separation algorithm with respect to the word accuracy of different speakers. Speakers were speaking in two different scenarios, non-overlapping and overlapping. In the non-overlapping scenario speakers were speaking alone, and in the overlapping scenario two speakers were speaking simultaneously. To improve the performance of speaker separation in the overlapping scenario, I worked very closely with my fellow colleague Mr. Holfeld, who was improving the existing speech separation algorithm. After cross-examining our findings, we improved the existing speech separation algorithm. This further led to an improvement in the word accuracy of the speech recognition software in the overlapping scenario.
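The transfer-function measurement rests on cross-correlating the received signal with the reference chirp; the correlation peak gives the propagation delay (and, with a real room, the reverberant tail). A minimal sketch with an idealised, noise-free delay standing in for the acoustic path:

```python
# Cross-correlate a received chirp with the reference to recover the delay.
# The "room" here is a pure 123-sample delay; real measurements would show
# additional correlation peaks from reverberation.
import numpy as np
from scipy.signal import chirp, correlate

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
ref = chirp(t, f0=100, f1=3000, t1=1.0, method="linear")

delay_samples = 123  # pretend acoustic propagation delay
rx = np.concatenate([np.zeros(delay_samples), ref])[: len(ref)]

xc = correlate(rx, ref, mode="full")
lag = np.argmax(xc) - (len(ref) - 1)   # peak position relative to zero lag
print(lag)  # -> 123
```

Swapping `method="linear"` for `method="logarithmic"` reproduces the other chirp family the report compares.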

  • 329.
    Chavali, Gautam Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Bhavaraju, Sai Kumar N V
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Adusumilli, Tushal
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Puripanda, VenuGopal
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Micro-Expression Extraction For Lie Detection Using Eulerian Video (Motion and Color) Magnification2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Lie detection has been an evergreen and evolving subject. Polygraph techniques have been the most popular and successful technique to date. The main drawback of the polygraph is that good results cannot be attained without maintaining physical contact with the subject under test. In general, this physical contact induces extra consciousness in the subject. Also, any sort of arousal in the subject triggers false positives in the traditional polygraph-based tests. These drawbacks of the polygraph, together with rapid developments in the fields of computer vision and artificial intelligence and ever faster algorithms, have compelled researchers to search for and adapt contemporary methods of lie detection. Observing the facial expressions of emotions in a person without any physical contact and implementing these techniques using artificial intelligence is one such method. The concept of magnifying a micro-expression and trying to decipher it is rather premature at this stage but will evolve in the future. Magnification using the EVM technique has been proposed recently, and extracting these micro-expressions from magnified EVM output based on HOG features is rather new. To date, HOG features have been used in conjunction with SVMs, generally for person/pedestrian detection. A newer, simpler and contemporary method that jointly applies EVM, HOG features and a back-propagation neural network is introduced and proposed to extract and decipher the micro-expressions on the face. Micro-expressions go unnoticed due to their involuntary nature, but EVM is used to magnify them and make them noticeable. The emotions behind the micro-expressions are extracted and recognized using the HOG features and a back-propagation neural network. One of the important aspects that has to be dealt with in human beings is a biased mind. 
    Since an investigator is also human and has to deal with his own assumptions and emotions, a neural network is used to give the investigator an unbiased start in identifying the true emotions behind every micro-expression. On the whole, this proposed system is not a lie detector, but it helps in detecting the emotions of the subject under test. By further investigation, a lie can be detected.
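The extraction chain (orientation histograms fed to a back-propagation network) can be sketched as follows. The hand-rolled histogram is a simplification of full HOG (no cell grid or block normalisation), scikit-learn's MLPClassifier trains by ordinary back-propagation, and the stripe images are synthetic stand-ins for magnified face crops; nothing here is the thesis's actual pipeline.

```python
# Simplified HOG-features + back-propagation NN classification sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier

def grad_histogram(img, bins=9):
    """Orientation histogram of image gradients (one simplified HOG cell)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(1)

def sample(cls):
    # Class 1: vertical stripes (x-gradients); class 0: horizontal stripes.
    base = np.sin(np.arange(16) * 1.5)
    img = np.tile(base, (16, 1)) if cls else np.tile(base[:, None], (1, 16))
    return grad_histogram(img + rng.normal(0, 0.05, (16, 16)))

X = np.array([sample(i % 2) for i in range(80)])
y = np.array([i % 2 for i in range(80)])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```

The two classes differ only in gradient orientation, which is exactly the property the orientation histogram exposes to the network.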

  • 330.
    CHAVALI, SRIKAVYA
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    AUTOMATION OF A CLOUD HOSTED APPLICATION: Performance, Automated Testing, Cloud Computing2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Software testing is the process of assessing the quality of a software product to determine whether or not it matches the existing requirements of the customer. Software testing is one of the “Verification and Validation,” or V&V, software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied, neglecting the internal components of the software. White-box testing, in contrast, focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce the manual effort and to perform testing continuously, thereby increasing the quality of the product.

     

    Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application known as Test Data Library or Test Report Analyzer through automation, and to measure the impact of the automation on the release cycles of the organization.

     

    Methods: Automation is implemented using the Scrum methodology, an agile software development process. Using Scrum, working software can be delivered to the customers incrementally and empirically, with its functionality updated in each increment. The Test Data Library or Test Report Analyzer functionality of the cloud application is verified by deploying the tests on a testing device, after which the passed and failed test cases can be analyzed.

     

    Results: The test report analyzer functionality of the cloud-hosted application is automated using TestComplete, and the release cycles are shortened. With automation, a change of nearly 24% in the release cycles can be observed, reducing the manual effort and increasing the quality of delivery.

     

    Conclusion: Automation of a cloud-hosted application eliminates manual effort, so time can be utilized effectively and the application can be tested continuously, increasing its efficiency and quality.

  • 331.
    Chebudie, Abiy Biru
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Monitoring of Video Streaming Quality from Encrypted Network Traffic: The Case of YouTube Streaming2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Video streaming applications contribute a major share of the Internet traffic. Consequently, monitoring and management of video streaming quality has gained significant importance in recent years. Disturbances in the video, such as the amount of buffering and bitrate adaptations, affect the user Quality of Experience (QoE). Network operators usually monitor such events from network traffic with the help of Deep Packet Inspection (DPI). However, it is becoming difficult to monitor such events due to traffic encryption. To address this challenge, this thesis work makes two key contributions. First, it presents a test-bed, which performs automated video streaming tests under controlled time-varying network conditions and measures performance at network and application level. Second, it develops and evaluates machine learning models for the detection of video buffering and bitrate adaptation events, which rely on the information extracted from packet headers. The findings of this work suggest that buffering and bitrate adaptation events within 60-second intervals can be detected using a Random Forest model with an accuracy of about 70%. Moreover, the results show that features based on time-varying patterns of downlink throughput and packet inter-arrival times play a distinctive role in the detection of such events.
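The detection step can be sketched as a Random Forest over header-derived features. The features and the labelling rule below are synthetic assumptions for illustration, not the thesis's dataset or feature set:

```python
# Random Forest classifying 60-second windows as buffering / not buffering
# from header-derivable features. Synthetic data with a toy labelling rule.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
throughput = rng.uniform(0.1, 10.0, n)   # downlink Mbit/s per window
iat = rng.uniform(0.1, 50.0, n)          # mean packet inter-arrival, ms
# Toy ground truth: low throughput together with large gaps -> buffering.
buffering = ((throughput < 2.0) & (iat > 20.0)).astype(int)

X = np.column_stack([throughput, iat])
Xtr, Xte, ytr, yte = train_test_split(X, buffering, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(round(clf.score(Xte, yte), 2))
```

On encrypted traffic the point is precisely that such features survive: timing and volume are visible even when payloads are not.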

  • 332.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Structure Preserving Binary Image Morphing using Delaunay Triangulation2017In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 85, p. 8-14Article in journal (Refereed)
    Abstract [en]

    Mathematical morphology has been of a great significance to several scientific fields. Dilation, as one of the fundamental operations, has been very much reliant on the common methods based on the set theory and on using specific shaped structuring elements to morph binary blobs. We hypothesised that by performing morphological dilation while exploiting geometry relationship between dot patterns, one can gain some advantages. The Delaunay triangulation was our choice to examine the feasibility of such hypothesis due to its favourable geometric properties. We compared our proposed algorithm to existing methods and it becomes apparent that Delaunay based dilation has the potential to emerge as a powerful tool in preserving objects structure and elucidating the influence of noise. Additionally, defining a structuring element is no longer needed in the proposed method and the dilation is adaptive to the topology of the dot patterns. We assessed the property of object structure preservation by using common measurement metrics. We also demonstrated such property through handwritten digit classification using HOG descriptors extracted from dilated images of different approaches and trained using Support Vector Machines. The confusion matrix shows that our algorithm has the best accuracy estimate in 80% of the cases. In both experiments, our approach shows a consistent improved performance over other methods which advocates for the suitability of the proposed method.
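One way to picture dilation driven by Delaunay geometry rather than a fixed structuring element: triangulate the foreground points and mark every pixel falling inside the triangulation as foreground. This is only a conceptual sketch of the idea, not the paper's exact algorithm:

```python
# Conceptual sketch: "dilate" a dot pattern by filling its Delaunay
# triangulation, so the growth adapts to the topology of the points
# instead of a fixed structuring element.
import numpy as np
from scipy.spatial import Delaunay

pts = np.array([[2, 2], [2, 7], [7, 2], [7, 7], [4, 4]])
tri = Delaunay(pts)

# All integer pixel coordinates (x, y) on a 10x10 grid.
grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2)
inside = tri.find_simplex(grid) >= 0   # pixel lies inside some triangle
dilated = inside.reshape(10, 10)       # dilated[y, x]
print(int(dilated.sum()))              # grown blob area in pixels
```

Because the triangles follow the dot pattern, the "dilation" grows the shape along its own structure, which is the property the paper exploits for structure preservation.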

  • 333.
    Cheddad, Abbas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Object recognition using shape growth pattern2017In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, ISPA, IEEE Computer Society Digital Library, 2017, p. 47-52, article id 8073567Conference paper (Refereed)
    Abstract [en]

    This paper proposes a preprocessing stage to augment the bank of features that one can retrieve from binary images to help increase the accuracy of pattern recognition algorithms. To this end, by applying successive dilations to a given shape, we can capture a new dimension of its vital characteristics which we term hereafter: the shape growth pattern (SGP). This work investigates the feasibility of such a notion and also builds upon our prior work on structure preserving dilation using Delaunay triangulation. Experiments on two public data sets are conducted, including comparisons to existing algorithms. We deployed two renowned machine learning methods into the classification process (i.e., convolutional neural network-CNN- and random forests-RF-) since they perform well in pattern recognition tasks. The results show a clear improvement of the proposed approach's classification accuracy (especially for data sets with limited training samples) as well as robustness against noise when compared to existing methods.
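The growth-pattern idea can be illustrated with classic set-theoretic dilation (standing in here for the paper's Delaunay-based variant): apply successive dilations and record the blob area after each one, yielding an extra feature vector for the classifier.

```python
# Shape growth pattern (SGP) sketch: area of a blob after successive
# dilations. scipy's plain binary dilation stands in for the paper's
# Delaunay-based dilation.
import numpy as np
from scipy.ndimage import binary_dilation

img = np.zeros((15, 15), bool)
img[7, 7] = True          # a single-dot "shape"

areas = []
cur = img
for _ in range(4):
    cur = binary_dilation(cur)   # default 3x3 cross structuring element
    areas.append(int(cur.sum()))
print(areas)  # -> [5, 13, 25, 41]
```

The resulting growth curve is appended to the ordinary feature bank before training the CNN or random forest.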

  • 334.
    Cheema, Rukhsar Ahmad
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Irshad, Muhammad Jehanzeb
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Issues and Optimization of UMTS Handover2009Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    UMTS is an emerging cell-phone technology; it is basically another name for 3G mobile communication systems. It provides an enhanced range of multimedia services like video conferencing and high-speed internet access. Sometimes UMTS is marketed as 3GSM, emphasizing both its 3G nature and the GSM standard it was designed to succeed. UMTS is also the European term for wireless systems based on the IMT-2000 standards. To exploit the various merits of a mobile telecommunication system consisting of various radio access networks, UMTS, as a third-generation wireless technology, utilizes a wideband CDMA or TD/CDMA transceiver and also covers a large area. Handover is basically the function that continues a user's communication without any gaps when the handset moves to a place outside its current network coverage. In cellular communication systems, handover is the process of transferring a connection from one cell to another. Handover time is generally between 200 and 1,200 milliseconds (ms), which accounts for the delay. In this thesis we identify the factors which affect the quality of service of handover. The main focus of this research is to study the factors which affect the handover phenomenon in UMTS and thereby the overall quality of the mobile network, and to find solutions for the problems which arise during handover. Handover provides the mobility of users, which is the main theme of wireless technology, and it also enables interoperability between different network technologies.

  • 335.
    Chekkilla, Avinash Goud
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Monitoring and Analysis of CPU Utilization, Disk Throughput and Latency in servers running Cassandra database: An Experimental Investigation2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context Lightweight process virtualization has been used in the past, e.g., Solaris Zones, jails in FreeBSD and Linux containers (LXC). But only since 2013 is there kernel support for user namespaces and process grouping control that makes the use of lightweight virtualization interesting for creating virtual environments comparable to virtual machines.

    Telecom providers have to handle the massive growth of information due to the growing number of customers and devices. Traditional databases are not designed to handle such massive data ballooning. NoSQL databases were developed for this purpose. Cassandra, with its high read and write throughputs, is a popular NoSQL database to handle this kind of data.

    Running the database using operating system virtualization or containerization would offer a significant performance gain compared to virtual machines, and also gives the benefits of migration, fast boot-up and shutdown times, lower latency and less use of the physical resources of the servers.

    Objectives This thesis aims to investigate the trade-off in performance while loading a Cassandra cluster in bare-metal and containerized environments. A detailed study of the effect of loading the cluster in each individual node in terms of Latency, CPU and Disk throughput will be analyzed.

    Method We implement the physical model of the Cassandra cluster based on realistic and commonly used scenarios of database analysis for our experiment. We generate different load cases on the cluster for bare-metal and Docker and observe the values of CPU utilization, disk throughput and latency using standard tools like sar and iostat. Statistical analysis (mean value analysis, higher-moment analysis and confidence intervals) is done on measurements on specific interfaces in order to show the reliability of the results.

    Results Experimental results show a quantitative analysis of measurements consisting of latency, CPU and disk throughput while running a Cassandra cluster in bare-metal and container environments. A statistical analysis summarizing the performance of the Cassandra cluster while running a single Cassandra instance is presented.

    Conclusions The detailed analysis shows that the resource utilization of the database was similar in both the bare-metal and container scenarios. From the results, the CPU utilization for the bare-metal servers is equivalent in the cases of mixed, read and write loads. The latency values inside the container are slightly higher in all cases. The mean value analysis and higher-moment analysis help us make a finer analysis of the results. The calculated confidence intervals show that there is a lot of variation in the disk performance, which might be due to compactions happening randomly. Further work can be done by configuring the compaction strategies, memory, and read and write rates.
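The mean value and confidence interval computation described in the method can be sketched as follows; the latency samples are made-up numbers, not the thesis measurements:

```python
# Mean and 95% confidence interval (Student's t) for repeated latency
# measurements, as used to judge the reliability of sar/iostat readings.
# The sample values are illustrative only.
import numpy as np
from scipy import stats

latency_ms = np.array([4.1, 3.9, 4.4, 4.0, 4.6, 4.2, 3.8, 4.3])
mean = latency_ms.mean()
sem = stats.sem(latency_ms)                      # standard error of the mean
lo, hi = stats.t.interval(0.95, len(latency_ms) - 1, loc=mean, scale=sem)
print(round(mean, 2), (round(lo, 2), round(hi, 2)))
```

A wide interval relative to the mean, as reported for disk throughput, signals high run-to-run variation in the measurements.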

  • 336.
    Chen, Gaojun
    et al.
    Blekinge Institute of Technology, School of Engineering.
    Lin, Sen
    Blekinge Institute of Technology, School of Engineering.
    Design, Implementation and Comparison of Demodulation Methods in AM and FM2012Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Modulation and demodulation hold dominant positions in communication. Communication quality heavily relies on the performance of the detector. A simple and efficient detector can improve the communication quality and reduce the cost. This thesis reveals the pros and cons of five demodulation methods for Amplitude Modulated (AM) signal and four demodulation methods for Frequency Modulated (FM) signal. Two experimental systems are designed and implemented to finish this task. This thesis provides the researchers an easier reference of demodulation methods with tables listing their pros and cons.
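As a concrete example of one of the simpler AM detectors such a comparison covers, envelope detection can be implemented with the analytic signal (Hilbert transform); the carrier and message parameters below are arbitrary choices, not taken from the thesis:

```python
# AM envelope detection via the analytic signal: |hilbert(am)| recovers the
# envelope of a DSB-AM signal whose message stays positive.
import numpy as np
from scipy.signal import hilbert

fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
message = 1 + 0.5 * np.sin(2 * np.pi * 50 * t)   # envelope m(t) > 0
am = message * np.cos(2 * np.pi * 1000 * t)      # modulated 1 kHz carrier

envelope = np.abs(hilbert(am))                   # recovered envelope
err = np.max(np.abs(envelope[100:-100] - message[100:-100]))
print(err < 0.05)  # -> True (edge samples excluded for end effects)
```

The simpler diode/RC envelope detector approximates the same operation in analogue hardware, which is why it appears among the low-cost methods such comparisons tabulate.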

  • 337. Chen, Hong
    et al.
    Nie, Zedong
    Ivanov, Kamen
    Wang, Lei
    Liu, Ran
    Blekinge Institute of Technology, School of Engineering, Department of Electrical Engineering.
    A statistical MAC protocol for heterogeneous-traffic human body communication2013Conference paper (Refereed)
    Abstract [en]

    In wireless body sensor networks (WBSN) and wireless body area networks (WBAN), sensor nodes have different bandwidth requirements; therefore, heterogeneous traffic is created. In this paper, we propose a statistical medium access control (MAC) protocol with periodic synchronization for use in heterogeneous-traffic networks based on human body communication (HBC). The MAC protocol is designed to ensure energy efficiency by means of flexible time slot allocation and a statistical frame. The statistical frame is intended to increase the sleep time and keep duty cycles low in each beacon period. The MAC protocol was fully implemented on our HBC platform. The experimental results proved that the proposed MAC protocol is compact and energy-efficient.

  • 338. Chen, Jiandan
    A Multi Sensor System for a Human Activities Space: Aspects of Planning and Quality Measurement2008Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In our aging society, the design and implementation of a high-performance autonomous distributed vision information system for autonomous physical services become ever more important. In line with this development, the proposed Intelligent Vision Agent System, IVAS, is able to automatically detect and identify a target for a specific task by surveying a human activities space. The main subject of this thesis is the optimal configuration of a sensor system meant to capture the target objects and their environment within certain required specifications. The thesis thus discusses how a discrete sensor causes a depth spatial quantisation uncertainty, which significantly contributes to the 3D depth reconstruction accuracy. For a sensor stereo pair, the quantisation uncertainty is represented by the intervals between the iso-disparity surfaces. A mathematical geometry model is then proposed to analyse the iso-disparity surfaces and optimise the sensors’ configurations according to the required constraints. The thesis also introduces the dithering algorithm, which significantly reduces the depth reconstruction uncertainty. This algorithm assures high depth reconstruction accuracy from a few images captured by low-resolution sensors. To ensure the visibility needed for surveillance, tracking, and 3D reconstruction, the thesis introduces constraints on the target space, the stereo pair characteristics, and the depth reconstruction accuracy. The target space, the space in which human activity takes place, is modelled as a tetrahedron, and a field of view in spherical coordinates is proposed. The minimum number of stereo pairs necessary to cover the entire target space and the arrangement of the stereo pairs’ movement are optimised through integer linear programming. In order to better understand human behaviour and perception, the proposed adaptive measurement method makes use of a fuzzily defined variable, FDV. 
The FDV approach enables an estimation of a quality index based on qualitative and quantitative factors. The suggested method uses a neural network as a tool that contains a learning function that allows the integration of the human factor into a quantitative quality index. The thesis consists of two parts, where Part I gives a brief overview of the applied theory and research methods used, and Part II contains the five papers included in the thesis.

  • 339. Chen, Jiandan
    An Intelligent Multi Sensor System for a Human Activities Space---Aspects of Quality Measurement and Sensor Arrangement2011Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In our society with its aging population, the design and implementation of a high-performance distributed multi-sensor and information system for autonomous physical services become more and more important. In line with this, this thesis proposes an Intelligent Multi-Sensor System, IMSS, that surveys a human activities space to detect and identify a target for a specific service. The subject of this thesis covers three main aspects related to the set-up of an IMSS: an improved depth measurement and reconstruction method and its related uncertainty, a surveillance and tracking algorithm and finally a way to validate and evaluate the proposed methods and algorithms. The thesis discusses how a model of the depth spatial quantisation uncertainty can be implemented to optimize the configuration of a sensor system to capture information about the target objects and their environment with the required specifications. The thesis introduces the dithering algorithm, which significantly reduces the depth reconstruction uncertainty. Furthermore, the dithering algorithm is implemented on a sensor-shifted stereo camera, thus simplifying depth reconstruction without compromising the common stereo field of view. To track multiple targets continuously, the Gaussian Mixture Probability Hypothesis Density, GM-PHD, algorithm is implemented with the help of vision and Radio Frequency Identification, RFID, technologies. The performance of the tracking algorithm in a vision system is evaluated by a circular motion test signal. The thesis introduces constraints on the target space, the stereo pair characteristics and the depth reconstruction accuracy to optimize the vision system and to control the performance of surveillance and 3D reconstruction through integer linear programming. The human being within the activity space is modelled as a tetrahedron, and a field of view in spherical coordinates is used in the control algorithms. 
In order to integrate human behaviour and perception into a technical system, the proposed adaptive measurement method makes use of the Fuzzily Defined Variable, FDV. The FDV approach enables an estimation of the quality index based on qualitative and quantitative factors for image quality evaluation using a neural network. The thesis consists of two parts, where Part I gives an overview of the applied theory and research methods used, and Part II comprises the eight papers included in the thesis.

  • 340. Chen, Jiandan
    The depth reconstruction accuracy in a stereo vision system2009Conference paper (Refereed)
    Abstract [en]

    A 3D space can be reconstructed from the images produced by a pair of digital vision sensors. However, due to the digital sensor, the reconstructed space is discretized and its quantisation levels are defined by the iso-disparity surfaces. Thus, the accuracy of the space depth reconstruction is related to the iso-disparity map. A validation of the reconstruction techniques requires a measurement with a high-accuracy reference. This paper introduces an easily implemented method based on a differential depth measurement. The modelling and analysis of the quantization uncertainty of the depth and differential depth measurements is presented in the paper. The model is verified through simulations, and further verified by a physical experiment.
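The depth quantisation being analysed follows from the parallel-stereo relation Z = fB/d: integer disparity steps map to depth levels whose spacing (the gaps between iso-disparity surfaces) grows with depth. Worked numbers with illustrative camera parameters, not values from the paper:

```python
# Depth levels and quantisation steps for a parallel stereo pair,
# Z = f*B/d. Parameters are illustrative: 800 px focal length, 12.5 cm
# baseline, so f*B = 100 px*m.
f_px = 800.0   # focal length in pixels
B = 0.125      # baseline in metres

def depth(disparity_px: float) -> float:
    return f_px * B / disparity_px

# Depth at successive integer disparities, and the one-pixel step size.
levels = [depth(d) for d in (50, 25, 10, 5)]
steps = [depth(d) - depth(d + 1) for d in (50, 25, 10, 5)]
print(levels)                          # -> [2.0, 4.0, 10.0, 20.0]
print([round(s, 3) for s in steps])    # -> [0.039, 0.154, 0.909, 3.333]
```

The widening steps are exactly why a high-accuracy reference (or a differential measurement, as proposed) is needed to validate reconstruction at larger depths.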

  • 341. Chen, Jiandan
    et al.
    Adebomi, Oyekanlu Emmanuel
    Olusayo, Onidare Samuel
    Kulesza, Wlodek
    The Evaluation of the Gaussian Mixture Probability Hypothesis Density Approach for Multi-target Tracking2010Conference paper (Refereed)
    Abstract [en]

    This paper describes the performance of the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter for multiple human tracking in an intelligent vision system. Human movement trajectories were observed with a camera and tracked by the GM-PHD filter. The filter's multi-target tracking ability was validated on two random motion trajectories. To evaluate the filter performance in relation to the target movement, the motion velocity and angular velocity are proposed as key evaluation factors. A circular motion model was implemented for a simplified analysis of the filter's tracking performance. The results indicate that the mean absolute error, defined as the difference between the filter prediction and the ground truth, is proportional to the motion speed and angular velocity of the target. The error is only slightly affected by the number of tracked targets.

  • 342.
    Chen, Jiandan
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Electrical Engineering.
    Khatibi, Siamak
    Blekinge Institute of Technology, School of Engineering, Department of Electrical Engineering.
    Kulesza, Wlodek
    Blekinge Institute of Technology, School of Engineering, Department of Electrical Engineering.
    Depth reconstruction uncertainty analysis and improvement: The dithering approach2010In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 29, no 9, p. 1377-1385Article in journal (Refereed)
    Abstract [en]

    The depth spatial quantization uncertainty is one of the factors which influence the depth reconstruction accuracy caused by a discrete sensor. This paper discusses the quantization uncertainty distribution, introduces a mathematical model of the uncertainty interval range, and analyzes the movements of the sensors in an Intelligent Vision Agent System. Such a system makes use of multiple sensors which control the deployment and autonomous servo of the system. This paper proposes a dithering algorithm which reduces the depth reconstruction uncertainty. The algorithm assures high accuracy from a few images taken by low-resolution sensors. The dither signal is estimated and then generated through an analysis of the iso-disparity planes. The signal allows for control of the camera movement. The proposed approach is validated and compared with a direct triangulation method. The simulation results are reported in terms of depth reconstruction error statistics. The physical experiment shows that the dithering method reduces the depth reconstruction error.
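The core dithering intuition can be shown with a plain 1-D quantiser standing in for the stereo depth quantisation: quantising a constant value always returns the same wrong level, but adding known sub-quantum offsets before quantisation and averaging the corrected results recovers sub-quantum accuracy. This is an illustration of the principle, not the paper's camera-movement algorithm.

```python
# Subtractive dithering on a 1-D quantiser: averaging dithered measurements
# recovers the true value to well below one quantisation step.
import numpy as np

true_value = 3.37
q = 1.0   # quantisation step

single = np.round(true_value / q) * q      # one plain measurement: 3.0
dithers = np.linspace(-0.5, 0.5, 64, endpoint=False)   # known offsets
dithered = np.round((true_value + dithers) / q) * q - dithers
estimate = dithered.mean()

print(single, round(estimate, 2))  # -> 3.0 3.37
```

In the paper the "offsets" are small controlled camera movements, and the quantiser is the iso-disparity depth grid rather than a scalar rounding step.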

  • 343. Chen, Jiandan
    et al.
    Khatibi, Siamak
    Kulesza, Wlodek
    Planning of a Multi Stereo Visual Sensor System: Depth Accuracy and Variable Baseline Approach. 2007. Conference paper (Refereed)
  • 344. Chen, Jiandan
    et al.
    Khatibi, Siamak
    Kulesza, Wlodek
    Planning of a Multi Stereo Visual Sensor System for a Human Activities Space. 2007. Conference paper (Refereed)
  • 345. Chen, Jiandan
    et al.
    Khatibi, Siamak
    Wirandi, Jenny
    Kulesza, Wlodek
    Planning of a Multi Sensor System for Human Activities Space – Aspects of Iso-disparity Surface. 2007. Conference paper (Refereed)
    Abstract [en]

    The Intelligent Vision Agent System, IVAS, is a system for automatic target detection, identification and information processing for use in human activities surveillance. The system consists of multiple sensors, with control of their deployment and autonomous servo. Finding the optimal configuration of these sensors to capture the target objects and their environment to a required specification is a crucial problem. With a stereo pair of sensors, the 3D space can be discretized by iso-disparity surfaces, and the depth reconstruction accuracy of the space is closely related to the iso-disparity curve positions. This paper presents a method for planning the positions of multiple stereo sensors in indoor environments. The proposed method is a mathematical geometry model used to analyze the iso-disparity surface. We show that the distribution of the iso-disparity surfaces and the depth reconstruction accuracy are controllable by the parameters of such a model. The model can be used to dynamically adjust the positions, poses and baseline lengths of multiple stereo pairs of cameras in 3D space in order to obtain sufficient visibility and accuracy for surveillance tracking and 3D reconstruction. We implement the model and present uncertainty maps of depth reconstruction calculated while varying the baseline length, focal length, stereo convergence angle and sensor pixel length. The results of these experiments show how the depth reconstruction uncertainty depends on the stereo pair's baseline length, zooming and sensor physical properties.
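    For the parallel-camera case, the iso-disparity surfaces the abstract analyzes reduce to planes at Z = f * b / d, and the spacing between adjacent planes is what sets the reconstruction accuracy. The sketch below (a simplified illustration with invented parameter values, not the paper's model, which also covers convergent geometry) shows how the baseline parameter controls that spacing at a given working depth.

```python
def plane_spacing(f_px, baseline_m, d):
    """Exact gap between the iso-disparity planes for integer disparities
    d and d+1 (parallel stereo): f*b/d - f*b/(d+1) = f*b / (d*(d+1))."""
    return f_px * baseline_m / (d * (d + 1))

def spacing_at_depth(f_px, baseline_m, Z):
    """Gap between adjacent iso-disparity planes around depth Z,
    approximately Z^2 / (f * b): a longer baseline packs the planes
    more densely and so reduces the depth quantization uncertainty."""
    d = max(1, round(f_px * baseline_m / Z))
    return plane_spacing(f_px, baseline_m, d)

f_px = 800.0  # hypothetical focal length in pixels
for b in (0.1, 0.2):  # baselines in metres
    print(f"b={b} m: plane spacing at Z=4 m is {spacing_at_depth(f_px, b, 4.0):.4f} m")
```

    Doubling the baseline roughly halves the plane spacing at a fixed depth, which is the kind of parameter dependence the paper's uncertainty maps visualize.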

  • 346. Chen, Jiandan
    et al.
    Mustafa, Wail
    Siddig, Abu Bakr
    Kulesza, Wlodek
    APPLYING DITHERING TO IMPROVE DEPTH MEASUREMENT USING A SENSOR-SHIFTED STEREO CAMERA. 2010. In: Metrology and Measurement Systems, ISSN 0860-8229, Vol. 17, no 3. Article in journal (Refereed)
    Abstract [en]

    The sensor-shifted stereo camera provides a mechanism for obtaining 3D information over a wide field of view. This novel kind of stereo requires a simpler matching process than convergent stereo. The uncertainty of the depth estimate of a target point in 3D space is defined by the spatial quantization caused by the digital images. Dithering reduces the depth reconstruction uncertainty through a controlled adjustment of the stereo parameters that shifts the spatial quantization levels. In this paper, a mathematical model that relates the stereo setup parameters to the iso-disparities is developed and used for depth estimation. The improvement in depth measurement accuracy obtained by applying the dithering method to this kind of stereo is verified by simulation and by a physical experiment. For the verification, the uncertainty of the depth measurement using dithering is compared with the uncertainty produced by the direct triangulation method, and a 49% improvement in the depth reconstruction uncertainty is demonstrated.
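    The dithering idea can be reproduced in a toy simulation (a sketch under simplifying assumptions, not the authors' setup): shifting the stereo parameters by half a quantization step interleaves the disparity quantization levels, and averaging the two measurements roughly halves the mean absolute disparity error, which carries over to depth. All numeric values are invented.

```python
import math
import random

def triangulate(f_px, b_m, d_px):
    """Depth from disparity for a rectified pair: Z = f * b / d."""
    return f_px * b_m / d_px

def mean_abs_depth_error(f_px, b_m, depths, dither=False):
    """MAE of depth estimates with integer-pixel disparity quantization,
    optionally averaging a half-pixel-dithered second measurement."""
    q = lambda x: math.floor(x + 0.5)  # round-half-up pixel quantizer
    errs = []
    for z_true in depths:
        d_true = f_px * b_m / z_true
        z1 = triangulate(f_px, b_m, q(d_true))
        if dither:
            # the dither shifts the quantization levels by half a pixel;
            # averaging the two estimates interleaves the levels
            z2 = triangulate(f_px, b_m, q(d_true + 0.5) - 0.5)
            errs.append(abs(0.5 * (z1 + z2) - z_true))
        else:
            errs.append(abs(z1 - z_true))
    return sum(errs) / len(errs)

random.seed(0)
f_px, b_m = 800.0, 0.12
depths = [random.uniform(2.0, 5.0) for _ in range(2000)]
e_plain = mean_abs_depth_error(f_px, b_m, depths)
e_dith = mean_abs_depth_error(f_px, b_m, depths, dither=True)
print(f"direct: {e_plain:.4f} m, dithered: {e_dith:.4f} m")
```

    In this idealized model the disparity MAE drops from 0.25 px to 0.125 px, i.e. close to the ~49% depth-uncertainty improvement the paper reports.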

  • 347. Chen, Jiandan
    et al.
    Olayanju, Iyeyinka Damilola
    Ojelabi, Olabode Paul
    Kulesza, Wlodek
    RFID Multi-Target Tracking Using the Probability Hypothesis Density Algorithm for a Health Care Application. 2011. Conference paper (Refereed)
    Abstract [en]

    The intelligent multi-sensor system is a system for target detection, identification and information processing for human activities surveillance and ambient assisted living. This paper describes RFID multi-target tracking using the Gaussian Mixture Probability Hypothesis Density (GM-PHD) algorithm. The multi-target tracking ability of the proposed solution is demonstrated in both simulated and real environments. A performance comparison of the Levenberg-Marquardt algorithm with and without the GM-PHD filter shows that the GM-PHD algorithm significantly improves the accuracy of tracking and target position estimation, as demonstrated by a simulation and by a physical experiment.

  • 348.
    Chen, Rongrong
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Zhu, Min
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Birth Density Modeling in Multi-target Tracking Using the Gaussian Mixture PHD Filter. 2008. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The probability hypothesis density (PHD) recursion is a recently established method for multi-target tracking that estimates both the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise and false alarms. The approach models the respective collections of targets and measurements as random finite sets and propagates the posterior intensity, a first-order statistic of the random finite set of targets, in time. The Gaussian Mixture Probability Hypothesis Density filter (GM-PHD filter) provides a closed-form solution to the PHD recursion, in which the posterior intensity function is represented by a sum of weighted Gaussian components whose weights, means and covariances can be propagated analytically in time. Besides implementing the GM-PHD filter algorithm, the purpose of this thesis is to choose the probability density function representing target births in the GM-PHD recursion, and to generate true target trajectories, so as to obtain the best tracking performance; this is a challenging task. One reference for judging the performance of the algorithm, given in this thesis, is the target detection time.
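    In the GM-PHD setting the birth density this thesis studies is itself a weighted Gaussian mixture, gamma(x) = sum_i w_i * N(x; m_i, P_i). The sketch below (a minimal 2-D isotropic version with invented component values, not the thesis model) evaluates such a birth intensity at a point.

```python
import math

def gaussian_pdf_2d(x, mean, var):
    """Isotropic 2-D Gaussian density with variance `var` on each axis."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * var)) / (2 * math.pi * var)

def birth_intensity(x, components):
    """Birth intensity as a weighted Gaussian mixture:
    gamma(x) = sum_i w_i * N(x; m_i, P_i)."""
    return sum(w * gaussian_pdf_2d(x, m, v) for (w, m, v) in components)

# hypothetical birth model: targets tend to appear near two entrances
births = [(0.1, (0.0, 0.0), 1.0), (0.1, (10.0, 10.0), 1.0)]
print(birth_intensity((0.0, 0.0), births))
```

    The integral of gamma over the state space gives the expected number of new targets per scan (here 0.2), so placing the components well, the thesis's topic, directly shapes where and how quickly new tracks are detected.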

  • 349. Chen, Yuanfang
    et al.
    Li, Mingchu
    Shu, Lei
    Wang, L.
    Duong, Quang Trung
    Blekinge Institute of Technology, School of Computing.
    The scheme of mitigating the asymmetric links problem in wireless sensor networks. 2010. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the radio irregularity (RI) phenomenon and its impact on communication performance in wireless sensor networks (WSNs). Based on theoretical analysis, we find that the RI phenomenon induces the asymmetric links problem. Accordingly, we propose a novel HCT (Hop-count Correction Tree) scheme to handle this problem. HCT uses a graph-theoretical method to obtain a search tree and correct the hop-count error that appears between adjacent nodes. Practical results obtained from testbed experiments demonstrate that this solution can greatly improve localization accuracy in the presence of RI.
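    The abstract does not detail HCT's tree construction, but the underlying hop-count problem can be sketched generically (an illustration of the asymmetric-links effect, not the HCT algorithm itself): a one-way link created by radio irregularity yields a hop count that cannot be realized by bidirectional communication, and restricting the search tree to symmetric links corrects it.

```python
from collections import deque

def hop_counts(adj, source):
    """BFS hop counts from an anchor node over a directed link graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def symmetric_subgraph(adj):
    """Keep only bidirectional links, discarding the asymmetric ones
    that radio irregularity tends to create."""
    return {u: [v for v in nbrs if u in adj.get(v, ())]
            for u, nbrs in adj.items()}

# A -> C exists but C -> A does not: an asymmetric shortcut
adj = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['B']}
print(hop_counts(adj, 'A'))                      # C appears 1 hop away
print(hop_counts(symmetric_subgraph(adj), 'A'))  # corrected: C is 2 hops away
```

    Since range-free localization schemes turn hop counts into distance estimates, correcting such off-by-one hop errors between adjacent nodes is exactly what improves the localization accuracy the paper measures.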

  • 350.
    Chengzhong, Jia
    Blekinge Institute of Technology, School of Engineering, Department of Electrical Engineering.
    Analysis of solar cells in different situations. 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis