  • 201.
    Chiwenda, Madock
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Requirements Engineering Skills Development: A Survey (2004). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    Software projects are among the most failure-prone projects in engineering, and problems with software requirements have been identified as one of the main reasons for software project failures. Many techniques and methodologies have been developed for practitioners to use when working with software requirements, which makes it impossible to master them all during formal education. In addition, many practitioners come from other disciplines; they are thus required to learn in practice. Previous studies have shown informal learning (i.e. learning not planned or run by institutions or organizations) to be more effective and more widely used in workplace learning situations. This study investigates how requirements engineering skills are, and can be, learned in the workplace, especially informally. By comparing the results of a literature study and an empirical study, recommendations are given on how one can recognise, utilise, and encourage informal learning activities to develop requirements engineering skills. The study does not rule out the need for formal education and training in requirements engineering, but identifies it as an important prerequisite and/or complement. It provides insight into how informal learning practices are utilised by practitioners who are experienced in requirements engineering and how they could try to recognise and/or utilise other learning opportunities presented in previous literature. It furthermore offers general recommendations on how to utilise informal learning for developing requirements engineering skills and skills in related disciplines.

  • 202.
    Chowdhury, Moyamer
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Alam, Aminul
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Study Comparison of WCDMA and OFDM (2007). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    Wideband Code Division Multiple Access (WCDMA) is one of the main technologies for the implementation of third-generation (3G) cellular systems. It is based on a radio access technique proposed by the ETSI Alpha group, and the specifications were finalised in 1999. WCDMA is also known as UMTS and has been adopted as a standard by the ITU under the name “IMT-2000 direct spread”. The implementation of WCDMA will be a technical challenge because of its complexity and versatility. The complexity of WCDMA systems can be viewed from different angles: the complexity of each single algorithm, the complexity of the overall system and the computational complexity of a receiver. In the WCDMA interface, different users can simultaneously transmit at different data rates, and data rates can even vary in time. WCDMA increases data transmission rates over GSM systems by using the CDMA air interface instead of TDMA. WCDMA is the dominating 3G technology, providing higher capacity for voice and data and higher data rates. The gradual evolution from today's systems is driven by the demand for capacity, which is required by new and faster data-based mobile services. WCDMA enables better use of the available spectrum and more cost-efficient network solutions. The operator can gradually evolve from GSM to WCDMA, protecting investments by re-using the GSM core network and 2G/2.5G services. Orthogonal Frequency Division Multiplexing (OFDM), a technique for increasing the amount of information that can be carried over a wireless network, uses FDM modulation to transmit large amounts of digital data over a radio wave. OFDM works by splitting the radio signal into multiple smaller sub-signals that are then transmitted simultaneously at different frequencies to the receiver. OFDM reduces the amount of crosstalk in signal transmissions. 802.11a WLAN, 802.16 and WiMAX technologies use OFDM, as does ETSI's HiperLAN/2 standard. In addition, Japan's Mobile Multimedia Access Communications (MMAC) WLAN broadband mobile technology uses OFDM. In frequency-division multiplexing, multiple signals, or carriers, are sent simultaneously over different frequencies between two points. However, FDM has an inherent problem: wireless signals can travel multiple paths from transmitter to receiver (by bouncing off buildings, mountains and even passing airplanes), and receivers can have trouble sorting all the resulting data out. Orthogonal FDM deals with this multipath problem by splitting carriers into smaller subcarriers and then broadcasting those simultaneously. This reduces multipath distortion and RF interference, allowing for greater throughput. In this paper we discuss these two third-generation radio transmission methods, WCDMA and OFDM, from various aspects, and investigate which of the two radio transmission techniques is the better choice.
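    The subcarrier splitting described above is straightforward to make concrete. Below is a minimal NumPy sketch of an OFDM link (an illustration of the general technique, not this thesis's own simulation): QPSK symbols are placed on 64 subcarriers with an IFFT, a cyclic prefix absorbs a short multipath channel, and a one-tap equalizer per subcarrier recovers the symbols. The channel taps and all parameter values are invented for the example.

```python
import numpy as np

N_SC, CP = 64, 16                       # subcarrier count and cyclic-prefix length

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2 * N_SC)
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(qpsk)                  # one QPSK symbol per subcarrier
tx_cp = np.concatenate([tx[-CP:], tx])  # cyclic prefix absorbs channel memory

h = np.array([1.0, 0.4, 0.2])           # toy multipath channel (3 taps < CP)
rx = np.convolve(tx_cp, h)[:len(tx_cp)]

rx_sym = np.fft.fft(rx[CP:CP + N_SC])   # back to the frequency domain
H = np.fft.fft(h, N_SC)
eq = rx_sym / H                         # one-tap equalizer per subcarrier
print(np.allclose(eq, qpsk))            # True: multipath handled per bin
```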

  • 204.
    Chunduri, Krishna Chaitanya
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Gutti, Chalapathi
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Implementation of Adaptive Filter Structures on a Fixed Point Signal Processor for Acoustical Noise Reduction (2005). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    The problem of controlling the noise level in the environment has been the focus of a tremendous amount of research over the years. Active Noise Cancellation (ANC) is one such approach that has been proposed for the reduction of steady-state noise. ANC refers to an electromechanical or electroacoustic technique of canceling an acoustic disturbance to yield a quieter environment. The basic principle of ANC is to introduce a canceling “anti-noise” signal that has the same amplitude but exactly the opposite phase, thus resulting in an attenuated residual noise signal. Wideband ANC systems often involve adaptive filters with hundreds of taps; using subband processing can considerably reduce the length of the adaptive filter. This thesis presents an implementation of the Filtered-X Least Mean Squares (FXLMS) algorithm on a fixed-point digital signal processor, the ADuC7026 microcontroller from Analog Devices. Results show that the fixed-point implementation matches the performance of a floating-point implementation.
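    For readers unfamiliar with FXLMS: the reference noise is filtered through an estimate of the secondary path (loudspeaker to error microphone) before driving the LMS update. A minimal floating-point sketch of that loop follows; the paths, filter length and step size are invented for illustration, and this is not the authors' fixed-point DSP code.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_iter, mu = 32, 20000, 1e-3
p = np.array([0.9, 0.4, 0.2])        # primary path: noise source -> ear (assumed)
s = np.array([0.5, 0.3, 0.1])        # secondary path: loudspeaker -> ear (assumed)
s_hat = s.copy()                     # assume a perfect secondary-path model

x = rng.standard_normal(n_iter)                 # reference noise signal
d = np.convolve(x, p, 'full')[:n_iter]          # noise arriving at the ear
fx = np.convolve(x, s_hat, 'full')[:n_iter]     # filtered-x reference

w = np.zeros(L)
x_buf, y_buf, fx_buf = np.zeros(L), np.zeros(len(s)), np.zeros(L)
err = np.zeros(n_iter)
for n in range(n_iter):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y_buf = np.roll(y_buf, 1); y_buf[0] = w @ x_buf   # anti-noise sample
    e = d[n] - s @ y_buf                              # residual at the error mic
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx[n]
    w += mu * e * fx_buf                              # FXLMS weight update
    err[n] = e

print(f"residual power: first 1000 = {np.mean(err[:1000]**2):.3f}, "
      f"last 1000 = {np.mean(err[-1000:]**2):.5f}")
```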

  • 205.
    Cichocki, Radoslaw
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Classification of objects in images based on various object representations (2006). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    Object recognition is a heavily researched domain that employs methods derived from mathematics, physics and biology. This thesis combines approaches to object classification based on two features: color and shape. Color is represented by color histograms and shape by skeletal graphs. Four hybrids that combine these approaches in different manners are proposed, and the hybrids are then tested to find out which of them gives the best results.
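    As a taste of the color side of such a hybrid, the sketch below compares two images by the intersection of their normalized RGB histograms (the skeletal-graph shape matching is omitted). The bin count and test images are arbitrary.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized 3-D color histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """1.0 for identical color distributions, 0.0 for disjoint ones."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
b = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(histogram_intersection(color_histogram(a), color_histogram(a)))  # 1.0
print(histogram_intersection(color_histogram(a), color_histogram(b)))  # < 1.0
```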

  • 206.
    Claesson, Jonas
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    CMP Developer (2004). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    Since it was first published in 1998, the Enterprise JavaBeans (EJB) technology has become a popular choice for the development of middleware systems. Despite its popularity, the technology is considered quite complex and rather difficult to master. The main contributor to its complexity is the part of EJB that deals with persistence. The most common and most popular way of implementing EJB persistence is called Container Managed Persistence (CMP). Today, developers consider the use of CASE tools an obvious part of the EJB development process. Despite this, available CASE tools have very limited support for the complete CMP development process. In this thesis we have isolated steps within the CMP development process that could benefit from CASE tool support. We have then identified possible solutions and remedies to address these steps. These solutions were then implemented in a full-fledged CASE tool, called CMP Developer.

  • 207.
    Claesson, Jonas
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    CMP Developer - A CASE Tool Supporting the Complete CMP Development Process (2004). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    Since it was first published in 1998, the Enterprise JavaBeans (EJB) technology has become a popular choice for the development of middleware systems. Despite its popularity, the technology is considered quite complex and rather difficult to master. The main contributor to its complexity is the part of EJB that deals with persistence. The most common and most popular way of implementing EJB persistence is called Container Managed Persistence (CMP). Today, developers consider the use of CASE tools an obvious part of the EJB development process. Despite this, available CASE tools have very limited support for the complete CMP development process. In this thesis we have isolated steps within the CMP development process that could benefit from CASE tool support. We have then identified possible solutions and remedies to address these steps. These solutions were then implemented in a full-fledged CASE tool, called CMP Developer.

  • 208.
    Constantinescu, Doru
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Measurements and Models of One-Way Transit Time in IP Routers (2005). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    The main goal of this thesis is an understanding of the delay process in the best-effort Internet for both non-congested and congested networks. A novel measurement system is reported for delay measurements in IP routers, which follows the specifications of IETF RFC 2679. The system employs both passive measurements and active probing and offers the possibility to measure and analyze different delay components of a router, e.g., packet processing delay, packet transmission time and queueing delay at the output link. Dedicated application-layer software is used to generate UDP traffic with TCP-like characteristics. Pareto traffic models are used to generate self-similar traffic on the link. The reported results are in the form of several important statistics regarding processing and queueing delays of a router, router delay for a single data flow, router delay for multiple data flows, and end-to-end delay for a chain of routers. They confirm earlier reports that the delay in IP routers is generally influenced by traffic characteristics, link conditions and, to some extent, details in hardware implementation and different IOS releases. The delay in IP routers may also occasionally show extreme values, which are due to improper functioning of the routers. Furthermore, new results have been obtained that indicate that the delay in IP routers shows heavy-tailed characteristics, which can be well modeled with the help of several distributions, either in the form of a single distribution or as a mixture of distributions. There are several components contributing to the one-way transit time (OWTT) in routers, i.e., processing delay, queueing delay and service time. The obtained results have shown that, e.g., the processing delay in a router can be well modeled with the Normal distribution, and the queueing delay is well modeled with a mixture of a Normal distribution for the body probability mass and a Weibull distribution for the tail probability mass. Furthermore, OWTT has several component delays, and it has been observed that the component delay distribution that is most dominant and heavy-tailed has a decisive influence on OWTT.
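    The body/tail mixture reported above is easy to sample from. The sketch below draws queueing delays from a Normal body with probability p and a Weibull tail otherwise; all parameter values are invented for illustration, not fitted to the thesis's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

def queueing_delay_samples(n, p_body=0.9, mu=120.0, sigma=15.0,
                           shape=0.6, scale=400.0):
    """Delays in microseconds: Normal(mu, sigma) body, Weibull tail."""
    from_body = rng.random(n) < p_body
    body = rng.normal(mu, sigma, n)
    tail = scale * rng.weibull(shape, n)   # shape < 1 gives a heavy tail
    return np.where(from_body, body, tail)

d = queueing_delay_samples(100_000)
print(f"mean={d.mean():.1f} us, median={np.median(d):.1f} us, "
      f"99.9th percentile={np.percentile(d, 99.9):.0f} us")
```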

  • 209. Constantinescu, Doru
    Overlay Multicast Networks: Elements, Architectures and Performance (2007). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Today, the telecommunication industry is undergoing two important developments with implications for future architectural solutions. These are the irreversible move towards Internet Protocol (IP)-based networking and the deployment of broadband access. Taken together, these developments offer the opportunity for more advanced and more bandwidth-demanding multimedia applications and services, e.g., IP television (IPTV), Voice over IP (VoIP) and online gaming. A plethora of Quality of Service (QoS) requirements and facilities are associated with these applications, e.g., multicast facilities, high bandwidth and low delay/jitter. Moreover, the architectural solution must be a unified one, independent of the access network and content management. An interesting solution to these challenges is given by overlay multicast networks. The goal of these networks is to create and maintain efficient multicast topologies among the multicast participants, as well as to minimize the performance penalty involved with application-layer multicasting. Since they operate at the application layer, they suffer from two main drawbacks: higher delay and less efficient bandwidth utilization. It is therefore important to assess the performance of overlay multicast networks in “real-world”-like conditions. For this purpose, we first performed an in-depth measurement and modeling study of the packet delay at the network layer. The reported results are in the form of several important statistics regarding processing and queueing delays of a router. New results have been obtained that indicate that the delay in IP routers shows heavy-tailed characteristics, which can be well modeled with the help of several distributions, in the form of a single distribution or as a mixture of distributions. There are several components contributing to the delay in routers, i.e., processing delay, queueing delay and service time. It was observed that the component delay distribution that is most heavy-tailed has a decisive influence on delay. Furthermore, we selected three representative categories of overlay multicast networks for study, namely Application Level Multicast Infrastructure (ALMI), Narada, and NICE (a recursive acronym for “NICE is the Internet Cooperative Environment”). The performance of these overlay multicast protocols was evaluated through a comprehensive simulation study with reference to a detailed set of performance metrics that captured application- and network-level performance. Particular interest was given to the issues of scalability, protocol dynamics and delay optimization as part of the larger problem of performance-aware optimization of overlay networks. The simulations were configured to emulate “real-world”-like characteristics by implementing a heavy-tailed delay at the network level and churn behavior of the overlay nodes. A detailed analysis of every protocol is provided with regard to its performance. Based on our study, significant conclusions can be drawn regarding the scalability of the protocols with reference to overlay multicast group management, resource usage and robustness to churn. These results contribute to a deeper understanding of the requirements for such protocols targeted at, e.g., media streaming.

  • 210.
    Constantinescu, Doru
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Carlsson, Patrik
    Popescu, Adrian
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    One-Way Transit Time Measurements (2004). Report (Refereed).
    Abstract [en]

    This report is a contribution towards a better understanding of traffic measurements associated with end-to-end (e2e) delays occurring in best-effort networks. We describe problems and solutions associated with OWTT delay measurements and give examples of such measurements. A dedicated measurement system is reported for delay measurements in IP routers, which follows the specifications of IETF RFC 2679. The system uses both passive measurements and active probing. Dedicated application-layer software is used to generate traffic. Pareto traffic models are used to generate self-similar traffic on the link. Both packet inter-arrival times and packet sizes match real traffic models. A passive measurement system is used for data collection, based on several so-called Measurement Points, each of them equipped with DAG monitoring cards. Hashing is used for the identification and matching of packets. The combination of passive and active measurements, together with the DAG monitoring system, gives us a unique possibility to perform precise traffic measurements as well as the flexibility needed to compensate for the lack of analytic solutions. The real value of our study lies in the hop-by-hop instrumentation of the devices involved in the transfer of IP packets. The mixture of passive and active traffic measurements allows us to study changes in traffic patterns relative to specific reference points and to observe the different factors contributing to the observed changes. This approach lets us better understand the diverse components that may impact packet delay, as well as measure queueing delays in operational routers.
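    The traffic-generation idea is illustrated below: Pareto-distributed inter-arrival times (infinite variance for 1 < alpha < 2) produce burstiness that persists across aggregation scales, the signature of self-similar traffic. The parameters are illustrative, not the report's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def pareto_interarrivals(n, alpha=1.5, x_min=1e-4):
    """Classical Pareto(alpha) inter-arrival times in seconds, minimum gap x_min."""
    # NumPy's pareto() draws from the Lomax distribution; adding 1 and
    # scaling by x_min gives the classical Pareto distribution.
    return x_min * (1.0 + rng.pareto(alpha, n))

gaps = pareto_interarrivals(1_000_000)
arrivals = np.cumsum(gaps)
# Count packets per 10 ms bin; burstiness persists across aggregation scales.
counts, _ = np.histogram(arrivals, bins=np.arange(0, arrivals[-1], 0.01))
print(f"mean={counts.mean():.1f} pkts/bin, peak={counts.max()} pkts/bin")
```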

  • 211. Constantinescu, Doru
    et al.
    Carlsson, Patrik
    Popescu, Adrian
    Nilsson, Arne A.
    Measurement of One-Way Internet Packet Delay (2004). Conference paper (Refereed).
    Abstract [en]

    The paper reports on a dedicated measurement system for delay measurements in IP routers, which follows the specifications of IETF RFC 2679. The system uses both passive and active measurements. Dedicated application-layer software is used to generate traffic. Pareto traffic models are used to generate self-similar traffic on the link. Both packet inter-arrival times and packet sizes match real traffic models. A passive measurement system is used for data collection, based on several so-called Measurement Points, each of them equipped with DAG monitoring cards. Hashing is used for the identification and matching of packets. The combination of passive measurements and active probing, together with the DAG monitoring system, gives us a unique possibility to perform precise traffic measurements as well as the flexibility needed to compensate for the lack of analytic solutions.

  • 212. Constantinescu, Doru
    et al.
    Erman, David
    Ilie, Dragos
    Popescu, Adrian
    Congestion and Error Control in Overlay Networks (2007). Report (Other academic).
    Abstract [en]

    In recent years, the Internet has seen unprecedented growth, which, in turn, has led to an increased demand for real-time and multimedia applications that have high Quality-of-Service (QoS) demands. This evolution has created difficult challenges for Internet Service Providers (ISPs): providing good QoS for their clients, as well as differentiated service subscriptions for those clients who are willing to pay more for value-added services. Furthermore, several types of overlay networks have recently seen tremendous development in the Internet. Overlay networks can be viewed as networks operating at an inter-domain level. The overlay hosts learn of each other and form loosely coupled peer relationships. The major advantage of overlay networks is their ability to establish subsidiary topologies on top of the underlying network infrastructure, acting as brokers between an application and the required network connectivity. Moreover, new services that cannot be implemented (or are not yet supported) in the existing network infrastructure are much easier to deploy in overlay networks. In this context, multicast overlay services have become a feasible solution for applications and services that need (or benefit from) multicast-based functionality. Nevertheless, multicast overlay networks need to address several issues related to efficient and scalable congestion control schemes to attain widespread deployment and acceptance from both end-users and various service providers. This report presents an overview and taxonomy of currently proposed solutions that provide congestion control in overlay multicast environments. The report describes several protocols and algorithms that are able to offer a reliable communication paradigm in unicast, multicast and multicast overlay environments. Further, several error control techniques and mechanisms operating in these environments are also presented. In addition, this report forms the basis for further research work on reliable and QoS-aware multicast overlay networks. The research work is part of a bigger research project, "Routing in Overlay Networks (ROVER)". The ROVER project was granted in 2006 by the EuroNGI Network of Excellence (NoE) to the Dept. of Telecommunication Systems at Blekinge Institute of Technology (BTH).

  • 213. Constantinescu, Doru
    et al.
    Popescu, Adrian
    On the Performance of Overlay Multicast Networks (2008). Conference paper (Refereed).
    Abstract [en]

    The paper reports on a performance study of several Application Layer Multicast (ALM) protocols. Three categories of overlay multicast networks are investigated, namely Application Level Multicast Infrastructure (ALMI), Narada, and NICE (a recursive acronym for "NICE is the Internet Cooperative Environment"). The performance of the overlay multicast protocols is evaluated with reference to a set of performance metrics that capture both application- and network-level performance. The study focuses on the control overhead induced by the protocols under study, which further relates to the scalability of the protocols with an increasing number of multicast participants. In order to get a better assessment of the operation of these protocols in "real-life"-like conditions, we implemented in our simulations a heavy-tailed delay at the network level and churn behavior of the overlay nodes. Our performance study contributes to a deeper understanding and better assessment of the requirements for such protocols targeted at, e.g., media streaming.

  • 214. Cornelius, Per
    Subband Beamforming for Speech Enhancement within a Motorcycle helmet (2005). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    The increased mobility in society has led to a need for convenient mobile communication in many different types of environments. Environments such as a motorcycle helmet, engine rooms and most industrial sites share a common challenge in that they often exhibit significant acoustic background noise. Noise reduces speech intelligibility and consequently limits the potential of mobile speech communication. Existing single-channel solutions for speech enhancement may perform adequately when the level of noise is moderate. When the noise level becomes significant, additional use of the spatial domain is a potential solution for successful speech enhancement. This is achieved by including several microphones in an array placed in the vicinity of the person speaking. A beamforming algorithm is then used to combine the microphone signals such that the desired speech signal is enhanced. The interest in using microphone arrays for broadband speech and audio processing has increased in recent years. A considerable number of interesting applications using beamforming techniques have been published for hands-free voice communication in cars, hearing aids, teleconferencing and multimedia applications. Most of the proposed solutions deal exclusively with environments where the noise is moderate. This thesis is a study of noise reduction in a helmet communication system on a moving motorcycle. The environment is analyzed under different driving conditions, and a speech enhancement solution is proposed that operates successfully in all driving conditions. The motorcycle environment can exhibit extremely high noise levels when driving at high speed, while it can produce low noise levels at moderate speeds. This fact implies that different solutions are required. It is demonstrated in the thesis that a cascaded combination of a calibrated subband beamforming technique and a single-channel solution provides good results at all noise levels. The proposed solution operates in the frequency domain, where all microphone signals are decomposed by a subband filter bank prior to the speech enhancement processing. Since the subband transformation is an important component of the overall system performance, a method for filter bank design is also provided in the thesis. The design is such that the aliasing effects in the transformations are minimized while a small delay of the total system is maintained.

  • 215. Cornelius, Per
    et al.
    Grbic, Nedelko
    Claesson, Ingvar
    Microphone array system for speech enhancement in a motorcycle helmet (2005). Report (Other academic).
    Abstract [en]

    In this report a real case study of the sound environment within a helmet while driving a motorcycle is investigated. A solution for speech enhancement is proposed for the purpose of mobile speech communication. A microphone array, mounted on the face shield in front of the user's mouth, is used to capture the spatio-temporal properties of the acoustic wave field inside the helmet. The power of the spatially spread noise within the helmet is small when standing still, while it may heavily exceed the power of the speech when driving at high speeds. This results in dramatically reduced speech intelligibility in the communication channel. The highly dynamic noise level imposes a challenge for existing speech enhancement solutions. We propose a subband adaptive system for speech enhancement which consists of a soft-constrained beamformer in cascade with a signal-to-noise-ratio-dependent single-microphone solution. The beamformer makes use of a calibration signal gathered in the actual environment from the speaker's position. This calibration procedure efficiently captures the acoustical properties of the environment. An evaluation of the beamformer and the single-microphone algorithm, both as individual parts and as a cascaded structure, together with the optimal subband Wiener solution, is presented. It is shown that a cascaded combination of the calibrated subband beamforming technique together with the single-channel solution outperforms either one by itself, and provides near-optimal results at all noise levels.

  • 216. Cornelius, Per
    et al.
    Yermeche, Zohra
    Grbic, Nedelko
    Claesson, Ingvar
    A Spatially Constrained Subband Beamforming Algorithm for speech enhancement (2004). Conference paper (Refereed).
    Abstract [en]

    This paper discusses speech enhancement in an enclosed environment, such as communication in a motorcycle helmet. A new constrained subband adaptive beamformer is proposed, which uses the concept of an earlier proposed calibrated beamformer mainly developed for a hands-free in-car environment. The highly non-stationary nature of the disturbing sound field encountered in a motorcycle helmet, and the fact that the source is situated in the extreme nearfield of the array, cause the beamformer to produce an unwanted fluctuation in the output level. The spatially constrained beamformer proposed in this paper ensures that the output maintains a constant gain, as long as the corresponding source originates from the desired location.

  • 217. Cresp, Gregory
    et al.
    Dam, Hai Huyen
    Zepernick, Hans-Jürgen
    Design of Modified UCHT Sequences (2006). Conference paper (Refereed).
    Abstract [en]

    In this paper, we consider the design of a class of unified complex Hadamard transform (UCHT) sequences. An efficient modification is applied to these sequences to better suit applications in asynchronous code-division multiple-access (CDMA) systems. These modified UCHT sequences preserve the orthogonality of the original UCHT sequences and offer increased design options due to an increased number of parameters. The design of UCHT, modified UCHT, and Oppermann sequences is then formulated with reference to optimizing the maximum nontrivial aperiodic correlation value. These optimization problems can then be solved efficiently using a genetic algorithm, with the maximum nontrivial aperiodic correlation value serving in the definition of a fitness function. Numerical examples illustrate the benefits of modified UCHT sequences over the original UCHT sequences.

  • 218. Cresp, Gregory
    et al.
    Dam, Hai Huyen
    Zepernick, Hans-Jürgen
    Design of Sequence Family Subsets Using a Branch and Bound Technique (2009). In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 55, no. 8, p. 3847-3857. Article in journal (Refereed).
    Abstract [en]

    The number of spreading sequences required for Direct Sequence Code Division Multiple Access (DS-CDMA) systems depends on the number of simultaneous users in the system. Often a sequence family provides more sequences than are required, and in many cases the selection of the employed sequences is a computationally intensive task. This selection is a key consideration, as the properties of the assigned sequences affect the error performance of the system. In this paper, a branch and bound algorithm is presented to perform this selection based on two different cost functions. Numerical results are presented to demonstrate the improved performance of this algorithm over previous work.
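    The sketch below shows the branch-and-bound idea on a toy instance (it is not the paper's algorithm or its cost functions): choose k of n sequences minimizing the maximum pairwise aperiodic cross-correlation magnitude, pruning any partial subset that is already no better than the incumbent.

```python
import numpy as np

def max_aperiodic_crosscorr(s1, s2):
    """Largest normalized aperiodic cross-correlation magnitude over all shifts."""
    return np.abs(np.correlate(s1, s2, 'full') / len(s1)).max()  # numpy conjugates s2

def best_subset(seqs, k):
    n = len(seqs)
    cost = np.array([[max_aperiodic_crosscorr(seqs[i], seqs[j])
                      for j in range(n)] for i in range(n)])
    best = (np.inf, None)

    def branch(chosen, start, worst):
        nonlocal best
        if worst >= best[0]:
            return                    # bound: cannot beat the incumbent
        if len(chosen) == k:
            best = (worst, list(chosen))
            return
        for i in range(start, n - (k - len(chosen)) + 1):
            new_worst = max([worst] + [cost[i][j] for j in chosen])
            branch(chosen + [i], i + 1, new_worst)

    branch([], 0, 0.0)
    return best

rng = np.random.default_rng(4)
seqs = [np.exp(2j * np.pi * rng.random(31)) for _ in range(10)]  # random polyphase
val, idx = best_subset(seqs, 4)
print(f"best 4-subset {idx} with max cross-correlation {val:.3f}")
```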

  • 219. Cresp, Gregory
    et al.
    Dam, Hai Huyen
    Zepernick, Hans-Jürgen
    Subset Family Design Using a Branch and Bound Technique (2006). Conference paper (Refereed).
    Abstract [en]

    The number of spreading sequences required for Direct Sequence Code Division Multiple Access (DS-CDMA) systems depends on the number of simultaneous users on the channel. The correlation properties of the sequences used affect the bit error rate of the system. Often a sequence family provides more sequences than are required, and in many cases the selection of the employed sequences is a computationally intensive task. In this paper, a branch and bound algorithm is presented to optimise the subset of available sequences given the required subset size. In contrast to previous approaches, the resulting subset is guaranteed to be optimal. Numerical results are presented to demonstrate the improved performance of this algorithm over previous work.

  • 220. Cresp, Gregory
    et al.
    Zepernick, Hans-Jürgen
    Dam, Hai Huyen
    Bit Error Rates of Large Area Synchronous Systems (2008). Conference paper (Refereed).
    Abstract [en]

    Large Area Synchronous (LAS) sequences are a class of ternary, interference-free window spreading sequences. One of their advantages is the ability to construct permutation LAS families in order to reduce adjacent cell interference (ACI) in cellular systems. There has been little previous numerical work examining the effect of using permutation families compared to simply reusing the same LAS family across different cells. The bit error rates resulting from two-cell systems employing both permutation families and sequence reuse are considered here by simulation.

  • 221. Cresp, Gregory
    et al.
    Zepernick, Hans-Jürgen
    Dam, Hai Huyen
    Combination Oppermann Sequences for Spread Spectrum Systems (2005). Conference paper (Refereed).
    Abstract [en]

    Numerous spread spectrum applications require families of long spreading sequences, often used at very high chip rates. In this paper, an algebraically simple way of generating long sequences by combining shorter polyphase sequences is presented, aimed at asynchronous spread spectrum systems. The approach is motivated by the fact that polyphase sequences offer increased design options, in terms of the supported range of correlation characteristics, while combination sequences allow for simpler generation of long sequences. It leads to the definition of combination Oppermann sequences, the properties of which are investigated in this paper. Numerical results indicate that the families of these proposed combination sequences provide favourable aperiodic correlations. The presented family of combination Oppermann sequences is therefore suitable for applications that rely on rapid synchronisation and are required to provide multiple access to the system.

  • 222. Cresp, Gregory
    et al.
    Zepernick, Hans-Jürgen
    Dam, Hai Huyen
    On the Classification of Large Area Sequences (2007). Conference paper (Refereed).
    Abstract [en]

    Large Area (LA) sequences form a class of ternary spreading sequences which exhibit an interference free window. In addition, these sequences have low correlation properties outside this window. Work to date has concentrated on examining the parameters and performance of LA sequences with reference to only a small number of example families. In this paper we develop general conditions which an LA family must satisfy. The development of these conditions allows for the production of computationally efficient tests to determine whether a given family is an LA family. In particular, these tests can form the basis for algorithms to construct LA families, allowing for a larger number of families with the potential for higher energy efficiency than those of previous work.

  • 223. Cresp, Gregory
    et al.
    Zepernick, Hans-Jürgen
    Dam, Hai Huyen
    Periodic Oppermann Sequences for Spread Spectrum Systems (2005). Conference paper (Refereed).
    Abstract [en]

    In this paper we introduce periodic Oppermann sequences, which constitute a special class of polyphase sequences. The properties of these sequences are presented, and indicate that periodic Oppermann sequences are suitable for combination to generate families of longer sequences. Numerical examples show that periodic Oppermann sequences can be designed for ranging or synchronisation applications or for supporting multiple access spread spectrum communication systems.

  • 224. Dahl, Mattias
    et al.
    Tran, To
    Claesson, Ingvar
    Nordebo, Sven
    Design of antenna array using dual nested complex approximation (2005). Conference paper (Refereed).
    Abstract [en]

    This paper presents a new practical approach to complex Chebyshev approximation by semi-infinite linear programming. With the new front-end technique, the associated semi-infinite linear programming problem is solved by exploiting the finiteness of the related Lagrange multipliers, adapting finite-dimensional linear programming to the dual semi-infinite problem, and thereby taking advantage of the numerical stability and efficiency of conventional linear programming software packages. Furthermore, the optimization procedure is simple to describe theoretically and straightforward to implement in computer code. The new design technique is therefore highly accessible. The algorithm is formally introduced as the linear Dual Nested Complex Approximation (DNCA) algorithm. The DNCA algorithm is versatile and can be applied to a variety of applications such as narrow-band as well as broad-band beamformers with any geometry, conventional Finite Impulse Response (FIR) filters, analog and digital Laguerre networks, and digital FIR equalizers. The proposed optimization technique is applied to several numerical examples dealing with the design of a narrow-band base-station antenna array for mobile communication.

  • 225.
    Dahlberg, Annika
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Sandell, Helena
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Kopieringsskydd och lagar: rätt väg i kampen mot piratkopieringen? (2004). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    Today, the mass media frequently report on piracy. Anyone can, with little knowledge and at little cost, copy films, music, games and software, which are often also easily accessible on the Internet. No one can quite grasp or answer how big this problem is, and for whom. Organisations and companies within the industry are trying by different means to deal with the trend that copied digital information is gaining wider spread and demand amongst consumers: by trying to develop improved copy protection, and by working for more stringent copyright legislation for digitally based products. The questions we asked ourselves were whether new laws and copy protection have any impact on consumers, and whether it is possible for the organisations to make use of the Internet instead of seeing it as a threat. To answer this, we have, among other things, tried to find out what copy protection means, which various types of copy protection exist and how well they function. We have also highlighted the fact that the legality of copy protection is questioned, because according to the copyright laws of different countries, you have the right to make a copy for your own personal use. We have examined the attitudes towards copying amongst producers and consumers of digital information and how they can be brought closer together in the struggle being carried out between them. We have also investigated the economic losses and effects due to piracy, and whether there are any suggested measures for dealing with piracy besides copy protection and laws. Through a survey along with interviews, we have investigated how much a person downloads and/or copies, and whether copy protection and legal actions have a deterrent effect on the individual. We have put this in relation to the organisations' attempts to prevent piracy, to see if there are alternative ways to overcome the problem. The answer is that copy protection and laws are not the right methods in the battle against piracy. This is shown by the consumers' attitude towards copy protection, i.e. there is always a way around it, and copy-protected products cannot be used satisfactorily. It is also shown by the fact that neither the consumers nor the actors of the law think that copyright law has an effect on the individual. Instead, the organisations within the industry should use the Internet for distribution of their products and make use of its advantages. In that way, the organisations can supply their products in a way that is convenient and easily available for the consumers, and at a lower cost compared to today.

  • 226.
    Dahlén, Vilhelm
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Landsten, Kristian
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Mätning av besökares ljudtrycksdos under musikfestival (2009). Independent thesis, Basic level (degree of Bachelor), Student thesis.
    Abstract [sv]

    In the spring of 2008, Folkets Hus och Parker requested measurements, during the summer, of the sound pressure dose that a typical festival visitor accumulates during a festival visit. The measurements were carried out at three different festivals: Hultsfredsfestivalen, Peace & Love and Storsjöyran. The measurements were to cover the entire festival visit and not only the concerts, in order to give a fairer picture of the sound pressure dose a person may be exposed to during a festival. The two limit values set by the Swedish National Board of Health and Welfare (Socialstyrelsen) are that an event must stay below an average of at most 100 dB(A) LEQ and that its peak value may be at most 115 dB(A). Overall, we found that the sound levels were kept relatively well within the guideline values at the three festivals. The measurements at Hultsfred gave an average sound pressure level of 86.1 dB(A) LEQ, Peace & Love 88.8 dB(A) LEQ and Storsjöyran 85.7 dB(A) LEQ. This can be compared with the Swedish Work Environment Authority's limit of 85 dB(A) for daily noise exposure. Some research, however, suggests that music and noise cannot be equated and that people can normally tolerate sound levels of music 5 dB higher than of noise, which would mean that the daily limit for music exposure would be 90 dB(A).
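    The equivalent-level arithmetic behind such limits is worth making explicit: LAeq is the logarithmic (energy) average of the A-weighted levels, not their plain mean, so short loud periods dominate. A small sketch with invented level samples:

```python
import numpy as np

def laeq(levels_db):
    """Equivalent continuous level from equally spaced dB(A) samples."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

# One loud hour (100 dB) plus seven quieter ones (85 dB):
day = [100] + [85] * 7
print(f"LAeq = {laeq(day):.1f} dB(A)")   # ~91.8, well above the plain mean of 86.9
```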

  • 227. Dam, Hai Huyen
    et al.
    Nordebo, Sven
    Blekinge Institute of Technology, School of Engineering, Department of Signal Processing.
    Teo, Kok Ley
    Cantoni, Antonio
    Frequency Domain Design for Digital Laguerre Networks (2000). In: IEEE transactions on circuits and systems. 1, Fundamental theory and applications, ISSN 1057-7122, Vol. 47, no. 4, p. 578-581. Article in journal (Refereed).
  • 228. Dam, Hai Huyen
    et al.
    Nordholm, Sven
    Cantoni, Antonio
    Haan, Jan Mark de
    Iterative Method for the Design of DFT Filter Bank (2004). In: IEEE transactions on circuits and systems. 2, Analog and digital signal processing (Print), ISSN 1057-7130, E-ISSN 1558-125X, Vol. 51, no. 11, p. 581-586. Article in journal (Refereed).
    Abstract [en]

    Multi-rate adaptive filters have numerous advantages, such as low computational load, fast convergence and parallelism in the adaptation. Drawbacks when using multi-rate processing are mainly related to aliasing and reconstruction effects. These effects can be minimized by introducing an appropriate problem formulation and employing sophisticated optimization techniques. In this paper, we propose a formulation for the design of a filter bank which controls the distortion level for each frequency component directly and minimizes the inband aliasing and the residual aliasing between different subbands. The advantage of this problem formulation is that the distortion level can be weighted for each frequency depending on the particular practical application. A new iterative algorithm is proposed to optimize simultaneously over both the analysis and the synthesis filter banks. This algorithm is shown to have a unique solution for each iteration. For a fixed distortion level, the proposed algorithm yields a significant reduction in both the inband aliasing and the residual aliasing levels compared to existing methods applied to the numerical examples.
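    For context, the structure being optimized is the uniform DFT filter bank: one prototype lowpass filter is modulated to M bands via an (I)FFT across polyphase branches. The sketch below implements that analysis structure with a crude Hann prototype standing in for a properly designed filter; it illustrates the mechanics, not the paper's iterative design method.

```python
import numpy as np

def dft_analysis(x, h, M, D):
    """Split x into M subbands, decimated by D (oversampled when D < M)."""
    L = len(h)                                 # prototype length, a multiple of M
    pad = np.concatenate([np.zeros(L - 1), x])
    n_frames = (len(pad) - L) // D + 1
    out = np.empty((M, n_frames), dtype=complex)
    for t in range(n_frames):
        frame = pad[t * D: t * D + L][::-1] * h   # h(n) * x(tD - n)
        poly = frame.reshape(-1, M).sum(axis=0)   # fold into M polyphase branches
        out[:, t] = M * np.fft.ifft(poly)         # modulate the prototype to M bands
    return out

M, D = 8, 4                                   # 2x oversampling to curb aliasing
h = np.hanning(4 * M); h /= h.sum()           # toy lowpass prototype (L = 32)
x = np.random.default_rng(5).standard_normal(512)
print(dft_analysis(x, h, M, D).shape)         # (8, n_frames)
```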

  • 229. Dam, Hai Huyen
    et al.
    Nordholm, Sven
    Zepernick, Hans-Jürgen
    Frequency Domain Blind Equalization for MIMO Systems (2005). Conference paper (Refereed).
    Abstract [en]

    This paper presents a new normalized frequency domain approach for adaptive blind equalization for multiple-input multiple-output (MIMO) communication systems. We first develop a time domain block based algorithm that updates the equalizer coefficients once per block of data symbols. As the time domain algorithm involves infinite summations in the separation cost function, an approximation is then proposed to reduce the cost function to finite summations. As a consequence, the block based time domain algorithm can be implemented in the frequency domain to reduce the computational complexity associated with the conventional symbol-by-symbol time domain update. Furthermore, by recognizing that the signals in the frequency domain are orthogonal, it is possible to significantly improve the convergence rate by normalizing the update equations with respect to the signal power in each frequency bin. Simulation results show that the proposed algorithm can successfully equalize the received signals while maintaining low computational complexity. Moreover, the presented normalized frequency domain blind equalization algorithm for MIMO systems significantly improves the convergence rate.
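    The key trick above is normalizing the adaptive update by the signal power in each frequency bin. The sketch below illustrates that idea in the simplest possible setting: a block frequency-domain NLMS identifying a single (circular) channel, with the paper's blind MIMO cost function replaced by a trained error for brevity. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
N, blocks, mu, eps = 64, 300, 0.5, 1e-6
H_true = np.fft.fft(rng.standard_normal(8), N)   # channel in the frequency domain
W = np.zeros(N, dtype=complex)                   # adaptive weights, one per bin

for _ in range(blocks):
    X = np.fft.fft(rng.standard_normal(N))       # transmitted block, per-bin values
    Y = H_true * X                               # received block (circular model)
    E = Y - W * X                                # per-bin error
    W += mu * np.conj(X) * E / (np.abs(X)**2 + eps)  # power-normalized update
print(np.max(np.abs(W - H_true)))                # ~0 after convergence
```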

  • 230. Dam, Hai Huyen
    et al.
    Zepernick, Hans-Jürgen
    Nordholm, Sven
    Nordberg, Jörgen
    Spreading Code Design Using a Global Optimization Method (2005). In: Annals of Operations Research, ISSN 0254-5330, E-ISSN 1572-9338, Vol. 133, no. 1-4, p. 249-264. Article in journal (Refereed).
    Abstract [en]

    The performance of a code division multiple access system depends on the correlation properties of the employed spreading code. Low cross-correlation values between spreading sequences are desired to suppress multiple access interference and to improve bit error performance. An auto-correlation function with a distinct peak enables proper synchronization and suppresses intersymbol interference. However, these requirements contradict each other, and a trade-off needs to be established. In this paper, a global two-dimensional optimization method is proposed to minimize the out-of-phase average mean-square aperiodic auto-correlation while the average mean-square aperiodic cross-correlation is constrained to lie within a fixed region. This approach is applied to design sets of complex spreading sequences. A design example is presented to illustrate the relation between various correlation characteristics. The correlations of the obtained sets are compared with the correlations of other known sequences.
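    The two figures of merit traded off above can be computed directly. The sketch below evaluates the out-of-phase mean-square aperiodic auto-correlation and the mean-square aperiodic cross-correlation of a randomly generated polyphase sequence set, following the usual normalized aperiodic definitions (the optimization itself is omitted).

```python
import numpy as np

def aperiodic_corr(a, b):
    """Normalized aperiodic correlations for all 2N-1 shifts (numpy conjugates b)."""
    return np.correlate(a, b, 'full') / len(a)

def mean_square_acf(seqs):
    """Average squared auto-correlation over all out-of-phase shifts."""
    n = len(seqs[0])
    return np.mean([np.mean(np.abs(np.delete(aperiodic_corr(s, s), n - 1))**2)
                    for s in seqs])            # drop the zero-shift peak

def mean_square_ccf(seqs):
    """Average squared cross-correlation over all shifts and sequence pairs."""
    return np.mean([np.mean(np.abs(aperiodic_corr(s, t))**2)
                    for i, s in enumerate(seqs) for t in seqs[i + 1:]])

rng = np.random.default_rng(7)
seqs = [np.exp(2j * np.pi * rng.random(63)) for _ in range(8)]
print(f"ACF = {mean_square_acf(seqs):.4f}, CCF = {mean_square_ccf(seqs):.4f}")
```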

  • 231. Damm, Lars-Ola
    Early and Cost-Effective Software Fault Detection: Measurement and Implementation in an Industrial Setting (2007). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Avoidable rework consumes a large part of development projects, i.e. 20-80 percent depending on the maturity of the organization and the complexity of the products. High amounts of avoidable rework commonly occur when many faults are left to correct in late stages of a project. In fact, research studies indicate that the cost of rework could be decreased by up to 50 percent by finding more faults earlier. The interest from industry in improving this area is therefore large. It might appear easy to reduce the amount of rework just by putting more focus on early verification activities, e.g. reviews. However, activities such as reviews and testing are good at catching different types of faults at different stages in the development cycle. Further, some system characteristics, such as system capacity and backward compatibility, might not be feasible to verify early through, for example, reviews or unit tests. Therefore, the objective should not just be to find and remove all faults as early as possible. Instead, the cost-effectiveness of different techniques in relation to different types of faults should be in focus. A department at Ericsson AB was interested in approaches for assessing and improving early and cost-effective fault detection. In particular, there was a need to quantify the value of suggested improvements. Based on this objective, research was conducted over a few years in this industrial environment. The research resulted in this thesis, which determines how to quantify unnecessary rework costs and which phases and activities to focus improvement work on in order to achieve earlier and more cost-effective fault detection. The thesis describes and evaluates measurement methods that make organizations strive towards finding the right faults in the right phase. The developed methods were also used for evaluating the impact a framework for component-level test automation and test-driven development had on development efficiency and quality. Further, the thesis demonstrates how the implementation of such improvements can be continuously monitored to obtain feedback during ongoing projects. Finally, recommendations on how to define and implement measurements, and how to interpret obtained measurement data, are provided, e.g. presented as considerations, lessons learned, and success factors. The thesis concludes that existing approaches for assessing and improving the degree of early and cost-effective software fault detection are not satisfactory, since they can cause counter-productive behavior. An approach that more adequately considers the cost-efficiency aspects of software fault detection is required. Additionally, experiences from different products and organizations led to the conclusion that a combination of measurements is commonly necessary to accurately identify and prioritize improvements.

  • 232. Damm, Lars-Ola
    Monitoring and Implementing Early and Cost-Effective Software Fault Detection (2005). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    Avoidable rework constitutes a large part of development projects, i.e. 20-80 percent depending on the maturity of the organization and the complexity of the products. High amounts of avoidable rework commonly occur when many faults are left to correct in late stages of a project. In fact, research studies indicate that the cost of rework could be decreased by up to 30-50 percent by finding more faults earlier. However, since larger software systems have an almost infinite number of usage scenarios, trying to find most faults early through, for example, formal specifications and extensive inspections is very time-consuming. Therefore, such an approach is not cost-effective in products that do not have extremely high quality requirements. For example, in market-driven development, time-to-market is at least as important as quality. Further, some areas, such as hardware-dependent aspects of a product, might not be possible to verify early through, for example, code reviews or unit tests. Therefore, in such environments, rework reduction is primarily about finding faults earlier to the extent that it is cost-effective, i.e. finding the right faults in the right phase. Through a set of case studies at a department at Ericsson AB, this thesis investigates how to achieve early and cost-effective fault detection through improvements in the test process. The case studies include investigations of how to identify which improvements are most beneficial to implement, possible solutions to the identified improvement areas, and approaches for how to follow up implemented improvements. The contributions of the thesis include a framework for component-level test automation and test-driven development. Additionally, the thesis provides methods for how to use fault statistics for identifying and monitoring test process improvements. In particular, we present results from applying methods that can quantify unnecessary fault costs and pinpoint which phases and activities to focus improvements on in order to achieve earlier and more cost-effective fault detection. The goal of the methods is to make organizations strive towards finding the right fault in the right test phase, which commonly is an early test phase. The developed methods were also used for evaluating the results of implementing the above-mentioned test framework at Ericsson AB. Finally, the thesis demonstrates how the implementation of such improvements can be continuously monitored to obtain rapid feedback on the status of defined goals. This was achieved through enhancements of previously applied fault analysis methods.

  • 233. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Company-wide Implementation of Metrics for Early Software Fault Detection (2007). Conference paper (Refereed).
  • 234. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Identification of test process improvements by combining fault trigger classification and faults-slip-through measurement (2005). Conference paper (Refereed).
  • 235. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Introducing Test Automation and Test-Driven Development: An Experience Report (2004). Conference paper (Refereed).
  • 236. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Quality Impact of Introducing Component-Level Test Automation and Test-Driven Development2007Conference paper (Refereed)
  • 237. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Results from Introducing Component-Level Test Automation and Test-Driven Development2006In: Journal of Systems and Software, ISSN 0164-1212, Vol. 79, no 7, p. 1001-1014Article in journal (Refereed)
  • 238. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Using Fault Slippage Measurement for Monitoring Software Process Quality during Development2006Conference paper (Refereed)
  • 239. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Olsson, David
    Introducing Test Automation and Test-Driven Development: An Experience Report2005In: Electronic Notes in Theoretical Computer Science, ISSN 1571-0661, Vol. 116, p. 3-15Article in journal (Refereed)
  • 240. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Wohlin, Claes
    A model for software rework reduction through a combination of anomaly metrics2008In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 81, no 11, p. 1968-1982Article in journal (Refereed)
    Abstract [en]

    Analysis of the anomalies reported during testing in a project can tell a lot about how well the processes and products work. Still, organizations rarely use anomaly reports for more than progress tracking, although projects commonly spend a significant part of the development time on finding and correcting faults. This paper presents an anomaly metrics model that organizations can use to identify improvements in the development process, i.e. to reduce the cost and lead-time spent on rework-related activities and to improve the quality of the delivered product. The model is the result of a four-year research project performed at Ericsson. © 2008 Elsevier Inc. All rights reserved.

  • 241. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Wohlin, Claes
    Determining the Improvement Potential of a Software Development Organization through Fault Analysis: A Method and a Case Study2004Conference paper (Refereed)
    Abstract [en]

    Successful software process improvement depends on the ability to analyze past projects and determine which parts of the process could become more efficient. One typical data source is the faults that are reported during product development. Starting from an industrial need, this paper provides a solution based on a measure called faults-slip-through, i.e. a measure of which faults should have been found in earlier phases. From this measure, the improvement potential of different parts of the development process is estimated by calculating the cost of the faults that slipped through the phase where they should have been found. The usefulness of the method was demonstrated by applying it to two completed development projects at Ericsson AB. The results show that the implementation phase had the largest improvement potential, since it caused the largest faults-slip-through cost to later phases, accounting for 81 and 84 percent of the total improvement potential in the two studied projects, respectively.
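
    The improvement-potential calculation described above can be sketched roughly as follows; the per-phase correction costs and slip counts below are invented for illustration, and the paper defines the actual cost model:

```python
# Hypothetical average cost (hours) of correcting a fault, by the
# phase where it is eventually found; later phases are costlier.
COST_PER_FAULT = {"unit_test": 2, "function_test": 8,
                  "system_test": 20, "operation": 50}

# Hypothetical slip data: phase where the faults should have been
# found -> {phase where they actually were found: count}.
slipped = {
    "review":    {"function_test": 10, "system_test": 5},
    "unit_test": {"system_test": 15, "operation": 3},
}

# Improvement potential of a phase = cost of the faults that slipped
# through it, i.e. the rework that earlier detection would avoid.
potential = {
    phase: sum(COST_PER_FAULT[found] * n for found, n in dest.items())
    for phase, dest in slipped.items()
}
total = sum(potential.values())
for phase, cost in potential.items():
    print(f"{phase}: {cost} h ({100 * cost / total:.0f}% of potential)")
```

    With these toy numbers, most of the potential lies with faults that should have been caught in unit test, analogous to how the two studied projects attributed 81 and 84 percent of their potential to the implementation phase.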

  • 242. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Wohlin, Claes
    Faults-slip-through – A Concept for Measuring the Efficiency of the Test Process2006In: Software Process: Improvement and Practice, ISSN 1077-4866, Vol. 11, no 1, p. 47-59Article in journal (Refereed)
  • 243.
    Danet, Georgiana
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Quest Atlantis as an alternative educational tool – Children’s voices on Quest Atlantis and a method for involving users in participatory design.2004Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Alternative educational tools have been investigated in the form of a meta-game structure: computer-based educational software (Quest Atlantis) used in an after-school environment within the frame of the Fifth Dimension site in Ronneby. The study is based on field material from five sessions of two hours each. A first focus of this thesis is the extent to which such a virtual environment can be used for educational purposes and supplement the traditional educational system. A second focus is how appropriate the software is to its educational purpose and how it can be improved by means of participatory design. The analysis of the data shows that computer games are a rich setting for human learning, in a more dynamic, active and involving manner than traditional education. In this particular case, we drew conclusions about how the software has to be improved in order to suit children's computer skills, and we developed an original method for involving users in participatory design.

  • 244.
    Darinder, Fredrik
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Designing and Evaluating a Development Framework2006Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    It has long been argued that introducing object-oriented (OO) frameworks decreases defect density and increases software quality. Capgemini had developed a framework that had been in the organization for nine years. Since then, the framework had been reengineered several times to meet the continuously changing requirements of the software systems it supports. My work was to develop a new framework that makes the maintenance and evolution of the framework easier, without compromising the quality of the framework or the applications built on it. The new framework I developed is called the Capgemini Development Framework (CDF). Results from the case study, conducted to test the differences between the two frameworks, showed that CDF preserved the maintainability of the applications built on it. The architecture of CDF also made it easier to handle future updates to the core functionality of the framework.

  • 245.
    Dathathri, Arvind
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Atangana, Jules Lazare
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Countering Privacy-Invasive Software (PIS) by End User License Agreement Analysis2007Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    In our thesis we use a preventive approach to stop privacy-invasive software (PIS) from entering the system. We aim at increasing user awareness of the background activities of the software, which are implicitly described in the End User License Agreement (EULA). We use a multi-layer user notification approach to increase user awareness and help the user make an informed decision, in accordance with the European legal framework. A proof-of-concept tool is developed that uses the user's preferences to present the EULA in a compact and understandable form, thereby helping the user decide whether to install a piece of software.
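
    As a rough illustration of the kind of EULA analysis such a tool could perform, the sketch below flags sentences matching user-selected privacy concerns and presents them as a compact summary layer; the categories, keyword lists, and function names are hypothetical and may differ from the actual tool:

```python
import re

# Hypothetical mapping from privacy-concern categories to trigger
# phrases; the real tool's categories and matching are likely richer.
CONCERNS = {
    "data_collection": ["collect", "personal information", "usage data"],
    "third_parties":   ["third party", "third parties", "affiliates"],
    "advertising":     ["advertising", "advertisements", "marketing"],
}

def summarize_eula(text, user_preferences):
    """Return the EULA sentences that touch on the categories the
    user has marked as important (the compact notification layer)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = []
    for sentence in sentences:
        lowered = sentence.lower()
        for category in user_preferences:
            if any(kw in lowered for kw in CONCERNS.get(category, [])):
                hits.append((category, sentence.strip()))
                break
    return hits

eula = ("The software may collect usage data. "
        "Information can be shared with third parties for marketing.")
for category, sentence in summarize_eula(eula, ["data_collection", "third_parties"]):
    print(f"[{category}] {sentence}")
```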

  • 246. Davidsson, Paul
    et al.
    Boman, Magnus
    Distributed Monitoring and Control of Office Buildings by Embedded Agents2005In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291, Vol. 171, no 4, p. 293-307Article in journal (Refereed)
    Abstract [en]

    We describe a decentralized system consisting of a collection of software agents that monitor and control an office building. It uses the existing power lines for communication between the agents and the electrical devices of the building, such as sensors and actuators for lights and heating. The objectives are both energy saving and increased customer satisfaction through value-added services. Results of qualitative simulations and quantitative analysis, based on thermodynamical modeling of an office building and its staff under four different approaches for controlling the building, indicate that significant energy savings can result from using the agent-based approach. The evaluation also shows that customer satisfaction can be increased in most situations. The approach presented here makes it possible to control the trade-off between energy saving and customer satisfaction (and actually to increase both, in comparison with current approaches).
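
    A minimal sketch of the energy/comfort trade-off the abstract mentions, assuming a single room agent that scores candidate heating setpoints by a weighted sum of comfort and energy penalties (the weights, temperatures, and scoring rule are invented; the paper's thermodynamic model is far more elaborate):

```python
def choose_setpoint(occupied, outdoor_temp, comfort_weight=0.5):
    """Pick a heating setpoint for one room agent.

    Scores candidate setpoints by a weighted sum of a comfort penalty
    (distance from a preferred 22 C when the room is occupied) and an
    energy penalty (roughly proportional to the indoor/outdoor gap).
    comfort_weight in [0, 1] controls the trade-off.
    """
    preferred = 22.0
    candidates = [16.0, 18.0, 20.0, 22.0]

    def score(setpoint):
        comfort_penalty = abs(setpoint - preferred) if occupied else 0.0
        energy_penalty = max(setpoint - outdoor_temp, 0.0) / 10.0
        return comfort_weight * comfort_penalty + (1 - comfort_weight) * energy_penalty

    return min(candidates, key=score)

# Empty room on a cold day: the agent saves energy with a low setpoint.
print(choose_setpoint(occupied=False, outdoor_temp=0.0))   # 16.0
# Occupied room: comfort dominates and the setpoint rises.
print(choose_setpoint(occupied=True, outdoor_temp=0.0))    # 22.0
```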

  • 247.
    Davidsson, Paul
    et al.
    Blekinge Institute of Technology, School of Computing.
    Hagelbäck, Johan
    Travelstart Nordic, SWE.
    Svensson, Kenny
    Ericsson AB, SWE.
    Comparing Approaches to Predict Transmembrane Domains in Protein Sequences2005Conference paper (Refereed)
    Abstract [en]

    There are today several systems for predicting transmembrane domains in membrane protein sequences. As they are based on different classifiers as well as different pre- and post-processing techniques, it is very difficult to evaluate the performance of the particular classifier used. We have developed a system called MemMiC for predicting transmembrane domains in protein sequences, with the possibility to choose between different approaches to pre- and post-processing as well as different classifiers. It is therefore possible to compare the performance of each classifier in a certain environment, as well as the different approaches to pre- and post-processing. We have demonstrated the usefulness of MemMiC in a set of experiments, which show, for example, that the performance of a classifier depends strongly on which pre- and post-processing techniques are used.
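
    The pluggable pipeline the abstract describes could look roughly like the sketch below, where pre-processing, classifier, and post-processing are swapped independently; the stage names and the toy hydrophobicity rule are assumptions for illustration, not MemMiC's actual components:

```python
from typing import Callable, List

Sequence = str
Labels = List[bool]  # True = residue predicted to lie in a transmembrane domain

def sliding_hydrophobicity(seq: Sequence) -> List[float]:
    """Toy pre-processing: hydrophobicity averaged over a 7-residue window."""
    HYDRO = {aa: 1.0 for aa in "AILMFWVC"}  # crude hydrophobic set (assumption)
    scores = [HYDRO.get(aa, 0.0) for aa in seq]
    w = 7
    return [sum(scores[max(0, i - w // 2): i + w // 2 + 1]) / w
            for i in range(len(seq))]

def threshold_classifier(features: List[float]) -> Labels:
    """Toy classifier: a residue is membrane-spanning if its windowed
    hydrophobicity exceeds a fixed threshold."""
    return [f > 0.6 for f in features]

def min_length_filter(labels: Labels, min_len: int = 5) -> Labels:
    """Toy post-processing: drop predicted domains shorter than min_len."""
    out = labels[:]
    i = 0
    while i < len(out):
        if out[i]:
            j = i
            while j < len(out) and out[j]:
                j += 1
            if j - i < min_len:
                out[i:j] = [False] * (j - i)
            i = j
        else:
            i += 1
    return out

def predict(seq: Sequence,
            pre: Callable = sliding_hydrophobicity,
            clf: Callable = threshold_classifier,
            post: Callable = min_length_filter) -> Labels:
    """Pipeline with interchangeable stages, mirroring the idea of
    comparing classifiers under fixed pre-/post-processing."""
    return post(clf(pre(seq)))

print(predict("MKTAAILLLVVFFAWLTAGGKKDDE"))
```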

  • 248. Davidsson, Paul
    et al.
    Hederstierna, Anders
    Jacobsson, Andreas
    Persson, Jan A.
    The Concept and Technology of Plug and Play Business2006Conference paper (Refereed)
  • 249. Davidsson, Paul
    et al.
    Henesey, Lawrence
    Ramstedt, Linda
    Törnquist, Johanna
    Wernstedt, Fredrik
    Agent-Based Approaches to Transport Logistics2004Conference paper (Refereed)
    Abstract [en]

    This paper provides a survey of existing research on agent-based approaches to transportation and traffic management. A framework for describing and assessing this work is presented and systematically applied. We mainly adopt a logistical perspective, thus focusing on freight transportation; however, when relevant, work on traffic and the transport of people is also considered. A general conclusion from our study is that agent-based approaches seem very suitable for this domain, but this still needs to be verified by more deployed systems.

  • 250. Davidsson, Paul
    et al.
    Henesey, Lawrence
    Ramstedt, Linda
    Törnquist, Johanna
    Wernstedt, Fredrik
    An Analysis of Agent-Based Approaches to Transport Logistics2005In: Transportation Research Part C: Emerging Technologies, ISSN 0968-090X, Vol. 13, no 4, p. 255-271Article in journal (Refereed)
    Abstract [en]

    This paper provides a survey of existing research on agent-based approaches to transportation and traffic management. A framework for describing and assessing this work is presented and systematically applied. We mainly adopt a logistical perspective, thus focusing on freight transportation; however, when relevant, work on traffic and the transport of people is also considered. A general conclusion from our study is that agent-based approaches seem very suitable for this domain, but this still needs to be verified by more deployed systems.
