A stochastic process, sometimes called a random process, is the counterpart in probability theory to a deterministic process. A stochastic process is a random field whose domain is a region of space; in other words, it is a random function whose arguments are drawn from a range of continuously changing values. Instead of dealing with only one possible 'reality' of how the process might evolve in time (as is the case, for example, for solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy in its future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many paths the process might take, some more probable than others. In discrete time, a stochastic process amounts to a sequence of random variables known as a time series. Over the past decades, the problems of synergetics have concerned the study of macroscopic quantitative changes of systems belonging to various disciplines such as natural science, physical science and electrical engineering. When such transitions from one state to another take place, fluctuations (i.e. random processes) may play an important role. Fluctuations are very common in a large number of fields, and nearly every system is subjected to complicated external or internal influences that are often termed noise or fluctuations. The Fokker-Planck equation has turned out to provide a powerful tool with which the effects of fluctuations or noise close to transition points can be adequately treated. For this reason, in this thesis work analytical and numerical methods of solving the Fokker-Planck equation, its derivation and some of its applications will be carefully treated. Emphasis will be on both the one-variable and the N-dimensional case.
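For reference, the equation treated in the thesis has the following standard textbook form (written here from the general literature, not quoted from the thesis), with drift and diffusion coefficients $D^{(1)}$ and $D^{(2)}$:
\[
\frac{\partial W(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[D^{(1)}(x)\,W(x,t)\right] + \frac{\partial^{2}}{\partial x^{2}}\left[D^{(2)}(x)\,W(x,t)\right],
\]
and, for the N-dimensional case,
\[
\frac{\partial W(\mathbf{x},t)}{\partial t} = -\sum_{i=1}^{N}\frac{\partial}{\partial x_{i}}\left[D_{i}(\mathbf{x})\,W\right] + \sum_{i,j=1}^{N}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\left[D_{ij}(\mathbf{x})\,W\right].
\]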
Fuzzy relation equations are becoming extremely important for investigating the optimal solution of the inverse problem, even though there is a restrictive condition on the availability of solutions of such inverse problems. We discuss methods for finding the optimal (maximum and minimum) solutions of the inverse problem for fuzzy relation equations of the form $R \circ Q = T$, where R and Q are in turn treated as the unknown, using different operators (e.g. alpha, sigma, etc.). The aim of this study is to make an in-depth selection of the best project among a host of projects, depending upon different factors (e.g. capital cost, risk management, etc.) in the field of civil engineering. To accomplish this aim, two linguistic variables are introduced to deal with the uncertainty that appears in civil engineering problems. Alpha-composition is used to compute the solution of the fuzzy relation equation. The evaluation of the projects is then carried out by defuzzifying the obtained results. The importance of adhering to such a procedure in the field of civil engineering is demonstrated by an example.
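As an illustrative sketch (not the thesis code), the alpha operator commonly used to obtain the greatest solution of $R \circ Q = T$ under max-min composition can be written as below; the matrices and parameter values are assumptions chosen only to show the mechanics.

```python
import numpy as np

def alpha(a, b):
    """Alpha (Goedel) implication: 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def greatest_solution(R, T):
    """Greatest Q solving R o Q = T under max-min composition,
    computed elementwise as Q[j,k] = min_i alpha(R[i,j], T[i,k])."""
    m, n = R.shape
    _, p = T.shape
    Q = np.ones((n, p))
    for j in range(n):
        for k in range(p):
            Q[j, k] = min(alpha(R[i, j], T[i, k]) for i in range(m))
    return Q

def maxmin(R, Q):
    """Max-min composition (R o Q)[i,k] = max_j min(R[i,j], Q[j,k])."""
    return np.array([[max(min(R[i, j], Q[j, k]) for j in range(R.shape[1]))
                      for k in range(Q.shape[1])] for i in range(R.shape[0])])

# Hypothetical data: check that the recovered Q reproduces T.
R = np.array([[0.9, 0.4], [0.3, 0.8]])
T = np.array([[0.6, 0.4], [0.3, 0.7]])
Q = greatest_solution(R, T)
print(Q)
print(maxmin(R, Q))   # equals T when a solution exists
```

In this toy case the recovered Q indeed reproduces T under max-min composition, which is the consistency check behind the "restrictive condition" on solvability mentioned above.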
Abstract: The channel characterization of mobile satellite communication, an important and fast growing branch of wireless communication, plays an important role in the transmission of information through a propagation medium from the transmitter to the receiver with minimum bit error rate, taking into consideration the channel impairments of different geographical locations such as urban, suburban, rural and hilly terrain. The information transmitted from satellite to mobile terminals suffers amplitude attenuation and phase variation caused by multipath fading and signal shadowing effects of the environment. These channel impairments are commonly described by three fading phenomena, Rayleigh fading, Rician fading and log-normal fading, which characterize signal propagation in different environments. They are mixed in different proportions by different researchers to form models describing particular channels. In this thesis, a general overview of mobile satellite communication is given, including the classification of satellites by orbit, the channel impairments, and the advantages of mobile satellite communication over terrestrial systems. Some of the major existing statistical models used to describe different types of channels are examined, and the most suitable of them, the Lutz model [6], is implemented. Simulation of the Lutz model, which describes all possible types of environment with two states representing the non-shadowed (LOS) and shadowed (NLOS) conditions, shows that the BER is predominantly affected by the shadowing factor.
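A minimal sketch of the two-state idea (illustrative parameters, not the thesis's): a Markov chain switches between a Rician (LOS) state and a Rayleigh state multiplied by log-normal shadowing (NLOS), and the resulting amplitude sequence can be fed into a BER simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_state_fading(n, p_good_to_bad=0.01, p_bad_to_good=0.02,
                     K=10.0, mu=-2.0, sigma=1.0):
    """Illustrative Lutz-like two-state fading generator (assumed parameters).
    Good state: Rician with K-factor K. Bad state: Rayleigh scaled by
    log-normal shadowing with parameters mu, sigma (natural log)."""
    gains = np.empty(n)
    good = True
    for t in range(n):
        if good:
            s = np.sqrt(K / (K + 1))                       # LOS component
            x = s + rng.normal(0, np.sqrt(1 / (2 * (K + 1))))
            y = rng.normal(0, np.sqrt(1 / (2 * (K + 1))))
            gains[t] = np.hypot(x, y)
            good = rng.random() >= p_good_to_bad           # stay good or switch
        else:
            rayleigh = np.hypot(rng.normal(0, np.sqrt(0.5)),
                                rng.normal(0, np.sqrt(0.5)))
            shadow = np.exp(rng.normal(mu, sigma))
            gains[t] = rayleigh * np.sqrt(shadow)
            good = rng.random() < p_bad_to_good            # switch back or stay bad
    return gains

g = two_state_fading(10000)
print("mean channel power:", np.mean(g ** 2))
```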
The deployment of sensor networks, whether manual or random, is increasing for monitoring physical environments in different applications such as military, agriculture, medical transport and industry. Among these monitoring applications, one of the most important is the monitoring of critical conditions, i.e. sensing information during an emergency state in the physical environment where the sensor network is deployed. In order to respond within a fraction of a second to critical conditions like explosions, fire and leaking of toxic gases, the system must be fast enough. A big challenge for sensor networks is providing a fast, reliable and fault-tolerant channel to the sink (base station) that receives the events during emergency conditions. The main focus of this thesis is to discuss and evaluate the performance of two different routing protocols, Ad hoc On Demand Distance Vector (AODV) and Dynamic Source Routing (DSR), for monitoring of critical conditions, using important metrics like throughput and end-to-end delay in different scenarios. On the basis of results derived from simulation, a conclusion is drawn comparing the two routing protocols with respect to end-to-end delay and throughput.
We propose a dynamic hybrid antenna/relay selection scheme for multiple-access relay systems. The proposed scheme aims to boost the system throughput while keeping a good error performance. By using the channel state information, the destination node performs a dynamic selection between the signals provided by the multi-antenna relay, located in the inter-cell region, and the relay nodes geographically distributed over the cells. The multi-antenna relay and the single-antenna relay nodes employ the decode-remodulate-and-forward and amplify-and-forward protocols, respectively. Results reveal that the proposed scheme offers a good tradeoff between spectral efficiency and diversity gain, which is one of the main requirements for the next generation of wireless communications systems.
We report a new lossless data compression (LDC) algorithm for implementing predictably fixed compression values. The fuzzy binary and-or algorithm (FBAR) primarily aims to introduce a new model for regular and superdense coding in classical and quantum information theory. Classical coding on x86 machines does not suffice for maximum LDC techniques generating fixed compression ratios of Cr >= 2:1. The current model, however, is evaluated to serve multidimensional LDC with fixed value generation, in contrast to the popular methods used in probabilistic LDC, such as Shannon entropy. The entropy introduced here is 'fuzzy binary', in a 4D hypercube bit-flag model, with a product value of at least 50% compression. We have implemented the compression and simulated the decompression phase for lossless versions of FBAR logic. We further compared our algorithm with the results obtained by other compressors. Our statistical tests show that the presented algorithm competes significantly with other LDC algorithms on both temporal and spatial factors of compression. The current algorithm is a stepping stone to quantum information models solving complex negative entropies, giving double-efficient LDC with > 87.5% space savings.
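To connect the quoted figures (a standard definition, not a result from the paper): with the compression ratio $C_r$ defined as uncompressed size over compressed size, the space saving is
\[
\text{space saving} = 1 - \frac{\text{compressed size}}{\text{uncompressed size}} = 1 - \frac{1}{C_r},
\]
so $C_r = 2{:}1$ corresponds to 50% savings and $C_r = 8{:}1$ to 87.5% savings.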
A cooperative multiple-access scheme for wireless communications systems with antenna selection and incremental relaying is proposed. The scheme aims to improve the system throughput while preserving good performance in terms of bit error rate. The system consists of N nodes which send their information to both the destination node and the multiple-antenna relay station. Based on the channel state information, the destination node decides whether or not relaying will be performed. When relaying is performed, the decode-remodulate-and-forward protocol is used with the best antenna. Results reveal that the proposed scheme achieves a good tradeoff between throughput and bit error rate, which makes it suitable for multi-user networks.
Along with continuously increasing computerization, our expectations of software and hardware reliability increase considerably. Therefore, software reliability has become one of the most important software quality attributes. Software reliability modeling based on test data is done to estimate whether the current reliability level meets the requirements for the product. Software reliability modeling also provides possibilities to predict reliability. The costs of software development and testing, together with profit issues related to software reliability, are among the main motivations for software reliability prediction. Software reliability prediction currently uses different models for this purpose, whose parameters have to be set in order to tune the model to fit the test data. A slightly different prediction model, Time Invariance Estimation (TIE), is developed to challenge the models used today. An experiment is set up to investigate whether TIE could be found useful in a software reliability prediction context. The experiment is based on a comparison between the ordinary reliability prediction models and TIE.
Recently, the financial market has become an area of increased research interest for mathematicians and statisticians. The Black and Scholes breakthrough in this area triggered a lot of new research activity. Commonly the research concerns the log returns of assets (shares, bonds, foreign exchange, options). The variation in the log returns is called volatility, and it is widely studied because of its relevance for applications in the financial world. Volatility is mostly used for measuring risk and for forecasting future prices. In this research work a process of trading activities is considered. It is assumed that at a random time point a parameter change in the laws of the trading occurs, indicating changed trading behaviour. For inferential matters about the process it is of vital importance to be able to state quickly and accurately that such a change has occurred. The methods used to this end are called stopping rules, which signal an alarm as soon as some statistic based on on-line observations goes beyond some boundary. The model considered for this process of log returns is the family of Autoregressive Conditional Heteroskedastic (ARCH) models, which are widely accepted as describing well many phenomena in the financial market. In this work statements about this process will be derived, and the stopping rule will be defined, evaluated and its properties discussed.
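As a sketch of the setup (illustrative parameters and a simple threshold rule, not the thesis's derivations), an ARCH(1) return series with a parameter change at a known time can be simulated and monitored as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def arch1(n, omega, alpha, change_at, omega2, alpha2):
    """ARCH(1): x_t = sigma_t * eps_t with sigma_t^2 = omega + alpha * x_{t-1}^2.
    Parameters switch to (omega2, alpha2) at index change_at."""
    x = np.zeros(n)
    for t in range(1, n):
        w, a = (omega, alpha) if t < change_at else (omega2, alpha2)
        sigma2 = w + a * x[t - 1] ** 2
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
    return x

def stopping_rule(x, window=50, boundary=2.0):
    """Alarm when the ratio of recent to baseline average squared returns
    exceeds a boundary; returns the alarm time or None."""
    baseline = np.mean(x[:window] ** 2)
    for t in range(window, len(x)):
        stat = np.mean(x[t - window:t] ** 2) / baseline
        if stat > boundary:
            return t
    return None

x = arch1(1000, omega=0.1, alpha=0.3, change_at=600, omega2=0.3, alpha2=0.6)
print("alarm at:", stopping_rule(x))
```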
In the automotive industry uniaxial vibration testing is a common method used to predict the lifetime of components. In reality, truck components work under multiaxial loads, meaning that the excitation is multiaxial. A common method to account for the multiaxial effect is to apply a safety margin to the uniaxial test results. The aim of this work is to find a safety margin between uniaxial and multiaxial testing by means of virtual vibration testing and statistical methods. In addition to the safety margin, the effect of the fixture's stiffness on the resulting stress in components has also been investigated.
In the curve of growth, the equivalent width is traditionally found for absorption lines of Gaussian, Lorentz and Voigt profiles against a flat continuum. Here, absorption against non-flat distributions is examined, where the non-flat distributions are the Gaussian and the Lorentz distribution. Analytically, equivalent widths for absorption against a non-flat continuum are complicated integrals. It is impossible to solve these integrals analytically, so I have solved them numerically. At low μ the absorption is independent of the profile, while at high μ the growth of absorption depends on the profile, where μ is the width.
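For orientation, the quantity being generalized is the standard equivalent width (general definition, with $F_c(\lambda)$ the continuum):
\[
W_\lambda = \int \left(1 - \frac{F(\lambda)}{F_c(\lambda)}\right) d\lambda ,
\]
and the non-flat cases correspond to $F_c(\lambda)$ itself being a Gaussian or Lorentz distribution, so the integral has to be evaluated by numerical quadrature.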
In recent years a great deal of effort has been expended to develop methods that determine the quality of speech through the use of comparative algorithms. These methods are designed to calculate an index value of quality that correlates with the mean opinion score given by human subjects in evaluation sessions. In this work PESQ (ITU-T Recommendation P.862), the ITU-T benchmark for objective measurement of speech quality, is used. In mobile phone acoustics, the presence of noise and room reverberation plays a vital role in degrading the speech signal; therefore, spectral subtraction and Elko's beamformer have been used for noise reduction. A Weighted Overlap and Add (WOLA) filter bank is used for frequency domain analysis of the speech signal. Elko's algorithm is used for designing a differential microphone array, implemented by connecting two omnidirectional elements to form back-to-back cardioid directional microphones. The output from Elko's beamformer is then used as input to spectral subtraction based on minimum statistics, reducing noise further and enhancing the quality of the speech signal. The performance of this system is analysed by calculating the value of PESQ as a speech quality measure; the better the PESQ value, the better the output speech quality. Signal to Noise Ratio (SNR) is used to measure the amount of noise in the restored speech signal. A reverberation index is also used to measure the amount of reverberation present in the restored speech signal.
Cylinder liner surface topology greatly affects oil consumption and wear of engines. Surface optimization would be greatly facilitated by automatic quality control. Surface roughness definitions, parameters, and measurement techniques were reviewed, and samples of different Volvo truck engine cylinder liner types were measured. Routines for extracting and computing groove parameters, useful in the automation of quality control in production, were developed, implemented in MATLAB and applied to the samples. The principles of the last two procedural steps needed to fully automate surface grading by roughness parameter analysis were described.
Purpose – The purpose of the study is to propose and test a buyer-supplier integration model, based on clients' collaborative purchasing practices, in a project-based industry. Design/methodology/approach – Hypotheses regarding the relationships among the three variables – i.e. incentive-based payment (IBP), partner selection (PS) based on multiple criteria, and joint action (JA) – are tested using structural equation modeling. Empirical data was collected through two survey rounds of 87 and 106 Swedish construction clients. Findings – The test of the proposed theoretical model receives strong empirical support, indicating that IBP should be coupled with PS based on multiple criteria in order to facilitate JA. Furthermore, it is seen that the occurrence of JA is higher in 2009 than in 2006 and that this is achieved through increased use of IBP. Research limitations/implications – The hypothesized and tested model provides a theoretical contribution, indicating how to facilitate buyer-supplier integration in project-based industries. In future studies it would be useful to adopt a multiple-informant approach, also including suppliers as respondents in order to capture their views on integration. Practical implications – An important managerial implication is that public clients need to improve their understanding of how to design bid proposals and evaluate bids based on multiple criteria instead of lowest price, without infringing public procurement acts. Originality/value – This paper offers unique contributions by addressing a gap in the relationship marketing literature and a lack of quantitative studies of buyer-supplier relationships in project-based industries.
This paper reports on a measurement and modeling study of session and message characteristics of BitTorrent traffic. BitTorrent is a Peer-to-peer (P2P) replication and distribution system developed as an alternative to the classical client-server model to reduce the load on content servers and networks. Results are reported on measurement, modeling and analysis of application and link layer traces collected at the Blekinge Institute of Technology (BTH) and a local ISP in Sweden. Link layer traces and application logs were collected, modeled and analyzed using a dedicated measurement infrastructure developed at BTH to collect P2P traffic. New results are reported on important session and message characteristics of BitTorrent, i.e., session interarrivals, sizes and durations, request rates and response times. Our results show that session interarrivals can be accurately modeled by a second-order hyper-exponential distribution while session durations and sizes can be reasonably well modeled by various mixtures of the Log-normal and Weibull distributions. Response times have been observed to be modeled by a dual Log-normal mixture, while request rates are modeled as dual Gaussian distributions.
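For concreteness, the second-order hyper-exponential density referred to above has the standard form
\[
f(x) = p\,\lambda_1 e^{-\lambda_1 x} + (1-p)\,\lambda_2 e^{-\lambda_2 x}, \qquad x \ge 0,\; 0 \le p \le 1,
\]
with $p$, $\lambda_1$ and $\lambda_2$ fitted to the measured session interarrival times.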
In this work we compare the prediction performance of three optimized technical indicators with a Support Vector Machine Neural Network. For the indicator part we picked the commonly used indicators: Relative Strength Index, Moving Average Convergence Divergence and Stochastic Oscillator. For the Support Vector Machine we used a radial-basis kernel function and regression mode. The techniques were applied to financial time series from the Swedish stock market. The comparison and the promising results should be of interest both for finance people using the techniques in practice and for software companies and others considering implementing the techniques in their products.
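As an example of the indicator side, a plain Relative Strength Index computation over a closing-price series might look like the sketch below (standard 14-period formulation with simple averages; the price array is made up for illustration):

```python
import numpy as np

def rsi(close, period=14):
    """Relative Strength Index using simple moving averages of gains/losses."""
    delta = np.diff(close)
    gains = np.where(delta > 0, delta, 0.0)
    losses = np.where(delta < 0, -delta, 0.0)
    out = np.full(len(close), np.nan)
    for t in range(period, len(close)):
        avg_gain = gains[t - period:t].mean()
        avg_loss = losses[t - period:t].mean()
        rs = avg_gain / avg_loss if avg_loss > 0 else np.inf
        out[t] = 100.0 - 100.0 / (1.0 + rs)
    return out

prices = np.cumsum(np.random.default_rng(2).normal(0, 1, 200)) + 100
print(rsi(prices)[-5:])
```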
The financial market has become an area of increasing research interest for mathematicians and statisticians in recent years. Mathematical models and methods are increasingly being applied to study various parameters of the market. One of the parameters that has attracted a lot of interest is 'volatility'. It is the measure of variability of the prices of instruments (e.g. stocks, options, etc.) traded in the market. It is used mainly to measure risk and to predict future prices of assets. In this paper, the volatility of financial price processes is studied using the Ornstein-Uhlenbeck process. The process is a mean-reverting model which has good and well-documented properties to serve as a model for financial price processes. At some random time point, a parameter change in the distribution of the price process occurs. In order to control the development of prices, it is important to detect this change as quickly as possible. The methods for detecting such changes are called 'stopping rules'. In this work, stopping rules will be derived and analysed. Using simulations and analytical methods, the properties of these stopping rules will be evaluated.
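A minimal simulation sketch of the underlying model (parameter values are illustrative only): the Ornstein-Uhlenbeck process $dX_t = \theta(\mu - X_t)\,dt + \sigma\,dW_t$ discretized with the Euler-Maruyama scheme, with the mean level shifting at a random change point.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ou(n, dt=0.01, theta=2.0, mu=0.0, sigma=0.3,
                change_at=None, mu_new=1.0):
    """Euler-Maruyama simulation of dX = theta*(mu - X)dt + sigma dW,
    with the mean level switching to mu_new at index change_at."""
    x = np.zeros(n)
    for t in range(1, n):
        level = mu if (change_at is None or t < change_at) else mu_new
        x[t] = (x[t - 1] + theta * (level - x[t - 1]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

path = simulate_ou(2000, change_at=rng.integers(500, 1500))
print(path[:5])
```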
Cognitive radio is an innovative technology that allows secondary unlicensed users to share the spectrum with licensed primary users in order to utilize the spectrum. For maximum utilization of the spectrum, spectrum sensing is an important issue in cognitive radio networks. A cognitive user under extreme shadowing and channel fading cannot sense the primary licensed user signal correctly, and thus, to improve the performance of spectrum sensing, collaboration between secondary unlicensed users is required. In collaborative spectrum sensing, the observation of each secondary user is received by a base station acting as a central entity, where a final conclusion about the presence or absence of the primary user signal is made using a particular decision and fusion rule. Due to spatially correlated shadowing, collaborative spectrum sensing performance decreases, and thus optimum secondary users must be selected, not only to improve spectrum sensing performance but also to lessen the processing overhead of the central entity. In the situation depicted in this project, according to some performance parameters, those optimum secondary users that have enough spatial separation and high average received SNR are first selected using a Genetic Algorithm, and then collaboration among these optimum secondary users is evaluated. The collaboration of the optimal secondary users, providing a high probability of detection and a low probability of false alarm for sensing the spectrum, is compared with the collaboration of all the available secondary users in that radio environment. Finally, the conclusion is drawn that collaboration of the selected optimum secondary users provides better performance than the collaboration of all the available secondary users.
The quality of requirements specifications may impact subsequent, dependent software engineering (SE) activities. However, empirical evidence of this impact remains scarce and too often superficial, as studies abstract too much from the phenomena under investigation. Two of these abstractions are caused by the lack of frameworks for causal inference and by frequentist methods which reduce complex data to binary results. In this study, we aim to demonstrate (1) the use of a causal framework and (2) contrast frequentist methods with more sophisticated Bayesian statistics for causal inference. To this end, we reanalyze the only known controlled experiment investigating the impact of passive voice on the subsequent activity of domain modeling. We follow a framework for statistical causal inference and employ Bayesian data analysis methods to re-investigate the hypotheses of the original study. Our results reveal that the effects observed by the original authors turned out to be much less significant than previously assumed. This study supports the recent call to action in SE research to adopt Bayesian data analysis, including causal frameworks and Bayesian statistics, for more sophisticated causal inference.
In this thesis, the performance of the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter using a stereo vision system to overcome label discontinuity and achieve robust tracking in an Intelligent Vision Agent System (IVAS) is evaluated. This filter is widely used in multiple-target tracking applications such as surveillance, human tracking and radar. A pair of cameras is used to obtain the left and right image sequences in order to extract the 3-D coordinates of the targets' positions in the real world scene. The 3-D trajectories of the targets are tracked by the GM-PHD filter. Many tracking algorithms fail to simultaneously maintain stability of tracking and label continuity of targets when one or more targets are hidden from the camera's view for a while. The GM-PHD filter performs well in tracking multiple targets; however, the label continuity is not maintained satisfactorily in some situations such as full occlusion and crossing targets. In this project, the label continuity of targets is guaranteed by a new method of labeling, and the simulations show satisfactory results. A random walk motion is used to validate the ability of the algorithm to track targets and maintain their labels. In order to evaluate the performance of the GM-PHD filter, a 3-D spatial test motion model is introduced, in which two target trajectories are generated such that either occlusion or crossing occurs in some time intervals. Two key parameters, angular velocity and motion speed, are then used to evaluate the performance of the algorithm. The simulation results for two moving targets in occlusion and crossing show that the proposed system not only robustly tracks them, but also maintains the label continuity of the two targets.
In many application domains such as weather forecasting, robotics and machine learning we need to model, predict and analyze the evolution of periodic systems. For instance, time series applications that follow periodic patterns appear in climatology, where CO2 emissions and temperature changes follow periodic or quasi-periodic patterns. Another example can be found in robotics, where the joint angle of a rotating robotic arm follows a periodic pattern. It is often very important to make long-term predictions of the evolution of such systems. For modeling and prediction purposes, Gaussian processes are powerful methods which can be adjusted based on the properties of the problem at hand. Gaussian processes belong to the class of probabilistic kernel methods, where the kernels encode the characteristics of the problems into the models. In the case of systems with periodic evolution, taking the periodicity into account can simplify the problem considerably. Gaussian process models can account for the periodicity by using a periodic kernel. Long-term predictions need to deal with uncertain points, which can be expressed by a distribution rather than a deterministic point. Unlike prediction at deterministic points, prediction at uncertain points is analytically intractable for Gaussian processes. There are, however, approximation methods such as moment matching that allow dealing with uncertainty in analytic closed form, but only for some particular kernels; the standard periodic kernel does not allow analytic moment matching when performing long-term predictions. This work presents an analytic approximation method for long-term forecasting in periodic systems. We present a different parametrization of the standard periodic kernel, which allows us to approximate moment matching in analytic closed form. We evaluate our approximate method on different periodic systems. The results indicate that the proposed method is valuable for the long-term forecasting of periodic processes.
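For reference, the standard periodic kernel mentioned here is commonly written as (general textbook form, not the thesis's reparametrization):
\[
k(x, x') = \sigma^2 \exp\!\left(-\frac{2\sin^{2}\!\big(\pi |x - x'| / p\big)}{\ell^{2}}\right),
\]
where $p$ is the period, $\ell$ the length scale and $\sigma^2$ the signal variance; it is the sinusoid inside the exponential that breaks the Gaussian integrals required for exact moment matching.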
IPTV, or Internet Protocol Television, is a system where digital television is delivered to the end user using the Internet Protocol. It relies on the same technologies that are used for computer networks and adds new possibilities, such as video on demand, on top of traditionally broadcast TV. Video content for IPTV is typically compressed using MPEG-2 or H.264 (MPEG-4 part 10) compression and sent in an MPEG-2 transport stream over IP. Since TV is a "real time service", packets are delivered using a simple unreliable transmission model and may arrive out of order or be lost. Lost packets are normally not re-transmitted, since they would in any case arrive too late to be useful, and packet loss will thus decrease the perceived quality for the end user. This Master's thesis is an investigation of the parameters that affect perceived quality in the MPEG-2 Transport Stream (TS). The aim of the thesis is to develop an objective parametric model that can estimate the perceived quality at the transport layer. The thesis work includes experiments performed on High Definition (HD) video sequences with various bit rates and packet loss ratios (PLR).
A basic mathematical analysis of how financial markets work and of different valuation models, such as the Stochastic Market Price Estimator, a valuation model created by the author.
We replace the usual heuristic notion of quantum cell by that of 'quantum blob', which does not depend on the dimension of phase space. Quantum blobs, which are defined in terms of symplectic capacities, are canonical invariants. They allow us to prove an exact uncertainty principle for semiclassically quantized Hamiltonian systems.
In the early 1990s, Volvo Car Corporation in Olofström started a project focusing on monitoring systems for robotized Gas Metal Arc (GMA) welding. This resulted in a research project in which Stefan Adolfsson, Department of Production and Materials Engineering, Lund University / Blekinge Tekniska Högskola, presented in 1998 the doctoral thesis Automatic Quality Monitoring in GMA Welding using Signal Processing Methods. This doctoral thesis presented a Sequential Probability Ratio Test (SPRT) concept for quality monitoring of automatic robotized GMA welding. To create a cost-efficient monitoring system, Industriellt Utvecklingscentrum i Olofström AB presented the idea of using a traditional PC as the system platform, since a traditional PC offers high performance at relatively low cost. The main task of this thesis is to implement the SPRT concept for monitoring automatic robotized GMA welding in the LabVIEW environment on a PC and to evaluate the real-time capacity. A second task of the thesis is to survey similar monitoring systems for robotized GMA welding. The final implementation monitors both the mean and the variance of the weld voltage and the weld current. In total, four different SPRT concepts have been implemented. They are modifications of the SPRT concept presented in the doctoral thesis. Two of the SPRT concepts were developed during this thesis work and are intended to reduce the risk of false alarms caused by natural systematic variations. Since four different SPRT concepts have been implemented, and every single concept monitors both the mean and the variance of the weld voltage and the weld current, there are in total 16 SPRT algorithms working in parallel. The evaluation of the implementation shows that an ordinary PC is sufficient for real-time monitoring and that only two of the four SPRT concepts are suitable. The result of the market survey of comparable monitoring systems indicates that only a small number of comparable welding monitoring systems exist.
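For readers unfamiliar with the SPRT itself, the core test can be sketched as follows (a generic Gaussian mean-shift SPRT with illustrative parameters and thresholds, not the thesis's tuned variants; in continuous monitoring the test is simply restarted after each decision):

```python
import numpy as np

def sprt_mean_shift(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test between N(mu0, sigma^2) and
    N(mu1, sigma^2). Returns ('H0'|'H1', decision index) or ('none', n)."""
    a = np.log(beta / (1 - alpha))        # lower threshold (accept H0)
    b = np.log((1 - beta) / alpha)        # upper threshold (accept H1, alarm)
    llr = 0.0
    for i, x in enumerate(samples):
        # log-likelihood ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= b:
            return "H1", i
        if llr <= a:
            return "H0", i
    return "none", len(samples)

rng = np.random.default_rng(4)
# Hypothetical weld voltage samples after a 1 V mean shift:
voltage = rng.normal(26.0, 0.5, 300)
print(sprt_mean_shift(voltage, mu0=25.0, mu1=26.0, sigma=0.5))
```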
As resources are limited, the radio spectrum becomes congested due to the growth of wireless applications. However, measurements show that most of the licensed spectrum experiences low utilization even in densely used areas. In the effort to improve the utilization of the limited spectrum resources, cognitive radio networks have emerged as a powerful technique to resolve this problem. There are two types of user in cognitive radio networks (CRNs), the primary user (PU) and the secondary user (SU). The CRN enables the SU to utilize the unused licensed frequency of the PU if it finds vacant spectrum or white space (known as opportunistic spectrum access). Alternatively, the SU can transmit simultaneously with the PU, provided that the transmission power of the SU does not cause any harmful interference to the PU (known as spectrum sharing systems). In this thesis work, we study the fundamentals of CRNs and focus on the performance analysis of a single input multiple output (SIMO) system under the spectrum sharing approach. We assume that the secondary transmitter (SU-Tx) has full channel state information (CSI). The SU-Tx can adjust its transmit power so as not to cause harmful interference to the PU while obtaining an optimal transmit rate. In particular, we derive closed-form expressions for the cumulative distribution function (CDF) and the outage probability, and an analytical expression for the symbol error probability (SEP).
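A common way to express the transmit-power adaptation described above (a standard spectrum-sharing formulation under a peak interference constraint $Q$, assumed here for illustration) is
\[
P_s = \min\!\left(P_{\max},\; \frac{Q}{|h_{sp}|^{2}}\right),
\]
where $P_{\max}$ is the SU-Tx power budget and $h_{sp}$ is the channel gain from the secondary transmitter to the primary receiver, so that the interference $P_s |h_{sp}|^2$ never exceeds $Q$.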
Malicious programs have been a serious threat to the confidentiality, integrity and availability of systems. Much research has been done to detect them, and two main approaches have been derived: signature-based detection and heuristic-based detection. These approaches perform well against known malicious programs but cannot catch new ones. Researchers have therefore tried to find new ways of detecting malicious programs; the application of data mining and machine learning is one of them and has shown good results compared to other approaches. A new category of malicious programs, called spyware, has gained momentum. Spyware is particularly dangerous for the confidentiality of the private data of the user of a system, since it may collect the data and send it to a third party. Traditional techniques have not performed well in detecting spyware, so there is a need for new ways to detect it. Data mining and machine learning have shown promising results in the detection of other malicious programs, but they had not yet been used for the detection of spyware. We therefore decided to employ data mining for the detection of spyware. We used a data set of 137 files which contains 119 benign files and 18 spyware files. A theoretical taxonomy of spyware was created, but for the experiment only two classes, benign and spyware, are used. An application, Binary Feature Extractor, has been developed which extracts features, called n-grams, of different sizes on the basis of common feature-based and frequency-based approaches. The number of features was reduced and used to create an ARFF file. The ARFF file is used as input to WEKA for applying machine learning algorithms. The algorithms used in the experiment are: J48, Random Forest, JRip, SMO, and Naive Bayes. 10-fold cross-validation and the area under the ROC curve are used for the evaluation of classifier performance. We performed experiments on three different n-gram sizes, i.e. 4, 5 and 6. The results show that the common feature extraction approach produced better results than the others. We achieved an overall accuracy of 90.5% with an n-gram size of 6 from the J48 classifier. The maximum area under the ROC curve achieved was 83.3% with Random Forest.
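The n-gram feature extraction step can be sketched as follows (a generic byte n-gram counter; the file names, n-gram size and common-feature selection rule are placeholders, not the thesis tool):

```python
from collections import Counter
from pathlib import Path

def byte_ngrams(path, n=6):
    """Return the set of byte n-grams (as hex strings) occurring in a binary file."""
    data = Path(path).read_bytes()
    return {data[i:i + n].hex() for i in range(len(data) - n + 1)}

def common_features(paths, n=6, top_k=500):
    """Common-feature approach (sketch): keep the top_k n-grams occurring in
    the largest number of files, to be used as boolean attributes."""
    document_freq = Counter()
    for p in paths:
        document_freq.update(byte_ngrams(p, n))
    return [gram for gram, _ in document_freq.most_common(top_k)]

# Hypothetical usage: build a boolean feature vector for one file.
# features = common_features(["benign1.exe", "spy1.exe"], n=6)
# vector = [gram in byte_ngrams("sample.exe", 6) for gram in features]
```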
Requirements engineering is the most important phase of the software development process, since it is used to extract requirements from the customers which are then used by the following phases for designing and implementing the system. Because of its importance, this thesis focuses on aspect-oriented requirements engineering (AORE), the first phase in aspect-oriented software development, used for the identification and representation of requirements gathered in the form of concerns. Besides an overall explanation of the aspect-oriented requirements engineering phase, detailed attention is given to a specific activity within the AORE phase called conflict resolution. Several techniques proposed for conflict resolution between aspects are discussed, along with a new idea in the form of an extension of an already proposed conflict resolution model. The need for the extension is justified by means of a case study which is applied to both models, i.e. the original model and the extended model, in order to compare the results.
Power system development is pioneering various micro-grid (μ-grid) concepts in which multiple distributed power generation sources are integrated into a small network serving some or all of the energy needs of participating users. This can provide benefits including reduced energy costs, increased overall energy efficiency, improved environmental performance and improved local electric system reliability. With the fast progression of technology, electric vehicles and electric construction equipment and machinery, for example in road building or quarries, would benefit greatly from electrification, both protecting the environment from detrimental effects and offering substantial potential reductions in energy use and local emissions. These types of sites are often remotely located, and it is very costly to connect to the high voltage utility grid or distribution grid, which is normally far from the site. Hence, it would be advantageous to be able to set up such sites without having to build long and expensive connections to high voltage transmission and distribution grids. In this project we have designed and propose a self-sufficient smart DC micro-grid that uses renewable energy resources to supply electric machinery. This grid is capable of meeting the energy as well as the peak power demands of a machine site while offering the possibility to rely fully on locally produced renewable energy. The project includes price and performance forecasting for solar and wind energy, chargers, grid energy storage systems and micro-grid power electronics. The design is therefore capable of providing an efficient power supply to the site and can meet the demand at peak loads. Grid modeling and simulation (including loads, storage and energy production) are done in MATLAB.
The new paradigm of cooperative communications is a promising way to realize MIMO techniques. In this thesis work, we study the performance of cooperative relay networks in which the transmission from a source to a destination is assisted by one or several relaying nodes employing amplify-and-forward (AF) and decode-and-forward (DF) protocols. The performance of two-way (or bi-directional) AF relay networks, which are proposed to avoid the pre-log factor 1/2 in spectral efficiency, is then investigated. Specifically, exact closed-form expressions for the symbol error rate (SER), outage probability, and average sum-rate of bi-directional AF relay systems in independent but not identically distributed (i.n.i.d.) Rayleigh fading channels are derived. Our analyses are verified by comparison with results from Monte-Carlo simulations.
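For context, the quantity that typically drives such closed-form derivations is the end-to-end SNR of a two-hop AF link, which in its standard form (not specific to this thesis) is
\[
\gamma_{\mathrm{e2e}} = \frac{\gamma_1 \gamma_2}{\gamma_1 + \gamma_2 + 1} \;\approx\; \min(\gamma_1, \gamma_2) \quad \text{at high SNR},
\]
where $\gamma_1$ and $\gamma_2$ are the instantaneous SNRs of the source-relay and relay-destination hops; the SER, outage probability and sum-rate then follow from the distribution of this quantity under the assumed fading.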
In this thesis, we present a new way to enhance the "depth map" image, which we call the fusion of depth images. The goal of our thesis is to enhance depth images through a fusion of different classification methods. For that, we use three similar but different methodologies, the Graph-Cut, Super-Pixel and Principal Component Analysis algorithms, to produce the enhanced output. After that, we compare the enhanced result with the original depth images. The result indicates the effectiveness of our methodology.
The field of 3-D environment reconstruction has been the subject of various research activities in recent years. The applications for mobile robots are manifold. First, for navigation tasks (especially SLAM), the perception of 3-D obstacles has many advantages over navigation in 2-D maps, as is commonly done. Objects hanging above the ground can be recognized, and the robot gains much more information about its operating area, which makes localization easier. Second, in the field of tele-operation of robots, a visualization of the environment in three dimensions helps the tele-operator to perform tasks; therefore, a consistent, dynamically updated environment model is crucial. Third, for mobile manipulation in a dynamic environment, on-line obstacle detection and collision avoidance can be realized if the environment is known. In recent research activities, various approaches to 3-D environment reconstruction have evolved. Two of the most promising methods are FastSLAM and 6-D SLAM. Both are capable of building dense 3-D environment maps on-line. The first one uses a particle filter applied to extracted features, in combination with a robot system model and a measurement model, to reconstruct a map. The second one works on 3-D point cloud data and reconstructs an environment using the ICP algorithm. Both of these methods are implemented in GNU C++. First, FastSLAM is implemented; object-oriented programming is used to build up the particle and extended Kalman filters. Second, 6-D SLAM is implemented; the concept of inheritance in C++ is used to make the implementation of the ICP algorithm as generic as possible. To test our implementation, a mobile robot called Care-O-bot 3 is used. The mobile robot is equipped with a color camera and a time-of-flight camera. Data sets are taken as the robot moves in different environments, and our implementations of FastSLAM and 6-D SLAM are used to reconstruct the maps.
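One ICP iteration can be sketched as below (a generic point-to-point formulation written in Python for readability; the thesis implementation itself is in C++): find nearest-neighbor correspondences, then recover the rigid transform with the SVD-based Kabsch solution.

```python
import numpy as np

def icp_step(source, target):
    """One point-to-point ICP iteration.
    source, target: (N,3) and (M,3) point clouds. Returns R (3x3), t (3,)."""
    # 1. Nearest-neighbor correspondences (brute force for clarity).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d2.argmin(axis=1)]
    # 2. Rigid transform aligning source to its matched points (SVD / Kabsch).
    mu_s, mu_m = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

# Hypothetical usage: iterate until the transform stops changing.
# for _ in range(30):
#     R, t = icp_step(cloud, reference)
#     cloud = cloud @ R.T + t
```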
The main purpose of this thesis is to use modern goal-oriented adaptive methods of Lie group analysis to construct the optimal system of the Black-Scholes equation. We show in this thesis how to obtain all invariant solutions by constructing what has now become so popular, an optimal system of sub-algebras of the main Lie algebra admitted by the Black-Scholes equation. First, we obtain the commutator table of the already calculated symmetries of the Black-Scholes equation. We then follow with the calculation of the transformations of the generators in the Lie algebra L6, which provides a one-parameter group of linear transformations for the operators. Here we make use of the method of Lie equations to solve the partial differential equations. Next, we consider the construction of optimal systems of the Black-Scholes equation, where the method requires a simplification of a vector to a general form under each of the transformations of the generators. Further, we construct the invariant solutions for each member of the optimal system. This study is motivated by the analysis of Lie groups which is being taken to another level by ALGA here at Blekinge Institute of Technology, Sweden. We give practical and in-depth steps and explanations of how to construct the commutator table, the calculation of the transformations of the generators and the construction of the optimal system as well as their invariant solutions. Keywords: Black-Scholes equation, commutators, commutator table, Lie equations, invariant solution, optimal system, generators, Airy equation, structure constant.
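For reference, the equation whose symmetry algebra is being analyzed is the Black-Scholes equation in its standard form (quoted here from the general literature, with $\sigma$ the volatility and $r$ the risk-free rate):
\[
u_t + \tfrac{1}{2}\sigma^{2}x^{2}u_{xx} + r x\,u_x - r u = 0 ,
\]
whose finite Lie algebra of point symmetries is six-dimensional (the L6 referred to above), in addition to the infinite-dimensional part coming from linearity.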
A Practical Course in Differential Equations and Mathematical Modelling is a unique blend of the traditional methods with Lie group analysis, enriched by the author's own theoretical developments. The main objective of the book is to develop new mathematical curricula based on symmetry and invariance principles. This approach helps to make courses in differential equations, mathematical modelling, distributions and fundamental solutions, etc. easy to follow and interesting for students. The book is based on the author's long experience of teaching at Novosibirsk and Moscow universities in Russia, Collège de France, Georgia Tech and Stanford University in the USA, universities in South Africa, Cyprus and Turkey, and Blekinge Institute of Technology (BTH) in Sweden. The new curriculum prepares students for solving modern nonlinear problems and attracts essentially more students than the traditional way of teaching mathematics. The book can be used as a main textbook by undergraduate and graduate students and their teachers in applied mathematics, physics and the engineering sciences.
Wide availability of computing resources at the edge of the network has led to the appearance of new services based on peer-to-peer architectures. In a peer-to-peer network, nodes have the capability to act both as client and as server. They self-organize and cooperate with each other to perform operations such as peer discovery, content search and content distribution more efficiently. The main goal of this thesis is to obtain a better understanding of the network traffic generated by Gnutella peers. Gnutella is a well-known, heavily decentralized file-sharing peer-to-peer network. It is based on open protocol specifications for peer signaling, which enable detailed measurements and analysis down to individual messages. File transfers are performed using HTTP. An 11-day long Gnutella link-layer packet trace collected at BTH is systematically decoded and analyzed. Analysis results include various traffic characteristics and statistical models. The emphasis for the characteristics has been on accuracy and detail, while for the traffic models the emphasis has been on analytical tractability and ease of simulation. To the author's best knowledge this is the first work on Gnutella that presents statistics down to message level. The results show that incoming requests to open a session follow a Poisson distribution. Incoming messages of mixed types can be described by a compound Poisson distribution. Mixture distribution models for message transfer rates include a heavy-tailed component.
Speech quality during communication is generally affected by the surrounding noise and interference. To improve the quality of speech signals and to reduce the amount of disturbing noise, speech enhancement is one of the emerging and most used branches of signal processing. Methods for the reduction of noise from speech signals are continuously being developed; one such method is the Adaptive Gain Equalizer (AGE), a single-channel speech enhancement method with a particular focus on enhancement of speech instead of suppression of noise. Modulation decomposition of speech signals brought the idea of a modulation system, which is useful for modeling speech and other signals. The purpose of this thesis is to implement the AGE within a modulation system for the purpose of enhancing the speech signal by reducing noise. The successful implementation of the system has been validated with different performance measurements, i.e., Signal to Noise Ratio Improvement (SNRI), Mean Opinion Score (MOS) and Spectral Distortion (SD). The system has been tested with male and female speakers and with engine noise (EN), factory noise (FN), Gaussian noise (GN), tonal noise (TN) and impulse noise (IN) at 0 dB, 5 dB, 10 dB and -5 dB Signal to Noise Ratio (SNR). The system provided about 10 dB SNRI for TN, and around 6 dB SNRI for EN and FN. The system makes some compromises on GN and IN, in the sense that it gives good sound but low SNRI. The MOS was between 3 and 4 for all the test cases.
Different diversity techniques such as Maximal-Ratio Combining (MRC), Equal-Gain Combining (EGC) and Selection Combining (SC) are described and analyzed. Two-branch (N=2) diversity systems used for pre-detection combining have been investigated and computed. The statistics of the carrier-to-noise ratio (CNR) and the carrier-to-interference ratio (CIR) without diversity, assuming a Rayleigh fading model, have been examined and then evaluated for diversity systems. The probability of error (p_e) versus CNR and versus CIR has also been obtained. The fading dynamic range of the instantaneous CNR and CIR is reduced remarkably when diversity systems are used [1]. For a given average probability of error, non-diversity systems require a higher average CNR and CIR than diversity systems [1]. The overall conclusion is that maximal-ratio combining (MRC) achieves the best performance improvement compared to the other combining methods. Diversity techniques are very useful for improving the performance of high speed wireless channels for transmitting data and information. The problems considered in this thesis are not new, but I have tried to organize, prove and analyze them in new ways.
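The combining rules compared above can be summarized by their output CNRs (standard expressions for N branches with instantaneous branch CNRs $\gamma_k$ and equal noise power per branch, not results derived in the thesis):
\[
\gamma_{\mathrm{MRC}} = \sum_{k=1}^{N}\gamma_k, \qquad
\gamma_{\mathrm{EGC}} = \frac{1}{N}\Big(\sum_{k=1}^{N}\sqrt{\gamma_k}\Big)^{2}, \qquad
\gamma_{\mathrm{SC}} = \max_{k}\,\gamma_k ,
\]
which makes plain why MRC gives the largest improvement of the three.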
Credit decisions are extremely vital for any type of financial institution, because defaulters can generate huge financial losses. A number of banks use judgmental decisions, meaning credit analysts go through every application separately; other banks use a credit scoring system, or a combination of both. Credit scoring systems use many types of statistical models, but recently professionals have started looking for alternative algorithms that can provide better classification accuracy. A neural network can be a suitable alternative. It is apparent from the classification outcomes of this study that the neural network gives slightly better results than discriminant analysis and logistic regression. It should be noted that it is not possible to draw the general conclusion that neural networks have better predictive ability than logistic regression and discriminant analysis, because this study covers only one dataset. Moreover, it is understood that a "Bad Accepted" generates much higher costs than a "Good Rejected", and the neural network produces fewer "Bad Accepted" than discriminant analysis and logistic regression. So, the neural network achieves a lower cost of misclassification for the dataset used in this study. Furthermore, in the final section of this study, an optimization algorithm (a Genetic Algorithm) is proposed in order to obtain better classification accuracy through the configuration of the neural network architecture. It is also vital to note that the success of any predictive model largely depends on the predictor variables that are selected as model inputs. Some points regarding predictor variable selection must be considered: for example, some specific variables are prohibited in some countries, the variables together should provide the highest predictive strength, and variables may be judged through statistical analysis. This study also covers these standards for input variable selection.
Speech is an elementary source of human interaction. The quality and intelligibility of speech signals during communication are generally degraded by the surrounding noise. Corrupted speech signals therefore need to be enhanced to improve quality and intelligibility. In the field of speech processing, much effort has been devoted to developing speech enhancement techniques in order to restore the speech signal by reducing the amount of disturbing noise. This thesis focuses on a single channel speech enhancement technique that performs noise reduction by spectral subtraction based on minimum statistics. Minimum statistics means that the power spectrum of the non-stationary noise signal is estimated by finding the minimum values of a smoothed power spectrum of the noisy speech signal, thus circumventing the speech activity detection problem. The performance of the spectral subtraction method is evaluated using single channel speech data and a wide range of noise types with various noise levels. This evaluation is used to find optimum method parameter values, thereby improving the algorithm and making it more appropriate for speech communication purposes. The system is implemented in MATLAB and validated by considering different performance measures, namely Signal to Noise Ratio Improvement (SNRI) and Spectral Distortion (SD). The SNRI and SD were calculated for different filter bank settings, such as different numbers of subbands and different decimation and interpolation ratios. The method provides efficient speech enhancement in terms of the SNRI and SD performance measures.
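A compact sketch of the core idea (generic spectral subtraction with a minimum-statistics noise estimate; frame size, smoothing constant, search window and over-subtraction factor are illustrative, not the thesis's tuned values):

```python
import numpy as np

def spectral_subtraction(noisy, frame=256, hop=128, alpha=0.9,
                         win_frames=60, oversub=2.0):
    """Frame-based spectral subtraction with a minimum-statistics noise estimate."""
    window = np.hanning(frame)
    n_frames = 1 + (len(noisy) - frame) // hop
    out = np.zeros(len(noisy))
    smoothed = None
    history = []                               # smoothed power spectra of recent frames
    for i in range(n_frames):
        seg = noisy[i * hop:i * hop + frame] * window
        spec = np.fft.rfft(seg)
        power = np.abs(spec) ** 2
        smoothed = power if smoothed is None else alpha * smoothed + (1 - alpha) * power
        history.append(smoothed.copy())
        history = history[-win_frames:]
        noise_est = np.min(history, axis=0)    # minimum statistics noise estimate
        clean_power = np.maximum(power - oversub * noise_est, 0.05 * power)  # spectral floor
        gain = np.sqrt(clean_power / np.maximum(power, 1e-12))
        out[i * hop:i * hop + frame] += np.fft.irfft(gain * spec, frame) * window
    return out
```

Tracking the minimum of the smoothed power spectrum over a sliding window is what removes the need for an explicit voice activity detector, as described above.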
Explosive growth in wireless technology, driven by developments in digital and RF circuit fabrication, puts serious challenges on wireless system designers and link budget planning. Low transmit power, system coverage and capacity, high data rates, spatial diversity and quality of service (QoS) are the key factors that make future wireless communication systems attractive. Dual-hop relaying is a promising underlying technique for future wireless communication to address such dilemmas. Based on dual-hop relaying, this thesis addresses two scenarios. In the first case the system model employs dual-hop amplify-and-forward (AF) multiple input multiple output (MIMO) relay channels with transmit and receive antenna selection over independent Rayleigh fading channels, where source and destination contain multiple antennas and communicate with each other with the help of a single-antenna relay. It is assumed that the source and destination have perfect knowledge of the channel state information (CSI). Our analysis shows that full spatial diversity order can be achieved with the minimum number of antennas at source and destination, i.e. min{N_s, N_d}. In the second case, the performance of a dual-hop amplify-and-forward (AF) multiple-relay cooperative diversity network with best relay selection over Rayleigh fading channels is investigated, where the source and destination communicate with each other through direct and indirect links. Only the performance of the best relay, which alone participates in the transmission, is investigated. The relay node that achieves the highest SNR at the destination is selected as the best relay. Once again our analysis shows that full diversity order can be achieved with a single relay using fewer resources compared to the regular cooperative diversity system.
Cryptography plays a crucial role in today's society. Given this influence, cryptographic algorithms need to be trustworthy. Cryptographic algorithms such as RSA rely on the problem of prime number factorization to provide confidentiality. Hence, finding a way to make it computationally feasible to find the prime factors of any integer would break RSA's confidentiality.
The approach presented in this thesis explores the possibility of trying to construct φ(n) from n. This enables factorization of n into its two prime numbers p and q through the method presented in the original RSA paper. The construction of φ(n) from n is achieved by analyzing bitwise relations between the two.
While there are some limitations on p and q, this thesis can in favorable circumstances construct about half of the bits in φ(n) from n. Moreover, based on the research, a conjecture is proposed which outlines further characteristics of the relation between n and φ(n).
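The recovery step mentioned in connection with the original RSA paper can be sketched as follows (a standard derivation, independent of the bitwise construction of φ(n) explored in the thesis): given n = pq and φ(n) = (p-1)(q-1), the factors p and q are the roots of a quadratic, since p + q = n - φ(n) + 1.

```python
from math import isqrt

def factor_from_phi(n, phi):
    """Recover p, q from n = p*q and phi = (p-1)*(q-1).
    Since p + q = n - phi + 1, p and q are roots of x^2 - (p+q)x + n = 0."""
    s = n - phi + 1                    # p + q
    disc = s * s - 4 * n               # (p - q)^2
    root = isqrt(disc)
    assert root * root == disc, "inconsistent n, phi"
    p, q = (s + root) // 2, (s - root) // 2
    assert p * q == n
    return p, q

# Small illustrative example (toy primes, not cryptographic sizes):
print(factor_from_phi(3233, 3120))     # 3233 = 61 * 53, phi = 60 * 52 = 3120
```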
This paper presents a modeling and evaluation study of the characteristics of several "classical" Internet applications (SMTP, HTTP and FTP) in terms of user behavior, the nature of the contents transferred and application layer protocol exchanges. Results are reported on measuring, modeling and analysis of application layer traces collected, at both the client and the server end, from different environments such as university networks and commercial Frame Relay networks. The methodologies used for capturing traffic flows as well as for modeling are reported. Statistical models have been developed for diverse application parameters (e.g., HTTP document sizes, FTP file sizes, and SMTP message sizes), which can be useful for building synthetic workloads for simulation and benchmarking purposes. All three applications possess a session oriented structure. Within each session, a number of transactions are performed. For the above mentioned applications, the number of transactions that may occur during a session has also been modeled.
Current complex service systems are usually composed of many other components, which are often external services performing particular tasks. Quality of service (QoS) attributes such as availability, cost and response time are essential to determine the usability and efficiency of such a system. Obviously, the QoS of such a compound system depends on the QoS of its components. However, the QoS of each component is naturally unstable and different each time it is called, due to many factors like network bandwidth, workload, hardware resources, etc. This consequently makes the QoS of the whole system unstable. This uncertainty can be described and represented with probability distributions. This thesis presents an approach to calculate the QoS of the system when the probability distributions of the QoS of each component are provided by the service provider or derived from historical data, along with the structure of their composition. In addition, an analyzer tool is implemented in order to predict the QoS of the given compositions and probability distributions following the proposed approach. The output of the analyzer can be used to predict the behavior of the system to be implemented and to make decisions based on the expected performance. The experimental evaluation shows that the estimation is reliable, with a minimal and acceptable error measurement.
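One simple way to see how component distributions propagate to the composite QoS (a Monte Carlo sketch with made-up distributions and service names, not the analyzer described in the thesis): response times add up for sequential composition and the maximum is taken for parallel composition.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

# Hypothetical component response-time distributions (seconds).
auth     = rng.lognormal(mean=-2.0, sigma=0.5, size=N)
payment  = rng.gamma(shape=2.0, scale=0.05, size=N)
shipping = rng.exponential(scale=0.08, size=N)
invoice  = rng.exponential(scale=0.05, size=N)

# Composition: auth, then payment, then shipping and invoice in parallel.
total = auth + payment + np.maximum(shipping, invoice)

print("mean %.3f s, 95th percentile %.3f s" % (total.mean(), np.percentile(total, 95)))
print("P(total > 0.5 s) = %.3f" % (total > 0.5).mean())
```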
The objective of this thesis is to discuss the usage of fuzzy logic in pattern recognition. There are different fuzzy approaches to recognizing patterns and structure in data, and the fuzzy approach chosen to process the data depends entirely on the type of data. Pattern recognition, as we know, involves various mathematical transforms so as to render the pattern or structure with the desired properties, such as the identification of a probabilistic model which provides an explanation of the process generating the data. With this basic school of thought we plunge into the world of fuzzy logic for the process of pattern recognition. Fuzzy logic, like any other mathematical field, has its own set of principles, types, representations and usages. Hence our job primarily focuses on exploring the ways in which fuzzy logic is applied to pattern recognition and on the knowledge of the results; that is what is covered in the topics that follow. Pattern recognition is the collection of all approaches that understand, represent and process data as segments and features by using fuzzy sets. The representation and processing depend on the selected fuzzy technique and on the problem to be solved. In the broadest sense, pattern recognition is any form of information processing for which both the input and output are different kinds of data: medical records, aerial photos, market trends, library catalogs, galactic positions, fingerprints, psychological profiles, cash flows, chemical constituents, demographic features, stock options, military decisions. Most pattern recognition techniques involve treating the data as a variable and applying standard processing techniques to it.
During the last few years, we have witnessed radio spectrum becoming a valuable and scarce resource due to the increasing demand for multimedia services. However, recent research has shown that most of the available radio spectrum is not used effectively and is wasted. To utilize the radio spectrum effectively, a new technology known as “cognitive radio” has been introduced. In cognitive radio, a secondary user (SU) uses the vacant holes in the licensed spectrum when it is not occupied by a primary user (PU), without causing interference to the PU transmission. Accessing vacant holes in the licensed spectrum without causing interference to the PU is a complicated task. Therefore, alternative spectrum sharing techniques have gained popularity. Using these techniques, an SU can share the licensed spectrum with a PU at the same time without causing interference to the PU transmission. As a result, a secondary user should have an optimal power allocation policy in order to achieve a high transmission rate while keeping the interference caused to the primary user below a threshold value. Under limited spectrum conditions, spectrum sharing relay networks have gained much popularity by providing better reliability than direct transmission. In this thesis, we investigate the performance of an amplify-and-forward (AF) relay network in a spectrum sharing environment. Here, we consider the impact of the primary transmitter on the spectrum sharing system in the presence of a Nakagami-m fading channel, where the fading parameter m (an integer) can be used to model a variety of channel scenarios.
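The interplay between Nakagami-m fading and the interference constraint can be illustrated with a short simulation. The fragment below draws Nakagami-m power gains via the equivalent Gamma distribution and applies a simple per-sample power policy that keeps the instantaneous interference at the primary receiver below a threshold Q; the parameter values and the power policy are assumptions for illustration, not the thesis's actual system model.

```python
# Sketch: Nakagami-m fading gains and an interference-limited secondary power policy.
import numpy as np

rng = np.random.default_rng(1)
m = 2          # integer Nakagami fading parameter (assumed)
omega = 1.0    # average channel power gain (assumed)
Q = 0.1        # interference threshold at the primary receiver (assumed)
N = 100_000

# For a Nakagami-m channel, |h|^2 is Gamma-distributed with shape m and scale omega/m.
g_sp = rng.gamma(shape=m, scale=omega / m, size=N)   # SU-Tx -> PU-Rx link
g_ss = rng.gamma(shape=m, scale=omega / m, size=N)   # SU-Tx -> SU-Rx link

# Per-sample power P = Q / g_sp keeps the instantaneous interference at or below Q.
P = Q / g_sp
rate = np.log2(1 + P * g_ss)   # secondary link rate with noise power normalized to 1
print("mean achievable SU rate (bits/s/Hz):", rate.mean())
```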
Accurate and reliable effort estimation is still one of the most challenging processes in software engineering. There have been a number of attempts to develop cost estimation models, and the evaluation of the accuracy and reliability of those models has gained interest in the last decade. A model can be finely tuned to specific data, but the issue that remains is the selection of the most appropriate model. A model's predictive accuracy is judged by comparing various accuracy measures; the model with the minimum relative error is considered the best fit, and the difference must be statistically significant before the model can be declared the best fit. This practice has evolved into model evaluation: predictive accuracy indicators need to be statistically tested before deciding to use a model for estimation. The aim of this thesis is to statistically evaluate well-known effort estimation models according to their predictive accuracy indicators using two new approaches: bootstrap confidence intervals and permutation tests. In this thesis, the significance of the differences between various accuracy indicators was empirically tested on projects obtained from the International Software Benchmarking Standards Group (ISBSG) data set. We selected projects sized in Un-Adjusted Function Points (UFP) of quality A. Analysis of variance (ANOVA) and regression were then used to form the Least Squares (LS) set, and Estimation by Analogy (EbA) was used to form the EbA set. Stepwise ANOVA was used to build the parametric model, and the k-NN algorithm was employed to obtain analogue projects for effort estimation in EbA. It was found that estimation reliability increased with statistical pre-processing of the data; moreover, the significance of the accuracy indicators was tested not only with standard statistics but also with more complex inferential statistical methods. The decision to select the non-parametric methodology (EbA) for generating project estimates is therefore not made by chance but is statistically supported.
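The two techniques named above can be sketched in a few lines. The fragment below applies a bootstrap confidence interval and a permutation test to the difference in mean magnitude of relative error (MMRE) between two hypothetical estimation models; the effort figures are invented for the example and the exact accuracy indicators and resampling choices in the thesis may differ.

```python
# Sketch: bootstrap CI and permutation test for an MMRE difference between two models.
import numpy as np

rng = np.random.default_rng(42)
actual = np.array([120.0, 340, 80, 560, 210, 95, 430, 150])   # actual effort (hypothetical)
est_A  = np.array([100.0, 400, 90, 500, 260, 70, 380, 170])   # model A estimates
est_B  = np.array([140.0, 300, 60, 650, 180, 120, 520, 130])  # model B estimates

mre_A = np.abs(actual - est_A) / actual
mre_B = np.abs(actual - est_B) / actual
observed_diff = mre_A.mean() - mre_B.mean()

# Bootstrap: resample projects with replacement and recompute the MMRE difference.
boot = []
for _ in range(5000):
    idx = rng.integers(0, len(actual), len(actual))
    boot.append(mre_A[idx].mean() - mre_B[idx].mean())
ci = np.percentile(boot, [2.5, 97.5])

# Permutation test: randomly swap the two models' errors within each project.
perm = []
for _ in range(5000):
    swap = rng.random(len(actual)) < 0.5
    a = np.where(swap, mre_B, mre_A)
    b = np.where(swap, mre_A, mre_B)
    perm.append(a.mean() - b.mean())
p_value = np.mean(np.abs(perm) >= np.abs(observed_diff))

print("observed MMRE difference:", observed_diff)
print("95% bootstrap CI:", ci)
print("permutation p-value:", p_value)
```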
The Universal Mobile Telecommunications System (UMTS) is one of the emerging cellular technologies known as 3G. It supports high-speed data transfer, speech, web browsing, email, video telephony, multimedia and audio streaming. These services are divided into classes depending on their QoS requirements. With the development of these cellular networks a major problem arose: the handover of a call from one cell to another during an ongoing session without dropping the connection to the base station. Many techniques have been developed to cope with this issue. A user's movement is a dynamic process with respect to location: a mobile user can change direction at any time and at any speed, so there must be a mechanism by which the network stays aware of this movement. For this purpose, different types of handover are used, including soft, hard and softer handovers. This thesis investigates the different handovers in the 3G UMTS network, a vital issue for maintaining the user's connection during an ongoing session as the user moves. The investigation is based on the UMTS QoS traffic classes. For this purpose, soft and hard handover techniques are analyzed in different scenarios implemented in the OPNET Modeler. To understand the handover process between the Node B and the user equipment, different statistics are collected.
This paper investigates the opportunities for higher education in Pakistan to gain a competitive advantage in the international community, the link between human capital development and GDP growth in Pakistan, and finally compares HE policies in Pakistan and the EU. We look at empirical evidence and statistical data to investigate how endogenous growth affects GDP per capita, with Pakistan as the case study. The empirical studies show that employing a human capital variable in the endogenous production function does not yield favorable results for economic growth output. This leads us to believe that the measuring criteria for human capital have to be re-evaluated, as several factors affect the educational systems in different parts of the world; see Barro (2001), Qaisar (2007) and De la Fuente and Doménech (2002). Hence the definition of human capital can vary from region to region in order to obtain accurate results from the endogenous production function. We computed Pearson's correlation on seven years of data for GDP growth, GDP per capita and GDP (PPP) against various variables, including higher education enrollment rates and the labor force in Pakistan. The results show little or no correlation in all experiments, which further validates the empirical research of Qaisar Abbas. These results confirm the theory that higher education growth cannot simply be measured in terms of enrollment rates; other variables must be included in the equation. These variables can vary from country to country, and in a multi-ethnic country like Pakistan measuring them is a complex task due to heterogeneous environments. Empirical evidence indicates a link between human capital and spatial heterogeneity in Pakistan: unequal opportunities are defeating the advantages of existing supplies of human capital. Minimizing educational inequalities enables the poor to receive more of the benefits of economic growth, which in turn allows the country's growth rates to increase. A public survey was conducted to investigate the general public's awareness of and attitude towards higher education as a key to quality of life, and the opportunities in higher education available to the common man in Pakistan. Overall, 280 respondents from across Pakistan were engaged. The survey results show a strong sense of awareness among the general public that higher education is the key to economic revival in Pakistan. Despite this motivation for educational growth, various elements make pursuing a higher degree in Pakistan unattractive. We look at the survey responses regarding acceptance of a Pakistani degree against a foreign degree and find that the educational institutions of Pakistan are largely unattractive to our prospective future supplies of human capital. In light of these findings and research studies, we identify a few areas in which educational institutes can shape their internal and external environments to meet the challenges posed to them. In the second part of this dissertation, we perform an analytical review of the background and present status of EU higher education policies in contrast to Pakistani higher education policies. This dissertation reveals a sharp contrast between the history, plans and implementation of HE policies in the EU and Pakistan.
In the EU, dynamic policy making in the light of the intergovernmental Bologna Process and the Lisbon Strategy shows how HE has evolved in the EU and provided a roadmap to the Lisbon/Bologna declarations [2]. In Pakistan, we can witness only enthusiastic plans lacking the force of implementation and backed by unrealistic economic forecasts, which ultimately played a major role in policy failure [3][4]. A detailed analysis and comparison between the EU and Pakistan is performed in order to identify benchmarks for Pakistan. We find a need for exchange programs at all levels in Pakistan in order to establish a knowledge-based community, which in turn can be expanded in collaboration with other communities and possibly with the EU within the Bologna Process. At the end of the dissertation we conclude that the existing theory of human capital growth has strong relevance to the field of higher education, as indicated by our experiments and the empirical evidence from Qaisar, Jamal and Hasan. The survey results support the view that economic growth is severely hampered by insufficient supplies of human capital; the studies and survey results further support the fact that unequal opportunities and a lack of financial aid for students are defeating our cause of utilizing our demographic dividend, the working-age share of human capital, before we reach 2050 and become one of the most populous nations in the world. Empirical studies tell us that at least 40% of the development in East Asian countries can be attributed to their human capital (demographic dividend) [1]. Pakistan has to capitalize on her human capital stock in order to translate the opportunity of a demographic dividend into global economic power. We briefly look into the spatial heterogeneity aspect of human capital growth and knowledge spillovers as possible solutions to minimize the silo culture within regions. This dissertation is a non-technical review of endogenous growth theories and their application across countries and in Pakistan. We conclude by suggesting changes in the internal and external environments of higher education institutes and a re-evaluation of higher education policy in Pakistan, and we identify certain areas of improvement which the Government of Pakistan should consider.
In today's film industry, the work with sound in film is becoming increasingly important, as the audience expects not only a visual effect but a complete experience. For a film to convey the right feeling to the audience, the visuals, the music and the sound must work together as well as possible. It is important that the sound helps the film bring out the feeling the director wants to convey. For the sound of a film to help the director convey a message or a feeling, it is important that the work with sound is also given room in the film production. This work is a comparison between the foley (footstep) work at the sound company Europa Foley and the book The foley grail. It describes how the foley process for a film production at Europa Foley can proceed and how it differs from the foley work described in The foley grail. The main differences are how the studio is set up and how the team works on the foley for a film production. In addition, the reflection takes up a similar phenomenon that I experienced during my work at Europa Foley and that the author Ament also describes in The foley grail.