Cognitive radio is an innovative technology that allows secondary (unlicensed) users to share the spectrum with licensed primary users. For maximum utilization of the spectrum, spectrum sensing is an important issue in cognitive radio networks. A cognitive user under extreme shadowing and channel fading cannot sense the primary licensed user's signal correctly, and thus, to improve the performance of spectrum sensing, collaboration between secondary users is required. In collaborative spectrum sensing the observation of each secondary user is received by a base station acting as a central entity, where a final conclusion about the presence or absence of the primary user signal is made using a particular decision and fusion rule. Spatially correlated shadowing degrades collaborative spectrum sensing performance, and thus an optimum set of secondary users must be selected, not only to improve sensing performance but also to lessen the processing overhead of the central entity. A particular situation is depicted in this project where, according to some performance parameters, those optimum secondary users that have enough spatial separation and a high average received SNR are first selected using a Genetic Algorithm, and collaboration among these optimum secondary users is then used to evaluate the performance. The collaboration of the optimal secondary users, providing a high probability of detection and a low probability of false alarm, is compared with the collaboration of all the secondary users available in that radio environment. In the end it is concluded that collaboration of the selected optimum secondary users provides better performance than the collaboration of all the available secondary users.
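The fusion step described above can be sketched with the widely used OR rule, under which the central entity declares the primary user present if any collaborating user reports a detection. The abstract does not name its specific fusion rule, so the OR rule and the numeric values below are illustrative assumptions:

```python
# Hedged sketch: cooperative spectrum sensing with the common OR fusion rule.
# The specific decision/fusion rule of the project is not stated; OR fusion
# and the per-user probabilities here are illustrative assumptions.

def or_fusion(pd_list, pf_list):
    """Cooperative detection / false-alarm probabilities under OR fusion:
    the centre declares 'PU present' if ANY secondary user detects it."""
    miss_all = 1.0        # probability that every user misses
    quiet_all = 1.0       # probability that no user false-alarms
    for pd, pf in zip(pd_list, pf_list):
        miss_all *= (1.0 - pd)
        quiet_all *= (1.0 - pf)
    return 1.0 - miss_all, 1.0 - quiet_all

# Three collaborating users with individual sensing performance:
qd, qf = or_fusion([0.6, 0.7, 0.5], [0.05, 0.05, 0.05])
print(qd, qf)
```

With OR fusion the cooperative detection probability always exceeds the best individual one, at the cost of a higher cooperative false-alarm rate, which is one reason selecting a good subset of users matters.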
Nowadays, electronic devices are powered by batteries. A battery's voltage varies as it discharges, while the device requires a constant supply voltage; hence a voltage regulator is necessary. The Buck Converter is a kind of voltage regulator which provides a constant output voltage regardless of the input voltage. This thesis is about the Buck Converter's efficiency and the importance of efficiency in achieving good results when designing one.
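As a minimal illustration of the quantities involved, the ideal (lossless, continuous-conduction) buck-converter relations can be sketched as follows; the component values are illustrative, not taken from the thesis:

```python
# Minimal sketch of ideal buck-converter relations (illustrative values only).

def buck_duty_cycle(v_in, v_out):
    """For an ideal (lossless) buck converter in continuous conduction,
    V_out = D * V_in, so the required duty cycle is D = V_out / V_in."""
    return v_out / v_in

def efficiency(p_out, p_in):
    """Converter efficiency: output power divided by input power."""
    return p_out / p_in

d = buck_duty_cycle(v_in=12.0, v_out=5.0)   # duty cycle needed for 12 V -> 5 V
eta = efficiency(p_out=4.5, p_in=5.0)       # example loss figures, eta = 0.9
print(d, eta)
```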
With the upsurge in the number of available smartphones, tablet PCs, etc., most users find it easy to access Internet services using mobile applications. It has been a challenging task for mobile application developers to choose suitable security types (types of authentication, authorization, security protocols, cryptographic algorithms, etc.) for mobile applications. Choosing an inappropriate security type for a mobile application may lead to performance degradation and vulnerabilities in the application. Choosing a single alternative among a set of alternatives with multiple criteria is a challenging decision-making task, since it is hard to know which alternative is the better decision. Mobile application developers can incorporate Multi-Criteria Decision Making (MCDM) models to choose a suitable security type for a mobile application; such a decision model enhances the developers' ability to decide on and set the required security types for the application. In this thesis, we discuss different types of MCDM models that have been applied in the IT security area and the scope of applying MCDM models in the application security area. A literature review and an evaluation of the selected decision models give a detailed overview of how to use them to provide application security.
In his extensive work of 1884 on the group classification of ordinary differential equations, Lie performed, inter alia, the group classification of the particular type of second-order equations y″ = F(x, y). In the present paper we extend Lie's classification to the third-order equations y‴ = F(x, y, y′).
It is known that the classification of third-order evolutionary equations with constant separant possessing a nontrivial Lie-Bäcklund algebra (in other words, integrable equations) results in the linear equation, the KdV equation and the Krichever-Novikov equation. The first two of these equations are nonlinearly self-adjoint. This property allows one to associate conservation laws of the equations in question with their symmetries. The problem of the nonlinear self-adjointness of the Krichever-Novikov equation has remained unsolved until now. In the present paper we solve this problem and find the explicit form of the differential substitution providing the nonlinear self-adjointness.
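For reference, standard representatives of the three equations named above (up to equivalence transformations; normalizations of the coefficients vary between sources) are:

```latex
\begin{align}
  u_t &= u_{xxx}, && \text{(linear)}\\
  u_t &= u_{xxx} + 6\,u\,u_x, && \text{(KdV)}\\
  u_t &= u_{xxx} - \frac{3}{2}\,\frac{u_{xx}^2}{u_x} + \frac{P(u)}{u_x},
      && \text{(Krichever--Novikov)}
\end{align}
```

where $P(u)$ is a polynomial of degree at most four.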
A fourth-order non-linear evolutionary partial differential equation containing several arbitrary functions of the dependent variable is considered. This equation arises as a generalization of various non-linear models describing non-linear heat diffusion, the dynamics of thin liquid films, etc. Equivalence transformations give more flexibility to the unified model. We determine the generators of the equivalence group and use them to specify certain types of the arbitrary functions for which the model equation has additional symmetries, and hence admits non-trivial group invariant solutions.
Four time-fractional generalizations of the Kompaneets equation are considered. Group analysis is performed for physically relevant approximations. It is shown that all the approximations have nontrivial symmetries and conservation laws. The symmetries are used for constructing group invariant solutions, whereas the conservation laws allow one to find non-invariant exact solutions.
The process of heating of a thin layer located between two vibrating surfaces is studied. Energy is lost due to viscous or dry friction. The optimal values of shear viscosity and friction corresponding to maximum energy loss are determined. The resonant behavior of the loss must be taken into account in the description of the "slow dynamics" of rocks and materials exposed to high-intensity seismic or acoustic irradiation, as well as in various technologies. Bonding of materials by linear friction welding, widely used in propulsion engineering, exemplifies such a technology.
The simplest method of integration of second-order differential equations, using Lie's canonical forms of two-dimensional algebras, is well known. We propose a generalization of this method to the case of integrating a second-order differential equation with a small parameter having two approximate symmetries. The solution of this problem reduces to the following: 1) classifying approximate Lie algebras with two essential operators (as a result, seven different types of such Lie algebras have been obtained); 2) constructing the canonical forms of the basic operators of the non-similar algebras of each type for their realization in R2; 3) setting up the general forms of the invariant equations and formulas for their approximate solutions. Similar problems are solved for systems of two ordinary differential equations with two approximate symmetries. In this way we have constructed representations of the non-similar approximate Lie algebras in R3.
The method of integration of second-order ordinary differential equations with two-dimensional Lie symmetry algebras, by reducing the basic symmetries to canonical forms, is extended to second-order equations with a small parameter for their approximate integration using two essential approximate symmetries. Canonical forms of the basic operators of the corresponding approximate Lie algebras Lr, r = 2, 3, 4, as well as the general forms of the invariant differential equations and their solutions, are presented. Similar problems are also solved for systems of two first-order ordinary differential equations with two approximate symmetries.
In this thesis, the performance of the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter using a stereo vision system is evaluated with respect to overcoming label discontinuity and achieving robust tracking in an Intelligent Vision Agent System (IVAS). This filter is widely used in multiple-target tracking applications such as surveillance, human tracking and radar. A pair of cameras is used to capture the left and right image sequences in order to extract the 3-D coordinates of the targets' positions in the real-world scene. The 3-D trajectories of the targets are tracked by the GM-PHD filter. Many tracking algorithms fail to simultaneously maintain tracking stability and target label continuity when one or more targets are hidden from the cameras' view for a while. The GM-PHD filter performs well in tracking multiple targets; however, label continuity is not maintained satisfactorily in some situations, such as full occlusion and crossing targets. In this project, the label continuity of the targets is guaranteed by a new labeling method, and the simulations show satisfactory results. A random-walk motion is used to validate the ability of the algorithm to track targets and maintain their labels. In order to evaluate the performance of the GM-PHD filter, a 3-D spatial test motion model is introduced, in which two target trajectories are generated such that either occlusion or crossing occurs in some time intervals. Two key parameters, angular velocity and motion speed, are then used to evaluate the performance of the algorithm. The simulation results for two moving targets in occlusion and crossing show that the proposed system not only tracks them robustly but also maintains the label continuity of the two targets.
In many application domains, such as weather forecasting, robotics and machine learning, we need to model, predict and analyze the evolution of periodic systems. For instance, time-series applications that follow periodic patterns appear in climatology, where CO2 emissions and temperature changes follow periodic or quasi-periodic patterns. Another example is in robotics, where the joint angle of a rotating robotic arm follows a periodic pattern. It is often very important to make long-term predictions of the evolution of such systems. For modeling and prediction purposes, Gaussian processes are powerful methods which can be adjusted based on the properties of the problem at hand. Gaussian processes belong to the class of probabilistic kernel methods, where the kernels encode the characteristics of the problem into the model. In the case of systems with periodic evolution, taking the periodicity into account can simplify the problem considerably, and a Gaussian process model can account for the periodicity by using a periodic kernel. Long-term predictions need to deal with uncertain inputs, which are expressed by a distribution rather than a deterministic point. Unlike prediction at deterministic points, prediction at uncertain inputs is analytically intractable for Gaussian processes. There are, however, approximation methods, such as moment matching, that deal with the uncertainty in analytic closed form, but only some particular kernels allow for analytic moment matching, and the standard periodic kernel is not one of them. This work presents an analytic approximation method for long-term forecasting in periodic systems. We present a different parametrization of the standard periodic kernel which allows us to approximate moment matching in analytic closed form, and we evaluate our approximate method on different periodic systems.
The results indicate that the proposed method is valuable for the long term forecasting of periodic processes.
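As a sketch of the starting point of this work, the standard periodic covariance function (in MacKay's form) can be implemented as follows; the hyperparameter values are illustrative, not taken from the thesis:

```python
import numpy as np

def periodic_kernel(x1, x2, variance=1.0, lengthscale=1.0, period=2 * np.pi):
    """Standard periodic covariance (MacKay's form):
    k(x, x') = s^2 * exp(-2 * sin^2(pi * (x - x') / p) / l^2)."""
    d = np.subtract.outer(x1, x2)          # pairwise lags
    return variance * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2
                             / lengthscale ** 2)

x = np.linspace(0.0, 4.0 * np.pi, 50)
K = periodic_kernel(x, x)

# The kernel depends only on the lag and repeats with the period:
assert np.allclose(K, K.T)                 # valid covariance matrices are symmetric
assert np.isclose(periodic_kernel(np.array([0.0]),
                                  np.array([2 * np.pi]))[0, 0], 1.0)
```

Because the lag enters only through sin², points one full period apart are perfectly correlated, which is exactly the property that makes this kernel suitable for periodic systems, and also what makes its moment matching analytically awkward.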
During the past several years, fuzzy control has emerged as one of the most active and fruitful areas of research in the applications of fuzzy set theory, especially in the realm of industrial processes, which do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relation. In this dissertation, after describing the advantages of fuzzy control, we verify the equation of motion s = vt for an automobile, taking distance and speed as inputs and time as the output. A hotel model is also discussed, with two discrete inputs, food and service quality, and one continuous output, the percentage of guests. At the end a short description of an industrial application is added.
IPTV, or Internet Protocol Television, is a system where digital television is delivered to the end user using the Internet Protocol. It relies on the same technologies that are used for computer networks and adds new possibilities, such as video on demand, on top of traditionally broadcast TV. Video content for IPTV is typically compressed using MPEG-2 or H.264 (MPEG-4 Part 10) compression and sent in an MPEG-2 transport stream over IP. Since TV is a real-time service, packets are delivered using a simple unreliable transmission model, and packets may arrive out of order or be lost. Lost packets are normally not re-transmitted, since they would in any case arrive too late to be useful, and packet loss will thus decrease the perceived quality for the end user. This Master's thesis is an investigation of the parameters that affect the perceived quality of an MPEG-2 Transport Stream (TS). The aim of the thesis is to develop an objective parametric model that can estimate the perceived quality at the transport layer. The thesis work includes experimentation performed on High Definition (HD) video sequences with various bit rates and packet loss ratios (PLR).
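One transport-layer quantity such a parametric model can draw on is the number of continuity-counter gaps in the TS: each 188-byte TS packet carries a 4-bit continuity counter that increments modulo 16 per PID, so a gap indicates loss. The sketch below, with synthetic packets, illustrates the general technique rather than the thesis's actual measurement tool:

```python
# Hedged sketch: estimating packet loss in an MPEG-2 transport stream by
# counting continuity-counter (CC) gaps per PID.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def count_cc_discontinuities(ts_bytes):
    """Count CC gaps across all PIDs (a rough loss proxy; duplicate packets
    and adaptation-field-only packets are ignored for simplicity)."""
    last_cc = {}
    gaps = 0
    for i in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_bytes[i:i + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue                      # lost sync; a real analyser would resync
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
        cc = pkt[3] & 0x0F                      # 4-bit continuity counter
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            gaps += 1
        last_cc[pid] = cc
    return gaps

def make_pkt(pid, cc):
    """Build a minimal synthetic TS packet for testing."""
    pkt = bytearray(TS_PACKET_SIZE)
    pkt[0] = SYNC_BYTE
    pkt[1] = (pid >> 8) & 0x1F
    pkt[2] = pid & 0xFF
    pkt[3] = 0x10 | cc        # payload-only adaptation control + CC
    return bytes(pkt)

# Two packets on PID 0x100 with CCs 0 then 2 -> one gap.
stream = make_pkt(0x100, 0) + make_pkt(0x100, 2)
print(count_cc_discontinuities(stream))  # -> 1
```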
A basic mathematical analysis of how financial markets work and of different valuation models, such as the Stochastic Market Price Estimator, a valuation model created by the author.
This thesis investigates the performance of cognitive radio relay networks (CRRN) in Rayleigh fading channels under various power constraints. A spectrum sharing approach is considered, whereby a secondary user (SU) may be allowed to transmit simultaneously with a primary user (PU) as long as the SU's interference to the PU remains below a tolerable level. In addition, the SU has to meet certain quality of service (QoS) constraints of its own link. To support these QoS constraints, the maximal data rate that can be reliably transmitted with an arbitrarily small probability of error is found. It is observed that this capacity is affected by the channel quality and by the interference limit allowed by the PU. Ergodic capacity and outage capacity, two well-known capacity measures, are analysed for the CRRN under interference power constraints. This thesis also finds the effective capacity for the CRRN, a link-layer channel model that captures the effect of channel fading on the queuing behaviour of the link. Effective capacity under joint interference and secondary-transmitter power constraints is also investigated, and this way of analysing capacity under joint interference and transmit power constraints is extended to the ergodic and outage capacities. It is observed that the capacity is governed by the minimum of the transmit power and interference power constraints. Monte Carlo simulations are carried out to support the theoretical results obtained in this thesis.
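The interplay between the peak transmit-power and interference constraints can be illustrated with a small Monte Carlo estimate of the ergodic capacity of a spectrum-sharing link; the notation and parameter values below are assumptions for illustration, not the thesis's exact system model:

```python
import math
import random

# Hedged Monte Carlo sketch: ergodic capacity of an SU link under a peak
# transmit-power constraint p_max and an interference constraint q at the PU
# receiver. Channel power gains are unit-mean exponential (Rayleigh fading);
# the noise power is normalized to one.

def ergodic_capacity(p_max, q, n_samples=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        g_ss = rng.expovariate(1.0)    # SU-Tx -> SU-Rx power gain
        g_sp = rng.expovariate(1.0)    # SU-Tx -> PU-Rx power gain
        p = min(p_max, q / g_sp)       # power limited by the tighter constraint
        total += math.log2(1.0 + p * g_ss)
    return total / n_samples

print(ergodic_capacity(p_max=10.0, q=1.0))
```

The `min(p_max, q / g_sp)` term reflects the observation above that capacity is governed by whichever of the two constraints is tighter in each fading state.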
We show that the OQT is a precursor to geometric quantization.
The local expressions of a Lagrangian half-form on a quantized Lagrangian submanifold of phase space are the wavefunctions of quantum mechanics. We show that one recovers Maslov's asymptotic formula for the solutions to Schrödinger's equation if one transports these half-forms by the flow associated with a Hamiltonian H. We then consider the case when the Hamiltonian flow is replaced by the Bohmian flow, and are led to the conclusion that the use of Lagrangian half-forms leads to a quantum mechanics on phase space.
The notion of phase plays an essential role in both semiclassical and quantum mechanics. But what exactly is a phase, and how does it change with time? It turns out that the most universal definition of a phase can be given in terms of Lagrangian manifolds by exploiting the properties of the Poincaré-Cartan form. Such a phase is defined not in configuration space but in phase space, and is thus insensitive to the appearance of caustics. Surprisingly enough, this approach allows us to recover the Heisenberg-Weyl formalism without invoking commutation relations for observables.
We replace the usual heuristic notion of a quantum cell by that of a 'quantum blob', which does not depend on the dimension of phase space. Quantum blobs, which are defined in terms of symplectic capacities, are canonical invariants. They allow us to prove an exact uncertainty principle for semiclassically quantized Hamiltonian systems.
We propose a definition of quantum cells which is invariant under symplectic transformations. We apply this notion to the study of the positivity properties of the Wigner and Husimi functions, which allows us to make precise and improve known results.
The cohomological interpretation of the indices of Robbin and Salamon (with S. de Gosson), Jean Leray '99 Conference Proceedings, Math. Phys. Studies 4, Kluwer Academic Press, 2003.
We show, using the symplectically invariant notion of a 'quantum blob', that it is possible to attach a canonical optimal Gaussian pure state to an arbitrary quantum state. When at least one pair of conjugate variables satisfies the minimum uncertainty condition, the associated Gaussian is uniquely determined up to an overall phase factor.
This book is devoted to a symplectic approach to classical and quantum mechanics.
We define a Maslov index for symplectic paths by using the properties of Leray's index for pairs of Lagrangian paths. Our constructions are purely topological, and the index we define satisfies a simple system of five axioms. The fifth axiom establishes a relation between the spectral flow of a family of symmetric matrices and the Maslov index.
We compare the indices of Robbin, Salamon, and McDuff with the cohomological index defined by Leray and extended by the author.
We study the relation between the complete Maslov index defined by Leray and the author, and the Lagrangian path intersection index defined by Robbin and Salamon, and used by McDuff and Salamon in their study of symplectic topology.
We study the Maslov index of the monodromy matrix of a periodic Hamiltonian orbit, substantially extending the results of other authors.
Generalised Mersenne Numbers (GMNs) were defined by Solinas in 1999 and feature in the NIST (FIPS 186-2) and SECG standards for use in elliptic curve cryptography. Their form is such that modular reduction is extremely efficient, thus making them an attractive choice for modular multiplication implementation. However, the issue of residue multiplication efficiency seems to have been overlooked. Asymptotically, using a cyclic rather than a linear convolution, residue multiplication modulo a Mersenne number is twice as fast as integer multiplication; this property does not hold for prime GMNs, unless they are of Mersenne's form. In this work we exploit an alternative generalisation of Mersenne numbers for which an analogue of the above property - and hence the same efficiency ratio - holds, even at bitlengths for which schoolbook multiplication is optimal, while also maintaining very efficient reduction. Moreover, our proposed primes are abundant at any bitlength, whereas GMNs are extremely rare. Our multiplication and reduction algorithms can also be easily parallelised, making our arithmetic particularly suitable for hardware implementation. Furthermore, the field representation we propose also naturally protects against side-channel attacks, including timing attacks, simple power analysis and differential power analysis, which is essential in many cryptographic scenarios, in contrast to GMNs.
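The cheap reduction that motivates the whole construction can be sketched for a plain Mersenne modulus M = 2^p - 1: since 2^p ≡ 1 (mod M), the high bits of a product can simply be folded back onto the low bits. This is a plain-Python illustration, not the authors' optimised arithmetic:

```python
# Hedged sketch: reduction modulo a Mersenne number M = 2**p - 1 by "folding".
# Because 2**p ≡ 1 (mod M), division is replaced by a shift, a mask and an add.

def mersenne_reduce(x, p):
    """Compute x mod (2**p - 1) without division, for x >= 0."""
    m = (1 << p) - 1
    while x > m:
        x = (x >> p) + (x & m)   # x = hi*2**p + lo  ->  hi + lo (same residue)
    return 0 if x == m else x    # the value m itself represents zero

print(mersenne_reduce(1000, 7))  # 1000 mod 127 -> 111
```

The loop runs only a couple of iterations even for a double-length product, which is why Mersenne-like moduli are attractive in cryptographic arithmetic.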
In this chapter, applications of group analysis to delay differential equations are considered. Many mathematical models in biology, physics and engineering, where there is a time lag or aftereffect, are described by delay differential equations. These equations are similar to ordinary differential equations, but their evolution involves past values of the state variable. For the sake of completeness, the chapter starts with a short introduction to the theory of delay differential equations. The mathematical background of these equations is followed by a section which deals with the definition of an admitted Lie group for them, together with some examples. The purpose of the next section is to give a complete group classification of a second-order delay ordinary differential equation with respect to admitted Lie groups. A reasonable generalization of the definition of an equivalence Lie group to delay differential equations is considered in the following section. The last section of the chapter is devoted to an application of the developed theory to the reaction-diffusion equation with delay.
In this chapter an introduction to applications of group analysis to equations with nonlocal operators, in particular integro-differential equations, is given. The best-known integro-differential equations are the kinetic equations, which form a mathematical basis of the kinetic theories of rarefied gases, plasma, radiation transfer and coagulation. Since these equations are directly associated with fundamental physical laws, there is special interest in studies of their solutions. The first section of this chapter contains a retrospective survey of different methods for constructing symmetries and finding invariant solutions of such equations. The presentation of the methods is carried out using simple model equations of small dimensionality, allowing the reader to follow the calculations in detail. In the next section, the classical scheme of the construction of the determining equations of an admitted Lie group is generalized to equations with nonlocal operators. In the concluding sections of this chapter, the developed regular method of obtaining admitted Lie groups is illustrated by applications to some known integro-differential equations.
The first chapter is a brief but sufficiently comprehensive introduction to the methods of Lie group analysis of ordinary and partial differential equations. The chapter presents basic concepts from the theory: continuous transformation groups, their generators, Lie equations, groups admitted by differential equations, integration of ordinary differential equations using their symmetries, group classification and invariant solutions of partial differential equations. New trends in modern group analysis, such as the theory of Lie-Bäcklund transformation groups and approximate groups, are also reflected. The intention of the chapter is to give the basic ideas of classical and modern group analysis to beginning readers and to provide useful material for advanced specialists.
This chapter is devoted to a group analysis of the Vlasov-Maxwell and related types of equations. These equations form the basis of the collisionless plasma kinetic theory, and are also applied in gravitational astrophysics, in shallow-water theory, etc. Nonlocal operators in these equations appear in the form of functionals defined by integrals of the distribution functions over the momenta of particles. In the opening sections the plasma kinetic theory equations are introduced and the way of looking at the symmetries of nonlocal equations is described. Much of the importance of the approach used in this chapter for calculating symmetries stems from the procedure of solving the determining equations using variational differentiation. The set of symmetries obtained in the sections that follow comprises symmetries for the Vlasov-Maxwell equations of non-relativistic and relativistic electron and electron-ion plasmas in both the one- and three-dimensional cases, and symmetries for the Benney equations. In the concluding sections of this chapter the procedure for symmetry calculation and the renormalization-group algorithm go hand in hand, with illustrations from plasma kinetic theory, plasma dynamics and nonlinear optics that demonstrate the potential of the method for constructing analytic solutions to nonlocal problems of nonlinear physics.
This chapter deals with applications of the group analysis method to stochastic differential equations. These equations are often obtained by including random fluctuations in differential equations which have been deduced from phenomenological or physical considerations. In contrast to deterministic differential equations, only a few attempts to apply group analysis to stochastic differential equations can be found in the literature. It is worth noting that this theory is still developing. Before defining an admitted symmetry for stochastic differential equations, an introduction to the theory of this type of equation is given. The introduction includes a discussion of stochastic integration, the stochastic differential and the change of variables (Itô formula) in stochastic differential equations. Applications of the Itô formula are considered in the next section, which deals with the linearization problem. The Itô formula and the change of time in stochastic differential equations are the main tools for defining admitted transformations. After introducing an admitted Lie group and material supporting the introduced definition, some examples of applications of the given definition are studied.
The chapter deals with applications of the group analysis method to the full Boltzmann kinetic equation and some similar equations. These equations form the foundation of the kinetic theory of rarefied gases and coagulation. They typically include special integral operators with quadratic nonlinearity and multiple kernels, which are called collision integrals. Calculations of the 11-parameter Lie group G11 admitted by the full Boltzmann equation with an arbitrary intermolecular potential, and of its extensions for power potentials, are presented. The isomorphism found between these Lie groups and the Lie groups admitted by the ideal gas dynamics equations allowed one to obtain an optimal system of admitted subalgebras and to classify all invariant solutions of the full Boltzmann equation. For equations similar to the full Boltzmann equation, complete admitted Lie groups are derived by solving the determining equations. The corresponding optimal systems of admitted subalgebras are constructed and representations of all invariant solutions are obtained.
In the early 1990s, Volvo Car Corporation in Olofström started a project focusing on monitoring systems for robotized Gas Metal Arc (GMA) welding. This resulted in a research project in which Stefan Adolfsson, Department of Production and Materials Engineering, Lund University / Blekinge Tekniska Högskola, presented in 1998 the doctoral thesis Automatic Quality Monitoring in GMA Welding using Signal Processing Methods. This doctoral thesis presented a Sequential Probability Ratio Test (SPRT) concept for quality monitoring of automatic robotized GMA welding. To create a cost-efficient monitoring system, Industriellt Utvecklingscentrum i Olofström AB proposed using a traditional PC as the system platform, since a traditional PC offers high performance at relatively low cost. The main task of this thesis is to implement the SPRT concept for monitoring automatic robotized GMA welding in the LabVIEW environment on a PC and to evaluate its real-time capacity. A second task is to survey similar monitoring systems for robotized GMA welding. The final implementation monitors both the mean and the variance of the weld voltage and the weld current. In total, four different SPRT concepts have been implemented; they are modifications of the SPRT concept presented in the doctoral thesis. Two of the SPRT concepts were developed during this thesis work and are intended to reduce the risk of false alarms caused by natural systematic variations. Since four different SPRT concepts have been implemented and every concept monitors both the mean and the variance of the weld voltage and the weld current, there are in total 16 SPRT algorithms working in parallel. The evaluation of the implementation shows that an ordinary PC is sufficient for real-time monitoring and that only two of the four SPRT concepts are suitable.
The result of the market research indicates that only a small number of comparable welding monitoring systems exist.
Dempster-Shafer theory is nowadays used to model epistemic (subjective) uncertainty as an alternative to the traditional probabilistic approach. A few decades back, Bayesian probability theory was used for this purpose, to handle problems encountered in different engineering disciplines. Since Bayesian theory primarily needs precise measurements from experiments, this requirement restricted its application to problems with weak and sparse information and urged further research to explore new techniques. In the meantime the concept of imprecise probability came to light and different formalisms ensued, among which Dempster-Shafer theory is a prominent framework. In this thesis Dempster-Shafer theory (D-S theory), a data fusion technique, is discussed along with subsequent improvements for combining conflicting information in the D-S structure. In mining engineering, during the underground extraction of minerals (coal in particular), occurrences of natural hazards such as mine fires, gas outbursts, flooding and subsidence of the overlying strata are rare but highly uncertain, and thus provide only weak information about the system; the recorded history of such events is short. To wrestle with such problems, the Dempster-Shafer formalism is a strong and effective tool. In this thesis the complete modus operandi of the Dempster-Shafer formalism is described with the help of illustrative examples. Using this technique, the quality of mine air in a coal fire zone is ascertained and a Mine Fire Index (MFI) is developed which is easy to use even for the lower hierarchy of the mine management and is helpful in decision making.
As resources are limited, the radio spectrum is becoming congested due to the growth of wireless applications. However, measurements show that most of the licensed spectrum experiences low utilization, even in densely populated areas. In the effort to improve the utilization of the limited spectrum resources, cognitive radio networks (CRNs) have emerged as a powerful technique to resolve this problem. There are two types of user in CRNs, the primary user (PU) and the secondary user (SU). The CRN enables the SU to utilize unused licensed frequencies of the PU if it finds vacant spectrum or white space (known as opportunistic spectrum access). Alternatively, the SU can transmit simultaneously with the PU provided that the transmission power of the SU does not cause any harmful interference to the PU (known as spectrum sharing). In this thesis work, we study the fundamentals of CRNs and focus on the performance analysis of a single-input multiple-output (SIMO) system under the spectrum sharing approach. We assume that the secondary transmitter (SU-Tx) has full channel state information (CSI). The SU-Tx can adjust its transmit power so as not to cause harmful interference to the PU while obtaining an optimal transmit rate. In particular, we derive closed-form expressions for the cumulative distribution function (CDF) and the outage probability, and an analytical expression for the symbol error probability (SEP).
Malicious programs have been a serious threat to the confidentiality, integrity and availability of systems, and much research has been done to detect them. Two approaches have been derived: signature-based detection and heuristic-based detection. These approaches perform well against known malicious programs but cannot catch new ones, so researchers have tried to find new ways of detecting them. The application of data mining and machine learning is one such way and has shown good results compared to other approaches. A new category of malicious programs, called Spyware, has gained momentum. Spyware is particularly dangerous for the confidentiality of the user's private data: it may collect the data and send it to a third party. Traditional techniques have not performed well in detecting Spyware, so there is a need to find new ways to detect it. Data mining and machine learning have shown promising results in the detection of other malicious programs but had not yet been used for the detection of Spyware, so we decided to employ data mining for this purpose. We used a data set of 137 files, containing 119 benign files and 18 Spyware files. A theoretical taxonomy of Spyware was created, but for the experiment only two classes, Benign and Spyware, are used. An application, the Binary Feature Extractor, has been developed which extracts features, called n-grams, of different sizes on the basis of common-feature-based and frequency-based approaches. The number of features was reduced and the remaining features were used to create an ARFF file, which is used as input to WEKA for applying machine learning algorithms. The algorithms used in the experiment are J48, Random Forest, JRip, SMO, and Naive Bayes. 10-fold cross-validation and the area under the ROC curve are used for the evaluation of classifier performance. We performed experiments on three different n-gram sizes: 4, 5 and 6.
Results show that the common feature-based extraction approach produced better results than the others. We achieved an overall accuracy of 90.5% with an n-gram size of 6 using the J48 classifier; the maximum area under the ROC curve achieved was 83.3%, with Random Forest.
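The feature-extraction pipeline described above (overlapping byte n-grams, common feature-based selection, binary presence vectors written to an ARFF file) can be sketched as follows. The function names, the feature cap, and the toy inputs are illustrative assumptions, not the thesis's actual Binary Feature Extractor code:

```python
from collections import Counter

def extract_ngrams(data: bytes, n: int = 6) -> Counter:
    """Count all overlapping byte n-grams in a binary blob."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def common_features(blobs, n=6, top_k=500):
    """Keep the top_k n-grams occurring in the most files
    (the common feature-based approach)."""
    doc_freq = Counter()
    for blob in blobs:
        doc_freq.update(set(extract_ngrams(blob, n)))
    return [g for g, _ in doc_freq.most_common(top_k)]

def to_feature_vector(blob, features, n=6):
    """Binary presence/absence vector, one row of the ARFF file."""
    grams = extract_ngrams(blob, n)
    return [1 if f in grams else 0 for f in features]
```

In practice, one such vector per file (plus a Benign/Spyware class label) would be written out in ARFF format for WEKA to consume.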
Louise Petrén is an interesting person in the history of Swedish mathematics. In her PhD thesis, defended in Lund in 1911, she extended to higher-order equations Laplace's method of integration of second-order linear hyperbolic equations with two independent variables. It is interesting to consider her results from the point of view of invariants of differential equations and to compare them with the theory of the Laplace invariants. However, L. Petrén's research was until recently unknown among mathematicians working in group analysis. The aim of our talk is to introduce Louise Petrén as a person and to discuss her generalization of Laplace's method. Lars Haikola will contribute with a family background of Louise Petrén.
Requirements engineering is the most important phase of software development, since it is used to elicit requirements from the customers, which are then used in the subsequent phases for the design and implementation of the system. Because of its importance, this thesis focuses on aspect-oriented requirements engineering (AORE), the first phase in aspect-oriented software development, used for the identification and representation of requirements gathered in the form of concerns. Besides an overall explanation of the AORE phase, detailed attention is given to a specific activity within it called conflict resolution. Several techniques proposed for conflict resolution between aspects are discussed, along with a new idea in the form of an extension of an already proposed conflict resolution model. The need for this extension is justified by means of a case study applied to both models, i.e. the original model and the extended model, in order to compare the results.
There is presently no wildfire model developed for Swedish conditions; only a fire danger rating system (FWI) has been developed for them. The demand for a wildfire model has not been great in Sweden in the past, but the climate changes now taking place increase the risk of large and intense wildfires in Sweden, and additional and better tools for sizing up wildfires will be in great demand in the future. This pre-study is aimed at: - Presenting what has been done in the wildfire modeling field over the years, mainly the last twenty years. - Giving recommendations on the continued work of developing a Swedish wildfire model. The method used was a literature and article survey. The study also looks into the input data required by a wildfire model and the input data available at the moment. This issue is crucial, as the quality of the output of a wildfire model depends on the quality of the input data. During the study, a primitive wildfire model was constructed and refined in order to gain insight into the complexities and problems of developing an operational model. The following characterization of wildfire models was used during the study: - Statistical models: based primarily on statistics from earlier or experimental fires. They do not explicitly consider the controlling physical processes. - Semi-empirical models: based on physical laws, but enhanced with some empirical factors, often by lumping all physical mechanisms for heat transfer together. - Physical models: based on physical principles and distinguishing between physical mechanisms for heat transfer. The statistical models make no attempt to involve physical processes, as they are merely a statistical description of test fires. This lack of a physical basis means that statistical models must be used carefully outside the test conditions.
Semi-empirical models are often based on conservation-of-energy principles but do not distinguish between conduction, convection and radiation heat transfer. A semi-empirical model has low computational requirements and includes variables that are generally easy to measure in the field; so, despite their limited accuracy, the speed and simplicity of these models make them useful for operational use. Physical models have the advantage of being based on known relationships, which facilitates their scaling. Thus we can expect physical models to provide the most accurate predictions and have the widest applicability. But work on physical models is hampered by, for example, the limited understanding of several processes, such as the characterization of the chemical processes taking place during combustion, the resulting flame characteristics, and the isolation and quantification of the physical processes governing heat transfer. The input data available today are generally not detailed enough for physical models; as a result, even a very detailed physical model will only give imprecise predictions. As better and more detailed input data become available, the use of physical models will become more justified. It is recommended that a semi-empirical model be developed in Sweden. This conclusion is based on the following factors: - The accuracy of a semi-empirical model is generally much better than that of a statistical model, and its range of use is much wider. - The amount of work required for developing a semi-empirical model will not differ much from that required for a statistical model. In both cases a number of test fires will have to be conducted to define and calibrate a number of fuel models representative of Sweden.
- At present, the performance and applicability of physical models are not at an acceptable level for operational use (due, for example, to the complexity of what they are to model and to the computational capabilities of today's PCs). The semi-empirical model for Sweden is recommended to be built upon Swedish conditions (i.e. upon the types of vegetation found in Sweden) instead of trying to retrofit the local Swedish conditions into an existing model; this would most likely give the best output for Swedish conditions. A system for better input data - weather and fuel data - should be worked on as well. This could, for example, take advantage of the results of the very promising "Alarm" project being conducted in the western part of Sweden. Regarding better fuel data, new technology for satellite images or aerial photos and image classification techniques must be monitored, as one major problem to be solved is distinguishing between the canopy fuel and the ground fuel. For more specific conclusions and reflections, please see the analysis and discussion, and conclusions, sections of this report.
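To make the semi-empirical model class concrete, here is a deliberately simplified spread-rate sketch in the spirit described above: a physically motivated base rate lumped together with empirical wind and moisture factors. All coefficients and functional forms are hypothetical illustrations, not drawn from this pre-study and not calibrated for Swedish fuels:

```python
import math

def rate_of_spread(fuel_load, moisture, wind_speed,
                   base_coeff=0.05, wind_coeff=0.4, moisture_ext=0.3):
    """Illustrative semi-empirical head-fire rate of spread (m/min).

    fuel_load   -- dry surface fuel load (kg/m^2)
    moisture    -- fuel moisture fraction (0..1)
    wind_speed  -- mid-flame wind speed (m/s)

    All three named coefficients are hypothetical tuning parameters of the
    kind a real model would calibrate against test fires.
    """
    if moisture >= moisture_ext:
        return 0.0  # beyond the moisture of extinction: no sustained spread
    # no-wind base rate, damped linearly toward extinction moisture
    base = base_coeff * fuel_load * (1.0 - moisture / moisture_ext)
    # empirical exponential wind response, lumping convective/radiative effects
    return base * math.exp(wind_coeff * wind_speed)
```

A real semi-empirical model would replace each factor with forms fitted to test fires on representative fuel types, but the structure - physics-shaped terms with empirically fitted constants - is what distinguishes this class from the purely statistical and purely physical ones.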
Power system technology is pioneering various micro-grid concepts, in which multiple distributed power generation sources are integrated into a small network serving some or all of the energy needs of the participating users. This can provide benefits including reduced energy costs, increased overall energy efficiency, improved environmental performance, and improved local electric system reliability. With the fast progression of technology, electric vehicles and electric construction equipment and machinery, such as that used in road building or quarries, would benefit greatly from electrification, which protects the environment from detrimental effects and offers substantial energy (and local emission) reductions. Such sites are often remotely located, and it is very costly to connect them to the high-voltage utility grid or distribution grid, which is normally far from the sites. Hence, it would be advantageous to be able to set up such sites without having to build long and expensive connections to high-voltage transmission and distribution grids. In this project, a self-sufficient smart DC micro-grid using renewable energy resources to supply electric machinery is designed and proposed. This grid is capable of meeting the energy as well as the peak power demands of a machine site, while offering the possibility of relying fully on locally produced renewable energy. The project includes price and performance forecasting for solar and wind energy, the charger, the grid energy storage system, and the micro-grid power electronics. With this design, the grid can provide an efficient power supply to the site and meet the demand at peak loads. Grid modeling and simulation (including loads, storage and energy production) are done in MATLAB.
This paper focuses on grounding methods for distribution systems and on the characteristics and behavior of earth fault currents. First, different existing grounding methods, such as isolated neutral, solidly grounded, resonant grounding, and low- and high-impedance grounding, are introduced. Secondly, focus is placed on further describing these methods and the viability of each of them in different scenarios. These methods are then further analyzed using equivalent circuits, which help to derive the relevant formulations and equations. It is shown that the design of these systems in fact follows basic electrical principles such as voltage and current division. The derived equations are further tested experimentally to verify the characteristics and behavior of these methods. Finally, the report concludes by testing some of these methods (the isolated neutral and grounding-via-resistor methods) in the laboratory, and the results obtained are compared with the theoretical results. In this manner it is shown experimentally that, in an isolated neutral system, the sum of the currents through all the phases (the current flow through the line model) is zero when there is no earth fault. When an earth fault occurs on one of the phases, the neutral-to-ground voltage becomes equal to the voltage across the faulted phase, and the sum of the currents passing through the healthy phases is equal to the current flowing through the faulted phase; this phenomenon is proved in the second experiment. The third experiment proves the relation between the earth fault current, the current via the resistor, and the currents via the healthy phases, whereby it is shown that the fault current is equal to the root-square sum (the square root of the sum of the squares) of the resistor current and the healthy-phase currents. This is one of the characteristics of a grounded-via-resistor system.
The next experimental setup also focuses on the characteristics of the grounded-via-resistor system, whereby it is shown that the fault current and the total phase current are independent of any external load. The last experiment demonstrates that the phase-to-phase voltages remain intact during an earth fault and that the system continues to operate uninterrupted, as expected for an isolated neutral grounding system.
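The root-square summation above follows from the phasor geometry: the resistive neutral current is in phase with the fault voltage while the capacitive currents of the healthy phases lead it by 90 degrees, so their magnitudes add in quadrature. A minimal numerical sketch (all values are illustrative, not the laboratory's actual ratings):

```python
import math

U = 230.0      # phase-to-ground voltage driving the fault current (V), illustrative
R_n = 500.0    # neutral grounding resistor (ohm), illustrative
I_cap = 0.9    # total capacitive current of the healthy phases (A), illustrative

I_res = U / R_n                       # resistive component, in phase with U
I_fault = abs(complex(I_res, I_cap))  # phasor sum: capacitive part leads by 90 deg

# the root-square summation observed in the laboratory
assert math.isclose(I_fault, math.sqrt(I_res**2 + I_cap**2))
```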
This work is devoted to the investigation of the evolution of intense quasi-harmonic signals in the case of infinite acoustic Reynolds numbers. The consideration is based on the zero-viscosity limit solution of the Burgers equation, which reduces the Cole-Hopf solution to a "maximum" principle. This limit solution permits an easy way to obtain the profile of the waves and the positions of the shocks and their velocities at arbitrary times. The process of transformation of an initial quasi-monochromatic wave into a sawtooth wave is considered. It is shown that the nonlinearity leads to suppression of the initial amplitude modulation and to the transformation of the initial frequency modulation into a shock amplitude modulation. The amplitude of the low-frequency component generated by a quasi-monochromatic wave is found. It is shown that the interaction of this component with high-frequency waves leads to phase modulation, which increases with distance. The amplitudes of the new components of the spectrum are found. It is shown that when the value of the phase modulation is small, the amplitudes of the satellites do not depend on the distance or on the number of harmonics of the primary wave.
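For reference, the zero-viscosity limit referred to above is the classical Hopf result for the Burgers equation $u_t + u u_x = \nu u_{xx}$ with initial profile $u(x,0) = u_0(x)$, as $\nu \to 0$; it is stated here in generic textbook notation rather than the paper's own:

```latex
% Hopf's zero-viscosity limit of the Cole--Hopf solution:
u(x,t) = \frac{x - y^{*}(x,t)}{t},
\qquad
y^{*}(x,t) = \arg\max_{y}
\left[\, -\int_{0}^{y} u_0(\eta)\,d\eta \;-\; \frac{(x-y)^{2}}{2t} \,\right].
```

This is the "maximum" principle in question: wherever the maximizer $y^{*}$ is unique the wave profile is smooth, while a jump of $y^{*}$ as a function of $x$ marks a shock position, which is how the profile, shock positions and shock velocities become available at arbitrary times.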
A simple mechanical system containing a low-frequency vibration mode and a set of high-frequency acoustic modes is considered, and its frequency response is calculated. The nonlinear behaviour and the interaction between modes are described by a system of functional equations. Two types of nonlinearity are taken into account: the first is caused by the finite displacement of a movable boundary, and the second is the volume nonlinearity of the gas. New mathematical models based on nonlinear equations are suggested, and some examples of nonlinear phenomena are discussed on the basis of the derived solutions.
The new paradigm of cooperative communications is a promising way to realize MIMO techniques. In this thesis work, we study the performance of cooperative relay networks in which the transmission from a source to a destination is assisted by one or several relaying nodes employing the amplify-and-forward (AF) and decode-and-forward (DF) protocols. The performance of two-way (or bi-directional) AF relay networks, which are proposed to avoid the pre-log factor 1/2 in spectral efficiency, is then investigated. Specifically, exact closed-form expressions for the symbol error rate (SER), outage probability, and average sum-rate of bi-directional AF relay systems in independent but not identically distributed (i.n.i.d.) Rayleigh fading channels are derived. Our analyses are verified by comparison with results from Monte-Carlo simulations.
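The verification approach mentioned above - checking a closed-form error-rate expression against Monte-Carlo simulation - can be illustrated on the simplest related case: coherent BPSK over a single flat Rayleigh-fading hop, for which the average SER is the well-known closed form P_e = (1/2)(1 - sqrt(g/(1+g))), with g the average SNR. This is a sketch of the methodology only, not the thesis's relay-network expressions:

```python
import math
import random

def ser_bpsk_rayleigh_mc(avg_snr, n_symbols=200_000, seed=1):
    """Monte-Carlo SER of coherent BPSK over flat Rayleigh fading."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * avg_snr))  # noise std, normalized so E[|h|^2]=1
    errors = 0
    for _ in range(n_symbols):
        s = rng.choice((-1.0, 1.0))                   # BPSK symbol
        h = math.hypot(rng.gauss(0, math.sqrt(0.5)),  # Rayleigh gain |h|
                       rng.gauss(0, math.sqrt(0.5)))
        y = h * s + rng.gauss(0, sigma)               # received sample
        if (y > 0) != (s > 0):                        # coherent sign decision
            errors += 1
    return errors / n_symbols

def ser_bpsk_rayleigh_exact(avg_snr):
    """Known closed-form average SER of BPSK over Rayleigh fading."""
    return 0.5 * (1.0 - math.sqrt(avg_snr / (1.0 + avg_snr)))
```

Running both at the same average SNR and checking that the simulated rate matches the analytical expression within statistical tolerance is precisely the kind of sanity check applied to the derived relay-network formulas.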
This thesis concerns the integration of agent technology and mathematical optimization for improved decision support within the domain of analysis and planning of production and transportation. These two approaches have often been used separately in this domain, but the research concerning how to combine them is very limited. The studied domain is considered complex because many decision makers, who influence each other, are often involved in the decision-making process. Moreover, problems in the domain are typically large and combinatorial, which makes them more difficult to solve. We argue that the integration of agent-based approaches and mathematical optimization has a high potential to improve analysis and planning of production and transportation. In order to support this hypothesis, we have developed and analyzed three different approaches to the integration of agent technology and mathematical optimization. First, we present a Multi-Agent-Based Simulation (MABS) model called TAPAS for simulation of decision-making and physical activities in supply chains. By using agent technology and optimization, we were able to simulate the decision-making of the involved actors as well as the interaction between them, which is difficult using traditional simulation techniques. In simulation experiments, TAPAS has been used to study the effects of different types of governmental taxes and of the synchronization of timetables. Moreover, we provide an analysis of existing MABS applications with respect to a number of criteria. We also present a framework containing a number of abstract roles, responsibilities, and interactions, which can be used to simplify the process of developing MABS models. Second, we present an approach for efficient planning and execution of intermodal transports.
The approach provides agent-based support for key tasks, such as finding the optimal sequence of transport services (potentially provided by different transport operators) for a particular goods transport, and monitoring the execution of transports. We analyzed the requirements of such an approach and describe a multi-agent system architecture meeting these requirements. Finally, an optimization model for a real-world integrated production, inventory, and routing problem was developed. For solving and analyzing the problem, we developed an agent-based solution method based on the principles of Dantzig-Wolfe decomposition. The purpose was to improve resource utilization and to analyze the potential effects of introducing Vendor Managed Inventory (VMI). In a case study, we conducted simulation experiments which indicated that an increased number of VMI customers may give a significant reduction of the total cost in the system.
We present improvements of the Frank–Wolfe (FW) method for static vehicular traffic and telecom routing. The FW method has been the dominating method for these problem types, but due to its slow asymptotic convergence it has been considered dead by methods-oriented researchers. However, the recent introduction of conjugate FW methods has shown that it is still viable, and in fact the winner on multi-core computers. In this paper, we show how to speed up the FW iterations by updating the subproblems in the FW method instead of solving them from scratch. The subproblem updating is achieved by viewing the subproblems as network flow problems with a threaded representation of the shortest-path trees. In addition, we introduce a new technique, thread following, implying that a single traversal of the thread is enough to find a new shortest-path tree. Our computational tests show that in practice very few nodes are visited more than once when searching for improving arcs. Moreover, we also update the all-or-nothing solutions of the subproblems, resulting in significantly reduced loading times. For a set of standard test problems, we observe speedups in the region of 25–50% for the subproblem-updating FW method compared to the traditional non-updating version. We typically achieve higher speedups for more difficult problems and converged solutions.
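The basic FW iteration that the subproblem-updating techniques above accelerate can be shown on a toy two-link traffic assignment problem in the Beckmann formulation: the linearized subproblem at each iterate is an all-or-nothing (AON) assignment to the currently cheapest link, followed by a line search toward it. The link delay functions and demand below are illustrative assumptions:

```python
def frank_wolfe_two_links(demand=10.0, iters=50, tol=1e-12):
    """Frank-Wolfe with exact line search on a two-link assignment toy problem."""
    a = (1.0, 3.0)          # free-flow delays of the two links
    b = (1.0, 0.5)          # congestion slopes: t_i(x) = a_i + b_i * x
    x = [demand, 0.0]       # start from an arbitrary AON assignment
    for _ in range(iters):
        # link costs t_i(x_i) are the gradient of the Beckmann objective
        costs = [a[i] + b[i] * x[i] for i in range(2)]
        y = [0.0, 0.0]
        y[costs.index(min(costs))] = demand            # AON subproblem solution
        d = [y[i] - x[i] for i in range(2)]            # FW search direction
        curv = sum(b[i] * d[i] ** 2 for i in range(2))
        if curv <= tol:
            break                                      # no improving direction
        # exact line search for the quadratic objective, clipped to [0, 1]
        gamma = max(0.0, min(1.0, -sum(costs[i] * d[i] for i in range(2)) / curv))
        x = [x[i] + gamma * d[i] for i in range(2)]
    return x
```

At the equilibrium the two link costs are equal (here x = (14/3, 16/3)); in the full-scale setting of this paper the AON subproblem is a shortest-path computation per origin, which is exactly the part the threaded-tree updating and thread following avoid redoing from scratch.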