  • 51. Akkermans, Hans
    et al.
    Gustavsson, Rune
    Ygge, Fredrik
    An Integrated Structured Analysis Approach to Intelligent Agent Communication (1998). Report (Other academic)
    Abstract [en]

    Intelligent multi-agent systems offer promising approaches for knowledge-intensive distributed applications. Now that such systems are being applied on a wider industrial scale, there is a practical need for structured analysis and design methods, similar to those that exist for more conventional information and knowledge systems. Such methods are still lacking for intelligent agent software. In this paper, we describe how the process of agent communication specification can be carried out through a structured analysis approach. The structured analysis approach we propose is an integrated extension of the CommonKADS methodology, a widely used standard for knowledge analysis and systems development. Our approach is based on and illustrated by a large-scale multi-agent application for distributed energy load management in industries and households, called Homebots, which is discussed as an extensive industrial case study.

  • 52. Akkermans, Hans
    et al.
    Gustavsson, Rune
    Ygge, Fredrik
    Pragmatics of Agent Communication (1998). Report (Other academic)
    Abstract [en]

    The process of agent communication modeling has not yet received much attention in the knowledge systems area. Conventional knowledge systems are rather simple with respect to their communication structure: often it is a straightforward question-and-answer sequence between system and end user. However, this is different in recent intelligent multi-agent systems. Therefore, agent communication aspects are now in need of a much more advanced treatment in knowledge management, acquisition and modeling. In general, a much better integration between the respective achievements of multi-agent and knowledge-based systems modeling is an important research goal. In this paper, we describe how agent communications can be specified as an extension of well-known knowledge modeling techniques. The emphasis is on showing how a structured process of communication requirements analysis proceeds, based on existing results from agent communication languages. The guidelines proposed are illustrated by and based on a large-scale industrial multi-agent application for distributed energy load management in industries and households, called Homebots. Homebots enable cost savings in energy consumption by coordinating their actions through an auction mechanism.

  • 53. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Smart Software as Customer Assistant in Large-Scale Distributed Load Management (1997). Conference paper (Refereed)
  • 54. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Gustavsson, Rune
    Homebots: Intelligent Decentralized Services for Energy Management (1996). Report (Other academic)
    Abstract [en]

    The deregulation of the European energy market, combined with emerging advanced capabilities of information technology, provides strategic opportunities for new knowledge-oriented services on the power grid. HOMEBOTS is the name we have coined for one of these innovative services: decentralized power load management at the customer side, automatically carried out by a ‘society’ of interactive household, industrial and utility equipment. They act as independent intelligent agents that communicate and negotiate in a computational market economy. The knowledge and competence aspects of this application are discussed, using an improved version of task analysis according to the COMMONKADS knowledge methodology. Illustrated by simulation results, we indicate how customer knowledge can be mobilized to achieve joint goals of cost and energy savings. General implications for knowledge creation and its management are discussed.

  • 55. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Gustavsson, Rune
    HOMEBOTS: Intelligent Decentralized Services for Energy Management (1996). Conference paper (Refereed)
  • 56.
    Akser, M.
    et al.
    Ulster University, GBR.
    Bridges, B.
    Ulster University, GBR.
    Campo, G.
    Ulster University, GBR.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Curran, K.
    Ulster University, GBR.
    Fitzpatrick, L.
    Ulster University, GBR.
    Hamilton, L.
    Ulster University, GBR.
    Harding, J.
    Ulster University, GBR.
    Leath, T.
    Ulster University, GBR.
    Lunney, T.
    Ulster University, GBR.
    Lyons, F.
    Ulster University, GBR.
    Ma, M.
    University of Huddersfield, GBR.
    Macrae, J.
    Ulster University, GBR.
    Maguire, T.
    Ulster University, GBR.
    McCaughey, A.
    Ulster University, GBR.
    McClory, E.
    Ulster University, GBR.
    McCollum, V.
    Ulster University, GBR.
    Mc Kevitt, P.
    Ulster University, GBR.
    Melvin, A.
    Ulster University, GBR.
    Moore, P.
    Ulster University, GBR.
    Mulholland, E.
    Ulster University, GBR.
    Muñoz, K.
    BijouTech, CoLab, Letterkenny, Co., IRL.
    O’Hanlon, G.
    Ulster University, GBR.
    Roman, L.
    Ulster University, GBR.
    SceneMaker: Creative technology for digital storytelling (2018). In: Lect. Notes Inst. Comput. Sci. Soc. Informatics Telecommun. Eng. / [ed] Brooks A.L., Brooks E., Springer Verlag, 2018, Vol. 196, p. 29-38. Conference paper (Refereed)
    Abstract [en]

    The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include automated simulation of productions and education and training of actors, screenwriters and directors. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2017.

  • 57.
    Alahari, Yeshwanth
    et al.
    Blekinge Institute of Technology, School of Computing.
    Buddhiraja, Prashant
    Blekinge Institute of Technology, School of Computing.
    Analysis of packet loss and delay variation on QoE for H.264 and WebM/VP8 Codecs (2011). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The popularity of multimedia services over the Internet has increased in recent years. These services include Video on Demand (VoD) and mobile TV, which are growing predominantly, and user expectations towards the quality of videos are gradually increasing. Different video codecs are used for encoding and decoding. Recently Google has introduced the VP8 codec, an open source compression format. It was introduced to compete with the existing popular codec H.264/AVC, developed by the ITU-T Video Coding Experts Group (VCEG), as by 2016 there will be a license fee for H.264. In this work we compare the performance of H.264/AVC and WebM/VP8 in an emulated environment. NetEm is used as an emulator to introduce delay/delay variation and packet loss. We have evaluated the user perception of impaired videos using the Mean Opinion Score (MOS), following the International Telecommunication Union (ITU) recommendation Absolute Category Rating (ACR), and analyzed the results using statistical methods. It was found that both video codecs exhibit similar performance under packet loss, but in the case of delay variation the H.264 codec shows better results compared to WebM/VP8. Moreover, along with the MOS ratings we also studied how user feelings and online video watching experience impact their perception.
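
The MOS methodology mentioned in the abstract above is, at its core, simple arithmetic: under the ACR procedure each viewer rates a clip on a 1-5 scale and the Mean Opinion Score is the average of those ratings. The sketch below (with made-up scores, not data from the thesis) shows how a MOS and an approximate 95% confidence interval might be computed for two codecs.

```python
import numpy as np

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score (arithmetic mean of ACR ratings on a 1-5 scale)
    with an approximate 95% confidence interval (normal approximation)."""
    r = np.asarray(ratings, dtype=float)
    mos = r.mean()
    half_width = z * r.std(ddof=1) / np.sqrt(len(r))
    return mos, (mos - half_width, mos + half_width)

# Hypothetical ACR scores (1 = bad ... 5 = excellent) for one impaired clip.
h264_scores = [4, 4, 3, 5, 4, 4, 3, 4]
vp8_scores  = [3, 4, 3, 4, 3, 3, 4, 3]

for name, scores in [("H.264", h264_scores), ("WebM/VP8", vp8_scores)]:
    mos, ci = mos_with_ci(scores)
    print(f"{name}: MOS = {mos:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```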

  • 58.
    Alam, Md. Shamser
    Blekinge Institute of Technology, School of Engineering.
    On sphere detection for OFDM based MIMO systems (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Mobile wireless communication systems have been growing fast and continuously over the past two decades. In order to meet the demands of this rapid growth, standardization bodies, wireless researchers and mobile operators around the world have been constantly working on new technical specifications. An important problem in modern communication is the NP-complete nature of Maximum Likelihood (ML) detection of signals transmitted over the Multiple Input Multiple Output (MIMO) channel of an OFDM transceiver system. The Sphere Decoder (SD), developed as a result of rapid advances in signal processing techniques, provides ML detection for MIMO channels with polynomial average-case time complexity. Existing SDs have weaknesses, however: their performance is very sensitive to how the search radius parameter is chosen, and at high spectral efficiencies, when the SNR is low or the problem dimension is high, the complexity coefficient can become very large. Detecting a vector of symbols drawn from a finite alphabet and transmitted over a MIMO channel with Gaussian noise is important because it is encountered in several different applications, including the detection of symbols spatially multiplexed over a multiple-antenna channel and the multiuser detection problem. Efficient algorithms for these detection problems are well recognized. The sphere decoder achieves optimal performance with respect to error probability and has proved extremely efficient in terms of computational complexity for moderately sized problems at suitable signal-to-noise ratios: at high SNR the algorithm has polynomial average complexity, although its worst-case complexity is exponential, and the exponential rate of growth is positive for finite SNR and small at high SNR. To obtain the sphere decoding solution, depth-first stack-based sequential decoding with Schnorr-Euchner enumeration is used for ML detection. This thesis focuses on the receiver part of the transceiver system and takes a close look at this near-optimal algorithm for sphere detection of a vector of symbols transmitted over a MIMO channel. The analysis and algorithms are general in nature.
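
For readers unfamiliar with sphere decoding, the following is a minimal, illustrative depth-first sphere decoder with Schnorr-Euchner-style ordering for a real-valued MIMO model; it is a generic textbook-style sketch, not the implementation studied in this thesis, and the system dimensions, alphabet and noise level in the example are arbitrary assumptions.

```python
import numpy as np

def sphere_decode(y, H, alphabet):
    """Depth-first sphere decoder for min_s ||y - H s||^2 with s drawn from a
    finite real alphabet. Uses QR factorization and Schnorr-Euchner ordering:
    at each level, candidate symbols are visited closest-first, and branches
    whose partial distance already exceeds the best full solution are pruned."""
    Q, R = np.linalg.qr(H)          # thin QR: R is n x n upper triangular
    z = Q.T @ y                     # ||y - H s||^2 = ||z - R s||^2 + const
    n = R.shape[1]
    s = np.zeros(n)
    best = {"s": None, "d2": np.inf}

    def search(k, d2):
        if d2 >= best["d2"]:
            return                  # prune: current best radius exceeded
        if k < 0:
            best["s"], best["d2"] = s.copy(), d2   # full candidate inside sphere
            return
        # interference from the already-fixed symbols s[k+1:]
        r = z[k] - R[k, k + 1:] @ s[k + 1:]
        for cand in sorted(alphabet, key=lambda a: abs(r - R[k, k] * a)):
            s[k] = cand
            search(k - 1, d2 + (r - R[k, k] * cand) ** 2)

    search(n - 1, 0.0)
    return best["s"]

# Hypothetical 4x4 real-valued MIMO system with BPSK symbols and Gaussian noise.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
s_true = rng.choice([-1.0, 1.0], size=4)
y = H @ s_true + 0.1 * rng.standard_normal(4)
print("transmitted:", s_true, " detected:", sphere_decode(y, H, [-1.0, 1.0]))
```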

  • 59.
    Alam, Tariq
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ali, Muhammad
    Blekinge Institute of Technology, School of Computing.
    The Challenge of Usability Evaluation of Online Social Networks with a Focus on Facebook (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    In today’s era online social networks are gaining extensive popularity among internet users. People use online social networks for different purposes like sharing information, chatting with friends and family, and planning to hang out. It is then no surprise that online social networks should be easy to use and easily understandable. Previously many researchers have evaluated different online social networks, but there is no study which addresses usability concerns about online social networks with a focus on Facebook on an academic level (using students as subjects). The main rationale behind this study is to find out the efficiency of different usability testing techniques from a social network’s point of view, with a focus on Facebook, and the issues related to usability. To conduct this research, we have adopted a combination of qualitative and quantitative approaches. Graduate students from BTH have participated in usability tests. Our findings are that although think aloud is more efficient than remote testing, this difference is not very significant. We found from the survey that there are usability issues in Facebook’s profile, media, picture tagging, chat, etc.

  • 60.
    Alam, Zahidul
    Blekinge Institute of Technology, School of Computing.
    Usability of a GNU/Linux Distribution from Novice User’s Perspective (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The term Open Source Software (OSS) has been around for a long time in the world of computer science. Open source software development is a process by which we can produce economical and qualitative software whose source can be reused in the improvement of the software. The success of OSS relies on several factors, e.g. usability, functionality, market focus etc. But in the end, how popular the software becomes is measured by the number of users downloading the software and how usable the software is for its users. Open Source Software has achieved a reputation for stability, security and functionality. Most of this software has been utilized by expert-level IT users. But from the general users’ or non-computer users’ point of view, the usability of open source software has faced the most criticism [25, 26, 27, 28, 29, and 30]. This factor, i.e. the usability issues experienced by general users, is also responsible for the limited distribution of open source software [24]. The development process should apply a “user-centered” methodology [25, 26, 27, 28, 29, and 30]. In this thesis paper the issues of usability in OSS development and how the usability of open source software can be improved are discussed. Besides this, I investigate the usability quality of the free open source Linux-based operating system Ubuntu and try to assess the usability standard of this OSS.

  • 61. Alaves, Dimas
    et al.
    Machado, Renato
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    da Costa, Daniel Benevides
    Legg, Andrei Piccinini
    Uchoa-Filho, Bartolomeu F.
    A dynamic hybrid antenna/relay selection scheme for the multiple-access relay channel (2014). In: 2014 11th International Symposium on Wireless Communications Systems (ISWCS), IEEE, 2014, p. 594-599. Conference paper (Refereed)
    Abstract [en]

    We propose a dynamic hybrid antenna/relay selection scheme for multiple-access relay systems. The proposed scheme aims to boost the system throughput while keeping a good error performance. By using the channel state information, the destination node performs a dynamic selection between the signals provided by the multi-antenna relay, located in the inter-cell region, and the relay nodes geographically distributed over the cells. The multi-antenna relay and the single-antenna relay nodes employ the decode-remodulate-and-forward and amplify-and-forward protocols, respectively. Results reveal that the proposed scheme offers a good tradeoff between spectral efficiency and diversity gain, which is one of the main requirements for the next generation of wireless communications systems.

  • 62. Albin, Bernhardsson
    et al.
    Björling, Ivar
    Generation and evaluation of collision geometry based on drawings (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. Many video games allow for creative expression. Attractive Interactive AB is developing such a game, in which players draw their own levels using pen and paper. For such a game to work, collision geometry needs to be generated from photos of hand-drawn video game levels.

    Objectives. The main goal of the thesis is to create an algorithm for generating collision geometry from photos of hand-drawn video game levels and to determine whether the generated geometry can replace handcrafted geometry. Handcrafted geometry is manually created using vector graphics editing tools.

    Methods. A method for generating collision geometry from photos of drawings is implemented. The quality of the generated geometry is evaluated and compared to handcrafted geometry in terms of vertex count and positional accuracy. Ground truths are used to determine the positional accuracy of collision geometry by calculating the resemblance of the created collision geometry and the respective ground truth.

    Results. The generated geometry has a higher positional accuracy and on average a lower vertex count than the handcrafted geometry. Performance measurements for two different configurations of the algorithm are presented.

    Conclusions. Collision geometry generated by the presented algorithm has a higher quality than handcrafted geometry. Thus, the generated geometry can replace handcrafted geometry.
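
The thesis's actual algorithm is not reproduced here, but one plausible pipeline for going from a photo of a hand-drawn level to low-vertex collision polygons is thresholding, contour extraction and Douglas-Peucker simplification, sketched below with OpenCV; the file name and the epsilon fraction are illustrative assumptions.

```python
import cv2

def collision_polygons(photo_path, epsilon_frac=0.005):
    """One plausible pipeline for turning a photo of a hand-drawn level into
    collision polygons: grayscale -> threshold -> external contours ->
    Douglas-Peucker simplification to keep the vertex count low."""
    img = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    # Otsu threshold; drawings are assumed to be dark strokes on a light background.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        eps = epsilon_frac * cv2.arcLength(c, True)   # tolerance relative to perimeter
        poly = cv2.approxPolyDP(c, eps, True)
        if len(poly) >= 3:                            # keep only valid polygons
            polygons.append(poly.reshape(-1, 2))
    return polygons

# Hypothetical usage; "level_drawing.jpg" is a stand-in file name.
# for poly in collision_polygons("level_drawing.jpg"):
#     print(len(poly), "vertices")
```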

  • 63.
    Albinsson, Mattias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Andersson, Linus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improving Quality of Experience through Performance Optimization of Server-Client Communication (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In software engineering it is important to consider how a potential user experiences the system during usage. No software user will have a satisfying experience if they perceive the system as slow, unresponsive, unstable or hiding information. Additionally, if the system restricts the users to only having a limited set of actions, their experience will further degrade. In order to evaluate the effect these issues have on a user's perceived experience, a measure called Quality of Experience is applied.

    In this work the foremost objective was to improve how a user experienced a system suffering from the previously mentioned issues when searching for large amounts of data. To achieve this objective the system was evaluated to identify the issues present and which issues were affecting the user perceived Quality of Experience the most. The evaluated system was a warehouse management system developed and maintained by Aptean AB's office in Hässleholm, Sweden. The system consisted of multiple clients and a server, sending data over a network. Evaluation of the system was in the form of a case study analyzing its performance, together with a survey performed by Aptean staff to gain knowledge of how the system was experienced when searching for large amounts of data. From the results, three issues impacting Quality of Experience the most were identified: (1) interaction; limited set of actions during a search, (2) transparency; limited representation of search progress and received data, (3) execution time; search completion taking long time.

    After the system was analyzed, hypothesized technological solutions were implemented to resolve the identified issues. The first solution divided the data into multiple partitions, the second decreased data size sent over the network by applying compression and the third was a combination of the two technologies. Following the implementations, a final set of measurements together with the same survey was performed to compare the solutions based on their performance and improvement gained in perceived Quality of Experience.

    The most significant improvement in perceived Quality of Experience was achieved by the data partitioning solution. While the combination of solutions offered a slight further improvement, it was primarily thanks to data partitioning, making that technology a more suitable solution for the identified issues compared to compression which only slightly improved perceived Quality of Experience. When the data was partitioned, updates were sent more frequently and allowed the user not only a larger set of actions during a search but also improved the information available in the client regarding search progress and received data. While data partitioning did not improve the execution time it offered the user a first set of data quickly, not forcing the user to idly wait, making the user experience the system as fast. The results indicated that to increase the user's perceived Quality of Experience for systems with server-client communication, data partitioning offered several opportunities for improvement.
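
As a rough illustration of the data partitioning idea that the study found most effective, the sketch below streams search results to the client in partitions (optionally compressed with zlib) so a first batch arrives quickly; the function name, the partition size and the fake warehouse rows are assumptions for illustration, not code from the studied system.

```python
import json
import zlib

def partitioned_results(rows, partition_size=500, compress=False):
    """Illustrative server-side generator: yields search results in partitions so
    the client can render the first batch and stay responsive, instead of waiting
    for one monolithic response. Optionally compresses each partition with zlib."""
    for start in range(0, len(rows), partition_size):
        payload = json.dumps(rows[start:start + partition_size]).encode("utf-8")
        yield zlib.compress(payload) if compress else payload

# Hypothetical usage with fake warehouse records.
rows = [{"article": i, "location": f"A-{i % 40:02d}"} for i in range(2000)]
for i, chunk in enumerate(partitioned_results(rows, compress=True)):
    print(f"partition {i}: {len(chunk)} bytes on the wire")
```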

  • 64.
    Alborn, Jonas
    Blekinge Institute of Technology, School of Planning and Media Design.
    3D och kommunal fysisk planering (2012). Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

    In recent years, techniques for 3D visualization have become increasingly widely used in municipal spatial planning. This, together with the fact that I use the technology myself in my daily work as a planning architect, raised questions about the reasons for introducing the technology, which decisions and expectations formed the basis for its introduction, and what research support the technology has with regard to visualization, clarity and communicability in planning work. This degree project in Spatial Planning at BTH aims to shed light on these questions. The work consists of a literature search for relevant research on the subject, a questionnaire addressed to a small selection of employees and politicians in four municipalities that are members of a 3D network, a document search on the same municipalities' websites, and a complementary questionnaire survey among planning architects in five other municipalities that are not members of the aforementioned network. Research support for the effectiveness of, or the advantages and disadvantages of, using 3D models for increased understanding and communication between officials and politicians as well as with the public in connection with municipal spatial planning is lacking. Through compilations of research in the fields of environmental psychology and illustrative plan presentation, as well as Swedish architectural research, clues can nevertheless be found about the possibilities and difficulties of using 3D visualization and its role as a means of communication. Used correctly, the technology could strengthen the possibility of illustration, but there are also risks connected with its use. The main reason that municipalities introduce 3D technology in spatial planning is stated by planning architects, municipal officials responsible for the 3D technology, and politicians alike to be the desire to increase citizens' understanding of proposals for changes to the physical environment. This is also usually reflected in the municipalities' documents, where such exist. Not all municipalities have documented official direction and policy documents on the issue. Research, however, points to risks in using visualization in early stages of the process. The concluding discussion addresses the questions of the thesis, potential problems and possibilities with the use of the technology, and suggestions for areas for further study. The conclusions note that there is a discrepancy between the predominantly positive view of the use in the municipalities and what the research shows.

  • 65.
    Aldalaty, Khalid
    Blekinge Institute of Technology, School of Engineering.
    Mobile IP handover delay reduction using seamless handover architecture (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Seamless communication is becoming the main aspect of the next generation of mobile and wireless networks. Roaming among multiple wireless access networks connected together through one IP core makes mobility support for the internet a very critical and important research topic nowadays. Mobile IP is one of the most successful solutions for mobility support in IP based networks, but it has poor performance in terms of handover delay. Many improvements have been made to reduce the handover delay, resulting in two new standards: the Hierarchical MIP (HMIP) and the Fast MIP (FMIP), but the delay still does not match the seamless handover requirements. Finally, Seamless MIP (S-MIP) has been suggested by many working groups, which combines an intelligent handover algorithm with a movement tracking scheme. In this thesis, we present the handover delay reduction approaches, specifically Seamless Mobile IP. The thesis studies the effects of S-MIP on the handover delay and on the network performance as well. A simulation study takes place to compare the standard MIP and the newly suggested S-MIP protocol in terms of handover delay, packet loss and bandwidth requirement. The thesis concludes with an analysis of the simulation results, an evaluation of S-MIP performance and, finally, some suggestions for future work.

  • 66. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ryrholm, Lisa
    Visual GUI testing in practice: challenges, problems and limitations (2015). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no 3, p. 694-744. Article in journal (Refereed)
    Abstract [en]

    In today’s software development industry, high-level tests such as Graphical User Interface (GUI) based system and acceptance tests are mostly performed with manual practices that are often costly, tedious and error prone. Test automation has been proposed to solve these problems but most automation techniques approach testing from a lower level of system abstraction. Their suitability for high-level tests has therefore been questioned. High-level test automation techniques such as Record and Replay exist, but studies suggest that these techniques suffer from limitations, e.g. sensitivity to GUI layout or code changes, system implementation dependencies, etc. Visual GUI Testing (VGT) is an emerging technique in industrial practice with perceived higher flexibility and robustness to certain GUI changes than previous high-level (GUI) test automation techniques. The core of VGT is image recognition which is applied to analyze and interact with the bitmap layer of a system’s front end. By coupling image recognition with test scripts, VGT tools can emulate end user behavior on almost any GUI-based system, regardless of implementation language, operating system or platform. However, VGT is not without its own challenges, problems and limitations (CPLs) but, like for many other automated test techniques, there is a lack of empirically-based knowledge of these CPLs and how they impact industrial applicability. Crucially, there is also a lack of information on the cost of applying this type of test automation in industry. This manuscript reports an empirical, multi-unit case study performed at two Swedish companies that develop safety-critical software. It studies their transition from manual system test cases into tests automated with VGT. In total, four different test suites that together include more than 300 high-level system test cases were automated for two multi-million lines of code systems. The results show that the transitioned test cases could find defects in the tested systems and that all applicable test cases could be automated. However, during these transition projects a number of hurdles had to be addressed; a total of 58 different CPLs were identified and then categorized into 26 types. We present these CPL types and an analysis of the implications for the transition to and use of VGT in industrial software development practice. In addition, four high-level solutions are presented that were identified during the study, which would address about half of the identified CPLs. Furthermore, collected metrics on cost and return on investment of the VGT transition are reported together with information about the VGT suites’ defect finding ability. Nine of the identified defects are reported, 5 of which were unknown to testers with extensive experience from using the manual test suites. The main conclusion from this study is that even though there are many challenges related to the transition and usage of VGT, the technique is still valuable, flexible and considered cost-effective by the industrial practitioners. The presented CPLs also provide decision support in the use and advancement of VGT and potentially other automated testing techniques similar to VGT, e.g. Record and Replay.
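
The core VGT step described above, image recognition against the bitmap layer, can be illustrated with plain template matching; the sketch below is a generic OpenCV example, not taken from any particular VGT tool, and the screenshot/template file names are hypothetical.

```python
import cv2

def locate_on_screen(screenshot_path, template_path, threshold=0.9):
    """Minimal image-recognition step behind Visual GUI Testing: find where a
    small template image (e.g. a button) appears in a full-screen screenshot.
    Returns the center of the best match, or None if confidence is too low."""
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    h, w = template.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)

# Hypothetical usage; a real VGT tool would move the mouse to this point and click.
# print(locate_on_screen("screenshot.png", "ok_button.png"))
```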

  • 67.
    Aleksandr, Polescuk
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Linking Residential Burglaries using the Series Finder Algorithm in a Swedish Context (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. A minority of criminals performs a majority of the crimes today. It is known that every criminal or group of offenders to some extent has a particular pattern (modus operandi) for how crimes are performed. Therefore, computers' computational power can be employed to discover crimes that follow the same model and possibly are carried out by the same criminal. The goal of this thesis was to apply the existing Series Finder algorithm to a feature-rich dataset containing data about Swedish residential burglaries.

    Objectives. The following objectives were achieved to complete this thesis: modifications were performed on an existing Series Finder implementation to fit the Swedish police force's dataset, and the MatLab code was converted to Python. Furthermore, an experiment setup was designed with appropriate metrics and statistical tests. Finally, the modified Series Finder implementation was evaluated against both Spatial-Temporal and Random models.

    Methods. The experimental methodology was chosen in order to achieve the objectives. An initial experiment was performed to find right parameters to use for main experiments. Afterward, a proper investigation with dependent and independent variables was conducted.

    Results. After calculating the metrics and applying the statistical tests, an accurate picture emerged of how each model performed. Series Finder showed better performance than the Random model. However, it had lower performance than the Spatial-Temporal model. The possible causes of one model performing better than another are discussed in the analysis and discussion section.

    Conclusions. After completing objectives and answering research questions, it could be clearly seen how the Series Finder implementation performed against other models. Despite its low performance, Series Finder still showed potential, as presented in future work.
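
The Series Finder algorithm itself is not reproduced here; as a loose illustration of the underlying idea of linking crimes by modus operandi and geography, the toy score below combines feature overlap with a spatial decay term. All names, fields and numbers are hypothetical.

```python
import math

def similarity(crime_a, crime_b, spatial_scale_km=5.0):
    """Toy pattern-similarity score for a pair of burglaries: Jaccard overlap of
    modus operandi features, discounted by spatial distance. This only illustrates
    the general idea of crime linking; it is not the Series Finder algorithm."""
    mo_a, mo_b = set(crime_a["mo"]), set(crime_b["mo"])
    jaccard = len(mo_a & mo_b) / len(mo_a | mo_b) if (mo_a | mo_b) else 0.0
    dx = crime_a["x_km"] - crime_b["x_km"]
    dy = crime_a["y_km"] - crime_b["y_km"]
    spatial = math.exp(-math.hypot(dx, dy) / spatial_scale_km)
    return jaccard * spatial

# Hypothetical records with coordinates (km) and modus operandi tags.
c1 = {"x_km": 0.0, "y_km": 0.0, "mo": {"window_entry", "night", "jewelry"}}
c2 = {"x_km": 1.2, "y_km": 0.8, "mo": {"window_entry", "night", "electronics"}}
print(f"link score: {similarity(c1, c2):.3f}")
```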

  • 68. Algestam, Henrik
    et al.
    Offesson, Marcus
    Lundberg, Lars
    Using components to increase maintainability in a large telecommunication system (2002). Conference paper (Refereed)
  • 69.
    Ali, Hazrat
    Blekinge Institute of Technology, School of Computing.
    A Performance Evaluation of RPL in Contiki (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    A Wireless Sensor Network is formed of several small devices encompassing the capability of sensing a physical characteristic and sending it hop by hop to a central node via low power and short range transceivers. The Sensor network lifetime strongly depends on the routing protocol in use. Routing protocol is responsible for forwarding the traffic and making routing decisions. If the routing decisions made are not intelligent, more re-transmissions will occur across the network which consumes limited resources of the wireless sensor network like energy, bandwidth and processing. Therefore a careful and extensive performance analysis is needed for the routing protocols in use by any wireless sensor network. In this study we investigate Objective Functions and the most influential parameters on Routing Protocol for Low power and Lossy Network (RPL) performance in Contiki (WSN OS) and then evaluate RPL performance in terms of Energy, Latency, Packet Delivery Ratio, Control overhead, and Convergence Time for the network. We have carried out extensive simulations yielding a detailed analysis of different RPL parameters with respect to the five performance metrics. The study provides an insight into the different RPL settings suitable for different application areas. Experimental results show ETX is a better objective, and that ContikiRPL provides very efficient network Convergence (14s), Control traffic overhead (1300 packets), Energy consumption (1.5% radio on time), Latency (0.5s), and Packet Delivery Ratio (98%) in our sample RPL simulation of one hour with 80 nodes, after careful configuration of DIO interval minimum/doublings, Radio duty cycling, and Frequency of application messages.
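
The ETX objective function referred to above uses the standard expected-transmission-count metric, ETX = 1/(Df x Dr) per link, with RPL preferring the parent that minimizes the summed path ETX toward the root. A small worked sketch with made-up delivery ratios follows.

```python
def link_etx(df, dr):
    """Expected transmission count of a link: ETX = 1 / (Df * Dr), where Df and Dr
    are the measured forward and reverse packet delivery ratios (0 < d <= 1)."""
    return 1.0 / (df * dr)

def path_etx(links):
    """With the ETX objective function, RPL prefers the parent that minimizes the
    cumulative ETX along the path to the root (sum of the link ETX values)."""
    return sum(link_etx(df, dr) for df, dr in links)

# Hypothetical delivery ratios for two candidate paths toward the DODAG root.
path_a = [(0.9, 0.8), (0.95, 0.9)]             # two fairly good links
path_b = [(1.0, 1.0), (0.6, 0.5), (0.9, 0.9)]  # one very lossy link in the middle
print(f"path A ETX = {path_etx(path_a):.2f}, path B ETX = {path_etx(path_b):.2f}")
```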

  • 70.
    Ali, Israr
    et al.
    Blekinge Institute of Technology, School of Computing.
    Shah, Syed Shahab Ali
    Blekinge Institute of Technology, School of Computing.
    Usability Requirements for GIS Application: Comparative Study of Google Maps on PC and Smartphone (2011). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context: Smartphone is gaining popularity due to its feasible mobility, computing capacity and efficient energy. Emails, text messaging, navigation and visualizing geo-spatial data through browsers are common features of smartphone. Display of geo-spatial data is collected in computing format and made publically available. Therefore the need of usability evaluation becomes important due to its increasing demand. Identifying usability requirements are important as conventional functional requirements in software engineering. Non-functional usability requirements are objectives and testable using measurable metrics. Objectives: Usability evaluation plays an important role in the interaction design process as well as identifying user needs and requirements. Comparative usability requirements are identified for the evaluation of a geographical information system (Google Maps) on personal computer (Laptop) and smartphone (iPhone). Methods: ISO 9241-11 guide on usability is used as an input model for identifying and specifying usability level of Google Maps on both personal computer and smartphone for intended output. Authors set target value for usability requirements of tasks and questionnaire on each device, such as acceptability level of tasks completion, rate of efficiency and participant’s agreement of each measure through ISO 9241-11 respectively. The usability test is conducted using Co-discovery technique on six pairs of graduate students. Interviews are conducted for validation of test results and questionnaires are distributed to get feedback from participants. Results: The non-functional usability requirements were tested and used five metrics measured on user performance and satisfaction. Through usability test, the acceptability level of tasks completion and rate of efficiency was matched on personal computer but did not match on iPhone. Through questionnaire, both the devices did not match participant’s agreement of each measure but only effectiveness matched on personal computer. Usability test, interview and questionnaire feedback are included in the results. Conclusions: The authors provided suggestions based on test results and identified usability issues for the improvement of Google Maps on personal computer and iPhone.

  • 71.
    Ali, Muhammad Usman
    Blekinge Institute of Technology, School of Computing.
    Cloud Computing as a Tool to Secure and Manage Information Flow in Swedish Armed Forces Networks (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    In the last few years cloud computing has created much hype in the IT world. It has provided new strategies to cut down costs and provide better utilization of resources. Apart from all other drawbacks, the cloud infrastructure has long been discussed for its vulnerabilities and security issues. There is a long list of service providers and clients who have implemented different service structures using cloud infrastructure. Despite all these efforts, many organizations, especially those with higher security concerns, have doubts about data privacy and theft protection in the cloud. This thesis aims to encourage Swedish Armed Forces (SWAF) networks to move to cloud infrastructures, as this is the technology that will make a huge difference and revolutionize service delivery models in the IT world. Organizations avoiding it would lag behind, but at the same time organizations should consider adopting the cloud strategy most reliable and compatible with their requirements. This document provides an insight into different technologies and tools implemented specifically for monitoring and security in the cloud. Much emphasis is given to virtualization technology because cloud computing relies highly on it. The Amazon EC2 cloud is analyzed from a security point of view. An intensive survey has also been conducted to understand market trends and people's perception of cloud implementation, security threats, cost savings and the reliability of different services provided.

  • 72.
    Ali, Muhammad Usman
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aasim, Muhammad
    Blekinge Institute of Technology, School of Computing.
    Usability Evaluation of Digital Library BTH: a case study (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Libraries have for hundreds of years been an important entity for every kind of institute, especially in the educational sector. Now it is an age of computers and the internet. People use electronic resources to fulfill the needs and requirements of their lives. Therefore libraries have also converted to computerized systems. People can access and use library resources just sitting at their computers by using the internet. This modern way of running a library has been given the name of digital libraries. Digital libraries are becoming popular for their flexibility of use and because more users can be served at a time. As the number of users increases, some issues relevant to interaction also arise while using digital library interfaces and utilizing their e-resources. In this thesis we evaluate usability factors and issues in digital libraries, and the authors have taken as a case study the real-time existing system of the digital library at BTH. This thesis report describes digital libraries and how users are being facilitated by them. Usability issues relevant to digital libraries are also discussed. Users have been the main source to evaluate and judge usability issues while interacting with and using this digital library. The results obtained showed dissatisfaction of users regarding the usability of BTH's digital library. The authors used usability evaluation techniques to evaluate the functionality and services provided by the BTH digital library system interface. Moreover, based on the results of our case study, suggestions for improvement of BTH's digital library are presented. Hopefully, these suggestions will help to make the BTH digital library system more usable in an efficient and effective manner for users.

  • 73.
    Ali, Sajjad
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ali, Asad
    Blekinge Institute of Technology, School of Computing.
    Performance Analysis of AODV, DSR and OLSR in MANET (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    A mobile ad hoc network (MANET) consists of mobile wireless nodes. The communication between these mobile nodes is carried out without any centralized control. A MANET is a self-organized and self-configurable network where the mobile nodes move arbitrarily. The mobile nodes can receive and forward packets as routers. Routing is a critical issue in MANETs and hence the focus of this thesis, along with the performance analysis of routing protocols. We compare three routing protocols, i.e. AODV, DSR and OLSR. Our simulation tool is the OPNET Modeler. The performance of these routing protocols is analyzed using three metrics: delay, network load and throughput. All three routing protocols are explained in depth together with these metrics. A comparative analysis of the protocols is carried out, and finally a conclusion is presented on which routing protocol is the best one for mobile ad hoc networks.

  • 74.
    Ali, Wajahat
    et al.
    Blekinge Institute of Technology, School of Computing.
    Muhammad, Asad
    Blekinge Institute of Technology, School of Computing.
    Response Time Effects on Quality of Security Experience (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The recent decade has witnessed an enormous development in internet technology worldwide. Initially internet was designed for applications such as Electronic Mail and File Transfer. With technology evolving and becoming popular, people use internet for e-banking, e-shopping, social networking, e-gaming, voice and a lot of other applications. Most of the internet traffic is generated by activities of end users, when they request a specific webpage or web based application. The high demand for internet applications has driven service operators to provide reliable services to the end user and user satisfaction has now become a major challenge. Quality of Service is a measure of the performance of a particular service. Quality of Experience is a subjective measure of user’s perception of the overall performance of network. The high demand for internet usage in everyday life has got people concerned about security of information over web pages that require authentication. User perceived Quality of Security Experience depends on Quality of Experience and Response Time for web page authentication. Different factors such as jitter, packet loss, delay, network speed, supply chains and the type of security algorithm play a vital role in the response time for authentication. In this work we have tried to do qualitative and quantitative analysis of user perceived security and Quality of Experience with increasing and decreasing Response Times towards a web page authentication. We have tried to derive a relationship between Quality of Experience of security and Response Time.

  • 75.
    Ali, Waqas
    Blekinge Institute of Technology, School of Computing.
    Case Study of Mobile Internet User Experience (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The mobile internet is currently considered the future of the Internet. The number of mobile handsets sold compared to desktop PCs is noticeable. These hints depict the potential of the mobile internet and a future market strongly relying on mobile devices. But at the same time the number of mobile internet users is growing more slowly. Particularly in markets where internet access through computers is very simple, mobile internet users seem not very enthusiastic about using the internet on mobile phones. The author of this study assumed, on the basis of literature findings, that this lack of interest is due to an unsatisfactory mobile internet user experience. This thesis work is an effort into the complex area of the mobile internet and sheds some light on how to improve the user experience for the mobile internet. The main focus of this research work is the identification of hurdles/challenges for the mobile internet user experience, and it explores the concepts present in academia. In order to understand the topic properly, the author performed a systematic literature review (SLR). The overall objective of the SLR is to examine the existing work on the thesis topic. This in-depth study of the literature revealed that the mobile internet user experience is categorized into aspects, elements and factors by different researchers, and these are considered a central part of the mobile internet user experience. There are a few other factors that affect and complicate this job, such as usage context and user expectations. In this work, current problems of the mobile internet user experience are identified systematically, which has not been done before, and then discussed in a way that provides a better understanding of the mobile internet user experience to academia. To fulfill the aim and objectives, the author of this study conducted a detailed systematic review analysis of empirical studies from 1998 to 2012. The research studies were identified from the most authentic, scientifically and technically peer-reviewed databases such as Scopus, Evillage, IEEE Xplore and the ACM digital library. From the SLR results, we have found different aspects, elements, factors and challenges of the mobile internet user experience. The most common challenges faced by users and reported in academia were screen size, input facilities, usability of services, and data traffic costs. The information attained during this thesis study through academia (literature) is presented in a descriptive way, which reflects that there is an emerging trend of using the internet on mobile devices. Through this study the author presents the influencing perspectives of the mobile internet user experience that need to be considered for the advancement of the mobile internet. The presented work adds a contribution in the sense that, to the best of our knowledge, no systematic review effort has been done in this area.

  • 76.
    Ali, Zahoor
    et al.
    Blekinge Institute of Technology, School of Computing.
    Arfeen, Muhammad Qummer ul
    Blekinge Institute of Technology, School of Computing.
    The role of Machine Learning in Predicting CABG Surgery Duration (2011). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. The operating room (OR) is one of the most expensive resources of a hospital. Its mismanagement is associated with high costs and lost revenues. There are various factors which may cause OR mismanagement; one of them is wrong estimation of surgery duration. Surgeons underestimate or overestimate surgery duration, which causes underutilization or overutilization of the OR and medical staff. Resolving the issue of wrong estimates can result in improvement of the overall OR planning. Objectives. In this study we investigate two different techniques of feature selection and compare different regression based modeling techniques for surgery duration prediction. One of these techniques (with the lowest mean absolute error) is used for building a model. We further propose a framework for implementation of this model in a real world setup. Results. In our case the selected technique (correlation based feature selection with best first search in backward direction) could not produce better results than the expert opinion based approach for feature selection. Linear regression outperformed on both data sets. Comparatively, the mean absolute error of linear regression on the experts' opinion based data set was the lowest. Conclusions. We have concluded that patterns exist for the relationship between the resultant prediction (surgery duration) and other important features related to patient characteristics. Thus, machine learning tools can be used for predicting surgery duration. We have also concluded that the proposed framework may be used as a decision support tool to facilitate surgery duration prediction, which can improve the planning of ORs and their resources.
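
As a hedged illustration of the kind of evaluation described above (cross-validated mean absolute error for a regression model), the sketch below uses scikit-learn on synthetic stand-in data; the features, sample size and noise level are assumptions, not the thesis's hospital dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical patient features (e.g. age, BMI, number of grafts, comorbidity score)
# and observed CABG surgery durations in minutes; real work would use hospital data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 240 + 25 * X[:, 2] + 10 * X[:, 0] + rng.normal(scale=20, size=200)

model = LinearRegression()
mae = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_absolute_error")
print(f"cross-validated MAE: {mae.mean():.1f} +/- {mae.std():.1f} minutes")
```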

  • 77.
    Alipour, Philip Baback
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ali, Muhammad
    Blekinge Institute of Technology, School of Computing.
    An Introduction and Evaluation of a Lossless Fuzzy Binary AND/OR Compressor (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    We report a new lossless data compression algorithm (LDC) for implementing predictably fixed compression values. The fuzzy binary AND/OR algorithm (FBAR) primarily aims to introduce a new model for regular and superdense coding in classical and quantum information theory. Classical coding on x86 machines does not offer techniques for maximum LDCs generating fixed values of Cr >= 2:1. However, the current model is evaluated to serve multidimensional LDCs with fixed value generations, contrasting the popular methods used in probabilistic LDCs, such as Shannon entropy. The currently introduced entropy is of a ‘fuzzy binary’ type in a 4D hypercube bit flag model, with a product value of at least 50% compression. We have implemented the compression and simulated the decompression phase for lossless versions of the FBAR logic. We further compared our algorithm with the results obtained by other compressors. Our statistical test shows that the presented algorithm mutably and significantly competes with other LDC algorithms on both temporal and spatial factors of compression. The current algorithm is a stepping stone to quantum information models solving complex negative entropies, giving double-efficient LDCs with > 87.5% space savings.
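
The figures quoted above relate compression ratio to space savings in a simple way: savings = 1 - 1/Cr, so Cr = 2:1 corresponds to 50% and 87.5% savings corresponds to Cr = 8:1. A short check of that arithmetic:

```python
def space_savings(compression_ratio):
    """Space savings implied by a compression ratio Cr = uncompressed/compressed."""
    return 1.0 - 1.0 / compression_ratio

for cr in (2, 4, 8):
    print(f"Cr = {cr}:1  ->  {space_savings(cr):.1%} space savings")
# Cr = 2:1 gives 50.0%, and 87.5% savings corresponds to Cr = 8:1.
```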

  • 78.
    Alisic, Senadin
    et al.
    Blekinge Institute of Technology, School of Management.
    Karapistoli, Eirini
    Blekinge Institute of Technology, School of Management.
    Katkic, Adis
    Blekinge Institute of Technology, School of Management.
    Key Drivers for the Successful Outsourcing of IT Services (2012). Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    Background: Services are without doubt the driving force in the economies of many countries today. The increased importance of the service sector in industrialized economies and its productivity rates are testified by the fact that the current list of Fortune 500 companies contains more service companies and fewer manufacturing companies than in previous decades. Many products today are being transformed into services or have a higher service component than previously. In the development of this increasingly important bundling of services with products, outsourcing and offshoring play a key role. Companies have been outsourcing work for many years now, making the latter a well-established phenomenon. Outsourcing to foreign countries, referred to as offshoring, has also been fuelled by ICT and globalization, where firms can capitalize on price and cost differentials between countries. Constant improvements in technology and global communications virtually guarantee that the future will bring much more outsourcing of services, and more specifically, outsourcing of IT services. While outsourcing and offshoring strategies play an important role in IT services, we would like to investigate the drivers that affect the successful outcome of an offshore outsourcing engagement. Purpose: The principal aim of the present study is therefore twofold: a) to identify key drivers for the successful outsourcing of IT services seen from the outsourcing partner’s perspective and b) to investigate how the outsourcing partner prioritizes these drivers.

  • 79.
    Allahyari, Hiva
    Blekinge Institute of Technology, School of Computing.
    On the concept of Understandability as a Property of Data mining Quality (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    This paper reviews methods for evaluating and analyzing the comprehensibility and understandability of models generated from data in the context of data mining and knowledge discovery. The motivation for this study is the fact that the majority of previous work has focused on increasing the accuracy of models, ignoring user-oriented properties such as comprehensibility and understandability. Approaches for analyzing the understandability of data mining models have been discussed on two different levels: one is regarding the type of the models’ presentation and the other is considering the structure of the models. In this study, we present a summary of existing assumptions regarding both approaches followed by an empirical work to examine the understandability from the user’s point of view through a survey. From the results of the survey, we obtain that models represented as decision trees are more understandable than models represented as decision rules. Using the survey results regarding understandability of a number of models in conjunction with quantitative measurements of the complexity of the models, we are able to establish correlation between complexity and understandability of the models.

  • 80. Allahyari, Hiva
    et al.
    Lavesson, Niklas
    User-oriented Assessment of Classification Model Understandability (2011). Conference paper (Refereed)
    Abstract [en]

    This paper reviews methods for evaluating and analyzing the understandability of classification models in the context of data mining. The motivation for this study is the fact that the majority of previous work has focused on increasing the accuracy of models, ignoring user-oriented properties such as comprehensibility and understandability. Approaches for analyzing the understandability of data mining models have been discussed on two different levels: one is regarding the type of the models’ presentation and the other is considering the structure of the models. In this study, we present a summary of existing assumptions regarding both approaches followed by an empirical work to examine the understandability from the user’s point of view through a survey. The results indicate that decision tree models are more understandable than rule-based models. Using the survey results regarding understandability of a number of models in conjunction with quantitative measurements of the complexity of the models, we are able to establish correlation between complexity and understandability of the models.
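
One way to "establish correlation between complexity and understandability", as the abstract puts it, is a rank correlation between a complexity measure and mean survey scores; the sketch below uses Spearman's rho on invented numbers purely to illustrate the computation, not the paper's data.

```python
from scipy.stats import spearmanr

# Hypothetical data: complexity = number of nodes or rules in each model,
# understandability = mean survey rating for the same model (higher = clearer).
complexity        = [5, 12, 20, 35, 60, 90]
understandability = [4.6, 4.1, 3.8, 3.0, 2.4, 2.1]

rho, p_value = spearmanr(complexity, understandability)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # strong negative correlation
```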

  • 81.
    Allberg, Petrus
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Applied machine learning in the logistics sector: A comparative analysis of supervised learning algorithms (2018). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Machine learning is an area that is being explored with great haste these days, which inspired this study to investigate how seven different supervised learning algorithms perform compared to each other. These algorithms were used to perform classification tasks on logistics consignments; the classification is binary and a consignment can either be classified as missed or not.

    Objectives. The goal was to find which of these algorithms perform well when used for this classification task and to see how the results varied with different sized datasets. The importance of the features included in the datasets has been analyzed with the intention of finding whether there is any connection between human errors and these missed consignments.

    Methods. The process from raw data to a predicted classification has many steps, including data gathering, data preparation, feature investigation and more. Through cross-validation, the algorithms were all trained and tested on the same datasets and then evaluated based on the metrics recall and accuracy.

    Results: The scores on both metrics increase with the size of the datasets; when comparing the seven algorithms, two do not perform as well as the other five, which all perform roughly the same.

    Conclusions: Any of the five algorithms mentioned above can be chosen for this type of classification, or studied further based on other measurements, and there is an indication that human errors could play a part in whether a consignment is classified as missed or not.
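
The Methods paragraph above describes cross-validating seven algorithms on the same datasets and scoring them on recall and accuracy. The thesis does not name the algorithms or the data here, so the sketch below uses a few common scikit-learn classifiers and a synthetic imbalanced dataset purely as stand-ins.

```python
# Minimal sketch (assumptions: scikit-learn is available; the thesis's seven
# algorithms and its dataset are not named here, so common classifiers and a
# synthetic binary dataset stand in for them).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)  # imbalanced "missed / not missed" labels

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "recall"])
    print(f"{name}: accuracy={scores['test_accuracy'].mean():.3f} "
          f"recall={scores['test_recall'].mean():.3f}")
```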

  • 82.
    Allblom, Viktor
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Evaluating Agent Strategies for the TAC Supply Chain Management Competition2004Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    The TAC Supply Chain Management game was designed to capture many of the challenges involved in dynamic supply chain practices. To evaluate the game I created four different agents, which operate according to simple but very different strategies. In addition, an advanced agent was created to see if the game was advanced enough not to be dominated by simple strategies. While the game is advanced enough to resist simple strategies, it is so simplified that it will never help solve any real world problems unless it is expanded to include more factors/problems of supply chain management.

  • 83.
    Almrot, Emil
    et al.
    Blekinge Institute of Technology, School of Computing.
    Andersson, Sebastian
    Blekinge Institute of Technology, School of Computing.
    A study of the advantages & disadvantages of mobile cloud computing versus native environment2013Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    The advent of cloud computing has enabled the possibility of moving complex calculations and device operations to the “cloud” in an effective way. With cloud computing applied to mobile devices, some of the primary constraints of mobile computing, such as battery life and hardware with less computational power, could be resolved by moving complex operations and computations to the cloud. This thesis aims to identify advantages and disadvantages associated with running cloud-based applications on mobile devices. We also present a study of the power consumption of five cloud-based mobile applications and compare the results to their non-cloud counterparts. The results from the experiment show that migrating all your applications to the cloud will not significantly reduce the power consumption of your mobile device at the moment, but that mobile cloud computing has matured within the last year and will continue doing so as cloud computing develops.

  • 84.
    Almström, Malin
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Olsson, Christina
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Requirement Specification for Information Security to Health Systems, Case Study: IMIS2003Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [sv]

    During 2001-2002 a prototype, IMIS (Integrated Mobile Information System), was developed at BTH (Blekinge Institute of Technology) to demonstrate how mobile IT systems can be used in healthcare. The prototype was based on Engeström's activity theory. An ongoing project started in spring 2003. The purpose of the project is the further development of IMIS with a special focus on diabetes healthcare. Participants in the project are scientists and students at BTH, ALMI Företagspartner, Blekinge FoU-enhet, Barndiabetesförbundet Blekinge, Blekinge Diabetesförening, Vårdcentralen Ronneby and Vårdcentralen Sölvesborg. The goal of IMIS is to develop a secure communication platform which follows requirements from caretaker and caregiver as well as the Swedish laws regulating digital information and healthcare. The output of this master thesis is a requirements specification for information security in healthcare, with IMIS used as a case study. The requirements specification follows the international standard SS-ISO/IEC 17799.

  • 85.
    Alpadie, Irene
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Karlsson, Eva
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Ett IT-verktygs användbarhet inom hemtjänsten: en utvärdering av IMIS2002Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    We have evaluated IMIS through usability tests together with home care staff working in the field. The goal of the evaluation has been to report any usability-related problems and, based on these, suggest improvements for further development of the system. The aim of our work has been to contribute increased knowledge about the usability of IT tools in home care services. IMIS (Integrated Mobile Information System) is an interactive information system developed by students from the IT programme at Blekinge Institute of Technology. The system is a prototype intended to be developed further for use in home care. It is meant to serve as an aid that facilitates coordination and communication between care staff, makes information about care recipients easily accessible to caregivers, and provides an overall picture of the care recipients and their personal networks. Home care means that people who cannot manage on their own due to illness, disability or age-related changes can receive help in their own homes with, among other things, cleaning and personal care from municipally employed care assistants. Over the last 20 years computerization has increased markedly, and personal computers have become a commodity for a large part of the population. However, new products and systems often cause problems for users because they are not always adapted to the intended users' skills and needs. Mobile communication solutions in home care are a relatively new concept. What is unique about these solutions is that they are primarily aimed at those who work out in the care recipients' homes. From a communicator, a combined cordless phone and handheld computer, the staff connect to the Internet, where everyone with authorization can access the information the moment it is written, wherever they are. Usability can be defined and made measurable in many different ways. We have chosen to start from the international usability standard ISO 9241-11, focusing on effectiveness and satisfaction. To carry out our tests we used the method of cooperative evaluation, which is a further development of the thinking-aloud method. In short, the method means that users comment aloud while working on tasks. It is an empirical and formative evaluation method of a less formal nature in which the observer takes part in the evaluation. We also used questionnaires before, during and after the practical tests. The tests were carried out with seven permanently employed care assistants in the home care services of Karlshamn municipality. Most of the participants had no or very little computer experience. All test persons were nevertheless positive about the system, and no one thought the system seemed particularly difficult to use. After trying the tool, most of them found it easier than they had first expected. Six of the participants believed they could benefit in their work from the functions they had tried, and they would also recommend that the municipality purchase the system for the home care services. Everyone found the system appealing and efficient. Only three of the participants completed all the tasks within the allotted time; these three all had little or intermediate computer experience. The biggest problem related to the system was the participants' confusion when the main window did not reload after they clicked a link in the menu, which meant that the content of the main window and the menu did not always match. Together with the theoretical studies, the usability tests have given us increased insight into the usability of IT tools in home care services.

  • 86.
    Al-Refai, Ali
    et al.
    Blekinge Institute of Technology, School of Computing.
    Pandiri, Srinivasreddy
    Blekinge Institute of Technology, School of Computing.
    Cloud Computing: Trends and Performance Issues2011Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Cloud Computing is a fascinating concept these days; it is attracting many organizations to move their utilities and applications into dedicated data centers so that they can be accessed from the Internet. This allows users to focus solely on their businesses while Cloud Computing providers handle the technology. Choosing the best provider is a challenge for organizations that are willing to step into the Cloud Computing world. A single cloud center generally cannot deliver large-scale resources for the cloud tenants; therefore, multiple cloud centers need to collaborate to achieve some business goals and to provide the best possible services at the lowest possible costs. However, a number of aspects, legal issues, challenges, and policies should be taken into consideration when moving a service into the Cloud environment. Objectives: The aim of this research is to identify and elaborate the major technical and strategic differences between cloud-computing providers in order to enable organization managements, system designers and decision makers to have better insight into the strategies of the different Cloud Computing providers. It also aims to understand the risks and challenges of implementing Cloud Computing and how those issues can be moderated. The study tries to define Multi-Cloud Computing by studying the pros and cons of this new domain, and to study the concept of load balancing in the cloud in order to examine performance over multiple cloud environments. Methods: In this master thesis a number of research methods are used, including a systematic literature review, contacting experts from the relevant field (interviews) and performing a quantitative methodology (experiment). Results: Based on the findings of the literature review, interviews and experiment, the results for the research questions are: 1) a comprehensive study identifying and comparing the major Cloud Computing providers; 2) a list of impacts of Cloud Computing (legal aspects, trust and privacy); 3) a definition of Multi-Cloud Computing and its benefits and drawbacks; 4) performance results for the cloud environment obtained by performing an experiment on a load balancing solution. Conclusions: Cloud Computing has become a central interest for many organizations. More and more companies are stepping into Cloud Computing service technologies; Amazon, Google, Microsoft, SalesForce, and Rackspace are the top five major providers in the market today. However, no Cloud is perfect for all services. The legal framework is very important for the protection of the user’s private data; it is a key factor for the safety of the user’s personal and sensitive information. The privacy threats vary according to the nature of the cloud scenario: some clouds and services face very low privacy threats compared to others, while the public cloud accessed through the Internet is one of the most exposed when it comes to increasing privacy concerns. Lack of visibility into the provider supply chain will lead to suspicion and ultimately distrust. The evolution of Cloud Computing suggests that, in the near future, the so-called Cloud will in fact be a Multi-Cloud environment composed of a mixture of private and public Clouds forming an adaptive environment. Load balancing in the Cloud Computing environment is different from typical load balancing. The architecture of cloud load balancing uses a number of commodity servers to perform the load balancing. The performance of the cloud differs depending on the cloud's location, even for the same provider. The HAProxy load balancer shows a positive effect on the cloud's performance at high amounts of load; the effect is unnoticed at lower amounts of load. These effects can vary depending on the location of the cloud.
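
As a conceptual aside, not HAProxy itself and not code from the thesis: the load-balancing architecture described above boils down to spreading incoming requests over a pool of commodity back-end servers. A minimal round-robin sketch, with made-up server addresses:

```python
# Conceptual sketch only (not HAProxy, not the thesis's setup): round-robin
# selection over a pool of commodity back-end servers, the basic idea behind
# cloud load balancing as described in the abstract. Addresses are made up.
from itertools import cycle

backends = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]
next_backend = cycle(backends).__next__

def route(request_id: int) -> str:
    """Assign each incoming request to the next server in the rotation."""
    server = next_backend()
    return f"request {request_id} -> {server}"

for i in range(6):
    print(route(i))
```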

  • 87.
    Alsbjer, Maria
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Att hitta ingångar i formandet av programmeringskunskap2001Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    Learning to program at beginner level within a university education is a process that for some students seems entirely unproblematic, while for others it appears painful or even insurmountable. Why is this so, and how can one find entry points into the shaping of programming knowledge that open up possibilities for everyone who wants to learn? With these questions as a starting point, I have looked at two different study programmes at Blekinge Institute of Technology (BTH): the Information Technology programme and the Media Technology programme. I have interviewed a selection of students and teachers from both programmes and asked questions about, among other things, previous experience with computers and programming, views on programming, perceived difficulties in learning to program, and possible causes of these difficulties. The results from the interviews have been reflected through a number of texts that deal with conceptions and attitudes in the shaping of programming knowledge, as well as more general questions about epistemological starting points in the shaping of all kinds of knowledge, with a focus on programming knowledge. My methodology is anchored in gender research theories about scientific knowledge processes, within which questions concerning, among other things, method and theory content are problematized. This anchoring has made possible the contextual analysis in which I have tried to highlight general as well as specific tendencies, that is, both factors that can be attributed to the respective programmes I studied and factors that concern programming education in a university environment in general. In the concluding discussion, I have placed particular emphasis on the questions that I perceive to have the most significant bearing on how opportunities can be created for programming knowledge to be perceived as "accessible" by everyone who is interested in learning to program. These questions concern precisely the overall view of knowledge within which programming education is all too often shaped; a view of knowledge that expresses a more or less pronounced performance-oriented mindset. Questions about the consequences that "uniform" teaching may have for different individuals are also raised, and opportunities for creating diversity in teaching are discussed, as is the role of the teachers in this context. With this work I want to contribute a knowledge base for pedagogical development work, to the benefit of students and teachers as well as the subject of programming itself.

  • 88.
    Alsén, Maria
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Järgenstedt, Nils
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Kommunikation med hjälp av mock-uper2003Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    In several cases, systems that have been developed have been very time consuming and have cost a lot of money, but they still do not fulfil the users' requirements and requests. To make new systems better, you have to find a way to communicate that allows the developers to understand the needs of the user. The aim of our thesis is to highlight the importance of communication in system development. To investigate this we have chosen to do a study of the real-estate system. The work methods that have been used include mock-ups and informal conversations with the user, who is employed by the Church of Sweden in Ronneby. The purpose of this thesis is, among other things, to provide the Church of Sweden in Ronneby with a report which can be of help for further development of the system. The system was developed by a work group at Blekinge Institute of Technology. Our research question is: Would the Church of Sweden in Ronneby obtain a more useful system if the developers applied the guidelines and experience that exist within the area of HCI? Furthermore, we have looked into whether communication improved when mock-ups were used to increase interaction. Human-computer interaction as a term was adopted in the mid-1980s as a means of describing this new field of study, reflecting a new way of looking at the interaction between computers and people. In this field it is important to involve the user in the development process from beginning to end. Together with the user, we have made some propositions based on previous documentation from the project. A mock-up is a paper prototype that shows the user what the system will look like. This method was applied together with the user to find a more logical structure for the system. After gathering the results of the case study, we can honestly say that communication does increase with the use of mock-ups. Good communication is not something you can learn from books; you have to adjust to the situation. When developing a system, it is important to work with a model that allows you to go back and change things in previous phases that are incorrect, and also to include the user in the whole process.

  • 89.
    Altaf, Moaz
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    SMI-S for the Storage Area Network (SAN) Management2014Student thesis
    Abstract [en]

    The storage vendors have their own standards for the management of their storage resources, but this creates interoperability issues between different storage products. With the recent advent of the new protocol named Storage Management Initiative-Specification (SMI-S), the Storage Networking Industry Association (SNIA) has taken a major step towards making storage management more effective and organized. SMI-S has replaced its predecessor, the Storage Network Management Protocol (SNMP), and has been categorized as an ISO standard. The main objective of SMI-S is to provide interoperable management of heterogeneous storage vendor systems by unifying Storage Area Network (SAN) management, hence making the dreams of network managers come true. SMI-S is a guide to building systems using modules that ‘plug’ together. SMI-S compliant storage modules that use the CIM ‘language’ and adhere to the CIM schema interoperate in a system regardless of which vendor built them. SMI-S is object-oriented: any physical or abstract storage-related element can be defined as a CIM object. SMI-S can unify SAN management systems and it works well with heterogeneous storage environments. SMI-S offers cross-platform, cross-vendor storage resource management. This thesis work discusses the use of SMI-S at Compuverde, a storage solution provider located in the heart of Karlskrona, in the southeastern part of Sweden. Compuverde was founded by Stefan Bernbo in Karlskrona, Sweden. Just like all other leading storage providers, Compuverde has decided to deploy the Storage Management Initiative-Specification (SMI-S) to manage their Storage Area Network (SAN) and to achieve interoperability. This work was done to help Compuverde deploy the SMI-S protocol for the management of the Storage Area Network (SAN), which, among many of its features, would create alerts/traps in case of a disk failure in the SAN. In this way, they would be able to keep their clients' data safe and secure and keep their reputation for being reliable in the storage industry. Since Compuverde regularly uses Microsoft Windows and Microsoft has started to support SMI-S for storage provisioning in System Center 2012 Virtual Machine Manager (SCVMM), this work was done using SCVMM 2012 and Windows Server 2012. The SMI-S provider used for this work was a QNAP TS-469 Pro.

  • 90. Alves, Dimas I.
    et al.
    Machado, Renato
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Legg, Andrei P.
    Uchoa-Filho, Bartolomeu F.
    Cooperative multiple-access scheme with antenna selection and incremental relaying2014In: 2014 INTERNATIONAL TELECOMMUNICATIONS SYMPOSIUM (ITS), São Paulo: IEEE , 2014Conference paper (Refereed)
    Abstract [en]

    A cooperative multiple-access scheme for wireless communications systems with antenna selection and incremental relaying is proposed. The scheme aims to improve the system throughput while preserving good performance in terms of bit error rate. The system consists of N nodes which send their information to both the destination node and the multiple-antenna relay station. Based on the channel state information, the destination node decides whether or not relaying will be performed. When relaying is performed, the decode-remodulate-and-forward protocol is used with the best antenna. Results reveal that the proposed scheme achieves a good tradeoff between throughput and bit error rate, which makes it suitable for multi-user networks.

  • 91.
    Amin, Khizer
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Minhas, Mehmood ul haq
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Facebook Blocket with Unsupervised Learning2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    The Internet has become a valuable channel for both business-to-consumer and business-to-business e-commerce. It has changed the way many companies manage their business. Every day, more and more companies establish their presence on the Internet. Web sites are launched for online shopping, as web shops or online stores are a popular means of goods distribution. The number of items sold through the Internet has sprung up significantly in the past few years. Moreover, it has become a choice for customers to do their shopping at their ease. Thus, the aim of this thesis is to design and implement a consumer-to-consumer application for Facebook, which is one of the largest social networking websites. The application allows Facebook users to use their regular profile (on Facebook) to buy and sell goods or services through Facebook. As we already mentioned, there are many web shops, such as eBay and Amazon, and applications like Blocket on Facebook. However, none of them interacts directly with the Facebook users, and all of them use their own platforms. Users may use the web shop link from their Facebook profile and will be redirected to the web shop. On the other hand, most applications on Facebook use notifications to introduce themselves, or they push their application on Facebook pages. This application provides an opportunity for Facebook users to interact directly with other users and use the Facebook platform as a selling/buying point. The application is developed using a modular approach. A Python web framework, Django, is used, and association rule learning is applied for the classification of users’ advertisements. The Apriori algorithm generates the rules, which are stored in a separate text file. The rule file is further used to classify advertisements and is updated regularly.
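
The abstract names the Apriori algorithm and a rule file but gives no implementation detail. A minimal, assumption-laden sketch of Apriori-style rule mining over tokenised advertisements (limited to item pairs; the data, thresholds and file name are all made up) might look like this:

```python
# Minimal sketch (illustrative only, not the thesis code): count item supports,
# keep frequent pairs, and write rules above a confidence threshold to a text
# file, mirroring the workflow described in the abstract.
from itertools import combinations
from collections import Counter

ads = [  # hypothetical tokenised adverts
    {"bike", "used", "stockholm"},
    {"bike", "new", "helmet"},
    {"car", "used", "stockholm"},
    {"bike", "used", "helmet"},
]
min_support, min_confidence = 0.5, 0.7

item_count = Counter(item for ad in ads for item in ad)
pair_count = Counter(frozenset(p) for ad in ads for p in combinations(sorted(ad), 2))

rules = []
for pair, count in pair_count.items():
    if count / len(ads) < min_support:
        continue  # prune infrequent pairs, as Apriori does
    a, b = tuple(pair)
    for lhs, rhs in ((a, b), (b, a)):
        confidence = count / item_count[lhs]
        if confidence >= min_confidence:
            rules.append(f"{lhs} -> {rhs} (conf={confidence:.2f})")

with open("rules.txt", "w") as fh:  # stored in a separate text file, as in the abstract
    fh.write("\n".join(rules))
print(rules)
```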

  • 92.
    Amini, Noradin
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Erixon, Leif
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Applikationsintegrering: en analys av metoder och teknik2002Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    In the contemporary world of information technology you find a multitude of applications and systems covering a broad spectrum of areas of need in different companies. One effect of this multitude of programs is the difficulty of making them exchange information with each other or collaborate, since they are developed in different programming languages for different platforms, with different standards and different data formats. Our aim with this work is to describe how it is possible to tie these programs together to make them actually communicate with each other in order to exchange information, share their native methods and also become a part of the overall business processes. In this integration task you will, among other things, find different levels of application integration, such as data level, method level, application interface level and user interface level integration. Application integration also involves intermediary software components, called middleware, that facilitate the connection between applications. There is a range of different middleware products offered on the market today. The functionalities of those products vary greatly depending on which technology or technologies they are built around and which vendor they come from. In order to connect to a real-life situation we have made up a company with a need for integration. By trying to choose a solution for this company we discuss what to integrate (data, methods, etc.) and which technical solution (middleware product) might be useful for integrating the different kinds of applications in our imaginary company. Finally, we have drawn some conclusions from our work; these are both of a more general nature and specific to our case study.

  • 93.
    Amiri, Mohammad Reza Shams
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rohani, Sarmad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Automated Camera Placement using Hybrid Particle Swarm Optimization2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context. Automatic placement of surveillance cameras' 3D models in an arbitrary floor plan containing obstacles is a challenging task. The problem becomes more complex when different types of region of interest (RoI) and minimum resolution are considered. An automatic camera placement decision support system (ACP-DSS) integrated into a 3D CAD environment could assist surveillance system designers with the process of finding good camera settings under multiple constraints. Objectives. In this study we designed and implemented two subsystems: a camera toolset in SketchUp (CTSS) and a decision support system using an enhanced Particle Swarm Optimization (PSO) algorithm (HPSO-DSS). The objective for the proposed algorithm was to have good computational performance in order to quickly generate a solution for the automatic camera placement (ACP) problem. The new algorithm benefited from different aspects of other heuristics, such as hill-climbing and greedy algorithms, as well as a number of new enhancements. Methods. Both CTSS and ACP-DSS were designed and constructed using the information technology (IT) research framework. A state-of-the-art evolutionary optimization method, Hybrid PSO (HPSO), implemented to solve the ACP problem, was the core of our decision support system. Results. The CTSS was evaluated by some of its potential users, who employed it and then answered a survey; the evaluation showed a highly satisfactory level among the respondents. Various aspects of the HPSO algorithm were compared to two other algorithms (PSO and a Genetic Algorithm), all implemented to solve our ACP problem. Conclusions. The HPSO algorithm provided an efficient mechanism to solve the ACP problem in a timely manner. The integration of ACP-DSS into CTSS may aid surveillance designers in planning and validating the design of their security systems more easily. The quality of CTSS as well as the solutions offered by ACP-DSS were confirmed by a number of field experts.
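
The abstract does not spell out the HPSO enhancements, so the following is only a generic PSO loop with a simple hill-climbing touch-up of the global best, maximising a toy objective that stands in for camera coverage; the objective function and every constant are assumptions, not the thesis's algorithm.

```python
# Minimal PSO sketch with a hill-climbing refinement step (an assumption-based
# stand-in for the thesis's HPSO; the "coverage" objective is a toy function).
import numpy as np

rng = np.random.default_rng(0)

def coverage(x):
    """Toy objective (higher is better); a real ACP objective would score RoI coverage."""
    return -np.sum((x - 3.0) ** 2)

dim, n_particles, iters = 4, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5

x = rng.uniform(0, 10, (n_particles, dim))   # particle positions (e.g. camera x, y, pan, tilt)
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([coverage(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # classic PSO velocity update
    x = x + v
    vals = np.array([coverage(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()
    # hill-climbing refinement of the global best (one small random step)
    candidate = gbest + rng.normal(0, 0.05, dim)
    if coverage(candidate) > coverage(gbest):
        gbest = candidate

print("best placement:", np.round(gbest, 2), "score:", round(coverage(gbest), 4))
```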

  • 94.
    Amjad, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Malhi, Rohail Khan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Burhan, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    DIFFERENTIAL CODE SHIFTED REFERENCE IMPULSE-BASED COOPERATIVE UWB COMMUNICATION SYSTEM2013Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Cooperative Impulse Radio Ultra Wideband (IR-UWB) communication is a radio technology that is very popular for short-range communication systems, as it enables single-antenna mobiles in a multi-user environment to share their antennas by creating a virtual MIMO system to achieve transmit diversity. In order to improve the cooperative IR-UWB system performance, we use Differential Code Shifted Reference (DCSR). Simulations are used to compute the Bit Error Rate (BER) of DCSR in a cooperative IR-UWB system using different numbers of decode-and-forward relays while changing the distance between the source node and the destination node. The results suggest that, compared to the Code Shifted Reference (CSR) cooperative IR-UWB communication system, the DCSR cooperative IR-UWB communication system performs better in terms of BER, power efficiency and channel capacity. The simulations are performed for both non-line-of-sight (N-LOS) and line-of-sight (LOS) conditions, and the results confirm that the system performs better in a LOS channel environment. The simulation results also show that performance improves as the number of relay nodes is increased to a sufficiently large number.

  • 95.
    ananth, Indirajith Vijai
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Study on Assessing QoE of 3DTV Using Subjective Methods2013Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    The ever-increasing popularity and enormous growth of the 3D movie industry are driving the penetration of 3D services into home entertainment systems. Providing a third dimension gives an intense visual experience to the viewers. Being a new field, several research efforts are under way to measure the end user's viewing experience. Research groups, including 3D TV manufacturers, service providers and standards organizations, are interested in improving the user experience. Recent research in 3D video quality measurement has revealed open issues as well as more well-known results. Measuring perceptual stereoscopic video quality by subjective testing can provide practical results. This thesis studies and investigates three different rating scales (Video Quality, Visual Discomfort and Sense of Presence) and compares them through subjective testing, combined with two viewing distances, 3H and 5H, where H is the height of the display screen. The work shows that a single rating scale produces the same result as three different scales, and that viewing distance has little or no impact on the Quality of Experience (QoE) of 3DTV at 3H and 5H for symmetric coding impairments.

  • 96.
    Anastasiadis, Kleanthis
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    Co-ordinating artefacts and actions in order to run a taxi business2004Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Workplace studies in a taxi business in order to identify potential problems and propose design solutions.

  • 97.
    Ande, Rama kanth
    et al.
    Blekinge Institute of Technology, School of Computing.
    Amarawadi, Sharath Chandra
    Blekinge Institute of Technology, School of Computing.
    Evaluation of ROS and Arduino Controllers for the OBDH Subsystem of a CubeSat2012Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    CubeSat projects at various universities around the world have become predominant in the study and research of CubeSat development. Such projects have broadened the scope for understanding this new area of space research. Different CubeSats have been developed by other universities and institutions for different applications. The process of designing, developing and deploying CubeSats involves several stages of theoretical and practical work, ranging from understanding the concepts associated with communication and data handling subsystems to innovations in the field, such as implementing compatible operating systems on the CubeSat processors and new designs of transceivers and other components. One of the future trend-setting research areas in CubeSat projects is the implementation of ROS in a CubeSat. The Robot Operating System (ROS) aims to shape the future of many embedded systems, including robotics. In this thesis, an attempt is made to understand the challenges faced when implementing ROS in a CubeSat, to provide a foundation for the OBDH subsystem and to provide important guidelines for future developers relying on ROS-run CubeSats. Since using traditional transceivers and power supplies would be expensive, we have tried to simulate the transceiver and power supply subsystems with an Arduino. Arduino is an open-source physical computing platform based on a simple microcontroller board, together with a development environment for writing software for the board, designed to make the process of using electronics in embedded projects more accessible and inexpensive. Another important focus of this thesis has been to establish communication between the CubeSat kit and the Arduino. The major motivation for this thesis was to experiment with alternative approaches that could, in the future, help develop an effective and useful CubeSat while cutting down on development costs. An extensive literature review was carried out on the concepts of Arduino boards and ROS and their uses in robotics, which served as a basis for understanding their use in a CubeSat. An experiment was conducted to make the CubeSat kit communicate with the Arduino. The results from the study of ROS and the experiments with Arduino have been highly useful in identifying major problems and complications that developers would encounter while implementing ROS in a CubeSat. A comprehensive analysis of the results obtained serves as a set of important suggestions and guidelines for future researchers working in this field.
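
As a hedged illustration of the kind of host-to-Arduino serial link the thesis experiments with (the port name, baud rate and message format below are invented, not taken from the thesis, and pyserial is assumed to be installed):

```python
# Minimal sketch (assumptions: pyserial installed; the serial port name and the
# PING/reply protocol are made up for illustration only).
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as link:
    link.write(b"PING\n")                 # request a status line from the Arduino sketch
    reply = link.readline().decode(errors="replace").strip()
    print("Arduino replied:", reply or "<no reply within timeout>")
```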

  • 98.
    Anderdahl, Johan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Darner, Alice
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Particle Systems Using 3D Vector Fields with OpenGL Compute Shaders2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Context. Particle systems and particle effects are used to simulate a realistic and appealing atmosphere in many virtual environments. However, they occupy a significant amount of computational resources. The demand for more advanced graphics increases with each generation; likewise, particle systems need to become increasingly detailed. Objectives. This thesis proposes a texture-based 3D vector field particle system, computed on the Graphics Processing Unit, and compares it to an equation-based particle system. Methods. Several tests were conducted comparing different situations and parameters for the methods. All of the tests measured the computational time needed to execute the different methods. Results. We show that the texture-based method was effective in very specific situations, where it was expected to outperform the equation-based method. Otherwise, the equation-based particle system is still the most efficient. Conclusions. Generally the equation-based method is preferred, except in very specific cases. The texture-based method is most efficient for static particle systems and when a large number of forces is applied to a particle system; texture-based vector fields are hardly useful otherwise.
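
For readers unfamiliar with the texture-based idea, here is a CPU-side NumPy analogue, not the thesis implementation (which uses OpenGL compute shaders): particle velocities are looked up from a precomputed 3D vector-field grid instead of being evaluated from per-particle force equations. The grid size, time step and random field are all placeholders.

```python
# CPU sketch (assumption-laden analogue of the GPU version): advect particles
# by sampling a precomputed 3D vector field, the "texture" in the thesis.
import numpy as np

rng = np.random.default_rng(1)
N = 32                                    # grid resolution per axis
field = rng.normal(0, 1, (N, N, N, 3))    # precomputed 3D vector field ("texture")

particles = rng.uniform(0, 1, (10_000, 3))   # positions in the unit cube
dt = 0.01

def step(p):
    idx = np.clip((p * N).astype(int), 0, N - 1)   # nearest grid cell (no trilinear filtering)
    vel = field[idx[:, 0], idx[:, 1], idx[:, 2]]
    return (p + dt * vel) % 1.0                    # wrap around the unit cube

for _ in range(100):
    particles = step(particles)
print("mean position after 100 steps:", particles.mean(axis=0))
```

The field lookup has a fixed cost per particle regardless of how many forces were baked into the grid, which is why the texture-based approach pays off once the number of forces grows large, as the results above suggest.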

  • 99.
    Andersen, Dennis
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Screen-Space Subsurface Scattering, A Real-time Implementation Using Direct3D 11.1 Rendering API2015Independent thesis Basic level (degree of Bachelor), 180 HE creditsStudent thesis
    Abstract [en]

    Context: Subsurface scattering is the effect of light scattering within a material. Many materials possess translucent properties, so it is an important factor to consider when trying to render realistic images. Historically the effect has been used in offline rendering with ray tracers, but it is now considered a real-time rendering technique based on approximations of previous models. Early real-time methods approximate the effect in object texture space, which does not scale well for real-time applications such as games. A relatively new approach makes it possible to apply the effect as a post-processing effect using GPGPU capabilities, making it compatible with most modern rendering pipelines.

    Objectives: The aim of this thesis is to explore the possibilities of a dynamic real-time solution for subsurface scattering with a modern rendering API, utilizing GPGPU programming and modern data management combined with previous techniques.

    Methods: The proposed subsurface scattering technique is implemented in a limited real-time graphics engine using a modern rendering API, and its impact on performance is evaluated by conducting several experiments with specific properties.

    Results: The results obtained indicate that, by using a flexible solution to represent materials, execution time stays at an acceptable level and the technique could be used in real time. The results also show that execution time grows nearly linearly with the number of layers and the strength of the effect. Because the technique is performed in screen space, performance scales with the subsurface scattering screen coverage and the screen resolution.

    Conclusions: The technique could be used in real time and could easily be integrated into most existing rendering pipelines. Further research and testing should be done to determine how the effect scales in a complex 3D game environment.
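
As a rough CPU analogue of the screen-space idea, under stated assumptions (NumPy/SciPy stand in for the Direct3D 11.1 post-process; the material mask, blur widths and layer weights are invented rather than taken from the thesis), one can sum a few blurred copies of the lit frame and apply them only where a mask marks the material as translucent:

```python
# CPU analogue sketch only: a layered screen-space blur masked to translucent
# pixels, mimicking the post-process structure the abstract describes.
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 256, 256
lit = np.random.rand(h, w, 3)            # the lit (diffuse) frame, placeholder data
skin_mask = np.zeros((h, w), bool)
skin_mask[64:192, 64:192] = True         # pixels of the translucent material

# Sum a few Gaussian-blurred copies of the lit image as a crude stand-in for a
# diffusion profile; wider layers get smaller weights (values are made up).
layers = [(1.0, 0.6), (3.0, 0.3), (6.0, 0.1)]    # (sigma in pixels, weight)
scattered = sum(wgt * gaussian_filter(lit, sigma=(sig, sig, 0)) for sig, wgt in layers)

# Apply the scattered result only where the mask flags the material as translucent;
# cost therefore scales with screen coverage, as the abstract notes.
out = np.where(skin_mask[..., None], scattered, lit)
print(out.shape, out.dtype)
```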

  • 100.
    Anderssom, Mikael
    et al.
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    Bengtsson, Patrik
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    Holst, Robert
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    Pamiro2005Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    This report describes the work of creating a CMS (Content Management System). A CMS is a web tool that lets you easily publish material on your web page. With a CMS you can create and maintain a web page without any knowledge of web programming. This means that you and your organisation save time and money. You can also split the responsibility for the web page into different roles while still keeping control. We have chosen to create our CMS in the scripting language PHP 4 with the database server MySQL, and we have also made extensive use of JavaScript. User-friendliness and simplicity are two qualities we have been working towards, which is why we spent extra time creating a system that is easy to work with. The report also describes the difficulties with our dedicated server and how we installed it.
