201 - 250 of 1523
  • 201.
    Brozovic, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    ON EFFICIENT AUTOMATED METHODS FOR SIMULATION OUTPUT DATA ANALYSIS, 2014. Independent thesis Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    With the increase in computing power and advances in software engineering in recent years, computer-based stochastic discrete-event simulation has become a very commonly used tool for evaluating the performance of various complex stochastic systems (such as telecommunication networks). It is used when analytical methods are too complex to solve, or cannot be used at all. Stochastic simulation has also become a tool that researchers often use instead of experimentation in order to save money and time. In this thesis, we focus on the statistical correctness of the final estimated results in the context of steady-state simulations performed for the mean analysis of performance measures of stable stochastic processes. Due to various approximations, the final experimental coverage can differ greatly from the assumed theoretical level, in that the final confidence intervals cover the theoretical mean at a much lower frequency than expected from the preset theoretical confidence level. We present the results of coverage analysis for the methods of dynamic partially-overlapping batch means, spectral analysis and mean squared error optimal dynamic partially-overlapping batch means. The results show that the variants of dynamic partially-overlapping batch means that we propose as modifications under Akaroa2 perform acceptably well for the queueing processes, but very badly for the auto-regressive process. We compare the results of the modified mean squared error optimal dynamic partially-overlapping batch means method to spectral analysis and show that the methods perform equally well.
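    The coverage analysis described above (repeatedly building a confidence interval for a steady-state mean and counting how often it covers the true value) can be sketched for the simplest, non-overlapping batch means estimator. This is an illustrative toy in Python, not the dynamic partially-overlapping variants or the Akaroa2 implementation studied in the thesis:

    ```python
    import math
    import random
    import statistics

    def batch_means_ci(data, n_batches=30, z=1.96):
        """Confidence interval for the steady-state mean via
        non-overlapping batch means (the simplest variant)."""
        b = len(data) // n_batches
        means = [statistics.fmean(data[i * b:(i + 1) * b]) for i in range(n_batches)]
        grand = statistics.fmean(means)
        half = z * statistics.stdev(means) / math.sqrt(n_batches)
        return grand - half, grand + half

    def coverage(true_mean, phi=0.8, n=20000, reps=100, seed=1):
        """Fraction of replications whose CI covers the true mean of an
        AR(1) process x_{t+1} = mean + phi*(x_t - mean) + N(0,1) noise.
        A fraction well below the nominal 0.95 signals that the estimator
        underestimates the variance of correlated simulation output."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(reps):
            x, data = true_mean, []
            for _ in range(n):
                x = true_mean + phi * (x - true_mean) + rng.gauss(0.0, 1.0)
                data.append(x)
            lo, hi = batch_means_ci(data)
            hits += lo <= true_mean <= hi
        return hits / reps
    ```

    With strong autocorrelation (phi close to 1) the observed coverage typically drops well below the preset confidence level, which is exactly the discrepancy the thesis measures for its estimators.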

  • 202. Buchinger, Shelley
    et al.
    Lopes, Rui J.
    Jumisko-Pyykkö, Satu
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Advances in tools, techniques and practices for multimedia QoE, 2015. In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721. Article in journal (Refereed).
    Abstract [en]

    It has been realized that the success of multimedia services or applications relies on the analysis of the entire user experience (UX). The relevance of this paradigm ranges from Internet protocol television to video-on-demand systems for distributing and sharing professional television (TV) and user-generated content that is consumed and produced ubiquitously. To obtain a pleasurable user experience, a large number of aspects have to be taken into account. Major challenges in this context include the identification of relevant UX factors and the quantification of their influence on Quality of Experience (QoE). This special issue is dedicated to advances in tools, techniques and practices for multimedia QoE that tackle several of the aforementioned challenges.

  • 203.
    Budda, Shiva Tarun
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Analysis of Proxy based Encrypted communication in IoT environments: Security and Privacy ~ Distributed Systems Security, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis.
  • 204. Bulling, Andreas
    et al.
    Dachselt, Raimund
    Duchowski, Andrew T.
    Jacob, Robert J.
    Stellmach, Sophie
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Gaze Interaction in the Post-WIMP World, 2012. Conference paper (Refereed).
    Abstract [en]

    With continuous progression away from desktop to post-WIMP applications, including multi-touch, gestural, or tangible interaction, there is high potential for eye gaze as a more natural human-computer interface in numerous contexts. Examples include attention-aware adaptations or the combination of gaze and hand gestures for interaction with distant displays. This SIG meeting provides a discussion venue for researchers and practitioners interested in gaze interaction in the post-WIMP era. We wish to draw attention to this emerging field and eventually formulate fundamental research questions. We will discuss the potential of gaze interaction for diverse application areas, interaction tasks, and multimodal user interface combinations. Our aims are to promote this research field, foster a larger research community, and establish the basis for a workshop at CHI 2013.

  • 205.
    Bunyakitanon, Monchai
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Peng, Mengyuan
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Measurement of Live Migration Algorithms, 2014. Independent thesis Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    This thesis involves the area of virtualization. We have studied improving load balancing in data centers by using automated live migration techniques. The main idea is to migrate virtual machine(s) automatically and efficiently from highly loaded hosts to less loaded hosts. A successful implementation can help a data center administrator maintain a load-balanced environment with less effort than before. For example, such a system can automatically identify hotspots and coldspots in a large data center and also decide which virtual machine to migrate, and to which host. We have implemented previously developed Push and Pull strategies on a real testbed for Xen and KVM. A new strategy, Hybrid, which is the combination of Push and Pull, has been created. All scripts applied in the experiments are Python-based for further integration into the orchestration framework OpenStack. By implementing the algorithms on a real testbed, we have solved a node failure problem in the algorithms which had not previously been detected through simulation. The results from simulation and those from the testbed are similar: e.g., the Push strategy responds quickly when the load is medium to high, while the Pull strategy responds quickly when the load is low to medium. The Hybrid strategy behaves similarly to the Push strategy under high load and to the Pull strategy under low load, but with a greater number of migration attempts, and it responds quickly regardless of the load. The results also show that our strategies are able to handle different incidents such as bursts, drains, or fluctuation of load over time. The comparison of results from the different hypervisors, i.e., Xen and KVM, shows that both hypervisors behave in the same way when applying the same strategies in the same environment, meaning the strategies are valid for both of them. Xen seems to be faster in improving system performance. The migration attempts are similar, but KVM has far fewer migrations over time than Xen in the same scenario.
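    The Push/Pull/Hybrid decision logic summarized above can be sketched as a threshold rule. The function name and threshold values here are illustrative assumptions, not the thesis's tuned parameters or actual OpenStack scripts:

    ```python
    def choose_action(load, strategy="hybrid", high=0.85, low=0.30):
        """Decide what a host should do given its current load fraction.

        - "push": an overloaded host offers a VM to a less loaded host
          (responds quickly under medium-to-high load)
        - "pull": an underloaded host requests a VM from a loaded host
          (responds quickly under low-to-medium load)
        The hybrid strategy enables both rules at once.
        """
        if strategy in ("push", "hybrid") and load > high:
            return "push"
        if strategy in ("pull", "hybrid") and load < low:
            return "pull"
        return "hold"
    ```

    Running this rule on every host each monitoring interval yields the behavior the abstract reports: Hybrid reacts at both ends of the load range, at the cost of more migration attempts.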

  • 206.
    Burke, Clive
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Implementation and Evaluation of Virtual Network Functions Performance in the Home Environment, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis.
  • 207.
    Byanyuma, Mastidia
    et al.
    Nelson Mandela African Institution of Science and Technology, TZA.
    Zaipuna, Yonah
    Nelson Mandela African Institution of Science and Technology, TZA.
    Simba, Fatuma
    University of Dar es Salaam, TZA.
    Trojer, Lena
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Utilization of Broadband Connectivity in Rural and Urban-Underserved Areas: The case of Selected Areas in Arusha-Tanzania, 2018. In: International Journal of Computing and Digital Systems, ISSN 1446-8956, E-ISSN 1329-7147, Vol. 7, no 2, p. 75-83. Article in journal (Refereed).
    Abstract [en]

    Utilization is a key aspect in the management of any societal resource, not only when it is scarce but in all cases, to allow optimum benefits to accrue to everyone in society. Internet bandwidth, which is a scarce commodity especially in rural areas, is hardly available where needed at the same cost and quality, for various reasons. Tanzania, as a case study, is among the countries that have invested heavily in international, national and metro backbone networks, but still there are areas with no or inadequate internet access services, implying a significant utilization problem. In this paper we present, as a case study, the status of broadband connectivity in selected rural areas in Tanzania (Arusha), and this status is used to make recommendations for optimized utilization of the installed capacity.

  • 208.
    Bäckström, Ola
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Från rigg till sändning: En studie i ljudproduktionsflöde på SVT [From rig to broadcast: A study of the sound production workflow at SVT], 2014. Independent thesis Basic level (degree of Bachelor), Student thesis.
    Abstract [en]

    The focus of this thesis is the sound producer. It examines and produces tools that you, as a sound producer, can use to open up a larger creative space in a production. The research also discusses a problem in sound production, namely that sound, generally speaking, is often given a lower priority and is not treated with the same seriousness as the other sections of a production. The thesis focuses on SVT (Sveriges Television, Sweden's public broadcaster) and what sound production looks like there. It applies different theories drawn from earlier research to different productions at SVT.

  • 209.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Caspersen, Michael E.
    Nordström, Marie
    Beauty and the Beast: on the readability of object-oriented example programs, 2016. In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 24, no 2, p. 231-246. Article in journal (Refereed).
    Abstract [en]

    Some solutions to a programming problem are more elegant or more simple than others and thus more understandable for students. We review desirable properties of example programs from a cognitive and a measurement point of view. Certain cognitive aspects of example programs are captured by common software measures, but they are not sufficient to capture a key aspect of understandability: readability. We propose and discuss a simple readability measure for software, SRES, and apply it to object-oriented textbook examples. Our results show that readability measures correlate well with human perceptions of quality. Compared with other readability measures, SRES is less sensitive to commenting and white-space. These results also have implications for software maintainability measures.
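    As a toy illustration of how a size-and-structure readability score for example programs might look, consider the sketch below. This is a hypothetical stand-in for demonstration only, not the SRES measure defined in the paper:

    ```python
    def toy_readability_score(code: str) -> float:
        """Hypothetical readability score in (0, 1]: shorter lines and
        shallower nesting score higher. Illustrative only; NOT the
        published SRES definition."""
        lines = [ln for ln in code.splitlines() if ln.strip()]
        if not lines:
            return 1.0
        avg_len = sum(len(ln) for ln in lines) / len(lines)
        # Approximate nesting depth from leading indentation (4 spaces/level).
        avg_depth = sum((len(ln) - len(ln.lstrip())) // 4 for ln in lines) / len(lines)
        return 1.0 / (1.0 + avg_len / 40.0 + avg_depth)
    ```

    A short, flat snippet scores higher than a long, deeply indented one, which mirrors the paper's point that plain size measures capture some, but not all, of what makes an example readable.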

  • 210.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hilburn, Thomas B.
    Team Projects in Computing Education, 2015. In: ACM Transactions on Computing Education, ISSN 1946-6226, E-ISSN 1946-6226, Vol. 15, no 4, p. 16:1-16:4, article id 16. Article, review/survey (Refereed).
    Abstract [en]

    Team projects are a way to expose students to conflicting project objectives, and "[t]here should be a strong real-world element … to ensure that the experience is realistic" [ACM/IEEE-CS 2015b]. Team projects provide students an opportunity to put their education into practice and prepare them for their professional careers. The aim of this special issue is to collect and share evidence about the state-of-practice of team projects in computing education and to help educators in designing and running team projects. From a record number of 69 submitted abstracts, 19 were invited to submit a full paper. Finally, nine papers were accepted for publication in this and a subsequent issue. The articles presented in the present issue cover the following topics: real projects for real clients, open source projects, multidisciplinary team projects, student and team assessment, and cognitive and psychological aspects of team projects.

  • 211.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hilburn, Thomas B.
    Team Projects in Computing Education II, 2016. In: ACM Transactions on Computing Education, ISSN 1946-6226, E-ISSN 1946-6226, Vol. 16, no 2, p. 4:1-4:4, article id 4. Article, review/survey (Refereed).
    Abstract [en]

    Team projects are a way to expose students to conflicting project objectives, and "[t]here should be a strong real-world element … to ensure that the experience is realistic" [ACM/IEEE-CS 2015b]. Team projects provide students an opportunity to put their education into practice and prepare them for their professional careers. The aim of this special issue is to collect and share evidence about the state-of-practice of team projects in computing education and to help educators in designing and running team projects. From a record number of 69 submitted abstracts, 19 were invited to submit a full paper. Finally, nine papers were accepted for publication in this and a subsequent issue. The articles presented in the present issue cover the following topics: real projects for real clients, open source projects, multidisciplinary team projects, student and team assessment, and cognitive and psychological aspects of team projects.

  • 212.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Paech, Barbara
    The Role of Method Chains and Comments in Software Readability and Comprehension – An Experiment, 2016. In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 42, no 9, p. 886-898. Article in journal (Refereed).
    Abstract [en]

    Software readability and comprehension are important factors in software maintenance. There is a large body of research on software measurement, but the actual factors that make software easier to read or easier to comprehend are not well understood. In the present study, we investigate the role of method chains and code comments in software readability and comprehension. Our analysis comprises data from 104 students with varying programming experience. Readability and comprehension were measured by perceived readability, reading time and performance on a simple cloze test. Regarding perceived readability, our results show statistically significant differences between comment variants, but not between method chain variants. Regarding comprehension, there are no significant differences between method chain or comment variants. Student groups with low and high experience, respectively, show significant differences in perceived readability and performance on the cloze tests. Our results do not show any significant relationships between perceived readability and the other measures taken in the present study. Perceived readability might therefore be insufficient as the sole measure of software readability or comprehension. We also did not find any statistically significant relationships between size and perceived readability, reading time and comprehension.

  • 213.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Störrle, Harald
    QAware GmbH, DEU.
    Toll, Daniel
    Linné University, SWE.
    van Assema, Jelle
    University of Amsterdam, NLD.
    Duran, Rodrigo
    Aalto University, FIN.
    Hooshangi, Sara
    George Washington University, USA.
    Jeuring, Johan
    Utrecht University, NLD.
    Keuning, Hieke
    Windesheim University of Applied Sciences, NLD.
    Kleiner, Carsten
    University of Applied Sciences & Arts Hannover, DEU.
    MacKellar, Bonnie
    St John’s University, USA.
    “I know it when I see it”: Perceptions of Code Quality, 2018. In: ITiCSE-WGR 2017 - Proceedings of the 2017 ITiCSE Conference on Working Group Reports, Volume 2018-January, 30 January 2018, ACM Digital Library, 2018, p. 70-85. Conference paper (Refereed).
    Abstract [en]

    Context. Code quality is a key issue in software development. The ability to develop high quality software is therefore a key learning goal of computing programs. However, there are no universally accepted measures to assess the quality of code and current standards are considered weak. Furthermore, there are many facets to code quality. Defining and explaining the concept of code quality is therefore a challenge faced by many educators.

    Objectives. In this working group, we investigated code quality as perceived by students, educators, and professional developers, in particular, the differences in their views of code quality and which quality aspects they consider as more or less important. Furthermore, we investigated their sources for information about code quality and its assessment.

    Methods. We interviewed 34 students, educators and professional developers regarding their perceptions of code quality. For the interviews they brought along code from their own experience to discuss and exemplify code quality.

    Results. There was no common definition of code quality among or within these groups. Quality was mostly described in terms of indicators that could measure an aspect of code quality. Among these indicators, readability was named most frequently by all groups. The groups showed significant differences in the sources they use for learning about code quality with education ranked lowest in all groups.

    Conclusions. Code quality should be discussed more thoroughly in educational programs.

  • 214.
    Böttcher, Anja Verena
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Twitter, News Aggregators & Co: Journalistic Gatekeeping in the Age of Digital Media Culture, 2014. Independent thesis Basic level (degree of Bachelor), Student thesis.
    Abstract [en]

    With the advent of blogs, search engines, RSS and news feeds, the role of online journalists as those who shape everyone’s social reality has decreased. Because of emerging digital tools and outlets, “amateurs” can not only assemble their own news stream, but also publish their own material to a wide audience in seconds. Gatekeeping is a fundamental concept of media studies and has long been a tool to determine the power of journalists over society. However, digital media requires reconsidering gatekeeping in the traditional sense, as new gatekeepers, such as Twitter users, “gatejumpers,” new forms of digital influencers, and content aggregators also have gatekeeping power. In my research, I will review and examine traditional journalistic gatekeeping practices in comparison to online journalists.

  • 215.
    Callele, David
    et al.
    Experience First Design Inc., CAN.
    Dueck, Philip
    Experience First Design Inc., CAN.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hynninen, Peitsa
    Aalto University Espoo, FIN.
    Experience requirements in video games: definition and testability, 2015. In: Requirements Engineering Conference (RE), 2015 IEEE 23rd International, IEEE, 2015. Conference paper (Refereed).
    Abstract [en]

    A properly formed requirement is testable, a necessity for ensuring that design goals are met. While challenging in productivity applications, entertainment applications such as games compound the problem due to their subjective nature. We report here on our efforts to create testable experience requirements, the associated scope challenges and challenges with test design and result interpretation. We further report on issues experienced when performing focus group testing and provide practitioner guidance.

  • 216.
    Callele, David
    et al.
    University of Saskatchewan, CAN.
    Penzenstadler, Birgit
    California State University Long Beach, USA.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Public policy challenges: An RE perspective, 2018. In: CEUR Workshop Proceedings / [ed] Chitchyan R., Venters C.C., Penzenstadler B., CEUR-WS, 2018, p. 24-33. Conference paper (Refereed).
    Abstract [en]

    In this perspective paper, we investigate the parallels between public policy and IT projects from the perspective of traditional RE practice. Using the mainstream media as an information source (as would an average citizen), over a period of approximately one year we captured documents that presented analyses of public policy issues. The documents were categorized into eight topic areas, then analyzed to identify patterns that RE practitioners would recognize. We found evidence of policy failures that parallel project failures traceable to requirements engineering problems. Our analysis revealed evidence of bias across all stakeholder groups, similar to the rise of the “beliefs over facts” phenomenon often associated with “fake news”. We also found substantial evidence of unintended consequences due to inadequate problem scoping, terminology definition, domain knowledge, and stakeholder identification and engagement. Further, ideological motivations were found to affect constraint definitions resulting in solution spaces that may approach locally optimal but may not be globally optimal. Public policy addresses societal issues; our analysis supports our conclusion that RE techniques could be utilized to support policy creation and implementation. © 2018 SPIE. All rights reserved.

  • 217.
    Callele, David
    et al.
    University of Saskatchewan, CAN.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Process for Product and Service Definition, 2016. In: 9th International Workshop on Software Product Management (IWSPM 2016), IEEE, 2016, p. 322-327. Conference paper (Refereed).
    Abstract [en]

    This short paper presents an iterative and incremental process to improve the probability that the product or service definition leading to requirements and implementation is both representative of the market needs and has a reasonable expectation of a financially viable business model. Rather than a relatively linear process wherein marketing delivers a product definition to the development team, this process ensures that all assumptions are validated during the definition stage and that all team members are engaged. The process balances the need to address current challenges against future opportunities, providing short-term customer satisfaction (and justification for purchasing or adoption) and a coherent vision for future development efforts (and maintaining and growing the customer base). The process is applied to a case in the agriculture commodities sector.

  • 218.
    Callele, David
    et al.
    University of Saskatchewan, CAN.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Penzenstadler, Birgit
    California State University Long Beach, USA.
    New Frontiers for Requirements Engineering, 2017. In: 2017 IEEE 25th International Requirements Engineering Conference, RE 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 184-193, article id 8048904. Conference paper (Refereed).
    Abstract [en]

    Requirements Engineering (RE) has grown from its humble beginnings to embrace a wide variety of techniques, drawn from many disciplines, and the diversity of tasks currently performed under the label of RE has grown beyond that encompassed by software development. We briefly review how RE has evolved and observe that RE is now a collection of best practices for pragmatic, outcome-focused critical thinking, applicable to any domain. We discuss an alternative perspective on, and description of, the discipline of RE and advocate for the evolution of RE toward a discipline that supports the application of RE practice to any domain. We call upon RE practitioners to proactively engage in alternative domains and call upon researchers that adopt practices from other domains to actively engage with their inspiring domains. For both, we ask that they report upon their experience so that we can continue to expand RE frontiers. © 2017 IEEE.

  • 219.
    Cardellini, Valeria
    et al.
    Universita degli Studi di Roma Tor Vergata, ITA.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grassi, Vincenzo
    Universita degli Studi di Roma Tor Vergata, ITA.
    Iannucci, Stefano
    Universita degli Studi di Roma Tor Vergata, ITA.
    Lo Presti, F.
    Universita degli Studi di Roma Tor Vergata, ITA.
    Mirandola, Raffaela
    Politecnico di Milano, ITA.
    MOSES: A platform for experimenting with QoS-driven self-adaptation policies for service oriented systems, 2017. In: Lecture Notes in Computer Science, Springer Verlag, 2017, Vol. 9640, p. 409-433. Conference paper (Refereed).
    Abstract [en]

    Architecting software systems according to the service-oriented paradigm, and designing runtime self-adaptable systems are two relevant research areas in today’s software engineering. In this chapter we present MOSES, a software platform supporting QoS-driven adaptation of service-oriented systems. It has been conceived for service-oriented systems architected as composite services that receive requests generated by different classes of users. MOSES integrates within a unified framework different adaptation mechanisms. In this way it achieves a greater flexibility in facing various operating environments and the possibly conflicting QoS requirements of several concurrent users. Besides providing its own self-adaptation functionalities, MOSES lends itself to the experimentation of alternative approaches to QoS-driven adaptation of service-oriented systems thanks to its modular architecture. © Springer International Publishing AG 2017.

  • 220.
    Carlson, Jan
    et al.
    Mälardalen University, Västerås, Sweden.
    Papatheocharous, Efi
    Swedish Institute of Computer Science, Stockholm, Sweden.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Context Model for Architectural Decision Support, 2016. In: Proceedings 2016 1st International Workshop on Decision Making in Software Architecture, IEEE Computer Society, 2016, p. 9-15. Conference paper (Refereed).
    Abstract [en]

    Developing efficient and effective decision making support includes identifying means to reduce repeated manual work and providing possibilities to take advantage of the experience gained in previous decision situations. For this to be possible, there is a need to explicitly model the context of a decision case, for example to determine how much the evidence from one decision case can be trusted in another, similar context. In earlier work, context has been recognized as important when transferring and understanding outcomes between cases. The contribution of this paper is threefold. First, we describe different ways of utilizing context in an envisioned decision support system. Thereby, we distinguish between internal and external context usage, possibilities of context representation, and context inheritance. Second, we present a systematically developed context model comprised of five types of context information, namely organization, product, stakeholder, development method & technology, and market & business. Third, we exemplary illustrate the relation of the context information to architectural decision making using existing literature.

  • 221.
    Carlsson, Anders
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Gustavsson, Rune
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Resilient Smart Grids, 2014. In: 2014 First International Scientific-Practical Conference Problems of Infocommunications Science and Technology (PIC S&T), IEEE, 2014, p. 79-82. Conference paper (Refereed).
    Abstract [en]

    The usefulness of configurable and shared experiment platforms in design and implementation of future Resilient Smart Grids is demonstrated. A set of antagonistic threats is identified and remotely controlled experiments to harness those are presented and assessed.

  • 222.
    Carlsson, Anders
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Gustavsson, Rune
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    The art of war in the cyber world, 2018. In: 2017 4th International Scientific-Practical Conference Problems of Infocommunications Science and Technology, PIC S and T 2017 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018, p. 42-44. Conference paper (Refereed).
    Abstract [en]

    The paper focuses on cyber weapons used in Advanced Persistent Threat (APT) attacks in present and future cyber warfare. The combined use of propaganda and cyber warfare supports military operations on the ground, exemplified by the ongoing Russian hybrid warfare in Ukraine. New models and methods to develop future trustworthy critical infrastructures in our societies are presented. Some mitigation ideas to meet the challenges of future hybrid warfare are also discussed. © 2017 IEEE.

  • 223.
    Carlsson, Oskar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nabhani, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    User and Entity Behavior Anomaly Detection using Network Traffic2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 224.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A study on performance measures for auto-scaling CPU-intensive containerized applications2019In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543Article in journal (Refereed)
    Abstract [en]

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for activating auto-scaling actions that aim to guarantee QoS constraints. First, the correlation between absolute and relative usage measures, and how a resource allocation decision can be influenced by them, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share each container has of the resources it uses. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
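
The relative/absolute distinction described in this abstract can be illustrated in a few lines of Python. This is a toy sketch, not the paper's actual algorithm: the function names and thresholds are invented for the example.

```python
def cpu_measures(container_used_cores, container_quota_cores, host_total_cores):
    """Return (relative, absolute) CPU usage for one container."""
    relative = container_used_cores / container_quota_cores  # share of the container's own quota
    absolute = container_used_cores / host_total_cores       # share of the physical host
    return relative, absolute

def should_scale_out(relative, absolute, rel_threshold=0.8, abs_threshold=0.6):
    # A relative-only policy can trigger scaling even when the host is nearly idle
    # (a small quota is easy to saturate); gating on the absolute measure avoids
    # that false positive.
    return relative > rel_threshold and absolute > abs_threshold
```

For instance, a container using 1.5 of its 2 quota cores on a 8-core host is at 75% relative but under 19% absolute utilization, so this policy would not scale out.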

  • 225.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cardellini, Valeria
    University of Rome, ITA.
    Interino, Gianluca
    University of Rome, ITA.
    Palmirani, Monica
    University of Bologna, ITA.
    Research challenges in legal-rule and QoS-aware cloud service brokerage2018In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 78, no Part 1, p. 211-223Article in journal (Refereed)
    Abstract [en]

    The ICT industry, and specifically critical sectors such as healthcare, transportation, energy and government, require as mandatory the compliance of ICT systems and services with legislation and regulation, as well as with standards. In the era of cloud computing, this compliance management issue is exacerbated by the distributed nature of the system and by the limited control that customers have over the services. Today, the cloud industry is aware of this problem (as evidenced by the compliance programs of many cloud service providers), and the research community is addressing the many facets of the legal-rule compliance checking and quality assurance problem. Cloud service brokerage plays an important role in legislation compliance and QoS management of cloud services. In this paper we discuss our experience in designing a legal-rule and QoS-aware cloud service broker, and we explore related research issues. Specifically, we provide three main contributions to the literature: first, we describe the detailed design architecture of the legal-rule and QoS-aware broker. Second, we discuss our design choices, which rely on state-of-the-art solutions available in the literature. We cover four main research areas: cloud broker service deployment, seamless cloud service migration, cloud service monitoring, and legal-rule compliance checking. Finally, from the literature review in these research areas, we identify and discuss research challenges.

  • 226.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Energy-Aware Adaptation in Managed Cassandra Datacenters2016In: Proceedings - 2016 International Conference on Cloud and Autonomic Computing, ICCAC / [ed] Gupta I.,Diao Y., IEEE, 2016, p. 60-71Conference paper (Refereed)
    Abstract [en]

    Today, Apache Cassandra, a highly scalable and available NoSQL datastore, is widely used by enterprises of every size and for application areas ranging from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine tuning and operation of Cassandra datacenters. As with all complex services, human-assisted management of a multi-tenant Cassandra datacenter is unrealistic; rather, there is a growing demand for autonomic management solutions. In this paper, we present an optimal energy-aware adaptation model for managed Cassandra datacenters that modifies the system configuration by orchestrating three different actions: horizontal scaling, vertical scaling and energy-aware placement. The model is built from a real case based on real application data from Ericsson AB. We compare the performance of the optimal adaptation with two heuristics that avoid system perturbations due to re-configuration actions triggered by the subscription of new tenants and/or changes in the SLA. One heuristic is local optimisation; the second is a best-fit-decreasing algorithm, selected as a reference point because it is representative of a wide range of research and practical solutions. The main finding is that the heuristics' performance depends on the scenario and workload, and neither dominates in all cases. Moreover, in high-load scenarios, the suboptimal system configuration obtained with a heuristic adaptation policy introduces a penalty in electric energy consumption in the range [+25%, +50%] compared with the energy consumed by an optimal system configuration.
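
As background to the best-fit-decreasing reference heuristic mentioned in the abstract, here is a minimal sketch of best-fit-decreasing bin packing applied to placing resource demands onto identical servers. The single-dimension model and names are simplifications for illustration, not the paper's model.

```python
def best_fit_decreasing(demands, capacity):
    """Pack resource demands onto servers (bins) of fixed capacity.
    Fewer active bins roughly means less energy consumed."""
    bins = []
    for d in sorted(demands, reverse=True):      # "decreasing": largest demands first
        # "best fit": among bins that can still hold d, pick the fullest one
        candidates = [b for b in bins if sum(b) + d <= capacity]
        if candidates:
            min(candidates, key=lambda b: capacity - sum(b)).append(d)
        else:
            bins.append([d])                     # open a new server
    return bins
```

Running it on demands [5, 4, 3, 2, 1] with capacity 7 packs everything onto three servers, none overloaded.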

  • 227.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Energy-aware Auto-scaling Algorithms for Cassandra Virtual Data Centers2017In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543, Vol. 20, no 3, p. 2065-2082Article in journal (Refereed)
    Abstract [en]

    Apache Cassandra is a highly scalable and available NoSQL datastore, widely used by enterprises of every size and for application areas ranging from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine tuning and operation of Cassandra Virtual Data Centers (VDCs). This paper addresses the problem of energy-efficient auto-scaling of Cassandra VDCs in managed Cassandra data centers. We propose three energy-aware auto-scaling algorithms: Opt, LocalOpt and LocalOpt-H. The first provides the optimal scaling decision, orchestrating horizontal and vertical scaling and optimal placement. The other two are heuristics and provide sub-optimal solutions; both orchestrate horizontal scaling and optimal placement, and LocalOpt also considers vertical scaling. In this paper we provide an analysis of the computational complexity of the optimal and heuristic auto-scaling algorithms; we discuss the issues in auto-scaling Cassandra VDCs and provide best practices for using auto-scaling algorithms; and we evaluate the performance of the proposed algorithms under programmed SLA variation, (unexpected) surges of throughput, and failures of physical nodes. We also compare the performance of the energy-aware auto-scaling algorithms with that of two energy-blind auto-scaling algorithms, namely BestFit and BestFit-H. The main findings are: VDC allocation aiming at reducing energy consumption, or resource usage in general, can heavily reduce the reliability of Cassandra in terms of the consistency level offered; horizontal scaling of Cassandra is very slow and makes it hard to manage surges of throughput; vertical scaling is a valid alternative, but it is not supported by all cloud infrastructures.

  • 228.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Optimal adaptation for Apache Cassandra2016In: SoSeMC workshop at 13th IEEE International Conference on Autonomic Computing / [ed] IEEE, IEEE Computer Society, 2016Conference paper (Refereed)
  • 229.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbad, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Energy-Aware Adaptation Model for Big Data Platforms2016In: 2016 IEEE International Conference on Autonomic Computing (ICAC) / [ed] IEEE, IEEE, 2016, p. 349-350Conference paper (Refereed)
    Abstract [en]

    Platforms for big data include mechanisms and tools to model, organize, store and access big data (e.g. Apache Cassandra, HBase, Amazon SimpleDB, Dynamo, Google BigTable). Resource management for those platforms is a complex task and must also account for multi-tenancy and infrastructure scalability. Human-assisted control of big data platforms is unrealistic, and there is a growing demand for autonomic solutions. In this paper we propose a QoS- and energy-aware adaptation model designed to cope with the real case of a Cassandra-as-a-Service provider.

  • 230.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    Spindox S.p.A, ITA.
    Auto-scaling of Containers: The Impact of Relative and Absolute Metrics2017In: 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems, FAS*W 2017 / [ed] IEEE, IEEE, 2017, p. 207-214, article id 8064125Conference paper (Refereed)
    Abstract [en]

    Today, the cloud industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of run-time resource management. This paper addresses the problem of selecting the most appropriate performance metrics to activate auto-scaling actions. Specifically, we investigate the use of relative and absolute metrics. Results demonstrate that, for CPU-intensive workloads, the use of absolute metrics enables more accurate scaling decisions. We propose and evaluate the performance of a new auto-scaling algorithm that can reduce the response time by a factor between 0.66 and 0.5 compared to the current Kubernetes horizontal auto-scaling algorithm.
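
For context, the scaling rule documented for Kubernetes' Horizontal Pod Autoscaler, against which the abstract's algorithm is compared, derives the desired replica count from the ratio of the observed metric to its target. A sketch (variable names are ours):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Kubernetes-style horizontal scaling rule:
    desired = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)
```

With 4 replicas averaging 90% CPU against a 60% target, the rule asks for 6 replicas; at 30% it would shrink the deployment to 2.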

  • 231.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    University of Rome, ITA.
    Measuring Docker Performance: What a Mess!!!2017In: ICPE 2017 - Companion of the 2017 ACM/SPEC International Conference on Performance Engineering, ACM , 2017, p. 11-16Conference paper (Refereed)
    Abstract [en]

    Today, a new technology is changing the way platforms for the internet of services are designed and managed: containers (e.g. Docker and LXC). The internet-of-services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of run-time resource management, for example auto-scaling, optimal deployment and monitoring. Specifically, monitoring of container-based systems is the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.

  • 232.
    Chadalapaka, Gayatri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Assessment of Spectrum Sharing Systems: with Service Differentiation2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 233.
    Chai, Yi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A novel progressive mesh representation method based on the half-edge data structure and √3 subdivision2015Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Progressive mesh (PM) representation can perfectly meet the requirements of generating multiple resolutions of a detailed 3D model. This research proposes a new PM representation method to improve PM storage efficiency and reduce PM generation time. In existing PM representation methods, more than four adjacent vertices are stored for each vertex in the PM representation, and the methods use the inefficient vertex-and-face-list representation during the generation process. In our proposed method, only three vertices are stored by using the √3 subdivision scheme, and the efficient half-edge data structure replaces the vertex-and-face-list representation. To evaluate the proposed method, a designed experiment is conducted using three common 3D test models. The results illustrate the improvements compared to previous methods.
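
A minimal illustration of the half-edge structure this thesis adopts: each half-edge stores the vertex it leaves, its oppositely oriented twin, and the next half-edge around its face, which makes mesh adjacency queries constant-time. This is a generic textbook sketch, not the thesis implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HalfEdge:
    origin: int                        # index of the vertex this half-edge leaves
    twin: Optional["HalfEdge"] = None  # oppositely oriented half-edge on the adjacent face
    next: Optional["HalfEdge"] = None  # next half-edge around the same face
    face: Optional[int] = None

def make_triangle(v0, v1, v2, face_id=0):
    """Build the three half-edges of one triangular face, linked in a cycle."""
    e0 = HalfEdge(v0, face=face_id)
    e1 = HalfEdge(v1, face=face_id)
    e2 = HalfEdge(v2, face=face_id)
    e0.next, e1.next, e2.next = e1, e2, e0
    return e0, e1, e2
```

Walking `next` three times returns to the starting half-edge, which is the invariant a √3-subdivision step relies on when splitting and re-linking faces.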

  • 234.
    Chalasani, Trishala
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    AUTOMATED ASSESSMENT FOR THE THERAPY SUCCESS OF FOREIGN ACCENT SYNDROME: Based on Emotional Temperature2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Foreign Accent Syndrome (FAS) is a rare neurological disorder in which, among other symptoms, the patient's emotional speech is affected. As FAS is one of the mildest speech disorders, there has not been much research on cost-effective biomarkers that reflect the recovery of speech competences.

    Objectives. In this pilot study, we implement the Emotional Temperature biomarker and check its validity for assessing FAS. We compare the results of the implemented biomarker with those of another biomarker based on global distances for FAS and identify the better one.

    Methods. To reach the objective, the emotional speech data of two patients at different phases of treatment are considered. After preprocessing, experiments are performed on various window sizes, and the correctly classified instances observed in automatic recognition are used to calculate the Emotional Temperature. Further, we use the better biomarker for tracking the recovery of the patients' speech.

    Results. The Emotional Temperature of the patient is calculated and compared with the ground truth and with that of the other biomarker. The Emotional Temperature is also calculated to track the emergence of compensatory skills in speech.

    Conclusions. A biomarker based on the frame view of the speech signal has been implemented. The implementation uses a state-of-the-art feature set and is thus an improved version of the classical Emotional Temperature. The biomarker has been used to automatically assess the recovery of two patients diagnosed with FAS. It has been compared against the global-view biomarker and has advantages over it. It has also been compared to human evaluations and captures the same dynamics.

  • 235.
    Chamala, Navneet Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reasons Governing the Adoption and Denial of TickITplus: A Survey2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Software Process Improvement (SPI) initiatives such as the Capability Maturity Model (CMM), Capability Maturity Model Integration (CMMI) and Bootstrap have been developed with the primary agenda of continuous software process improvement. Similarly, about two decades ago, the United Kingdom Accreditation Service (UKAS) laid down a set of guidelines, named TickIT, for certifying organizations based on the ISO quality standards. TickIT is now obsolete, with its successor scheme TickITplus taking its place with many significant additions. All companies certified under the TickIT guidelines (more than 1000 companies) were asked to move to TickITplus in order to keep their certification. However, in the three years since the inception of TickITplus, only 70 companies have adopted it, far below the number of TickIT-certified organizations. This thesis investigates why most companies have not adopted TickITplus, and also why those 70 organizations have moved to it.

    Objectives. In this study, an attempt is made to accomplish the following objectives: identify the changes introduced in the new scheme; identify the factors a software organization considers when adopting or migrating to a new software quality certification scheme; validate these factors with the help of a survey and interviews; and analyze the results of the survey and interviews to explain why most organizations have not adopted the TickITplus certification scheme.

    Methods. This research uses a mixed-method approach incorporating both quantitative and qualitative research methods. As the quantitative part, an online survey is conducted with the help of an online questionnaire; two survey questionnaires were framed to gather responses. As the qualitative part, interviews are conducted to gain a wider understanding of the factors that led an organization to migrate, or not to migrate, to TickITplus. The gathered data is analyzed using statistical methods such as bivariate and univariate analysis for the quantitative method, and thematic coding for the qualitative method. Triangulation is used to validate the data by correlating the results from the survey and interviews with those extracted from the literature review.

    Results. Reasons why companies have moved to TickITplus, and why other companies have not taken it up, were gathered from the survey and interviews. High costs and low customer demand were identified as the main reasons for organizations not to choose TickITplus, while the organizations that did move to TickITplus also chose the scheme based on customer requirements. A few other reasons are also identified and presented in this document.

    Conclusions. Conclusions are drawn citing the costs incurred for implementing TickITplus as a reason for not selecting it, as the scheme was considered very expensive. Customer demand was also low, which was identified as a factor in the relatively low number of TickITplus-certified organizations. On the other hand, among the TickITplus-certified firms, customer demand was the prominent reason for moving to TickITplus, and the lack of appropriate people to take up the work was considered an important hindrance during implementation. Several other reasons and challenges are also identified and detailed in the document.

  • 236.
    Charla, Shiva Bhavani Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Examining Various Input Patterns Effecting Software  Application Performance: A Quasi-experiment on Performance Testing2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Nowadays, non-functional testing has a great impact on the real-time environment. Non-functional testing helps to analyze the performance of an application on both server and client. Load testing attempts to cause the system under test to respond incorrectly in a situation that differs from its normal operation but is rarely encountered in real-world use. Examples include providing abnormal inputs to the software or placing real-time software under unexpectedly high loads. High loads are typically induced on an application to test its performance, but a particular pattern of low load could also induce load on a real-time system. For example, repeatedly making a request to the system every 11 seconds might cause a fault if the system transitions to standby after 10 seconds of inactivity. The primary aim of this study is to find various low-load input patterns affecting the software, rather than simply high-load inputs. A quasi-experiment was chosen as the research method for this study. Performance testing was performed on a web application with the help of a tool called HP LoadRunner. A comparison was made between low-load and high-load patterns to analyze the performance of the application and to identify bottlenecks under different loads.

  • 237.
    Chatzipetrou, Panagiota
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Papatheocharous, Efi
    RISE SICS AB, SWE.
    Borg, Markus
    RISE SICS AB, SWE.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Component selection in Software Engineering: Which attributes are the most important in the decision process?2018In: EUROMICRO Conference Proceedings, IEEE conference proceedings, 2018, p. 198-205Conference paper (Refereed)
    Abstract [en]

    Component-based software engineering is a common approach to develop and evolve contemporary software systems, where different component sourcing options are available: 1) software developed internally (in-house), 2) software developed outsourced, 3) commercial off-the-shelf software, and 4) open source software. However, there is little available research on which attributes of a component are the most important when selecting new components. The objective of the present study is to investigate what matters most to industry practitioners during component selection. We conducted a cross-domain anonymous survey with industry practitioners involved in component selection. First, the practitioners selected the most important attributes from a list. Next, they prioritized their selection using the Hundred-Dollar ($100) test. We analyzed the results using compositional data analysis. The descriptive results showed that Cost was clearly considered the most important attribute during component selection. Other important attributes for the practitioners were: Support of the component, Longevity prediction, and Level of off-the-shelf fit to the product. Next, an exploratory analysis was conducted based on the practitioners' inherent characteristics, using nonparametric tests and biplots. It appears that smaller organizations and more immature products focus on different attributes than bigger organizations and mature products, which focus more on Cost.
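
The Hundred-Dollar ($100) test used in this study is easy to operationalize: each respondent distributes exactly 100 points across the candidate attributes, and the allocations are then treated as compositions (they sum to a constant). A hypothetical sketch with invented attribute names:

```python
def hundred_dollar_priorities(allocation):
    """allocation: dict mapping attribute -> points; the $100 test requires
    the points to sum to exactly 100. Returns the composition (shares summing
    to 1.0), ordered from highest to lowest priority."""
    total = sum(allocation.values())
    if total != 100:
        raise ValueError("the $100 test requires exactly 100 points in total")
    ranked = sorted(allocation.items(), key=lambda kv: -kv[1])
    return {attr: points / 100 for attr, points in ranked}
```

Forcing the fixed budget is what makes the data compositional, which in turn motivates the compositional data analysis the authors apply instead of treating each score as independent.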

  • 238.
    Chatzipetrou, Panagiota
    et al.
    Aristotle Univ Thessaloniki, Dept Informat, GR-54006 Thessaloniki, Greece..
    Angelis, Lefteris
    Aristotle Univ Thessaloniki, Dept Informat, GR-54006 Thessaloniki, Greece..
    Barney, Sebastian
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An experience-based framework for evaluating alignment of software quality goals2015In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 23, no 4, p. 567-594Article in journal (Refereed)
    Abstract [en]

    Efficient quality management of software projects requires knowledge of how various groups of stakeholders involved in software development prioritize the product and project goals. Agreements or disagreements among members of a team may originate from inherent groupings, depending on various professional or other characteristics. These agreements are not easily detected by conventional practices (discussions, meetings, etc.) since the natural language expressions are often obscuring, subjective, and prone to misunderstandings. It is therefore essential to have objective tools that can measure the alignment among the members of a team; especially critical for the software development is the degree of alignment with respect to the prioritization goals of the software product. The paper proposes an experience-based framework of statistical and graphical techniques for the systematic study of prioritization alignment, such as hierarchical cluster analysis, analysis of cluster composition, correlation analysis, and closest agreement-directed graph. This framework can provide a thorough and global picture of a team's prioritization perspective and can potentially aid managerial decisions regarding team composition and leadership. The framework is applied and illustrated in a study related to global software development where 65 individuals in different roles, geographic locations and professional relationships with a company, prioritize 24 goals from individual perception of the actual situation and for an ideal situation.

  • 239.
    Chatzipetrou, Panagiota
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ouriques, Raquel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Approaching the Relative Estimation Concept with Planning Poker2018In: ACM International Conference Proceeding Series, Association for Computing Machinery , 2018, p. 21-25Conference paper (Refereed)
    Abstract [en]

    Simulation is a powerful instrument in the education process that can help students experience a realistic context and understand complex concepts required to accomplish practitioners' tasks. The present study aims to investigate software engineering students' perception of the usefulness of the Planning Poker technique in relation to their understanding of the relative estimation concept. We conducted a simulation exercise where students first estimated tasks applying the concepts of relative estimation explained in the lecture, and then estimated tasks applying the agile Planning Poker technique. To investigate the students' perception, we used a survey at the end of each exercise. The preliminary results did not show statistical significance in the students' confidence in relative estimation of user stories. However, the students' comments and feedback indicate that students are more confident when asked to estimate user stories using agile Planning Poker. The study will be replicated in the near future with a different group of students with a different background, to gain a better understanding and to identify possible flaws in the exercise. © 2018 Association for Computing Machinery.
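
For readers unfamiliar with the technique, Planning Poker has players vote with cards from a roughly Fibonacci deck and re-discuss until the votes converge. A toy sketch of one voting round (the deck values follow the common modified-Fibonacci convention; the strict-unanimity consensus rule is a simplification):

```python
PLANNING_POKER_DECK = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(raw_estimate):
    """Snap a raw effort guess to the closest card in the deck."""
    return min(PLANNING_POKER_DECK, key=lambda c: abs(c - raw_estimate))

def round_result(votes):
    """One voting round: consensus if every player picked the same card,
    otherwise the outliers discuss and the team votes again."""
    cards = {nearest_card(v) for v in votes}
    if len(cards) == 1:
        return ("consensus", cards.pop())
    return ("revote", sorted(cards))
```

The coarse spacing of the deck is the point of relative estimation: it forces discussion about whether a story is "a 5 or an 8" rather than arguing over exact hours.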

  • 240.
    Chatzipetrou, Panagiota
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Van Solingen, Rini
    Delft University of Technology, NLD.
    When and who leaves matters: Emerging results from an empirical study of employee turnover2018In: International Symposium on Empirical Software Engineering and Measurement, IEEE Computer Society , 2018, article id a53Conference paper (Refereed)
    Abstract [en]

    Background: Employee turnover in GSD is an extremely important issue, especially for Western companies offshoring to emerging nations. Aims: In this case study we investigated an offshore vendor company, and in particular whether employee retention is related to experience. Moreover, we studied whether a threshold associated with the employees' tendency to leave the particular company can be identified. Method: We used a case study, and applied and presented descriptive statistics, contingency tables, results from the Chi-Square test of association, and post hoc tests. Results: The emerging results showed that employee retention and company experience are associated. In particular, almost 90% of the employees leave the company within the first year, while within the second year the percentage is roughly 50%. Thus, there is an indication that two years is the retention threshold for the investigated offshore vendor company. Conclusions: The results are preliminary and point to the need to build a prediction model that includes more inherent characteristics of the projects, to aid companies in avoiding massive turnover waves. © 2018 ACM.

  • 241.
    Chavali, Srikavya
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    AUTOMATION OF A CLOUD HOSTED APPLICATION: Performance, Automated Testing, Cloud Computing2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Software testing is the process of assessing quality of a software product to determine whether it matches with the existing requirements of the customer or not. Software testing is one of the “Verification and Validation,” or V&V, software practices. The two basic techniques of software testing are Black-box testing and White box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied neglecting the internal components of the software. Whereas, White-box testing focuses on the internal mechanism of the software of any application. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce the manual effort and to perform testing continuously, thereby increasing the quality of the product.

     

    Objectives: In this research, cloud hosted application is automated using TestComplete tool. The objective of this thesis is to verify the functionality of Cloud application known as Test data library or Test Report Analyzer through automation and to measure the impact of the automation on release cycles of the organization.

     

    Methods: Automation is implemented using the Scrum methodology, an agile software development process. Using Scrum, a product with working software can be delivered to the customers incrementally and empirically, with its functionality updated along the way. The Test Data Library and Test Report Analyzer functionality of the cloud application is verified by deploying a testing device, so that passed and failed test cases can be analyzed.

     

    Results: Automation of the Test Report Analyzer functionality of the cloud-hosted application was implemented using TestComplete, and the release cycles were shortened. With automation, a change of nearly 24% in release cycles can be observed, reducing manual effort and increasing the quality of delivery.

     

    Conclusion: Automation of a cloud-hosted application requires no manual effort, so time can be utilized effectively and the application can be tested continuously, increasing its efficiency and quality.

  • 242. Che, X.
    et al.
    Niu, Y.
    Shui, B.
    Fu, J.
    Fei, G.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zhang, Y.
    A novel simulation framework based on information asymmetry to evaluate evacuation plan2015In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 31, no 6-8, p. 853-861Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a novel framework to simulate crowd behavior under emergency situations in a confined space with multiple exits. In our work, we take information asymmetry into consideration, which is used to model the different behaviors exhibited by pedestrians because of their different knowledge about the environment. We categorize the factors influencing the preferred velocity into two groups, the intrinsic and extrinsic factors, which are unified into a single space called the influence space. At the same time, a finite state machine is employed to control individual behavior. Different strategies are used to compute the preferred velocity in different states, so that our framework can reproduce the phenomenon of decision change. Our experimental results show that our framework can be employed to analyze the factors influencing the escape time, such as the number and location of exits, the density distribution of the crowd, and so on. Thus, it can be used to design and evaluate evacuation plans. © 2015 Springer-Verlag Berlin Heidelberg

  • 243.
    Chebudie, Abiy Biru
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Monitoring of Video Streaming Quality from Encrypted Network Traffic: The Case of YouTube Streaming2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Video streaming applications contribute a major share of Internet traffic. Consequently, monitoring and management of video streaming quality have gained significant importance in recent years. Disturbances in the video, such as the amount of buffering and bitrate adaptations, affect user Quality of Experience (QoE). Network operators usually monitor such events in network traffic with the help of Deep Packet Inspection (DPI). However, it is becoming difficult to monitor such events due to traffic encryption. To address this challenge, this thesis work makes two key contributions. First, it presents a test-bed, which performs automated video streaming tests under controlled time-varying network conditions and measures performance at the network and application levels. Second, it develops and evaluates machine learning models for the detection of video buffering and bitrate adaptation events, which rely on information extracted from packet headers. The findings of this work suggest that buffering and bitrate adaptation events within 60-second intervals can be detected using a Random Forest model with an accuracy of about 70%. Moreover, the results show that features based on time-varying patterns of downlink throughput and packet inter-arrival times play a distinctive role in the detection of such events.
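The kind of classifier described above can be sketched as follows. This is an illustration, not the thesis code: the feature set (per-window throughput statistics and inter-arrival times), the synthetic data, and the labeling rule are all assumptions:

```python
# Sketch: a Random Forest that flags "buffering" in 60-second windows
# using only features computable from packet headers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Features per window: mean downlink throughput (kbit/s),
# throughput variance, mean packet inter-arrival time (ms).
X = np.column_stack([
    rng.normal(3000, 800, n),
    rng.normal(500, 150, n),
    rng.normal(20, 5, n),
])
# Synthetic labels: windows with low throughput AND long
# inter-arrival gaps are marked as buffering (class 1).
y = ((X[:, 0] < 2600) & (X[:, 2] > 20)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On real encrypted traffic the labels would come from application-level ground truth gathered by the test-bed, and accuracy would be far lower than on this synthetic rule-based data.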

  • 244.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Structure Preserving Binary Image Morphing using Delaunay Triangulation2017In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 85, p. 8-14Article in journal (Refereed)
    Abstract [en]

    Mathematical morphology has been of great significance to several scientific fields. Dilation, as one of the fundamental operations, has been heavily reliant on common methods based on set theory and on using structuring elements of specific shapes to morph binary blobs. We hypothesised that by performing morphological dilation while exploiting the geometric relationship between dot patterns, one can gain some advantages. The Delaunay triangulation was our choice to examine the feasibility of this hypothesis due to its favourable geometric properties. We compared our proposed algorithm to existing methods, and it became apparent that Delaunay-based dilation has the potential to emerge as a powerful tool in preserving object structure and elucidating the influence of noise. Additionally, defining a structuring element is no longer needed in the proposed method, and the dilation is adaptive to the topology of the dot patterns. We assessed the property of object structure preservation by using common measurement metrics. We also demonstrated this property through handwritten digit classification using HOG descriptors extracted from images dilated with the different approaches and trained using Support Vector Machines. The confusion matrix shows that our algorithm has the best accuracy estimate in 80% of the cases. In both experiments, our approach shows a consistently improved performance over other methods, which advocates for the suitability of the proposed method.
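The dilation above builds on the Delaunay triangulation of a dot pattern. The sketch below is an illustration of that building block only, not the author's algorithm: it computes the triangulation of a random point set and reads off each point's Delaunay neighbours, which are the natural candidates for topology-aware growth:

```python
# Delaunay triangulation of a 2-D dot pattern, plus per-point
# neighbour sets derived from the triangle list.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
points = rng.random((30, 2))          # a random 2-D dot pattern
tri = Delaunay(points)

# Each simplex is a triangle (three point indices); two points are
# Delaunay neighbours if they share a triangle edge.
neighbours = {i: set() for i in range(len(points))}
for a, b, c in tri.simplices:
    neighbours[a] |= {b, c}
    neighbours[b] |= {a, c}
    neighbours[c] |= {a, b}

print(f"{len(tri.simplices)} triangles; "
      f"point 0 has {len(neighbours[0])} neighbours")
```

Unlike a fixed structuring element, the neighbour sets adapt to the local density of the pattern, which is the geometric property the paper exploits.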

  • 245.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Towards Query by Text Example for pattern spotting in historical documents2016In: Proceedings - CSIT 2016: 2016 7th International Conference on Computer Science and Information Technology, IEEE Computer Society, 2016, article id 7549479Conference paper (Refereed)
    Abstract [en]

    Historical documents are essentially formed of handwritten text that exhibits a variety of perceptual complexities. The cursive and connected nature of text lines on the one hand, and the presence of artefacts and noise on the other, hinder achieving plausible results using current image processing algorithms. In this paper, we present a new algorithm, which we term QTE (Query by Text Example), that allows for training-free and binarisation-free pattern spotting in scanned handwritten historical documents. Our algorithm gives promising results on a subset of our database, revealing a ∼83% success rate in locating word patterns supplied by the user.

  • 246.
    Cheddad, Abbas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Object recognition using shape growth pattern2017In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, ISPA, IEEE Computer Society Digital Library, 2017, p. 47-52, article id 8073567Conference paper (Refereed)
    Abstract [en]

    This paper proposes a preprocessing stage to augment the bank of features that one can retrieve from binary images, to help increase the accuracy of pattern recognition algorithms. To this end, by applying successive dilations to a given shape, we can capture a new dimension of its vital characteristics, which we term hereafter the shape growth pattern (SGP). This work investigates the feasibility of this notion and also builds upon our prior work on structure-preserving dilation using Delaunay triangulation. Experiments on two public data sets are conducted, including comparisons to existing algorithms. We deployed two renowned machine learning methods in the classification process (i.e., convolutional neural networks (CNN) and random forests (RF)), since they perform well in pattern recognition tasks. The results show a clear improvement in the proposed approach's classification accuracy (especially for data sets with limited training samples), as well as robustness against noise, when compared to existing methods.

  • 247.
    Chen, Hao
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Xu, Luyang
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Architecture and Framework for Programmable Automation Controller: A Systematic Literature Review and A Case Study2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. A PAC controller is a strengthened version of a PLC controller. Their functions are very similar, but their essence and construction are different. PLC and PAC have many successful applications in the field of industrial automation control. There is a lot of literature about the software architecture of PLC control systems, but almost none on software architecture for PAC-based control systems. A suitable software architecture is indispensable to the design and development of a well-performing and stable automatic control system; the quality and pattern of the software architecture can even affect the stability and efficiency of the control system.

    Objectives. Based on these problems, we defined two primary objectives. The first is to investigate the architecture of some existing large industrial control systems, and to analyze and summarize the usage scenarios, advantages, and disadvantages of these architectural patterns. The second, building on the results of the first objective, is to propose and design an automated control solution architecture model based on a PAC control system, implemented and applied in a printing house. In the process, we sum up the challenges and obstacles encountered in implementing the solution and provide some guidance and reference for those involved in the field.

    Methods. For the first objective, we used a systematic literature review to collect data about existing ICS architectures. Concerning the second objective, a case study was conducted in a printing house in Karlskrona, Sweden, in which we proposed a software architecture model suitable for a PAC automation control system. We then developed and tested the automation control system and summarized some challenges and obstacles encountered in the process of implementation.

    Results. The existing ICS (Industrial Control System) architecture models and the critical problems and challenges in the implementation of ICS are identified. From the existing literature, we summarized five commonly used large industrial control system architecture models, which mainly use composite structures, that is, a combination of multiple architecture patterns. Some critical problems in industrial control systems, such as information security and production reliability, are also identified. In the case study, we put forward an automatic control solution for the printing house based on the SLR results. We designed the hardware deployment architecture of the system and the software control architecture. Generally speaking, the architecture is based on a client/server (C/S) architecture. In the development of the client, we adopted the popular MVC architecture pattern. In the longitudinal view of the whole system, an extended hierarchical architecture model is adopted. In the core control system, we adopted a modular architecture design. The whole control system is composed of six parts: four PAC terminal subsystems, one server-side program, and one client program. After extensive development and testing, our system finally went into production, and its production efficiency improved compared with the old system. Its extension functions, such as Production Report and Tag Print, are highly satisfactory to the customers.

    Conclusions. In this research, we summarize and compare the advantages and disadvantages of several commonly used industrial control systems. We also propose a software architecture model and develop an automation control system based on PAC, filling the gap left by the lack of studies on software architectures for implementing PAC-based automation control systems. Our results can help software engineers and developers in the ICS field develop their own PAC-based automation control systems.

  • 248.
    Chen, Mingda
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    He, Yao
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Exploration on Automated Software Requirement Document Readability Approaches2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The requirements analysis phase, at the very beginning of the software development process, has been identified as a very important phase in the software development lifecycle. The Software Requirement Specification (SRS) is the output of the requirements analysis phase, and its quality factors play an important role in evaluation work. Readability is an important SRS quality factor, but there are few available automated approaches for measuring it, because of its tight dependency on readers' perceptions. Low readability of SRS documents has a serious impact on the whole software development process. Therefore, it is urgent to propose effective automated approaches for measuring the readability of SRS documents. Using traditional readability indexes to analyze the readability of SRS documents automatically is a potentially feasible approach; however, the effectiveness of this approach has not been systematically evaluated before.

    Objectives. In this study, we first aim to understand the readability of texts and investigate approaches to score text readability manually. We then investigate existing automated readability approaches for texts, together with their working theories. Next, we evaluate the effectiveness of measuring the readability of SRS documents with these automated readability approaches. Finally, we rank these automated approaches by their effectiveness.

    Methods. In order to find out how humans score the readability of texts manually and to investigate existing automated readability approaches for texts, a systematic literature review was chosen as the research methodology. An experiment was chosen to explore the effectiveness of the automated readability approaches.

    Results. We found 67 articles in the systematic literature review. According to the review, humans judging the readability of texts by reading them is the most common way of scoring text readability manually. Additionally, we found four available automated readability assessment tools and seven available automated readability assessment formulas. After executing the experiment, we found that the actual effectiveness of all selected approaches is not high, and that Coh-Metrix presents the highest actual effectiveness among the selected automated readability approaches.

    Conclusions. Coh-Metrix is the most effective automated readability approach, but the feasibility of directly applying Coh-Metrix to SRS document readability assessment cannot be confirmed, since its evaluated effectiveness is not high enough. In addition, all selected approaches are based on metrics of readability measures, and no semantic factors are blended into the readability assessments. Hence, further quantifying human perception and adding semantic analysis to SRS document readability assessment could be two future research directions.
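Traditional readability indexes of the kind evaluated above are simple surface formulas over word, sentence, and syllable counts. As one well-known example (the abstract does not list which formulas the thesis selected), the Flesch Reading Ease score can be sketched as follows; the vowel-group syllable counter is a crude assumption for illustration, and real tools use dictionaries or better heuristics:

```python
# Flesch Reading Ease:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# Higher scores indicate easier text.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowel letters (incl. y).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```

Scores computed this way ignore meaning entirely, which is exactly the limitation the conclusions above point out: no semantic factors enter the assessment.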

  • 249.
    Chen, Shajin
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Weibo's Role in Shaping Public Opinion and Political Participation in China2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    This thesis examines the role of microblogging in shaping public opinion and political participation in China, with particular focus on the question of what sociopolitical implications and challenges the weibo phenomenon has brought to Chinese society. I explore some of the prominent features of weibo for the role they play in framing the public sphere. Along with an in-depth study of two weibo cases, the results show that microblogging provides a unique platform for Chinese citizens to participate in civic engagement and to organize their collective opinions. The study also demonstrates that weibo has a significant impact on spurring social change. Further, weibo discourse encourages interaction between the government and ordinary citizens, and it also changes traditional Chinese politics by enabling public political participation. However, the spread of rumors and network violence are some of the disadvantages inherent to the weibo phenomenon that should be of concern. More importantly, the analysis reveals that the initial reasons behind the weibo phenomenon were long-term social conflicts and continuous information control by the state. Weibo certainly provides a remarkable platform for freedom of speech, but it should not be considered a panacea for social change in China.

  • 250.
    Chennamsetty, Harish
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experimentation in Global Software Engineering2015Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Software engineering researchers are guided by research principles when conducting any type of research. Although there are many guidelines detailing how a particular research method can be applied, there is always a need to continue improving existing empirical research strategies. The context of this thesis is to provide guidelines for conducting controlled experiments in Global Software Engineering (GSE). With this thesis, the state-of-the-art of conducting experiments in GSE research has been explored. Objectives: The goal of this thesis is to analyze the existing experiments in GSE research. The research problems addressed with GSE experiments and the state-of-the-art of overall GSE experiment design need to be analyzed, and appropriate guidelines should be drawn up in order to provide future GSE researchers with strategies for mitigating or solving GSE-specific experimentation challenges. Methods: A systematic literature review (SLR) was conducted to review all the GSE experiments found in the literature. The search process covered six databases, and specific search and quality assessment criteria were used to select the GSE experiments. Furthermore, scientific interviews were conducted with GSE research experts to evaluate a set of guidelines (the thesis author's recommendations) that address the challenges of conducting GSE experiments. Thematic analysis was performed to analyze the evaluation results and to further improve or implement any suggestions given by the interviewees. Conclusions: The results obtained from the SLR provided a chance to understand the state-of-the-art and to analyze the challenges and problems of conducting controlled experiments in GSE. The challenges identified in GSE controlled experiments concern the experiment study setting, the involvement of subjects, and the addressing of GSE-relevant threats to validity in GSE experiments. Nine guidelines are framed, each addressing a specific challenge. The final guidelines (resulting after the interviews) provide effective recommendations to GSE researchers conducting controlled experiments.
