1 - 8 of 8
  • 1.
    Bouhennache, Rafik
    et al.
    Science and Technology Institute, University Center of Mila, DZA.
    Bouden, Toufik
    Mohammed Seddik Ben Yahia University of Jijel, DZA.
    Taleb-Ahmed, Abdelmalik
    University of Valenciennes, FRA.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology.
    A new spectral index for the extraction of built-up land features from Landsat 8 satellite imagery. 2018. In: Geocarto International, ISSN 1010-6049, E-ISSN 1752-0762. Article in journal (Refereed)
    Abstract [en]

    Extracting built-up areas from remote sensing data such as Landsat 8 satellite imagery is a challenge. We have investigated it by proposing a new index, referred to as the Built-up Land Features Extraction Index (BLFEI). The BLFEI index has the advantages of simplicity and of good separability between the four major components of the urban environment, namely built-up, barren, vegetation and water. The histogram overlap method and the Spectral Discrimination Index (SDI) are used to study separability. The BLFEI index uses the two shortwave-infrared bands together with the red and green bands of the visible spectrum. OLI imagery of Algiers, Algeria, was used to extract built-up areas through BLFEI, with several previously developed built-up indices serving as comparisons. Water areas are masked out, and Otsu's thresholding algorithm is then applied to automatically find the optimal value for extracting built-up land from the waterless regions. BLFEI, the new index, improved separability by 25% and accuracy by 5%.
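
    The abstract names the ingredients (the green and red visible bands, the two shortwave-infrared bands, and Otsu thresholding after water masking) but not the exact formula. The Python sketch below illustrates such a pipeline; the normalized-difference-style band combination and the synthetic reflectance arrays are illustrative assumptions, not the paper's authoritative definition of BLFEI.

        import numpy as np
        from skimage.filters import threshold_otsu

        def blfei(green, red, swir1, swir2):
            # Normalized-difference-style combination of the bands named in the
            # abstract; verify the exact formula against the published paper.
            vis_swir2_mean = (green + red + swir2) / 3.0
            return (vis_swir2_mean - swir1) / (vis_swir2_mean + swir1)

        # Hypothetical OLI reflectance arrays (green=B3, red=B4, swir1=B6, swir2=B7);
        # water pixels are assumed to be already masked out.
        rng = np.random.default_rng(0)
        b3, b4, b6, b7 = (rng.uniform(0.01, 0.6, (512, 512)) for _ in range(4))

        index = blfei(b3, b4, b6, b7)
        threshold = threshold_otsu(index)   # automatic thresholding, as in the abstract
        built_up = index > threshold        # binary built-up mask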

  • 2.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Structure Preserving Binary Image Morphing using Delaunay Triangulation. 2017. In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 85, p. 8-14. Article in journal (Refereed)
    Abstract [en]

    Mathematical morphology has been of great significance to several scientific fields. Dilation, one of the fundamental operations, has long relied on methods based on set theory and on specifically shaped structuring elements to morph binary blobs. We hypothesised that performing morphological dilation while exploiting the geometric relationships between dot patterns can offer advantages. The Delaunay triangulation was our choice for examining the feasibility of this hypothesis due to its favourable geometric properties. We compared the proposed algorithm to existing methods, and it became apparent that Delaunay-based dilation has the potential to emerge as a powerful tool for preserving object structure and elucidating the influence of noise. Additionally, the proposed method no longer requires defining a structuring element, and the dilation is adaptive to the topology of the dot patterns. We assessed the object-structure-preservation property using common measurement metrics. We also demonstrated this property through handwritten digit classification, using HOG descriptors extracted from images dilated by the different approaches and trained using Support Vector Machines. The confusion matrix shows that our algorithm has the best accuracy estimate in 80% of the cases. In both experiments, our approach shows a consistently improved performance over the other methods, which advocates for the suitability of the proposed method.
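
    The abstract does not spell out the algorithm, but the core idea (dilation driven by the Delaunay triangulation of the dot pattern rather than by a structuring element) can be sketched as follows. The triangle-pruning rule via a max_edge parameter is a hypothetical addition that keeps the dilation adaptive to point density; it is not taken from the paper.

        import numpy as np
        from scipy.spatial import Delaunay
        from skimage.draw import polygon

        def delaunay_dilate(binary, max_edge=15.0):
            # Triangulate the foreground pixel coordinates and fill each triangle.
            # Skipping triangles with an edge longer than max_edge is a hypothetical
            # pruning rule, not a detail from the paper.
            pts = np.column_stack(np.nonzero(binary))
            if len(pts) < 3:
                return binary.copy()
            out = binary.copy()
            for simplex in Delaunay(pts).simplices:
                p = pts[simplex]
                edge_lengths = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
                if edge_lengths.max() > max_edge:
                    continue
                rr, cc = polygon(p[:, 0], p[:, 1], shape=out.shape)
                out[rr, cc] = True
            return out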

  • 3.
    Danielsson, Max
    et al.
    Sievert, Thomas
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rasmusson, Jim
    Sony Mobile Communications AB.
    Feature Detection and Description using a Harris-Hessian/FREAK Combination on an Embedded GPU. 2016. Conference paper (Refereed)
    Abstract [en]

    GPUs in embedded platforms are reaching performance levels comparable to desktop hardware, so it becomes interesting to apply computer vision techniques to them. We propose, implement, and evaluate a novel feature detector and descriptor combination, i.e., we combine the Harris-Hessian detector with the FREAK binary descriptor. The implementation is done in OpenCL, and we evaluate the execution time and classification performance. We compare our approach with two other methods, FAST/BRISK and ORB. Performance data is presented for the mobile device Xperia Z3 and the desktop Nvidia GTX 660. Our results indicate that the execution times on the Xperia Z3 are insufficient for real-time applications, while desktop execution shows future potential. The classification performance of Harris-Hessian/FREAK indicates that the solution is sensitive to rotation but superior on scale-variant images.
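
    The paper's Harris-Hessian detector is a custom OpenCL implementation that is not available off the shelf. As a rough stand-in, the sketch below pairs OpenCV's plain Harris corner detector with the FREAK descriptor from the opencv-contrib xfeatures2d module, to show the general detector/descriptor pipeline; the file name and parameter values are illustrative.

        import cv2

        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame

        # Plain Harris corners stand in for the paper's custom Harris-Hessian detector.
        corners = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01,
                                          minDistance=7, useHarrisDetector=True, k=0.04)
        keypoints = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in corners]

        # FREAK binary descriptor (ships with the opencv-contrib-python package).
        freak = cv2.xfeatures2d.FREAK_create()
        keypoints, descriptors = freak.compute(img, keypoints)

        # Binary descriptors are typically matched with the Hamming distance.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)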

  • 4.
    Javadi, Mohammad Saleh
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Computer Vision Algorithms for Intelligent Transportation Systems Applications. 2018. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In recent years, Intelligent Transportation Systems (ITS) have emerged as an efficient way of enhancing traffic flow, safety and management. These goals are realized by combining various technologies and analyzing the data acquired from vehicles and roadways. Among all ITS technologies, computer vision solutions have the advantages of high flexibility, easy maintenance and a high price-performance ratio, which make them very popular for transportation surveillance systems. However, computer vision solutions are demanding and challenging in terms of computational complexity, reliability, efficiency and accuracy, among other aspects.

    In this thesis, three transportation surveillance systems based on computer vision are presented. These systems are able to interpret the image data and extract information about the presence, speed and class of vehicles, respectively. The image data in the proposed systems are acquired using an Unmanned Aerial Vehicle (UAV) as a non-stationary source and a roadside camera as a stationary one. The goal of these works is to enhance the accuracy and robustness of the systems under varying illumination and traffic conditions.

    This is a compilation thesis in systems engineering consisting of three parts. The red thread through each part is a transportation surveillance system. The first part presents a change detection system using aerial images of a cargo port. The extracted information shows how the space is utilized at various times, for further management and development of the port. The proposed solution can be used at different viewpoints and illumination levels, e.g. at sunset. The method transforms images taken from different viewpoints, matches them together, and then uses a proposed adaptive local threshold to detect discrepancies between them. In the second part, a vision-based vehicle speed estimation system is presented. The measured speeds are essential information for law enforcement as well as for estimating traffic flow at certain points on the road. The system employs several intrusion lines to extract the movement pattern of each vehicle (non-equidistant sampling) as an input feature to the proposed analytical model. In addition, other parameters, such as the camera sampling rate and the distances between intrusion lines, are taken into account to address the uncertainty in the measurements and to obtain the probability density function of the vehicle's speed. In the third part, a vehicle classification system is provided to categorize vehicles into "private cars", "light trailers", "lorry or bus" and "heavy trailer". This information can be used by authorities for surveillance and development of the roads. The proposed system consists of multiple fuzzy c-means clusterings using the length, width and speed of each vehicle as input features. The system was constructed using prior knowledge of the traffic regulations for each vehicle class in order to enhance the classification performance.
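
    The thesis's analytical speed model is not reproduced here; the Monte Carlo stand-in below sketches the same idea: crossing times at the intrusion lines are only known to within one frame interval, so sampling that timing jitter yields an approximate probability density for the vehicle's speed. The line spacings, frame indices and frame rate are hypothetical.

        import numpy as np

        def speed_pdf_samples(line_positions_m, crossing_frames, fps, n=10000, seed=0):
            # Each crossing time is only known to within one frame interval, so we
            # sample uniform timing jitter and fit a constant speed for every draw.
            rng = np.random.default_rng(seed)
            x = np.asarray(line_positions_m, dtype=float)
            t0 = np.asarray(crossing_frames, dtype=float) / fps
            speeds = np.empty(n)
            for i in range(n):
                t = t0 + rng.uniform(0.0, 1.0 / fps, size=t0.shape)
                speeds[i] = np.polyfit(t, x, 1)[0]   # least-squares slope = speed
            return speeds

        # Hypothetical example: four intrusion lines 5 m apart, 25 fps camera.
        samples = speed_pdf_samples([0.0, 5.0, 10.0, 15.0], [12, 18, 24, 31], fps=25)
        print(f"speed = {samples.mean():.1f} m/s, std = {samples.std():.1f} m/s")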

  • 5.
    Wen, Wei
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Khatibi, Siamak
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Towards Measuring of Depth Perception from Monocular Shadow Technique with Application in a Classical Painting. 2016. In: Journal of Computers, ISSN 1796-203X, Vol. 11, p. 310-319. Article in journal (Refereed)
    Abstract [en]

    Depth perception is one of the important abilities of the human visual system for perceiving the three-dimensional world. The shadow technique, which offers different depth information from different viewing points and is known as Da Vinci stereopsis, has been used in classical paintings. In this paper, we report a method for measuring the relative depth information stimulated by Da Vinci stereopsis in a classical painting. We set up a positioning array for capturing images of the portrait using a high-resolution camera, where the changes of shadow areas are measured by characterizing the effects as point and line changes. The result shows that the 3D effects of the classical painting are not only a perceptual phenomenon but are also physically tangible and measurable. We confirm the validity of the method by applying it even to a typical single image and comparing the results between the single image and the portrait.

  • 6.
    Wen, Wei
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Khatibi, Siamak
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image. 2017. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no. 3, p. 620. Article in journal (Refereed)
    Abstract [en]

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but they assume that the fill factor is known. However, the fill factor is treated as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from a single arbitrary image. The virtual response function of the imaging process and the sensor irradiance are estimated from a set of generated virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single high-dynamic-range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, as indicated by the low standard deviation of the fill factors estimated from each of the images and for each camera.
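
    The generation of virtual images from a single shot and the fill-factor fit are specific to the paper and are not reproduced here. The fusion step, however, resembles standard high-dynamic-range radiance-map recovery, which OpenCV's Debevec calibration and merge can illustrate; the input file names and exposure times below are placeholders.

        import cv2
        import numpy as np

        # Hypothetical stack of differently exposed frames standing in for the
        # paper's virtual images, whose generation from one shot is not shown here.
        exposures = [cv2.imread(f"virtual_{i}.png") for i in range(4)]
        times = np.array([1/60, 1/30, 1/15, 1/8], dtype=np.float32)

        response = cv2.createCalibrateDebevec().process(exposures, times)  # response function
        radiance = cv2.createMergeDebevec().process(exposures, times, response)  # HDR map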

  • 7.
    Westphal, Florian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Efficient Document Image Binarization using Heterogeneous Computing and Interactive Machine Learning. 2018. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Large collections of historical document images have been collected by companies and government institutions for decades. More recently, these collections have been made available to a larger public via the Internet. However, to make accessing them truly useful, the contained images need to be made readable and searchable. One step in that direction is document image binarization, the separation of text foreground from page background. This separation makes the text shown in the document images easier to process for humans and other image processing algorithms alike. While reasonably well-working binarization algorithms exist, it is not sufficient to just be able to perform the separation of foreground and background well. This separation also has to be achieved efficiently, in terms of execution time, but also in terms of the training data used by machine learning based methods. This is necessary to make binarization not only theoretically possible, but also practically viable.

    In this thesis, we explore different ways to achieve efficient binarization in terms of execution time by improving the implementation and the algorithm of a state-of-the-art binarization method. We find that parameter prediction, as well as mapping the algorithm onto the graphics processing unit (GPU) help to improve its execution performance. Furthermore, we propose a binarization algorithm based on recurrent neural networks and evaluate the choice of its design parameters with respect to their impact on execution time and binarization quality. Here, we identify a trade-off between binarization quality and execution performance based on the algorithm’s footprint size and show that dynamically weighted training loss tends to improve the binarization quality. Lastly, we address the problem of training data efficiency by evaluating the use of interactive machine learning for reducing the required amount of training data for our recurrent neural network based method. We show that user feedback can help to achieve better binarization quality with less training data and that visualized uncertainty helps to guide users to give more relevant feedback.

  • 8.
    Westphal, Florian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Document Image Binarization Using Recurrent Neural Networks. 2018. In: Proceedings - 13th IAPR International Workshop on Document Analysis Systems, DAS 2018, p. 263-268. Conference paper (Refereed)
    Abstract [en]

    In the context of document image analysis, image binarization is an important preprocessing step for other document analysis algorithms, but also relevant on its own by improving the readability of images of historical documents. While historical document image binarization is challenging due to common image degradations, such as bleedthrough, faded ink or stains, achieving good binarization performance in a timely manner is a worthwhile goal to facilitate efficient information extraction from historical documents. In this paper, we propose a recurrent neural network based algorithm using Grid Long Short-Term Memory cells for image binarization, as well as a pseudo F-Measure based weighted loss function. We evaluate the binarization and execution performance of our algorithm for different choices of footprint size, scale factor and loss function. Our experiments show a significant trade-off between binarization time and quality for different footprint sizes. However, we see no statistically significant difference when using different scale factors and only limited differences for different loss functions. Lastly, we compare the binarization performance of our approach with the best performing algorithm in the 2016 handwritten document image binarization contest and show that both algorithms perform equally well.
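
    Grid LSTM cells are not available in mainstream deep learning frameworks, so the PyTorch sketch below substitutes a bidirectional row-wise LSTM to show the general shape of a recurrent per-pixel binarizer, together with a per-pixel weighted binary cross-entropy standing in for the paper's pseudo F-Measure based loss. The network size, the weighting rule and the random tensors are illustrative assumptions, not the paper's configuration.

        import torch
        import torch.nn as nn

        class RowLSTMBinarizer(nn.Module):
            # Stand-in for the paper's Grid LSTM: a bidirectional LSTM scans each
            # image row and predicts a foreground logit per pixel.
            def __init__(self, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, 1)

            def forward(self, x):                    # x: (batch, H, W) in [0, 1]
                b, h, w = x.shape
                rows = x.reshape(b * h, w, 1)        # treat each row as a sequence
                out, _ = self.lstm(rows)
                return self.head(out).reshape(b, h, w)

        def weighted_bce(logits, target, weight):
            # Per-pixel weighted BCE; the weight map is a hypothetical stand-in
            # for the pseudo F-Measure based weighting described in the paper.
            return nn.functional.binary_cross_entropy_with_logits(
                logits, target, weight=weight)

        model = RowLSTMBinarizer()
        img = torch.rand(2, 64, 64)                  # hypothetical grayscale patches
        gt = (torch.rand(2, 64, 64) > 0.5).float()   # hypothetical ground truth
        w = torch.where(gt > 0, 2.0, 1.0)            # e.g. up-weight foreground strokes
        loss = weighted_bce(model(img), gt, w)
        loss.backward()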
