Computer Vision for Traffic Surveillance Systems: Methods and Applications
Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences (Systems Engineering). ORCID iD: 0000-0002-6834-5676
2021 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Computer vision solutions play a significant role in intelligent transportation systems (ITS) by improving traffic flow, safety and management. In addition, they feature prominently in autonomous vehicles and their future development. The main advantages of vision-based systems are their flexibility, coverage and accessibility. Moreover, computational power and recent algorithmic advances have increased the promise of computer vision solutions and broadened their implementation. However, computational complexity, reliability and efficiency remain among the challenges facing vision-based systems.

Most traffic surveillance systems in ITS address three major tasks: vehicle detection, tracking and classification. In this thesis, computer vision systems are introduced to accomplish goals corresponding to these three tasks: 1) to detect the changed regions of an industrial harbour's parking lot using aerial images, 2) to estimate the speed of the vehicles on the road using a stationary roadside camera and 3) to classify vehicles using a stationary roadside camera and aerial images.

The first part of this thesis discusses change detection in aerial images, which is the core of many remote sensing applications. The aerial images were taken over an industrial harbour using unmanned aerial vehicles on different days and under various circumstances. This thesis presents two approaches to detecting changed regions: a local pattern descriptor and three-dimensional feature maps. These methods are robust to varying illumination and shadows. Later, the introduced 3D feature map generation model was employed for vehicle detection in aerial images.

The second part of this thesis deals with vehicle speed estimation using roadside cameras. Information regarding the flow, speed and number of vehicles is essential for traffic surveillance systems. In this thesis, two vision-based vehicle speed estimation approaches are proposed. These analytical models consider the measurement uncertainties related to the camera sampling time. The main contribution of these models is to estimate a speed probability density function for every vehicle. Later, the speed estimation model was utilised for vehicle classification using a roadside camera.

Finally, in the third part, two vehicle classification models are proposed for roadside and aerial images. The first model utilises the proposed speed estimation method to extract the speed of the passing vehicles. A fuzzy c-means algorithm is then used to classify vehicles by their speed and dimension features. The results show that vehicle speed is a useful feature for distinguishing different categories of vehicles. The second model employs deep neural networks to detect and classify heavy vehicles in aerial images. In addition, the proposed 3D feature generation model was utilised to improve the performance of the deep neural network. The experimental results show that 3D feature information can significantly reduce false positives in the deep learning model's output.

This thesis comprises two chapters: Introduction and Publications. The introduction chapter discusses the motivation for computer vision solutions and their importance, and explains the concepts and algorithms used to construct the proposed methods. The second chapter presents the included publications.

Place, publisher, year, edition, pages
Karlshamn: Blekinge Tekniska Högskola, 2021, p. 149
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090; 1
Keywords [en]
Intelligent transportation systems, ITS, Computer vision systems
National Category
Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering; Signal Processing; Computer Vision and Robotics (Autonomous Systems)
Research subject
Systems Engineering
Identifiers
URN: urn:nbn:se:bth-20924. ISBN: 978-91-7295-416-8 (print). OAI: oai:DiVA.org:bth-20924. DiVA, id: diva2:1517920
Public defence
2021-03-03, Zoom, 08:30 (English)
Available from: 2021-01-15 Created: 2021-01-14 Last updated: 2022-02-18. Bibliographically approved
List of papers
1. Adjustable Contrast Enhancement Using Fast Piecewise Linear Histogram Equalization
2020 (English). In: Proceedings of the 2020 3rd International Conference on Image and Graphics Processing (ICIGP 2020), Association for Computing Machinery, 2020, p. 57-61. Conference paper, Published paper (Refereed)
Abstract [en]

Histogram equalization is a technique to enhance the contrast of an image by redistributing its histogram. In this paper, a fast piecewise linear histogram equalization method is introduced based on an adjustable degree of enhancement and piecewise continuous transformation functions using the frequencies of different grey-levels. This method aims to maximize the contrast enhancement of the image by stretching the entire spectrum. For this purpose, particular nodes (bins) on the histogram are detected simultaneously, which requires less computational time than recursive methods. Then, the particular nodes are stretched using transformation functions to align with the reference nodes. The experimental results indicate that the performance of the proposed method is promising in terms of contrast enhancement. Moreover, by using the degree of enhancement, this method preserves the texture of various regions in the image very well through the equalization process. © 2020 Owner/Author.
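The piecewise linear scheme described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the function name, the node-detection rule (CDF quantiles), and the `degree` blending parameter are all assumptions made here for the sketch.

```python
import numpy as np

def piecewise_linear_equalize(img, n_nodes=5, degree=1.0):
    """Contrast enhancement by piecewise linear histogram stretching.

    Sketch only: input nodes are grey-levels where the cumulative
    histogram crosses equal quantiles (an assumed detection rule);
    `degree` in [0, 1] blends between the identity mapping (0) and
    the full stretching mapping (1).
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / hist.sum()
    # Detect input nodes simultaneously from the CDF (no recursion).
    quantiles = np.linspace(0.0, 1.0, n_nodes)
    in_nodes = np.searchsorted(cdf, quantiles).clip(0, 255)
    in_nodes[0], in_nodes[-1] = 0, 255
    # Reference nodes: evenly spaced over the full grey-level range.
    ref_nodes = np.linspace(0.0, 255.0, n_nodes)
    # Piecewise linear transformation, blended with the identity.
    lut = np.interp(np.arange(256), in_nodes, ref_nodes)
    lut = (1.0 - degree) * np.arange(256) + degree * lut
    return lut.clip(0, 255).astype(np.uint8)[img]
```

With `degree=0` the mapping reduces to the identity, which makes the adjustable-enhancement idea easy to check.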

Place, publisher, year, edition, pages
Association for Computing Machinery, 2020
Keywords
Contrast enhancement, Histogram equalization, Histogram modification, Image/video enhancement, Equalizers, Graphic methods, Linear transformations, Piecewise linear techniques, Textures, Computational time, Histogram equalizations, Piecewise linear, Piecewise-continuous, Recursive methods, Reference nodes, Transformation functions, Image enhancement
National Category
Control Engineering
Identifiers
urn:nbn:se:bth-19394 (URN), 10.1145/3383812.3383830 (DOI), 000579359200011 (), 2-s2.0-85083108101 (Scopus ID), 9781450377201 (ISBN)
Conference
3rd International Conference on Image and Graphics Processing, ICIGP 2020; Singapore; 8 February 2020 through 10 February 2020
Note

Sponsor: Nanyang Technological University, University of Bologna (UNIBO)

Open Access

Available from: 2020-04-24 Created: 2020-04-24 Last updated: 2021-01-14. Bibliographically approved
2. Change detection in aerial images using a Kendall's TAU distance pattern correlation
2016 (English). In: Proceedings of the 2016 6th European Workshop on Visual Information Processing (EUVIP), IEEE, 2016. Conference paper, Published paper (Refereed)
Abstract [en]

Change detection in aerial images is the core of many remote sensing applications for analyzing the dynamics of a wide area on the ground. In this paper, a remote sensing method is proposed based on viewpoint transformation and a modified Kendall rank correlation measure to detect changes in oblique aerial images. First, the differing viewpoints of the aerial images are compensated for, and then a local pattern descriptor based on the Kendall rank correlation coefficient is introduced. A new distance measure, referred to as Kendall's Tau-d (Tau distance) coefficient, is presented to determine the changed regions. The developed system is applied to oblique aerial images with very low aspect angles that were obtained using an unmanned aerial vehicle on two different days with drastic changes in illumination and weather conditions. The experimental results indicate the robustness of the proposed method to varying illumination, shadows and multiple viewpoints for change detection in aerial images.
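The core idea of a rank-correlation pattern descriptor can be sketched as follows: because Kendall's rank correlation depends only on the ordering of pixel intensities, any monotone illumination change leaves it untouched, while a real scene change disrupts the ordering. This is a minimal sketch, not the paper's Tau-d coefficient; the function names and the decision threshold are assumptions.

```python
import numpy as np
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation of two equal-length sequences:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(a)
    s = 0.0
    for i, j in combinations(range(n), 2):
        s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s / (n * (n - 1) / 2)

def changed(patch_t0, patch_t1, threshold=0.5):
    """Flag a co-located patch pair as 'changed' when rank correlation
    drops below `threshold` (a hypothetical value). Monotone brightness
    shifts preserve pixel ranks, so they do not trigger a change."""
    tau = kendall_tau(patch_t0.ravel().astype(float),
                      patch_t1.ravel().astype(float))
    return tau < threshold
```

A patch whose intensities are merely brightened keeps tau = 1 and is not flagged; a reordered (truly changed) patch drives tau toward -1.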

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Aerial images, change detection, Kendall rank correlation, optical remote sensing
National Category
Signal Processing
Identifiers
urn:nbn:se:bth-13878 (URN), 10.1109/EUVIP.2016.7764604 (DOI), 000391630800023 (), 978-1-5090-2781-1 (ISBN)
Conference
2016 6th European Workshop on Visual Information Processing (EUVIP), Marseille
Available from: 2017-02-03 Created: 2017-02-03 Last updated: 2021-01-14. Bibliographically approved
3. Change detection in aerial images using three-dimensional feature maps
2020 (English). In: Remote Sensing, E-ISSN 2072-4292, Vol. 12, no 9, article id 1404. Article in journal (Refereed), Published
Abstract [en]

Interest in aerial image analysis has increased owing to recent developments in and availability of aerial imaging technologies, like unmanned aerial vehicles (UAVs), as well as a growing need for autonomous surveillance systems. Variant illumination, intensity noise, and different viewpoints are among the main challenges to overcome in order to determine changes in aerial images. In this paper, we present a robust method for change detection in aerial images. To accomplish this, the method extracts three-dimensional (3D) features for segmentation of objects above a defined reference surface at each instant. The acquired 3D feature maps, with two measurements, are then used to determine changes in a scene over time. In addition, the important parameters that affect measurement, such as the camera's sampling rate, image resolution, the height of the drone, and the pixel's height information, are investigated through a mathematical model. To exhibit its applicability, the proposed method has been evaluated on aerial images of various real-world locations and the results are promising. The performance indicates the robustness of the method in addressing the problems of conventional change detection methods, such as intensity differences and shadows.
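The segmentation-above-a-reference-surface step described above can be illustrated with a toy sketch. This is not the paper's 3D feature extraction pipeline; the per-pixel height map, the `reference` plane and the `min_height` cutoff are hypothetical inputs standing in for the real 3D features.

```python
import numpy as np

def objects_above_reference(height_map, reference=0.0, min_height=0.5):
    """Binary mask of pixels standing at least `min_height` units above
    the reference surface (both values are illustrative assumptions)."""
    return (np.asarray(height_map, dtype=float) - reference) >= min_height

def change_mask(heights_t0, heights_t1, reference=0.0, min_height=0.5):
    """Changed regions as the symmetric difference of the two object
    masks: places where an above-reference object appeared or vanished.
    Because heights, not intensities, are compared, illumination and
    shadow differences between the two acquisitions drop out."""
    m0 = objects_above_reference(heights_t0, reference, min_height)
    m1 = objects_above_reference(heights_t1, reference, min_height)
    return m0 ^ m1
```

For example, a single object appearing between the two instants yields a change mask containing exactly that object's footprint.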

Place, publisher, year, edition, pages
MDPI, 2020
Keywords
aerial images; 3D change detection; optical vehicle surveillance; remote sensing; unmanned aerial vehicle
National Category
Signal Processing Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:bth-19422 (URN), 10.3390/rs12091404 (DOI), 000543394000051 ()
Note

open access

Available from: 2020-05-01 Created: 2020-05-01 Last updated: 2023-08-28. Bibliographically approved
4. Design of a video-based vehicle speed measurement system: an uncertainty approach
2018 (English). In: 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), IEEE, 2018, p. 44-49, article id 8640964. Conference paper, Published paper (Refereed)
Abstract [en]

Speed measurement is one of the key components of intelligent transportation systems. It provides suitable information for traffic management and law enforcement. This paper presents a versatile analytical model for video-based speed measurement in the form of a probability density function (PDF). In the proposed model, the main factors contributing to the uncertainties of the measurement are considered. Furthermore, a guideline is introduced for designing a video-based speed measurement system based on traffic and other requirements. As a proof of concept, the model has been simulated and tested for various speeds. An evaluation validates the strength of the model for accurate speed measurement under realistic circumstances.

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Intelligent transportation systems, Machine vision, Motion analysis, Pattern recognition, Speed measurement
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:bth-17163 (URN), 10.1109/ICIEV.2018.8640964 (DOI), 000462610300008 (), 9781538651612 (ISBN)
Conference
Joint 7th International Conference on Informatics, Electronics and Vision and 2nd International Conference on Imaging, Vision and Pattern Recognition, ICIEV-IVPR 2018; Kitakyushu; Japan; 25-28 June 2018
Available from: 2018-10-23 Created: 2018-10-23 Last updated: 2024-10-21. Bibliographically approved
5. Analytical Modeling for a Video-Based Vehicle Speed Measurement Framework
2020 (English). In: Sensors, E-ISSN 1424-8220, Vol. 20, no 1, article id 160. Article in journal (Refereed), Published
Abstract [en]

Traffic analyses, particularly speed measurements, are highly valuable in terms of road safety and traffic management. In this paper, an analytical model is presented to measure the speed of a moving vehicle using an off-the-shelf video camera. The method utilizes the temporal sampling rate of the camera and several intrusion lines in order to estimate the probability density function (PDF) of a vehicle's speed. The proposed model provides not only an accurate estimate of the speed, but also the possibility of being able to study the performance boundaries with respect to the camera framerate as well as the placement and number of intrusion lines in advance. This analytical model is verified by comparing its PDF outputs with the results obtained via a simulation of the corresponding movements. In addition, as a proof-of-concept, the proposed model is implemented for a video-based vehicle speed measurement system. The experimental results demonstrate the model's capability in terms of taking accurate measurements of the speed via a consideration of the temporal sampling rate and lowering the deviation by utilizing more intrusion lines. The analytical model is highly versatile and can be used as the core of various video-based speed measurement systems in transportation and surveillance applications.
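The role of the temporal sampling rate can be sketched with a simplified two-line model. This is an assumption-laden illustration, not the paper's analytical PDF: it assumes each intrusion-line crossing is only observed at the next captured frame, so the true crossing instant is uniform within one frame period before its observed frame. The function names and the Monte Carlo approach are mine.

```python
import numpy as np

def speed_interval(d, f, k):
    """Hard speed bounds when two intrusion-line crossings, d metres
    apart, are observed k >= 2 frames apart at framerate f (fps):
    true travel time lies in ((k-1)/f, (k+1)/f)."""
    return d * f / (k + 1), d * f / (k - 1)

def speed_pdf_samples(d, f, k, n=100_000, seed=0):
    """Monte Carlo sketch of the speed PDF under the uniform-offset
    assumption: sample the unobserved crossing instants and convert
    the resulting travel times to speeds."""
    rng = np.random.default_rng(seed)
    t0 = 0.0 - rng.uniform(0.0, 1.0 / f, n)    # true first crossing
    t1 = k / f - rng.uniform(0.0, 1.0 / f, n)  # true second crossing
    return d / (t1 - t0)
```

For d = 10 m, f = 25 fps and k = 10 frames, the nominal speed is 25 m/s but the samples spread over roughly 22.7 to 27.8 m/s, showing why a higher framerate or more widely spaced lines tightens the estimate.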

Place, publisher, year, edition, pages
MDPI, 2020
Keywords
vehicle speed measurement; temporal sampling; analytical modeling; motion analysis; pattern recognition; image processing
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:bth-17160 (URN), 10.3390/s20010160 (DOI), 000510493100160 (), 2-s2.0-85077333600 (Scopus ID)
Note

open access

Available from: 2018-10-23 Created: 2018-10-23 Last updated: 2023-02-16. Bibliographically approved
6. Vehicle speed measurement model for video-based systems
2019 (English). In: Computers & Electrical Engineering, ISSN 0045-7906, E-ISSN 1879-0755, Vol. 76, p. 238-248. Article in journal (Refereed), Published
Abstract [en]

Advanced analysis of road traffic data is an essential component of today's intelligent transportation systems. This paper presents a video-based vehicle speed measurement system based on a proposed mathematical model using a movement pattern vector as an input variable. The system uses the intrusion line technique to measure the movement pattern vector with low computational complexity. Further, a mathematical model is introduced to generate the probability density function (PDF) of a vehicle's speed, which improves the speed estimate. As a result, the presented model provides a reliable framework with which to optically measure the speeds of passing vehicles with high accuracy. As a proof of concept, the proposed method was tested on a busy highway under realistic circumstances. The results were validated by a GPS (Global Positioning System)-equipped car and the traffic regulations at the measurement site. The experimental results are promising, with an average error of 1.77 % in challenging scenarios.

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Intelligent transportation systems; Machine vision; Motion analysis; Pattern recognition; Speed measurement system
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:bth-17161 (URN), 10.1016/j.compeleceng.2019.04.001 (DOI), 000470954900019 ()
Note

open access

Available from: 2018-10-23 Created: 2018-10-23 Last updated: 2021-01-14. Bibliographically approved
7. Vehicle classification based on multiple fuzzy c-means clustering using dimensions and speed features
2018 (English). In: Procedia Computer Science, Elsevier, 2018, Vol. 126, p. 1344-1350. Conference paper, Published paper (Refereed)
Abstract [en]

Vehicle classification has a significant use in traffic surveillance and management. There are many methods proposed to accomplish this task using a variety of sensors. In this paper, a method based on fuzzy c-means (FCM) clustering is introduced that uses the dimensions and speed features of each vehicle. This method exploits the distinction in dimension features and traffic regulations for each class of vehicles by using multiple FCM clusterings and initializing the partition matrices of the respective classifiers. The experimental results demonstrate that the proposed approach is successful in clustering vehicles from different classes with similar appearances. In addition, it is fast and efficient for big data analysis.
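The FCM core used above can be sketched in a few lines. This is a minimal generic fuzzy c-means, assuming the standard update equations with fuzzifier m = 2; it omits the paper's multiple-clustering scheme and partition-matrix initialization from traffic regulations. The feature layout (e.g. rows of [length, width, speed]) is a hypothetical example.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. X: (n_samples, n_features).
    Returns (centers, U) where U[i, j] is the membership of
    sample i in cluster j (rows of U sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Cluster centers: membership-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distances from every sample to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        d2 = np.maximum(d2, 1e-12)  # guard against division by zero
        # Standard membership update: u_ij ∝ d_ij^(-2/(m-1)).
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

Hardening memberships with `U.argmax(axis=1)` then yields a crisp class label per vehicle, as in the paper's evaluation.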

Place, publisher, year, edition, pages
Elsevier, 2018. p. 7
Series
Procedia Computer Science, ISSN 1877-0509
Keywords
Vehicle classification, Fuzzy c-means clustering, Intelligent transportation systems, Pattern recognition
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:bth-17165 (URN), 10.1016/j.procs.2018.08.085 (DOI), 000525954400142 ()
Conference
22nd International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES2018), Belgrade
Note

open access

Available from: 2018-10-23 Created: 2018-10-23 Last updated: 2022-11-04. Bibliographically approved
8. Vehicle Detection in Aerial Images Based on 3D Depth Maps and Deep Neural Networks
2021 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 8381-8391. Article in journal (Refereed), Published
Abstract [en]

Object detection in aerial images, particularly of vehicles, is highly important in remote sensing applications including traffic management, urban planning, parking space utilization, surveillance, and search and rescue. In this paper, we investigate the ability of three-dimensional (3D) feature maps to improve the performance of a deep neural network (DNN) for vehicle detection. First, we propose a DNN based on YOLOv3 with various base networks, including DarkNet-53, SqueezeNet, MobileNet-v2, and DenseNet-201. We assessed the base networks and their performance in combination with YOLOv3 on efficiency, processing time, and the memory that each architecture required. In the second part, 3D depth maps were generated using pairs of aerial images and their parallax displacement. Next, a fully connected neural network (fcNN) was trained on 3D feature maps of trucks, semi-trailers and trailers. A cascade of these networks was then proposed to detect vehicles in aerial images. Upon the DNN detecting a region, coordinates and confidence levels were used to extract the corresponding 3D features. The fcNN used 3D features as the input to improve the DNN performance. The data set used in this work was acquired from numerous flights of an unmanned aerial vehicle (UAV) across two industrial harbors over two years. The experimental results show that 3D features improved the precision of DNNs from 88.23 % to 96.43 % and from 97.10 % to 100 % when using DNN confidence thresholds of 0.01 and 0.05, respectively. Accordingly, the proposed system was able to successfully remove 72.22 % to 100 % of false positives from the DNN outputs. These results indicate the importance of utilizing 3D features to improve object detection in aerial images for future research.
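The cascade logic described above can be sketched as a simple post-filter. This is a structural illustration only: the `Detection` type, the `verify` callable (standing in for the fcNN run on the region's 3D features), and the threshold value are assumptions, not the paper's interfaces.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple        # (x, y, w, h) in image coordinates (assumed layout)
    confidence: float # DNN confidence for this region

def cascade_filter(detections, verify, dnn_threshold=0.05):
    """Keep a detection only when its DNN confidence passes the
    threshold AND the secondary 3D-feature verifier accepts the
    region. `verify` is a hypothetical stand-in for the fcNN:
    it maps a box to True (vehicle) or False (false positive)."""
    return [d for d in detections
            if d.confidence >= dnn_threshold and verify(d.box)]
```

The second stage never adds detections; it can only remove DNN false positives, which is why precision rises while recall is bounded by the first stage.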

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Convolutional neural networks, 3D depth maps, Object detection, Aerial images
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:bth-20923 (URN), 10.1109/ACCESS.2021.3049741 (DOI), 000608205500001 (), 2-s2.0-85099218070 (Scopus ID)
Note

open access

Available from: 2021-01-14 Created: 2021-01-14 Last updated: 2021-02-04. Bibliographically approved

Open Access in DiVA
fulltext: FULLTEXT02.pdf (application/pdf, 62846 kB)

By author/editor: Javadi, Saleh
By organisation: Department of Mathematics and Natural Sciences
