Publications (10 of 27)
Wen, W. & Khatibi, S. (2018). The impact of curviness on four different image sensor forms and structures. Sensors, 18(2), Article ID 429.
2018 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no. 2, article id 429. Article in journal (Refereed), Published
Abstract [en]

The arrangement and form of the image sensor have a fundamental effect on any further image processing operation and on image visualization. In this paper, we present a software-based method to change the arrangement and form of pixel sensors so that they generate hexagonal pixel forms on a hexagonal grid. We evaluate four different image sensor forms and structures, including the proposed method. A set of 23 pairs of images, randomly chosen from a database of 280 pairs, is used in the evaluation. Each pair of images has the same semantic meaning and general appearance; the major difference between them lies in the sharp transitions in their contours. The curviness variation is estimated from the effect of first- and second-order gradient operations, the Hessian matrix, and critical-point detection on the generated images, which differ in grid structure, pixel form, and virtually increased fill factor, the three major sensor characteristics considered. The results show that the grid structure and pixel form are the first and second most important properties. Several dissimilarity parameters are presented for curviness quantification, among which the use of extremum points gives the most distinctive results. The results also show that the hexagonal image is the best image type for distinguishing contours in the images. © 2018 by the authors. Licensee MDPI, Basel, Switzerland.
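To make the measures named in this abstract concrete, the short sketch below (an illustration only, not the authors' implementation; the function name and thresholds are assumptions) computes first- and second-order gradient maps, the Hessian determinant, and candidate critical points for a grayscale image.

```python
# Illustrative sketch of curviness-related measures: gradients, Hessian, critical points.
import numpy as np
from scipy import ndimage

def curviness_measures(img, sigma=1.0):
    """Return gradient magnitude, Hessian determinant, and a critical-point mask."""
    img = np.asarray(img, dtype=float)
    # First-order (smoothed) derivatives
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    grad_mag = np.hypot(gx, gy)
    # Second-order derivatives -> Hessian components
    hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    det_h = hxx * hyy - hxy ** 2          # sign separates extrema from saddle points
    # Candidate critical points: near-zero gradient, non-degenerate Hessian (thresholds assumed)
    critical = (grad_mag < 1e-3 * grad_mag.max()) & (np.abs(det_h) > 1e-6)
    return grad_mag, det_h, critical
```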

Place, publisher, year, edition, pages
MDPI AG, 2018
Keywords
Critical points, Curviness quantification, Fill factor, Grid structure, Hessian matrix, Hexagonal image, Pixel form, Software-based, Virtual, Image processing, Image sensors, Semantics, Grid structures, Hessian matrices, Pixels
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:bth-15920 (URN), 10.3390/s18020429 (DOI), 000427544000112 (), 2-s2.0-85041511195 (Scopus ID)
Note

open access

Available from: 2018-02-22. Created: 2018-02-22. Last updated: 2018-04-26. Bibliographically approved
Wen, W. & Khatibi, S. (2018). Virtual deformable image sensors: Towards to a general framework for image sensors with flexible grids and forms. Sensors, 18(6), Article ID 1856.
2018 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no. 6, article id 1856. Article in journal (Refereed), Published
Abstract [en]

Our vision system combines different sensor arrangements, from hexagonal to elliptical ones. Inspired by this variation in arrangement types, we propose a general framework that makes it feasible to create virtual deformable sensor arrangements. In the framework, a given sensor arrangement is described by a configuration of three optional variables: the arrangement structure, the pixel form, and the gap factor. We show that the histogram of gradient orientations of a given sensor arrangement has a specific distribution (called ANCHOR), which is obtained from at least two generated images of the configuration. The results show that ANCHORs change their patterns when the arrangement structure changes; in this respect, pixel size changes have a 10-fold larger impact on ANCHORs than gap factor changes. A set of 23 images, randomly chosen from a database of 1805 images, is used in the evaluation, where each image generates twenty-five different images based on the sensor configuration. The robustness of the ANCHOR properties is verified by computing ANCHORs for a total of 575 images with different sensor configurations. We believe that, using the framework and ANCHOR, it becomes feasible to plan a sensor arrangement in relation to a specific application and its requirements, where the arrangement can even be planned as a combination of different ANCHORs. © 2018 by the authors. Licensee MDPI, Basel, Switzerland.
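The ANCHOR distribution is built from histograms of gradient orientations; a minimal version of that building block (an assumption, not the published code; the bin count is arbitrary) is sketched below.

```python
# Illustrative sketch: magnitude-weighted histogram of gradient orientations.
import numpy as np

def gradient_orientation_histogram(img, n_bins=36):
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    angles = np.arctan2(gy, gx)                       # orientations in (-pi, pi]
    weights = np.hypot(gx, gy)                        # weight each pixel by gradient magnitude
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi), weights=weights)
    return hist / (hist.sum() + 1e-12)                # normalise to a distribution
```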

Place, publisher, year, edition, pages
MDPI AG, 2018
Keywords
Deformable sensor, Framework, Hexagonal, HoG, Penrose, Pixel form, Sensor grid, Deformation, Image sensors, Pixels, Histogram of gradients, Sensor arrangements, Sensor configurations, Sensor grids, Specific distribution, Anchors
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering Other Computer and Information Science
Identifiers
urn:nbn:se:bth-16631 (URN), 10.3390/s18061856 (DOI), 000436774300190 (), 2-s2.0-85048303219 (Scopus ID)
Note

open access

Available from: 2018-06-27. Created: 2018-06-27. Last updated: 2018-08-21. Bibliographically approved
Wen, W. & Khatibi, S. (2017). Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image. Sensors, 17(3), 620
2017 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no. 3, p. 620. Article in journal (Refereed), Published
Abstract [en]

Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but these solutions assume that the fill factor is known. However, it is kept as an industrial secret by most image sensor manufacturers because of its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from a single arbitrary image. The virtual response function of the imaging process and the sensor irradiance are estimated from generated virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated as the conditional minimum of this function. The method is verified on images from two datasets. The results show that our method estimates the fill factor correctly, with notable stability and accuracy, from one single arbitrary image, as indicated by the low standard deviation of the fill factors estimated from each of the images and for each camera.
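One step the abstract names is fusing the generated virtual images into a single high dynamic range radiance map. A generic weighted-average fusion of that kind is sketched below as an assumption; the paper's response-function recovery and conditional-minimum step are not reproduced.

```python
# Illustrative sketch: weighted-average fusion of a stack of images into one map,
# trusting mid-range intensities most (a hat-shaped weight on 8-bit data).
import numpy as np

def fuse_virtual_images(virtual_imgs):
    acc = np.zeros_like(np.asarray(virtual_imgs[0], dtype=float))
    wsum = np.zeros_like(acc)
    for img in virtual_imgs:
        x = np.asarray(img, dtype=float) / 255.0
        w = 1.0 - np.abs(x - 0.5) * 2.0        # hat weights in [0, 1]
        acc += w * x
        wsum += w
    return acc / np.maximum(wsum, 1e-12)       # fused "radiance-like" map
```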

Place, publisher, year, edition, pages
MDPI, 2017
Keywords
fill factor; virtual image; image sensor; pipeline; virtual response function; sensor irradiance
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:bth-14046 (URN), 10.3390/s17030620 (DOI), 000398818700193 ()
Note

open access

Available from: 2017-03-24. Created: 2017-03-24. Last updated: 2018-01-13. Bibliographically approved
Wen, W. & Khatibi, S. (2016). Back to basics: Towards novel computation and arrangement of spatial sensory in images. Acta Polytechnica, 56(5), 409-416
2016 (English). In: Acta Polytechnica, ISSN 1210-2709, Vol. 56, no. 5, p. 409-416. Article in journal (Refereed), Published
Abstract [en]

Current cameras have made huge progress in sensor resolution and low-luminance performance. However, we are still far from having an optimal camera as powerful as the human eye. The study of the evolution of our visual system draws attention to two major issues: the form and the density of the sensor. The high contrast and optimal sampling properties of our visual spatial arrangement are related directly to its dense hexagonal form. In this paper, we propose a novel software-based method to create images on a compact, dense hexagonal grid, derived from a simulated square sensor array by a virtual increase of the fill factor and a half-pixel shift. Orbit functions are then proposed for hexagonal image processing. The results show that it is possible to perform image processing operations in the orbit domain and that the generated hexagonal images are superior to square images in detecting curved edges. We believe that orbit-domain image processing has great potential to become the standard processing for hexagonal images.
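The half-pixel shift that turns a square grid into a pseudo-hexagonal one can be illustrated in a few lines (a simplified assumption; the paper additionally increases the fill factor virtually and processes the result with orbit functions).

```python
# Illustrative sketch: shift every second row by half a pixel via linear interpolation,
# producing a pseudo-hexagonal sampling lattice from a square grayscale image.
import numpy as np

def to_pseudo_hexagonal(img):
    out = np.asarray(img, dtype=float).copy()
    # averaging each pixel with its right neighbour samples odd rows at x + 0.5
    out[1::2, :-1] = 0.5 * (out[1::2, :-1] + out[1::2, 1:])
    return out
```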

Place, publisher, year, edition, pages
Czech Technical University in Prague, 2016
Keywords
Convolution, Fill factor, Hexagonal pixel, Hexagonal processing, Hexagonal sensor array, Orbit functions, Orbit transform, Square pixel
National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:bth-13798 (URN), 10.14311/AP.2016.56.0409 (DOI), 000411584300010 (), 2-s2.0-85008440583 (Scopus ID)
Note

open access

Available from: 2017-01-20. Created: 2017-01-20. Last updated: 2018-01-13. Bibliographically approved
Zepernick, H.-J., Iqbal, M. I. & Khatibi, S. (2016). Quality of Experience of Digital Multimedia Broadcasting Services: An Experimental Study. In: 2016 IEEE Sixth International Conference on Communications and Electronics (ICCE). Paper presented at the IEEE 6th International Conference on Communications and Electronics (IEEE ICCE), Jul. 27-29, 2016, Ha Long, Vietnam (pp. 437-442). IEEE
2016 (English). In: 2016 IEEE Sixth International Conference on Communications and Electronics (ICCE), IEEE, 2016, p. 437-442. Conference paper, Published paper (Refereed)
Abstract [en]

Digital multimedia broadcasting (DMB), also known as mobile TV, has been developed as a digital radio transmission technology that supports multimedia services such as TV, radio, and datacasting. In particular, the terrestrial version of DMB, referred to as T-DMB, has been widely deployed in South Korea to deliver multimedia services to mobile devices ranging from smartphones to laptops, car navigation systems, and telematics devices for automobiles. Although T-DMB is claimed to work, in theory, without difficulties in vehicles at speeds of up to 300 km/h, occasional skips and other temporal and spatial artifacts have been observed in practice. In this paper, we provide an experimental study of the Quality of Experience (QoE) of T-DMB with a focus on TV services. The study is based on a measurement campaign conducted in a live T-DMB system in South Korea consisting of TV broadcasters and DMB receivers in vehicles. In particular, a comprehensive subjective test was conducted on the DMB test material obtained in the measurement campaign. A statistical analysis of the user ratings from the subjective tests is reported to quantify the QoE of T-DMB in terms of mean opinion scores (MOSs) and higher-order statistics. The results may be used to develop related QoE models for this type of system and service. In particular, they suggest incorporating insights from higher-order statistics, such as skewness and kurtosis, into QoE modeling rather than considering only the MOS and variance.
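The statistics the study reports (MOS, variance, skewness, kurtosis) can be computed from a vector of subjective ratings as in the following sketch (illustrative only; the ratings shown in the usage comment are made up).

```python
# Illustrative sketch: descriptive statistics of subjective ratings for one test condition.
import numpy as np
from scipy import stats

def rating_statistics(ratings):
    r = np.asarray(ratings, dtype=float)
    return {
        "MOS": r.mean(),
        "variance": r.var(ddof=1),
        "skewness": stats.skew(r),
        "kurtosis": stats.kurtosis(r),   # excess kurtosis by default
    }

# e.g. rating_statistics([5, 4, 4, 3, 5, 2]) for one hypothetical stimulus
```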

Place, publisher, year, edition, pages
IEEE, 2016
National Category
Communication Systems
Identifiers
urn:nbn:se:bth-13681 (URN), 000389228000072 (), 978-1-5090-1801-7 (ISBN)
Conference
IEEE 6th International Conference on Communications and Electronics (IEEE ICCE), Jul. 27-29, 2016, Ha Long, Vietnam
Available from: 2016-12-30. Created: 2016-12-30. Last updated: 2016-12-30. Bibliographically approved
Wen, W. & Khatibi, S. (2016). Towards Measuring of Depth Perception from Monocular Shadow Technique with Application in a Classical Painting. Paper presented at International Conference on Computer and Electrical Engineering (ICCEE), Paris. Journal of Computers, 11, 310-319
2016 (English). In: Journal of Computers, ISSN 1796-203X, Vol. 11, p. 310-319. Article in journal (Refereed), Published
Abstract [en]

Depth perception is one of the important abilities of the human visual system for perceiving the three-dimensional world. The shadow technique, which offers different depth information from different viewing points and is known as Da Vinci stereopsis, has been used in classical paintings. In this paper, we report a method towards measuring the relative depth information stimulated by Da Vinci stereopsis in a classical painting. We set up a positioning array of cameras for capturing images of the portrait using a high-resolution camera, and the changes in shadow areas are measured by characterizing the effects as point and line changes. The results show that the 3D effects of the classical painting are not only a perceptual phenomenon but are also physically tangible and measurable. We confirm the validity of the method by applying it even to a typical single image and comparing the results between the single image and the portrait.
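A toy illustration of measuring how a shadow region changes between viewpoints is sketched below; the fixed threshold and the area-based measure are assumptions, not the authors' point-and-line characterization.

```python
# Illustrative sketch: compare the extent of dark (shadow) pixels between two grayscale views.
import numpy as np

def shadow_area(img_gray, threshold=60.0):
    """Count pixels darker than a threshold as shadow (threshold is a placeholder)."""
    return int(np.count_nonzero(np.asarray(img_gray, dtype=float) < threshold))

def relative_shadow_change(img_a, img_b):
    a, b = shadow_area(img_a), shadow_area(img_b)
    return (b - a) / max(a, 1)   # fractional change between the two viewpoints
```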

Place, publisher, year, edition, pages
Oulu, Finland: Academic Publications, 2016
Keywords
Depth Perception, Da Vinci stereopsis, pictorial cues, classic painting, shadow technique
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:bth-11168 (URN), 000378258000005 ()
Conference
International Conference on Computer and Electrical Engineering (ICCEE), Paris
Available from: 2015-12-10. Created: 2015-12-10. Last updated: 2018-01-10. Bibliographically approved
Wen, W. & Khatibi, S. (2015). A Software Method to Extend Tonal Levels and Widen Tonal Range of CCD Sensor Images. In: 2015 9th International Conference on Signal Processing and Communication Systems (ICSPCS). Paper presented at the 9th International Conference on Signal Processing and Communication Systems (ICSPCS), Dec. 14-16, 2015, Cairns, Australia. IEEE Communications Society
2015 (English). In: 2015 9th International Conference on Signal Processing and Communication Systems (ICSPCS), IEEE Communications Society, 2015. Conference paper, Published paper (Refereed)
Abstract [en]

As one of the important outcomes of the past decades of research on sensor arrays for digital cameras, manufacturers of sensor array technology have responded to the necessity and importance of obtaining an optimal fill factor, which has a great impact on the collection of incident photons on the sensor, with hardware solutions, e.g., by introducing microlenses. However, it is still impossible to reach a fill factor of 100% due to physical limitations in the practical development and manufacturing of digital cameras. This has been a bottleneck for improving the dynamic range and tonal levels of digital cameras, e.g., CCD cameras. In this paper, we propose a software method that not only widens the recordable dynamic range of an image captured by a CCD camera but also extends its tonal levels. In the method, we estimate the fill factor and, through a resampling process, achieve a virtual fill factor of 100%, where the CCD image is rearranged onto a new grid of virtual subpixels. A statistical framework including a local learning model and Bayesian inference is used for estimating the new subpixel intensity values. The most probable subpixel intensity value within each resampled pixel area is used to estimate the pixel intensity values of the new image. The results show that, in comparison to histogram equalization and image contrast enhancement, which are generally used to improve the displayable dynamic range of a single image, the tonal levels and the dynamic range of the image are significantly extended and widened, respectively.
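The resampling idea, splitting each pixel into virtual subpixels and keeping the most probable value per pixel area, can be sketched as follows (a toy assumption; the paper's local learning model and Bayesian inference are not reproduced and a simple interpolation stands in for them).

```python
# Illustrative sketch: split each pixel into s x s virtual subpixels, interpolate subpixel
# values, and keep the modal (most frequent) value within each output pixel area.
import numpy as np
from scipy import ndimage

def resample_via_subpixels(img, s=4):
    img = np.asarray(img, dtype=float)
    fine = ndimage.zoom(img, s, order=1)            # finer grid of virtual subpixels
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            block = np.round(fine[i*s:(i+1)*s, j*s:(j+1)*s]).ravel()
            vals, counts = np.unique(block, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]     # "highest probability" subpixel value
    return out
```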

Place, publisher, year, edition, pages
IEEE Communications Society, 2015
Keywords
CCD sensor, tonal range, fill factor, quantum efficiency
National Category
Communication Systems
Identifiers
urn:nbn:se:bth-12967 (URN), 000380405700050 (), 978-1-4673-8118-5 (ISBN)
Conference
9th International Conference on Signal Processing and Communication Systems (ICSPCS), Dec. 14-16, 2015, Cairns, Australia
Available from: 2016-08-31. Created: 2016-08-30. Last updated: 2016-09-20. Bibliographically approved
Wen, W. & Khatibi, S. (2015). Novel Software-based Method to Widen Dynamic Range of CCD Sensor Images. In: Yu-Jin Zhang (Ed.). Paper presented at the International Conference on Image and Graphics 2015, Tianjin, China (pp. 572-583). Springer, 9218
2015 (English). In: / [ed] Yu-Jin Zhang, Springer, 2015, Vol. 9218, p. 572-583. Conference paper, Published paper (Refereed)
Abstract [en]

In the past twenty years, CCD sensors have made huge progress in improving resolution and low-light performance through hardware. However, due to the physical limits of sensor design and fabrication, the fill factor has become the bottleneck for improving the quantum efficiency of CCD sensors and thereby widening the dynamic range of images. In this paper, we propose a novel software-based method to widen the dynamic range by a virtual increase of the fill factor achieved through a resampling process. The CCD images are rearranged onto a new grid of virtual pixels composed of subpixels. A statistical framework consisting of a local learning model and Bayesian inference is used to estimate the new subpixel intensities. CCD images with known, different fill factors were obtained; new resampled images were then computed and compared to the respective CCD and optical images. The results show that the proposed method can significantly widen the recordable dynamic range of CCD images and virtually increase the fill factor to 100%.
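A simple way to compare the recordable dynamic range of an original image and its resampled counterpart, in dB, is sketched below (illustrative only; it is not the paper's evaluation procedure).

```python
# Illustrative sketch: recordable dynamic range of an image in dB (max over min non-zero value).
import numpy as np

def dynamic_range_db(img):
    v = np.asarray(img, dtype=float)
    v = v[v > 0]
    return 20.0 * np.log10(v.max() / v.min()) if v.size else 0.0

# e.g. dynamic_range_db(resampled) - dynamic_range_db(original) > 0 would indicate widening
```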

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords
Dynamic range, Fill factor, CCD sensors, Sensitive area, Quantum efficiency
National Category
Signal Processing
Identifiers
urn:nbn:se:bth-11169 (URN), 10.1007/978-3-319-21963-9_53 (DOI), 978-3-319-21963-9 (ISBN)
Conference
International Conference on Image and Graphics 2015, Tianjin, China
Available from: 2015-12-11. Created: 2015-12-11. Last updated: 2017-01-10. Bibliographically approved
Siddiqui, R. & Khatibi, S. (2015). Robust Visual Odometry Estimation of Road Vehicle from Dominant Surfaces for Large Scale Mapping. IET Intelligent Transport Systems, 9(3), 314-322
2015 (English). In: IET Intelligent Transport Systems, ISSN 1751-956X, E-ISSN 1751-9578, Vol. 9, no. 3, p. 314-322. Article in journal (Refereed), Published
Abstract [en]

Every urban environment contains a rich set of dominant surfaces that can provide a solid foundation for visual odometry estimation. In this work, visual odometry is robustly estimated by computing the motion of a camera mounted on a vehicle. The proposed method first identifies a planar region and dynamically estimates the plane parameters. The candidate region and estimated plane parameters are then tracked in the subsequent images, and an incremental update of the visual odometry is obtained. The proposed method is evaluated on a navigation dataset of stereo images taken by a car-mounted camera driven through a large urban environment. The consistency and resilience of the method have also been evaluated on an indoor robot dataset. The results suggest that the proposed visual odometry estimation can robustly recover the motion by tracking a dominant planar surface in a Manhattan environment. In addition to the motion estimation solution, a set of strategies is discussed for mitigating the problematic factors arising from the unpredictable nature of the environment. The analysis of the results, as well as the dynamic environmental strategies, indicates a strong potential for the method to be part of an autonomous or semi-autonomous system.
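A hedged sketch of the general idea, recovering incremental camera motion from a tracked dominant plane via a homography, is given below using standard OpenCV calls; the feature choice, matching, and plane selection here are placeholders, not the published pipeline.

```python
# Illustrative sketch: estimate motion candidates between two frames from a dominant plane.
import cv2
import numpy as np

def incremental_motion(prev_gray, curr_gray, K):
    """Return candidate rotations/translations (and plane normals) between two frames."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC homography: inliers approximate the dominant planar surface
    H, mask = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)
    # Decompose into candidate (R, t, n); disambiguation is application-specific
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals
```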

Place, publisher, year, edition, pages
The Institution of Engineering and Technology, 2015
Keywords
object tracking, road safety, distance measurement, stereo image processing, cameras, road vehicles, pose estimation, motion estimation
National Category
Signal Processing Computer Sciences
Identifiers
urn:nbn:se:bth-6319 (URN), 10.1049/iet-its.2014.0100 (DOI), 000351633300009 (), oai:bth.se:forskinfo23C6F06F396E6D21C1257DAF004FFEA3 (Local ID, Archive number, OAI)
Available from: 2015-05-26. Created: 2014-12-15. Last updated: 2018-01-11. Bibliographically approved
Siddiqui, R. & Khatibi, S. (2014). Bio-inspired Metaheuristic based Visual Tracking and Ego-motion Estimation. Paper presented at the International Conference on Pattern Recognition Applications and Methods (ICPRAM), Angers, France. SCITEPRESS
2014 (English). Conference paper, Published paper (Refereed)
Abstract [en]

The problem of robust extraction of ego-motion from a sequence of images for an eye-in-hand camera configuration is addressed. A novel approach to planar template-based tracking is proposed, which performs a non-linear image alignment and a planar similarity optimization to recover camera transformations from planar regions of a scene. The planar region tracking problem is solved as a motion optimization problem by maximizing the similarity among the planar regions of the scene. The optimization process employs an evolutionary metaheuristic in order to address the problem within a large non-linear search space. The proposed method is validated on image sequences from real as well as synthetic image datasets and is found to be successful in recovering the ego-motion. A comparative analysis of the proposed method with various other state-of-the-art methods reveals that the algorithm succeeds in tracking the planar regions robustly and is comparable to the state of the art. Such an application of evolutionary metaheuristics to complex visual navigation problems can provide a different perspective and could help improve existing methods.
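A minimal particle-swarm search of the kind the abstract describes is sketched below, reduced here to a 2-D translation that maximises normalised cross-correlation between a template and an image (an assumption, far simpler than the paper's non-linear warp optimisation).

```python
# Illustrative sketch: particle swarm optimisation of a template's (row, col) placement by NCC.
import numpy as np

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pso_track(template, image, n_particles=30, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    th, tw = template.shape
    bounds = np.array([image.shape[0] - th, image.shape[1] - tw], dtype=float)
    pos = rng.uniform(0, 1, (n_particles, 2)) * bounds
    vel = np.zeros_like(pos)

    def score(p):
        y, x = int(p[0]), int(p[1])
        return ncc(template, image[y:y + th, x:x + tw])

    pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, 1))
        # inertia + cognitive + social terms (standard PSO update)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, bounds)
        vals = np.array([score(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest  # (row, col) of the best-matching placement
```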

Place, publisher, year, edition, pages
SCITEPRESS, 2014
Keywords
Camera Tracking, Visual Odometry, Planar Template based Tracking, Particle Swarm Optimization.
National Category
Signal Processing Computer Sciences
Identifiers
urn:nbn:se:bth-6478 (URN), 10.5220/0004811105690579 (DOI), oai:bth.se:forskinfo4BBA2C8A69DD0B45C1257DAF004F8A63 (Local ID, Archive number, OAI)
Conference
International Conference on Pattern Recognition Applications and Methods (ICPRAM), Angers, France
Available from: 2014-12-17. Created: 2014-12-15. Last updated: 2018-01-11. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-4327-117x
