Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics. ORCID iD: 0000-0003-3887-5972
Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics. ORCID iD: 0000-0003-4327-117X
2017 (English). In: Sensors, E-ISSN 1424-8220, Vol. 17, no. 3, p. 620. Article in journal (Refereed). Published.
Abstract [en]

Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but these solutions assume that the fill factor is known. However, most image sensor manufacturers keep the fill factor an industrial secret because of its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from a single arbitrary image. The virtual response function of the imaging process and the sensor irradiance are estimated from a set of generated virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated as the conditional minimum of this function. The method is verified on images from two datasets. The low standard deviation of the fill factors estimated from the individual images of each camera shows that the method estimates the fill factor correctly, with high stability and accuracy, from one single arbitrary image.
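
For orientation, the sketch below outlines the kind of pipeline the abstract describes: generate a stack of virtual images from the single input, fuse them into a radiance map to obtain global intensities, and take the fill factor at the conditional minimum of a fitting error. This is a minimal Python sketch only; the virtual-gain model, the hat-shaped fusion weights and the squared-error criterion are assumed placeholders, not the authors' implementation.

```python
import numpy as np

def generate_virtual_images(img, gains):
    """Simulate a stack of 'virtual' images from one input by rescaling
    intensities (a crude stand-in for the virtual imaging process)."""
    return [np.clip(img.astype(np.float64) * g, 0.0, 255.0) for g in gains]

def fuse_to_radiance(stack, gains):
    """Fuse the virtual stack into one radiance map using hat-shaped
    weights, as in classic HDR fusion."""
    stack = np.stack(stack)                                  # (N, H, W)
    w = np.clip(1.0 - 2.0 * np.abs(stack / 255.0 - 0.5), 1e-3, None)
    log_e = np.log(stack + 1.0) - np.log(np.asarray(gains))[:, None, None]
    return np.exp((w * log_e).sum(axis=0) / w.sum(axis=0))

def estimate_fill_factor(img, candidates=np.linspace(0.2, 1.0, 81)):
    """Return the candidate fill factor giving the conditional minimum of a
    simple fitting error between original and fused 'global' intensities."""
    img = img.astype(np.float64)
    errors = []
    for ff in candidates:
        gains = [ff * k for k in (0.5, 1.0, 2.0)]            # assumed virtual gains
        stack = generate_virtual_images(img, gains)
        radiance = fuse_to_radiance(stack, gains)
        global_intensity = 255.0 * radiance / (radiance.max() + 1e-12)
        errors.append(float(np.mean((global_intensity - img) ** 2)))
    return float(candidates[int(np.argmin(errors))])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    print("estimated fill factor:", estimate_fill_factor(demo))
```

In the paper a non-linear function is fitted to the original and global intensity values and its conditional minimum is taken; the mean-squared error above merely stands in for that fitted function.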

Place, publisher, year, edition, pages
MDPI, 2017. Vol. 17, no. 3, p. 620.
Keywords [en]
fill factor; virtual image; image sensor; pipeline; virtual response function; sensor irradiance
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:bth-14046
DOI: 10.3390/s17030620
ISI: 000398818700193
OAI: oai:DiVA.org:bth-14046
DiVA, id: diva2:1084300
Note

open access

Available from: 2017-03-24. Created: 2017-03-24. Last updated: 2022-02-10. Bibliographically approved.
In thesis
1. Biological Inspired Deformable Image Sensor
2019 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Nowadays, cameras are everywhere thanks to tremendous progress in sensor technology. However, their performance is still far from what we experience with our own eyes. Studies of the evolutionary process show how the sensor arrangement of the retina in human vision has differentiated from that of other species and has formed a specific combination of sub-arrangements, from hexagonal to elliptical ones. There are three major differences between our visual cell arrangement and current camera sensors: the sub-arrangements, the pixel form and the pixel density.

Despite the advances in sensor technology, we face limitations in its further development, i.e., in making cameras closer to the visual system. This is due to the optical diffraction limit, which prevents us from increasing the sensor resolution, and to the rigidity of the hardware implementation, which prevents us from changing the image sensor after manufacturing. This thesis investigates the possibilities of overcoming such limitations, with the intention of finding a sensory solution that is closer to the visual system than current ones.

Breaking the diffraction barrier and solving the rigidity problem are achieved simultaneously by introducing and estimating virtual subpixels. A statistical framework, consisting of a local learning model and Bayesian inference for predicting the incident photons captured on each such subpixel, is used to resample the image captured by any current camera sensor. The validity of the proposed idea is demonstrated by investigating the virtual variation of pixel size and fill factor; the results show significant changes in dynamic range and tonal levels in relation to this variation. For example, for both monochrome and color images, the results show that by virtually increasing the fill factor to 100%, the dynamic range of the images is widened and the tonal levels are enriched significantly beyond 256 levels for each channel.
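
As a rough illustration of the resampling step (not the thesis framework itself), the sketch below splits each physical pixel into a k x k grid of virtual subpixels and estimates every subpixel from its four nearest physical pixels; plain bilinear interpolation stands in for the local learning model and Bayesian inference, and the subpixel factor k is an arbitrary choice.

```python
import numpy as np

def to_virtual_subpixels(img, k=3):
    """Split every physical pixel into a k x k grid of virtual subpixels and
    estimate each subpixel value from its four nearest physical pixels."""
    img = img.astype(np.float64)
    h, w = img.shape
    ys = (np.arange(h * k) + 0.5) / k - 0.5      # subpixel centres in pixel units
    xs = (np.arange(w * k) + 0.5) / k - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    a = img[y0][:, x0]          # top-left neighbours, shape (h*k, w*k)
    b = img[y0][:, x0 + 1]      # top-right neighbours
    c = img[y0 + 1][:, x0]      # bottom-left neighbours
    d = img[y0 + 1][:, x0 + 1]  # bottom-right neighbours
    return (1 - wy) * ((1 - wx) * a + wx * b) + wy * ((1 - wx) * c + wx * d)

if __name__ == "__main__":
    tiny = np.arange(16, dtype=np.float64).reshape(4, 4)
    print(to_virtual_subpixels(tiny, k=2).shape)  # (8, 8)
```

Varying how the virtual subpixels are grouped back into pixels is what allows the fill factor and pixel size to be varied virtually in software.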

The results of virtually varying the fill factor and pixel size indicate that it is feasible to overcome the rigidity of the image sensor using the software-based method. Inspired by the mosaic in the fovea, the center of the human retina, a hexagonal sub-arrangement and pixel form are proposed for generating images based on the estimated virtual subpixels. Compared to the original square images, not only are the dynamic range and tonal levels improved, but the hexagonal images are also superior in edge detection, i.e., more edge points are detected on the contours of objects in hexagonal images.

Evaluating different sub-arrangements or pixel forms of the image sensor is a challenging task and should be directed towards a more specific task. Since curvature contours contain most of the information related to object perception, and since human vision is highly evolved for detecting curved objects, the task is focused on investigating the impact of curviness on the different pixel forms and sub-arrangements. This is done by comparing two categories of images, with curved versus linear object edges, in pairs of images that have exactly the same content but different contours. The detectability of curviness is estimated for each of the different sensor structures, and the results show that the image on a hexagonal grid with a hexagonal pixel form is the best image type for distinguishing curvature contours in the images.

According to the pattern of pixel tiling, there are two types of pixel sub-arrangements: periodic (e.g., square or hexagonal) and aperiodic (e.g., Penrose). Each type of sub-arrangement is investigated with variable pixel forms and densities. Given at least two generated images of one configuration (i.e., a specific sub-arrangement, pixel form and density), the histogram of gradient orientations of that sensor arrangement shows a stable and specific distribution, which we call the ANgular CHaracteristic of a sensOR structure (ANCHOR). Each ANCHOR has a robust pattern that changes when the sensor sub-arrangement changes. This makes it feasible to plan a sensor sub-arrangement in relation to a specific application and its requirements, and to make it more similar to the biological visual sensory system. To generate such a flexible sensor, a general framework is proposed for virtually deforming the sensor into a certain configuration of sub-arrangement, pixel form and pixel density.
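
A minimal stand-in for the ANCHOR idea is a magnitude-weighted histogram of gradient orientations computed from a generated image; the gradient operator, bin count and normalisation below are illustrative assumptions rather than the thesis implementation. Comparing such histograms across images generated with the same sensor configuration would be the analogue of checking the stability described above.

```python
import numpy as np

def orientation_histogram(img, bins=36):
    """Magnitude-weighted histogram of gradient orientations over an image."""
    gy, gx = np.gradient(img.astype(np.float64))
    angles = np.arctan2(gy, gx)                   # orientations in [-pi, pi]
    mags = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=mags)
    return hist / (hist.sum() + 1e-12)            # normalised distribution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(orientation_histogram(rng.random((64, 64))).round(3))
```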

Assessing the quality difference between images generated by different sensor configurations, or addressing from one configuration to another, generally requires converting one into the other. To overcome this problem, a common space is proposed, implemented as a continuous extension of square or hexagonal images based on orbit functions, for evaluating the quality of images with different arrangements and for addressing from one type of image to another. The evaluation results show that creating such a space is feasible and that it provides a user-friendly tool for addressing an arrangement and assessing the changes between different spatial arrangements; for example, it shows richer intensity variation, more non-linear behavior and a larger dynamic range in the hexagonal images compared to the rectangular images.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2019. p. 207
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 4
Keywords
image sensor, pixel form, sub-arrangements, fill factor, square image, hexagonal image, deformable sensor, quality assessment.
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:bth-17149 (URN)
978-91-7295-366-6 (ISBN)
Public defence
2019-03-14, J1650, Campus Gräsvik, Karlskrona, 13:15 (English)
Opponent
Supervisors
Available from: 2018-12-20. Created: 2018-12-20. Last updated: 2019-03-05. Bibliographically approved.

Open Access in DiVA

fulltext (4078 kB)
File information
File name: FULLTEXT01.pdf
File size: 4078 kB
Checksum (SHA-512): a664c03156c1d36b4e20e5719c611d782f28958eb2852a3cf5e00f449f2f973e276581d50e422a93680ff06868ed3488ecf835f8927e0de49f8bc7a4e42f0d4b
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text: Sensors | Free Full-Text | Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image | HTML

Authority records

Wen, Wei; Khatibi, Siamak
