Novel Software-based Method to Widen Dynamic Range of CCD Sensor Images
Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems. ORCID iD: 0000-0003-4327-117X
2015 (English). In: / [ed] Yu-Jin Zhang, Springer, 2015, Vol. 9218, p. 572-583. Conference paper, Published paper (Refereed)
Abstract [en]

Over the past twenty years, CCD sensors have made huge progress in resolution and low-light performance through hardware improvements. However, due to physical limits of sensor design and fabrication, the fill factor has become the bottleneck for improving the quantum efficiency of CCD sensors and thereby widening the dynamic range of images. In this paper we propose a novel software-based method to widen the dynamic range by virtually increasing the fill factor through a resampling process. The CCD images are rearranged onto a new grid of virtual pixels composed of subpixels. A statistical framework consisting of a local learning model and Bayesian inference is used to estimate the new subpixel intensities. CCD images with known, different fill factors were obtained; new resampled images were then computed and compared to the respective CCD and optical images. The results show that the proposed method can significantly widen the recordable dynamic range of CCD images and virtually increase the fill factor to 100 %.
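The abstract does not spell out the estimator, so the following is only a toy sketch of the resampling idea: each CCD pixel is split into an s × s grid of virtual subpixels, and each subpixel intensity is estimated from a Gaussian-weighted neighborhood of the original pixel values. The function name, the Gaussian weighting, and the parameter choices are all illustrative assumptions standing in for the paper's local-learning/Bayesian framework.

```python
import numpy as np

def resample_to_subpixels(img, s=3, sigma=0.8):
    """Toy sketch (NOT the paper's method): split each pixel of a 2-D
    image into an s x s grid of virtual subpixels and estimate each
    subpixel from a Gaussian-weighted average of nearby pixel centers."""
    h, w = img.shape
    out = np.zeros((h * s, w * s), dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]  # coordinates of original pixel centers
    for i in range(h * s):
        for j in range(w * s):
            # Position of this subpixel's center in original-pixel units.
            cy = (i + 0.5) / s - 0.5
            cx = (j + 0.5) / s - 0.5
            d2 = (ys - cy) ** 2 + (xs - cx) ** 2
            wgt = np.exp(-d2 / (2.0 * sigma ** 2))
            out[i, j] = (wgt * img).sum() / wgt.sum()
    return out
```

Because each subpixel is a weighted average, the resampled image can take intermediate intensity values that the original quantized grid could not represent, which is the intuition behind the widened recordable range.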

Place, publisher, year, edition, pages
Springer, 2015. Vol. 9218, p. 572-583
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords [en]
Dynamic range, Fill factor, CCD sensors, Sensitive area, Quantum efficiency
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:bth-11169
DOI: 10.1007/978-3-319-21963-9_53
ISBN: 978-3-319-21963-9 (print)
OAI: oai:DiVA.org:bth-11169
DiVA, id: diva2:881572
Conference
International Conference on Image and Graphics 2015, Tianjin, China
Available from: 2015-12-11. Created: 2015-12-11. Last updated: 2018-12-20. Bibliographically approved
In thesis
1. Biological Inspired Deformable Image Sensor
2019 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Nowadays, cameras are everywhere thanks to tremendous progress in sensor technology. However, their performance is far from what we experience with our eyes. Study of the evolutionary process shows how the sensor arrangement of the retina in human vision has differentiated from that of other species and has formed into a specific combination of sub-arrangements, from hexagonal to elliptical ones. There are three major differences between our visual cell arrangement and current camera sensors: the sub-arrangements, the pixel form, and the pixel density.

Despite the advances in sensor technology, we face limitations in its further development, i.e. in making cameras closer to the visual system. This is due to the optical diffraction limit, which prevents us from increasing the sensor resolution, and the rigidity of hardware implementation, which prevents us from changing the image sensor after manufacturing. In this thesis, the possibilities to overcome such limitations are investigated, with the intention of finding a sensory solution closer to the visual system than current ones.

Breaking the diffraction barrier and solving the rigidity problem are achieved simultaneously by introducing and estimating virtual subpixels. A statistical framework consisting of a local learning model and Bayesian inference, which predicts the incident photons captured on each such subpixel, is used to resample the image captured by any current camera sensor. By investigating the virtual variation of pixel size and fill factor, the validity of the proposed idea is demonstrated: the results show significant changes of dynamic range and tonal levels in relation to the variation. As an example, for both monochrome and color images, the results show that by a virtual increase of the fill factor to 100 %, the dynamic range of the images is widened and the tonal levels are enriched significantly beyond 256 levels per channel.
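The claim that tonal levels grow beyond 256 per channel can be illustrated with a minimal sketch: averaging neighborhoods of an 8-bit image (a crude stand-in for the thesis's subpixel estimation, which is not detailed in this abstract) produces intermediate values between the original integer levels. The helper name and the 2 × 2 averaging are illustrative assumptions.

```python
import numpy as np

def tonal_levels(img):
    # Number of distinct intensity values -- a crude proxy for the
    # "tonal levels" metric discussed in the thesis.
    return np.unique(img).size

# Hypothetical 8-bit test image (values 0..255).
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32)).astype(float)

# Averaging overlapping 2x2 neighbourhoods yields quarter-step values,
# so the output can carry more distinct tonal levels than the 8-bit input.
avg = (img[:-1, :-1] + img[1:, :-1] + img[:-1, 1:] + img[1:, 1:]) / 4.0
```

The input can have at most 256 distinct levels, while the averaged image takes values on a four-times-finer grid; this mirrors, in miniature, how resampling onto estimated subpixels can enrich tonal resolution.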

The results of the virtual variation of fill factor and pixel size indicate that it is feasible to overcome the rigidity of the image sensor using the software-based method. Inspired by the mosaic in the fovea, the center of the human retina, a hexagonal sub-arrangement and pixel form are proposed to generate images based on the estimated virtual subpixels. Compared to the original square images, not only are the dynamic range and tonal levels improved, but the hexagonal images are also superior in edge detection, i.e. more edge points on the contours of objects are detected in hexagonal images.

The evaluation of different sub-arrangements or pixel forms of the image sensor is challenging and should be directed toward a more specific task. Since curved contours carry most of the information related to object perception, and human vision is highly evolved to detect curved objects, the task focuses on investigating the impact of curviness on the different pixel forms and sub-arrangements by comparing two categories of images, with curved versus linear object edges, in pairs of images that have exactly the same content but different contours. The detectability of curviness for each of the different sensor structures is estimated, and the results show that an image on a hexagonal grid with hexagonal pixel form is the best image type for distinguishing curved contours.

According to the pattern of pixel tiling, there are two types of pixel sub-arrangements: periodic (e.g. square or hexagonal) and aperiodic (e.g. Penrose). Each type is investigated with variable pixel forms and densities. Given at least two generated images of one configuration (i.e. a specific sub-arrangement, pixel form, and density), the histogram of gradient orientations of a certain sensor arrangement shows a stable and specific distribution, which we call the ANgular CHaracteristic of a sensOR structure (ANCHOR). Each ANCHOR has a robust pattern that changes with the sensor sub-arrangement. This makes it feasible to plan a sensor sub-arrangement in relation to a specific application and its requirements, more akin to biological vision sensing. To generate such a flexible sensor, a general framework is proposed for virtually deforming the sensor into a certain configuration of sub-arrangement, pixel form, and pixel density.
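The ANCHOR descriptor is built on a histogram of gradient orientations. A minimal sketch of such a histogram on a plain square grid is given below; the function name, bin count, and magnitude weighting are illustrative assumptions, and the thesis's actual computation over hexagonal or Penrose sub-arrangements is not reproduced here.

```python
import numpy as np

def gradient_orientation_histogram(img, n_bins=36):
    """Sketch of an ANCHOR-style descriptor: magnitude-weighted
    histogram of gradient orientations over a 2-D image, normalized
    to a distribution. The thesis observes that this distribution is
    stable for a given sensor sub-arrangement."""
    gy, gx = np.gradient(img.astype(float))      # row- and column-gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.arctan2(gy, gx)                     # orientation in [-pi, pi]
    # Map orientations onto n_bins equal angular bins.
    bins = ((ang + np.pi) / (2.0 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Comparing such normalized distributions between two images generated under the same sensor configuration is, in spirit, how a stable per-arrangement signature could be observed.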

Assessing the quality difference between images generated by different sensor configurations, or addressing from one configuration to another, generally requires converting one into the other. To overcome this problem, a common space is proposed by implementing a continuous extension of square or hexagonal images based on the orbit function, for evaluating the quality of images with different arrangements and for addressing from one image type to another. The evaluation results show that the creation of such a space is feasible; it provides a convenient tool to address an arrangement and assess the changes between different spatial arrangements, showing, for example, richer intensity variation, nonlinear behavior, and a larger dynamic range in hexagonal images compared to rectangular ones.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2019. p. 207
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 4
Keywords
image sensor, pixel form, sub-arrangements, fill factor, square image, hexagonal image, deformable sensor, quality assessment.
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:bth-17149
ISBN: 978-91-7295-366-6
Public defence
2019-03-14, J1650, Campus Gräsvik, Karlskrona, 13:15 (English)
Available from: 2018-12-20. Created: 2018-12-20. Last updated: 2019-03-05. Bibliographically approved

Open Access in DiVA

ICIS2015 (906 kB), 158 downloads
File name: FULLTEXT01.pdf
File size: 906 kB
Checksum (SHA-512): b8f9de1cb5fe900ede6ad4401667d4c86dd432279c177b5f3f255f27ca6d3770f0b2bc49ef7134f5db4a7c974b8c9202ef669dceb65416be9d539031bc7083f9
Type: fulltext. Mimetype: application/pdf


Authority records
Wen, Wei; Khatibi, Siamak

