The Modern Digital Cameras


Introduction

In modern digital cameras, two types of image sensor technology are used: the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) image sensor (Abanmy et al., 2005). Both technologies were created in the early years of the semiconductor industry. The main intention behind these sensors was to replace film-based cameras with electronic devices that could be manufactured in standard semiconductor processes. The initial attempts to design image sensors were based on the nMOS and pMOS processes (Burrough & McDonnell, 2008). In 1963, Morrison reported what can be regarded as the first successful MOS imaging sensor; this was later improved by Horton in 1964 and Schuster in 1966. Before the invention of semiconductor-based imagers, the main photosensitive elements in use were phototransistors and n-p-n junctions (Burrough & McDonnell, 2008). In 1967, Weckler introduced the photon flux integration mode, which is widely used in CMOS imaging sensors and became the basis for the modern CMOS imager. Building on this principle, Noble developed the first 100x100 pixel array, using an in-pixel source-follower transistor to amplify the charge (Evans, 2006), an approach still in use today. Since the 1960s there has been significant progress in this field in terms of photosensing and pixel design (Burrough & McDonnell, 2008). The early MOS imagers suffered from immature fabrication processes and lacked uniformity between pixels as a result of process spread, which produced high levels of pattern noise in the images.
It therefore emerged that the applications of the MOS imagers were limited, and in the 1970s the CCD imaging device was developed by Boyle and Smith at Bell Labs (Abanmy et al., 2005). Compared with the MOS imagers, the CCD devices had a simpler structure and a much lower level of fixed-pattern noise, which made them suitable for a wider range of imaging applications. The CCD imagers reached the market in the 1970s but were not fully commercialized until some 15 years later. The first application of CCDs was in video cameras, an application they came to dominate (Jaafar & Priestnall, 2001). Although the CCDs had excellent performance, the fabrication process used in these sensors was dedicated to making photosensing elements rather than transistors, which made it difficult to implement well-performing transistors on the same chip. This could, however, be achieved with the CMOS process (Kost et al., 2004). In 1995, the first high-performance CMOS imaging sensor was developed; it incorporated on-chip timing, control and noise-suppression circuitry. Since then, the use of CMOS sensors has grown rapidly and largely replaced CCD imagers. Megapixel and HDTV technology enables CMOS and CCD cameras to provide video images of higher resolution than analog CCTV cameras (Mather, 2002), improving the level of detail; this technology is mainly used in multi-view streaming.

Figure 1: CCD sensor
Figure 2: CMOS sensor

A number of sensors have been developed in the industry for sensing different parts of the environment, with different levels of accuracy. The table below outlines these sensors and their levels of accuracy.
Table 1: Different sensors and their levels of accuracy

System      | Imaging sensor                                        | Spatial resolution (m)   | Revisit (days) | Spectral bands
SPOT 1-3    | High Resolution Visible sensor (HRV)                  | 10                       | 1-3            | 1
SPOT 1-3    | Panchromatic High Resolution Visible sensor (P HRV/M) | 20                       | 1-3            | 3
SPOT 4      | High Resolution Visible IR sensor (HRVIR)             | 10                       | 1-3            | 1
SPOT 4      | Panchromatic High Resolution Visible sensor (P HRVIR) | 20                       | 1-3            | 4
SPOT 4      | Multispectral VEGETATION                              | 1100                     | 1-3            | 4
SPOT 5      | HRVIR/P, HRVIR/M, HRS, VEGETATION                     | 5 (2.5), 10/20, 10, 1100 | 1-3            | 1, 4, 1, 4
LANDSAT 1-3 | Return Beam Vidicon (RBV), MSS sensor                 | 80                       | 18             | 3, 4
LANDSAT 4-5 | Multispectral scanner (MSS), TM                       | 80                       | 16             | 4, 7
LANDSAT 7   | ETM+                                                  | 15 (panchromatic)        | 16             | 1

Feature extraction from LIDAR imagery

LIDAR data are based on a sequence of laser range measurements from an airborne sensor to points on the surface (Kost et al., 2004). Knowledge of the precise position and orientation of the airborne platform, obtained from differential GPS and inertial navigation systems, combined with the reflected laser beam, allows 3D positioning of the ground points to an accuracy of up to a decimetre (Mather, 2002). Depending on the system used, an accuracy of 0.2 m horizontally and 0.1 m vertically can be achieved in the production of digital surface models (DSMs) from LIDAR imagery. The nature and appearance of the surface model generated from LIDAR allows surface features to be defined and identified both as discrete objects and in terms of their surface roughness (Brebbia & Pascolo, 2003). The process of extracting data from LIDAR imagery depends primarily on the application for which the data are required. The first step in feature extraction from LIDAR imagery is the generation of a DSM.
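The gridding of raw LIDAR points into a DSM can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the function name `grid_dsm`, the highest-return-per-cell rule and the 1 m cell size are illustrative choices, not a method specified in the sources above.

```python
import numpy as np

def grid_dsm(points, cell=1.0):
    """Grid raw LIDAR points (x, y, z) into a digital surface model.

    Each point falls into one cell of a regular grid; the highest
    return in a cell is kept (a common DSM convention), and cells
    with no returns stay NaN so they can be filled by interpolation.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, h in zip(rows, cols, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h
    return dsm

# Three returns: two land in the same 1 m cell, one in another.
pts = np.array([[0.2, 0.3, 5.0],
                [0.6, 0.4, 7.0],
                [2.4, 1.7, 3.0]])
dsm = grid_dsm(pts, cell=1.0)   # 2 x 3 grid; cell (0, 0) keeps 7.0
```

The NaN cells correspond to the "holes" that the text describes filling by re-interpolation of the surface.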
The techniques used to generate the DSM depend on the resolution of the laser scanning, working either from the raw data points or from gridded digital surface models. It is recommended that the slope of the digital surface model be kept below 50 degrees (Kavzoglu & Mather, 2001). Once the DSM has been generated, not all the features on the surface are needed, so the surface is buffered and a mask is applied to remove the unwanted surfaces; the holes left behind are filled by re-interpolation of the surface (Brebbia & Pascolo, 2003). The next concern is the creation of reference surfaces. The most appropriate method for this purpose is to apply mean filters of various sizes and investigate which produces the most appropriate reference surface; the standard deviations of the derived surfaces can then be computed for comparison (Axis Communications, online). The main purpose of the filtering is to understand the smoothing effect on the digital surface model when creating the reference surfaces; the resulting surface depends on the size of the filter used. Because choosing an appropriate filter size may be difficult, an unsupervised image classification can instead be carried out on the digital surface model to achieve a smooth surface (Kavzoglu & Mather, 2001). This process yields a defined number of clusters, allowing the main entities on the surface to be separated. The number of clusters used in the classification normally depends on the range of elevation values in the digital surface model and on the number of entities that need to be differentiated (Nilsson, 2001). The result of the classification process is a set of polygons.
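As an illustration of the mean-filter step, the sketch below smooths a synthetic DSM with box filters of several sizes and compares the standard deviations of the derived reference surfaces. The function name `mean_filter`, the 50x50 synthetic grid and the filter sizes are assumptions for the example, not values from the sources.

```python
import numpy as np

def mean_filter(dsm, size):
    """Smooth a DSM with a square mean (box) filter of side `size`.

    Edges are padded with their nearest values so the output keeps
    the input's shape.
    """
    pad = size // 2
    padded = np.pad(dsm, pad, mode="edge")
    out = np.empty_like(dsm, dtype=float)
    for i in range(dsm.shape[0]):
        for j in range(dsm.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# Synthetic 1 m grid DSM: flat terrain with a 10 m high block "building".
dsm = np.zeros((50, 50))
dsm[20:30, 20:30] = 10.0

# Larger filters smooth more, so the derived reference surface has a
# lower standard deviation; comparing these values guides the choice
# of filter size described in the text.
for size in (3, 9, 15):
    print(size, mean_filter(dsm, size).std())
```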
The reference surface is then subtracted from the LiDAR digital surface model in order to isolate the features that rise above it. These features appear as scattered regions with positive elevation values. A buffer zone is then generated around all regions detected to have a negative elevation value (Zell et al., 2004); the choice of buffer value depends on the resolution of the grid and the density of the features. Areas with a gradient exceeding 50 degrees are also extracted, and a mask is applied together with the buffer zone.

Figure 3: The separation process in the extraction of features

The digital elevation model is finally constructed by replacing the DSM elevations that coincide with the mask with empty data and interpolating across the resulting gaps.

Figure 4: Flowchart for feature extraction from LIDAR imagery

Feature extraction from satellite imagery and aerial images

Satellite imagery is acquired at different levels of resolution, and the data should be collected in both panchromatic and multispectral modes. Before extraction commences, a number of pre-processing steps need to be carried out, including registration of the images, merging of the multispectral and panchromatic images, and selection of the areas of interest (Razavi, 2001). Registration transforms an image from one coordinate system to another, allowing the integrated data to be compared. In image fusion, the two images are merged to generate a composite image with a high level of detail (Bain, 2007). Principal component analysis is the recommended method of image fusion since it makes use of all four bands.
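The principal-component fusion step can be sketched as below: the first principal component of the multispectral bands (which carries most of the spatial detail) is replaced by the statistics-matched panchromatic band, and the transform is inverted. This is a minimal numpy sketch of the general PCA pan-sharpening idea; the function name `pca_fusion` and its interface are assumptions, not the source's implementation.

```python
import numpy as np

def pca_fusion(ms, pan):
    """Fuse a multispectral image with a higher-detail panchromatic band
    using principal component analysis.

    ms:  (rows, cols, bands) multispectral image, already resampled and
         registered to the panchromatic grid.
    pan: (rows, cols) panchromatic band.
    """
    rows, cols, bands = ms.shape
    x = ms.reshape(-1, bands).astype(float)
    mean = x.mean(axis=0)
    xc = x - mean
    # Principal components from the band covariance matrix.
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]   # largest variance first
    pcs = xc @ vecs
    # Match the pan band to PC1's mean and spread, then substitute it.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    # Invert the transform to get the fused multispectral image.
    fused = pcs @ vecs.T + mean
    return fused.reshape(rows, cols, bands)
```

Because only PC1 is replaced and its mean is preserved, the per-band means of the fused image stay close to those of the original multispectral image.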
Detecting features in the composite image involves a sequence of segmentation and classification procedures (Razavi, 2001). A multi-resolution segmentation is carried out by applying different parameters such as scale, shape factor and smoothness of the features. To determine the attributes of the image objects, properties such as colour layers, texture and shape can be used; these layers evaluate the mean and the standard deviation of the objects in the images. Based on the co-occurrence matrix, a great number of features can be evaluated (Smith, 2007). Classification rules are defined using fuzzy membership functions and the image-object attribute values. Once the rules have been determined, a class hierarchy is generated and the features can be classified. The output can undergo further processing to smooth the features and assess accuracy (Smith, 2007).

Figure 5: Flowchart for feature extraction from satellite and aerial imagery

Automated and non-automated extraction processes

Non-automated feature extraction from images is based on non-mechanized processes and depends on the mode of interpretation used; visual interpretation of the objects in the images is prevalent in non-automated methods (Murakami et al., 1999). The image is then processed using non-automated techniques such as stereoscopy. Automated feature extraction, on the other hand, is computerized: digitally collected images are processed to extract the required features using customized software (Murakami et al., 1999), and an automated feature-extraction tool allows the workflow to be automated.
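The per-object attributes mentioned above (mean and standard deviation of each image object) can be computed straightforwardly once a segmentation has assigned an integer label to every pixel. A minimal numpy sketch, with `object_attributes` and the toy two-segment image as illustrative assumptions:

```python
import numpy as np

def object_attributes(image, labels):
    """Compute the mean and standard deviation of pixel values for each
    image object (segment) identified by an integer label array."""
    attrs = {}
    for lab in np.unique(labels):
        vals = image[labels == lab]
        attrs[lab] = (vals.mean(), vals.std())
    return attrs

# Toy example: a 4x4 image split into two vertical segments.
img = np.array([[1, 1, 5, 5],
                [1, 1, 5, 5],
                [1, 1, 5, 5],
                [1, 1, 5, 5]], dtype=float)
seg = np.array([[0, 0, 1, 1]] * 4)

# Segment 0: mean 1.0, std 0.0; segment 1: mean 5.0, std 0.0.
attrs = object_attributes(img, seg)
```

Rule-based classification (e.g. via fuzzy membership functions) would then operate on these attribute values per object rather than per pixel.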
Existing image-processing techniques include image segmentation, image thresholding, image- and Fourier-domain techniques, and hierarchical classification. The automated techniques of feature extraction make use of automated algorithms.

Comparison of the different technologies

CMOS sensors are usually more widely available and less expensive than CCD sensors. CMOS sensors incorporate amplifiers, A/D converters and image-processing circuitry on chip, while in CCD sensors most of the processing functions are performed outside the sensor (Smith, 2007). CMOS sensors consume less power than CCD imaging sensors, which allows the temperature inside the camera to be kept low. Heat in CCD sensors can increase interference levels; CMOS sensors, on the other hand, suffer more from structured noise (Murakami et al., 1999). CMOS sensors allow image windowing and multi-view streaming, which is not possible with CCD sensors. A CCD sensor usually has a single charge-to-voltage converter for the whole sensor, whereas a CMOS sensor has one per pixel (Bain, 2007).

Conclusion

LIDAR data are used in a number of applications because of their level of detail and accuracy. The extraction of discrete features such as buildings can be challenging, and further research is needed in this field. The information obtained from the filtering processes may be used to compensate for deficiencies in the extraction process. To improve the modelling of the extracted features, further classification can be done on the ground-surface features using a combination of topographic and spectral information classifiers.
References

Abanmy, F., Khamees, H., Scarpace, F., & Vonderohe, A. (2005). An evaluation of DEM and orthophoto generation on OrthoMAX. ACSM/ASPRS.
Axis Communications. Technical guide to network video. Retrieved 24 April 2015 from www.axis.com/files/brochure/bc_techguide_33334_en_0811_lo.pdf
Bain, P. (2007). Device noise in CMOS imagers. ISSCC 2007 Forum: Noise in Imaging Systems, San Francisco, US.
Brebbia, C. A., & Pascolo, P. (2003). GIS technologies and their environmental applications. Southampton, UK: Computational Mechanics Publications.
Burrough, P. A., & McDonnell, R. A. (1998). Principles of geographical information systems. Oxford: Oxford University Press.
Evans, H. F. J. (2006). Neural network approach to the classification of urban images. PhD thesis, The University of Nottingham, Nottingham, UK.
Jaafar, J., & Priestnall, G. (2001). Automated DEM/DSM accuracy estimates towards land change detection. Oxford: Oxford University Press.
Kavzoglu, T., & Mather, P. M. (2001). Pruning artificial neural networks: an example using land cover classification of multi-sensor images. International Journal of Remote Sensing.
Kost, K., Loddenkemper, M., & Petring, J. (2002). Airborne laser scanning, a new remote sensing method for mapping terrain. Third EARSeL Workshop on LiDAR Remote Sensing of Land and Sea, Tallinn, Estonia.
Mather, P. M. (2004). Computer processing of remotely-sensed images: an introduction. Chichester: John Wiley & Sons.
Murakami, H., Nakagawa, K., Hasegawa, H., Shibata, T., & Iwanami, E. (1999). Change detection of buildings using an airborne laser scanner. ISPRS Journal of Photogrammetry and Remote Sensing.
Nilsson, F. (2001). Intelligent network video: understanding modern surveillance systems. Oxford University Press.
Razavi, B. (2001). Design of analog CMOS integrated circuits. New York: McGraw-Hill International Edition.
Smith, D. G. (2007). Digital photogrammetry for elevation modelling. PhD thesis, The University of Nottingham, Nottingham, UK.
Zell, A., Mamier, G., Vogt, M., Mache, N., Hubner, R., Hermann, K., Soyez, T., Schmalzl, M., Sommer, T., Hatzigeorgiou, A., Doring, S., Posselt, D., & Schreiner, T. (2004). SNNS (Stuttgart Neural Network Simulator): user manual, version 3.3. Stuttgart: University of Stuttgart.