Structure and Imaging Principle of Smartphone Camera

Published: 26 November 2021 | Last Updated: 26 November 2021
This article mainly covers the composition (PCB board, DSP, sensor, holder, lens) and working principle of a cell phone camera.

Catalog

Ⅰ Smartphone camera composition structure

Ⅱ The imaging principle of smartphone camera

Ⅲ Key factors affecting the performance of smartphone camera


In 2000, Sharp and the Japanese mobile operator J-PHONE launched the Sharp J-SH04, the first mass-market mobile phone with a built-in camera. On April 24, 2003, Sharp released the J-SH53, the world's first camera phone with a one-megapixel sensor.

[Figure: smartphone camera]

With continuous technological breakthroughs and innovation, new smartphone camera lenses have sprung up like bamboo shoots after rain. Resolution has climbed from the earliest low-resolution sensors to today's multi-megapixel designs, and shooting quality keeps reaching new levels. The most representative companies are Huawei, Samsung, and Apple. New design concepts are continually put into practice; multi-camera designs, for example, have been mastered by Samsung, Huawei, Apple, and others and are used in the latest smartphones.

Ⅰ Smartphone camera composition structure

The smartphone camera is mainly composed of the following parts: the PCB, the DSP, the image sensor, the holder, and the lens assembly (ASS'Y). Among them, the lens assembly, the DSP, and the sensor are the three most important.

[Figure: Mobile phone camera shooting process]

PCB board

PCBs are divided into three types: rigid boards, flexible boards, and rigid-flex boards. A CMOS sensor can use any kind of board, but a CCD can only use a rigid-flex board. Among the three, the rigid-flex board is the most expensive and the rigid board the cheapest.

Lens

After the image sensor, the lens is the second most important factor affecting image quality. It is an assembly of several lens elements, generally classified as plastic or glass. So-called plastic lenses are not pure plastic but resin; even so, their optical properties, such as light transmittance, do not match those of coated glass lenses.

Common lens structures are 1P, 2P, 1G1P, 1G2P, 2G2P, 2G3P, 4G, 5G, and so on, where G denotes a glass element and P a plastic element. The more elements, the higher the cost, and generally the better the imaging. Glass elements are more expensive than resin ones, so a good-quality camera should use a multi-element glass lens. To reduce costs, most camera products on the market use cheap all-plastic lenses or one glass element combined with plastic (i.e., 1P, 2P, 1G1P, 1G2P, etc.), which considerably limits image quality.

[Figure: Lens structures]

The lens assembly consists of the lens elements, a filter, and a lens barrel. A lens is characterized by three parameters: the focal length f′, the relative aperture D/f′, and the field of view 2ω.

The focal length is an important indicator of the lens, as it determines the ratio of object size to image size. If the object is infinitely far away, the image height is given by y′ = −f′·tan ω (where ω is the half field angle on the object side).
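
As a minimal worked example of this formula (the 4 mm focal length and 35° half field angle below are assumed, phone-like values, not figures from this article):

    # Minimal worked example of y' = -f' * tan(w); the 4 mm focal length and
    # 35-degree half field angle are assumed, phone-like values.
    import math

    def image_height_mm(focal_length_mm, half_angle_deg):
        # Image height for an object at infinity; the sign marks inversion.
        return -focal_length_mm * math.tan(math.radians(half_angle_deg))

    print(image_height_mm(4.0, 35.0))  # about -2.8 mm (inverted image)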

The relative aperture D/f′ and the F-number are the key optical indicators of a lens. The relative aperture, defined as the ratio of the entrance pupil diameter D to the focal length f′, represents the light energy that can pass through the lens to reach the sensor, and thus determines the image plane illuminance. Photographing dark scenes or fast-moving objects requires an objective lens with a large relative aperture, which raises the image plane illuminance. By relative aperture, photographic objectives are divided into low-light objectives (D/f′ below 1:6.3), ordinary objectives (D/f′ from 1:5.6 to 1:3.5), strong-light objectives (D/f′ from 1:2.8 to 1:1.4), and super-light objectives (D/f′ from 1:1 to 1:0.8). So that the same lens can be used in different environments, the aperture stop is usually a continuously variable iris diaphragm.

The reciprocal of the relative aperture is called the aperture factor, or F-number, and is marked on the camera lens. The standard sequence of full stops, ordered by luminous flux, is: 0.7, 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22... As the F-number increases, the aperture becomes smaller and the luminous flux decreases; each one-stop step halves (or, in the other direction, doubles) the luminous flux. For camera lenses, the lower the F-number, the more versatile the lens and the wider its range of use. The relative aperture also affects the depth of field, the range of object distances that appear acceptably sharp on the image plane: the larger the relative aperture, the smaller the depth of field.
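
The stop sequence itself is just successive powers of √2. A small sketch (illustrative, not from the article) generates it together with the relative flux:

    # Illustrative sketch: each full stop multiplies the F-number by sqrt(2)
    # and halves the relative luminous flux. Printed F-numbers are exact
    # powers of sqrt(2); lens markings round them (e.g., 5.66 -> 5.6).
    f_number, flux = 1.0, 1.0
    for _ in range(7):
        print(f"f/{f_number:.2f}  relative flux = {flux:.4f}")
        f_number *= 2 ** 0.5
        flux /= 2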

[Figure: Aperture factor]

The field of view 2ω of the photographic objective determines how much of the object space is imaged. It is set by the diameter of the circular area on the image plane within which imaging quality is satisfactory, or by the size of the photosensitive surface of the camera's image sensor.

The basic types of photographic objective lenses:

1. By focal length and angle of view: standard lenses, short-focus (wide-angle) lenses, and long-focus (telephoto) lenses.

2. By whether the focal length can be changed: fixed-focus (prime) lenses and zoom lenses.

[Figure: Mobile phone camera composition]

Holder and color filter

The holder's role is to fix the lens in place, and a color filter is mounted on it.

Color filters are also called "color separation filters". There are currently two color separation methods: the RGB primary-color method and the complementary-color method (cyan, magenta, and yellow, usually with green, i.e., CYGM).

The advantage of a primary-color CCD is that images are sharp and colors faithful; the disadvantage is noise. Digital cameras with primary-color CCDs generally do not exceed ISO 400. In contrast, a complementary-color CCD adds a yellow (Y) filter, which sacrifices some image resolution but generally allows ISO values above 800.

DSP (Digital Signal Processor)

The DSP optimizes the digital image signal through a series of complex mathematical algorithms and finally transmits the processed signal to the display.

The DSP framework comprises: (1) the ISP (image signal processor); (2) the JPEG encoder.

Strong ISP performance is the key to smooth, high-quality images, and the performance of the JPEG encoder is another key indicator. JPEG encoding can be implemented either in hardware (a dedicated JPEG compression block) or in software (RGB compression).

The DSP control chip transfers the data obtained from the photosensitive chip to the baseband in time and refreshes the photosensitive chip. The quality of the control chip therefore directly determines picture quality (such as color saturation and sharpness) as well as smoothness.

The discrete DSP described above is used with CCD sensors. In a CMOS camera, the DSP is integrated into the CMOS chip, so the two appear as a single part; a camera with a CCD sensor, by contrast, consists of two independent parts, the CCD and the DSP.

Image Sensor

Among the camera's main components, the most important is the image sensor, since the photosensitive device largely determines image quality.

The sensor converts the light transmitted through the lens into an electrical signal, which is then converted into a digital signal by the internal A/D converter. Since each pixel of the sensor can receive only R, G, or B light, each pixel at this stage stores monochromatic data, known as RAW data. To restore the RAW data of each pixel to the three primary colors, an image signal processor (ISP) is needed.
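
To make the RAW-to-RGB step concrete, here is a minimal sketch assuming an RGGB Bayer layout, using a crude 2×2 block average in place of a real ISP's interpolation (the function name and the 10-bit range are illustrative assumptions, not details from the article):

    # Hypothetical sketch of the RAW-to-RGB step: an RGGB Bayer mosaic is
    # reduced to RGB by averaging each 2x2 block -- a crude stand-in for the
    # interpolation a real ISP performs.
    import numpy as np

    def naive_demosaic_rggb(raw):
        # raw: (H, W) mosaic laid out as rows of R G / G B.
        r  = raw[0::2, 0::2]                   # red samples
        g1 = raw[0::2, 1::2]                   # green samples, even rows
        g2 = raw[1::2, 0::2]                   # green samples, odd rows
        b  = raw[1::2, 1::2]                   # blue samples
        g  = (g1.astype(np.float32) + g2) / 2  # average the two greens
        return np.dstack([r, g, b])            # (H/2, W/2, 3) RGB image

    raw = np.random.randint(0, 1024, (8, 8), dtype=np.uint16)  # fake 10-bit RAW
    print(naive_demosaic_rggb(raw).shape)  # (4, 4, 3)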

The image sensor plays the role of photosensitive recording, similar to film. There are two types: CMOS and CCD. The CCD is also called a charge transfer device. Photodiodes arranged in a single row form a one-dimensional linear sensor; photodiodes arranged in a two-dimensional matrix form an area image sensor.

A CCD consists of photodiode photosensitive elements, CCD transfer elements, and a charge amplifier. When light strikes it, photons excite charges, which accumulate. A gate voltage applied between the photosensitive and transfer elements makes the accumulated charges move directionally into the transfer section; after amplification, the output charge signals carry the image information.

Image sensors are developing toward higher performance: higher sensitivity, higher resolution, power saving, and low-voltage operation.

CMOS image sensors are built with metal-oxide-semiconductor technology, and each pixel can integrate additional circuitry such as amplifiers and A/D converters.


Ⅱ The imaging principle of smartphone camera

Light from the object enters the system, passes through the lens, and reaches the image sensor. Photons striking the sensor generate mobile charges (the internal photoelectric effect), and the collected charges form an electrical signal. Because the processor cannot interpret a charge signal, the electrical signal must be converted into a digital one. A system whose image sensor is CMOS needs no external analog-to-digital converter, whereas a system using a CCD requires one. After analog-to-digital conversion, the digital signal passes through the amplifying circuit into the microprocessor. Once the DSP has stored and processed it, the signal is sent to the screen, forming an image of the object.

[Figure: OV2665 imaging process]
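
As a rough model of the charge-to-digital-number chain just described, the following sketch uses assumed values; the quantum efficiency, full-well capacity, and ADC bit depth are hypothetical, not OV2665 specifications:

    # Rough model of one pixel's charge-to-number chain; quantum efficiency,
    # full-well capacity, and ADC depth are assumed values, not OV2665 specs.
    def pixel_to_digital(photons, quantum_efficiency=0.5,
                         full_well=4000, adc_bits=10):
        # Photoelectric effect: photons become electrons, clipped at saturation.
        electrons = min(int(photons * quantum_efficiency), full_well)
        levels = 2 ** adc_bits - 1
        return round(electrons / full_well * levels)  # A/D conversion

    print(pixel_to_digital(1000))  # about 128 on a 0-1023 scale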

Ⅲ Key factors affecting the performance of smartphone camera

Pixel

Generally speaking, "XXX million pixels" actually refers to the resolution of the camera, and its value is mainly determined by the number of pixels (i.e., the smallest photosensitive units) in the camera sensor. For example, 5 million pixels means the sensor contains 5 million pixels.

Do pixels determine photo quality?

It is usually assumed that the more pixels a camera has, the sharper its pictures. In fact, the only thing the pixel count determines is the resolution of the output: the higher the resolution, the larger the image, but not necessarily the sharper it looks.

But the current mainstream smartphone screen is 1080p (1920×1080 pixels). Whether it is a 4208×3120 photo from a 13-megapixel camera or a 3200×2400 photo from an 8-megapixel camera, both exceed what a 1080p screen can render and will ultimately be displayed at 1920×1080.
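
A quick check of these figures, as a minimal sketch (the helper function name is illustrative):

    # Checking the figures above: both photos carry more pixels than a 1080p
    # panel can show.
    def megapixels(width, height):
        return width * height / 1e6

    print(megapixels(4208, 3120))  # ~13.1 MP (the "13-megapixel" photo)
    print(megapixels(3200, 2400))  # ~7.7 MP (the "8-megapixel" photo)
    print(megapixels(1920, 1080))  # ~2.1 MP -- all a 1080p screen displays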

Where is the advantage of high pixels?

A camera with more pixels can produce larger prints. Using the conventional printing standard of 300 pixels per inch: the 4208×3120 sample from a 13-megapixel camera can be printed at a diagonal of about 17 inches, while the 3200×2400 sample from an 8-megapixel camera starts to blur beyond about 13 inches. Clearly, the 13-megapixel sample supports a larger print size.
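
A small sketch of the 300 pixels/inch rule applied to the two samples; the 17-inch and 13-inch figures above are print diagonals (the helper function name is illustrative):

    # The 300 pixels/inch print rule applied to the two sample resolutions.
    import math

    def print_diagonal_inches(width_px, height_px, ppi=300):
        return math.hypot(width_px, height_px) / ppi

    print(print_diagonal_inches(4208, 3120))  # ~17.5 in (13 MP sample)
    print(print_diagonal_inches(3200, 2400))  # ~13.3 in (8 MP sample)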

Sensor

Since pixels are not the key factor in determining quality, what is it? The answer is the sensor.

There are two main types of camera sensor: CCD and CMOS. Although CCD sensors offer good imaging quality, they are relatively expensive and ill-suited to smartphones. CMOS sensors dominate the smartphone field thanks to their lower power consumption, lower cost, and good image quality.

CMOS sensors come in two varieties: back-illuminated and stacked. Both technologies were pioneered by Sony: its back-illuminated sensors are branded "Exmor R" and its stacked sensors "Exmor RS".

Relatively speaking, the larger the sensor, the better its light-gathering performance: more photons (light signals) are captured, the signal-to-noise ratio is higher, and the imaging is better. However, a larger sensor also increases the size, weight, and cost of the phone.

The back-illuminated sensor effectively solves this problem: at the same size, it increases sensitivity by about 100%, markedly improving image quality in low-light environments.

In August 2012, Sony released a new stacked sensor (Exmor RS CMOS). Note that it is not an evolution of the back-illuminated sensor but a parallel development. The main advantage of a stacked sensor is that, with the pixel count unchanged, the sensor itself becomes smaller. Put another way, at the same pixel count a stacked sensor is smaller than a back-illuminated one, saving space and allowing thinner, lighter phones.

Lens

The lens images the shooting scene onto the sensor; it is the "eye" of the camera. It usually consists of several lens elements which, as light passes through, filter out stray light (infrared, etc.) layer by layer. The more lens elements there are, the more faithful the imaging.

Aperture

The aperture is formed by several extremely thin metal blades inside the lens; changing the size of the opening controls how much light passes through the lens to the sensor. Aperture values are usually written as f/2.2, f/2.4, and so on. The smaller the number, the larger the aperture; the two are inversely related.

Its working principle: the larger the aperture, the more light reaches the sensor through the lens and the brighter the image; the smaller the aperture, the darker the image. Hence, in night shots or low-light environments, a large aperture has a clear imaging advantage.
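
Since the light admitted scales with 1/F², the two example apertures above can be compared directly; a minimal sketch (the helper function name is illustrative):

    # Light admitted scales with 1/F^2, so two apertures compare directly.
    def light_ratio(f_small, f_large):
        # How much more light the smaller F-number (larger aperture) admits.
        return (f_large / f_small) ** 2

    print(light_ratio(2.2, 2.4))  # ~1.19x more light at f/2.2 than at f/2.4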

In addition to controlling the amount of light, the aperture also controls the depth of field. Photos with a strongly blurred background not only highlight the subject but also carry a strong artistic quality; this blur is governed by the depth of field. The larger the aperture, the shallower the depth of field and the more pronounced the background blur.


Frequently Asked Questions

1. What is a smartphone camera?

A camera phone is a smartphone that is able to capture photographs and often record video using one or more built-in digital cameras. It can also send the resulting image wirelessly and conveniently. Most camera phones are smaller and simpler than separate digital cameras.

2. What is a good phone camera MP?

It's quite simple: 12MP is the ideal resolution for smartphone sensors. There are several reasons for this, including storage space, processing time, and low light photo quality. Video resolution and viewing devices also play into how large a camera sensor should be.