Open Access

AI-Enhanced Systems and Innovative Methods for the Exhaustive Collection of Passport Data through Intelligent Devices

24 Sep 2025


Introduction

A passport, often called the international travel "pass", is a legal document issued by a state to its citizens that certifies the holder's nationality and identity and entitles the holder to cross the country's borders and to travel or stay abroad [1]. It has many important roles and far-reaching implications for individuals and countries. Its role is not limited to meeting personal travel needs; it also bears directly on national security and on international movement, exchange, and cooperation [2-5]. A passport not only proves the identity and nationality of the holder but also enables the holder to travel internationally and to interact in other countries. At the same time, it helps the state maintain security and manage international movement [6]. In addition, for the individual holder, a passport can protect his or her safety in the event of an international crisis or local war while awaiting rescue by the state, and can serve as a credential for possible relief as an immigrant or refugee [7-8].

Passports typically contain basic information such as the holder's name, nationality, and date of birth. In the era of smart technology, however, passport data is no longer limited to this basic information, and smart devices play an important role in the data collection process. They can automatically capture the basic information printed on the passport and can also collect personal information accurately and efficiently through biometric technologies such as face recognition and fingerprint recognition [9-11]. On the one hand, this enables holders to travel or work more smoothly and to obtain the appropriate treatment and rights. On the other hand, it helps countries identify and track the activities of individuals, reduce illegal behaviors such as illegal immigration, drug trafficking, and terrorism, and, to a certain extent, monitor and manage their international borders [12-14]. Furthermore, once smart devices have exhaustively collected passport data, AI enhancement can process and analyze this complex data for better passport information management and identification. For example, deep neural networks trained on large amounts of data can enable AI systems to perform identity verification accurately [15]; image recognition and pattern recognition can quickly extract basic passport, facial, and fingerprint information, providing efficient immigration processing even for people with facial blemishes or deformities [16]; and real-time analysis allows timely identification, tracking, or prediction, reducing the likelihood of counterfeit documents passing inspection [17].

This research collects passport data through intelligent devices and enhances the UV anti-counterfeiting information by designing a UV spectral local feature extraction algorithm, a UV spectral global feature extraction algorithm, a UV spectral feature fusion algorithm, and an anti-counterfeiting region detection algorithm. On this basis, a color feature extraction algorithm based on color moments and a gradient feature extraction algorithm based on the CFOG descriptor are used to extract and characterize the color and gradient features of the UV fluorescence anti-counterfeiting pattern; the color and gradient features of the passport are then matched and recognized in turn, yielding a multi-feature matching recognition model for UV anti-counterfeiting patterns. Comparison experiments explore the enhancement effect on UV anti-counterfeiting information and the matching recognition performance of the proposed algorithms. Finally, based on the proposed intelligent method, a passport anti-counterfeiting reading system composed of image acquisition equipment and host computer software is constructed, and the host computer software framework and its functional modules are described in detail.

Passport ultraviolet spectral image authentication methods

Against the background of globalization, as exchanges between people of various countries deepen, intelligent passport authentication is being deployed in fields such as finance, logistics, and self-service customs clearance. A passport authentication system judges the authenticity of passport multispectral images through authentication algorithms, with detection focused on anti-counterfeiting features. This paper proposes a passport UV spectral pattern authentication method that first enhances the UV security information in the collected passport data and then performs security pattern matching recognition.

Intelligent scanning equipment for ultraviolet spectroscopy

In addition to the metal frame supporting the overall structure, the UV spectral intelligent scanning device consists of the following parts: a main control circuit board, a 4K optical camera module (model: Sony FCB-ER8530), multi-spectrum light sources (visible, infrared, UV, OVD, and transmissive light source modules, etc.), several motor modules, and several photoelectric sensors. The main control circuit board receives and sends data and commands and controls all hardware modules. The camera performs image acquisition, and its image data is uploaded through the main control circuit board. The multi-spectral light sources illuminate the passport so that it shows different anti-counterfeiting characteristics. The motor modules control the angles of some lenses and light sources. The photoelectric sensors detect the positions of parts of the hardware and, combined with the motors and light sources, enable high-precision control.

Algorithm for UV anti-counterfeiting information enhancement

To address the unstable quality of UV anti-counterfeiting information, this section stabilizes that quality and improves on the enhancement effect of existing algorithms through a UV anti-counterfeiting information enhancement algorithm.

Model as a whole

The UV anti-counterfeiting information enhancement model of passport includes anti-counterfeiting region detection model and UV anti-counterfeiting information enhancement algorithm. Among them, the anti-counterfeiting region detection model is subdivided into: UV spectral local feature extraction algorithm, UV spectral global feature extraction algorithm, UV spectral feature fusion algorithm, and anti-counterfeiting region detection algorithm.

The total objective function of the UV anti-counterfeiting information enhancement algorithm is:
$$f_{\text{new}} = A\left(f(i)\right), \quad i \in \Omega_1$$

Here $f$ denotes a passport UV spectral image, $f_{\text{new}}$ is the enhanced image, $i$ denotes a pixel point, $\Omega_1$ denotes the image region corresponding to the anti-counterfeiting information, and $A$ denotes the enhancement algorithm. The anti-counterfeiting region $\Omega_1$ is obtained by applying the anti-counterfeiting region detection algorithm to the passport UV spectral image $f$:
$$\Omega_1 = H(F)$$
where $F$ denotes the UV spectral feature and $H$ denotes the method for separating the anti-counterfeiting region. The UV spectral feature $F$ consists of an adjustment parameter $\delta$, the UV spectral local feature $L$, and the UV spectral global feature $G$:
$$F = \delta \cdot L \cdot G$$
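As an illustration of how these pieces compose, the following sketch applies an enhancement operator only inside a detected anti-counterfeiting region; `detect_region` and `enhance` are hypothetical stand-ins for the detection model and enhancement algorithm developed below, not the paper's actual implementation.

```python
import numpy as np

def enhance_uv_image(f: np.ndarray, detect_region, enhance) -> np.ndarray:
    """Sketch of f_new = A(f(i)) for i in Omega_1: enhance only the
    detected anti-counterfeiting region, leave the rest untouched.
    detect_region and enhance are hypothetical placeholders."""
    omega1 = detect_region(f)                 # boolean mask, Omega_1 = H(F)
    f_new = f.astype(np.float64).copy()
    f_new[omega1] = enhance(f_new[omega1])    # apply A inside Omega_1 only
    return f_new
```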

Local feature extraction

Superpixel segmentation algorithm based on anti-counterfeiting texture reconstruction

The superpixel segmentation algorithm based on anti-counterfeiting texture reconstruction uses the wavelet transform to generate texture feature images of the UV anti-counterfeiting information. First, the passport UV spectral image $f$ in RGB color space is Gaussian filtered:
$$f_G = f * G$$

where $f_G$ is the processed passport image and $G$ is the Gaussian filter used to filter out some of the noise. The wavelet transform is then applied to the denoised passport image:
$$\left\{ A_m^c, H_i^c, V_i^c, D_i^c \right\} = DWT_m\left( f_G \right)$$

where $A$ is the low-frequency component; $H$, $V$, and $D$ are the horizontal, vertical, and diagonal components of the high-frequency part of the wavelet, respectively; $c$ denotes the RGB color channel; $i$ denotes the $i$-th scale of the wavelet transform; and $DWT_m$ is the wavelet transform with a total of $m$ scales. Each wavelet component is suppressed or enhanced as follows:
$$c = \alpha_i, \quad A_{\text{new}} = 0, \quad H_{\text{new}} = c \cdot H, \quad V_{\text{new}} = c \cdot V, \quad D_{\text{new}} = c \cdot D$$
where $\alpha$ is an adjustment parameter that enhances the texture details of the UV anti-counterfeiting information. After obtaining the new wavelet components, the components of each scale are reconstructed into the feature image $f_{\text{new}}$ containing the anti-counterfeiting texture:
$$f_{\text{new}} = IDWT\left(A_{\text{new}}, H_{\text{new}}, V_{\text{new}}, D_{\text{new}}\right)$$
where $IDWT$ is the inverse wavelet transform. The UV spectral features after inversion are enhanced logarithmically:
$$f_{\log} = \log\left(f_{\text{new}}^2 + 1\right)$$
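A minimal per-channel sketch of this wavelet step using PyWavelets is given below; the 'db2' wavelet, the Gaussian sigma, and reading the per-scale gain as `alpha ** i` are assumptions, since the paper does not fix them.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def texture_feature_channel(ch: np.ndarray, m: int = 2, alpha: float = 1.5) -> np.ndarray:
    """Wavelet-based texture reconstruction for one color channel:
    zero the low-frequency part, rescale the detail parts, invert,
    then apply the logarithmic enhancement f_log = log(f_new^2 + 1)."""
    f_g = gaussian_filter(ch.astype(np.float64), sigma=1.0)   # f_G = f * G
    coeffs = pywt.wavedec2(f_g, 'db2', level=m)
    new_coeffs = [np.zeros_like(coeffs[0])]                   # A_new = 0
    for i, (H, V, D) in enumerate(coeffs[1:], start=1):
        c = alpha ** i                  # assumed reading of the scale gain c
        new_coeffs.append((c * H, c * V, c * D))
    f_new = pywt.waverec2(new_coeffs, 'db2')
    return np.log(f_new ** 2 + 1)
```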

The texture images of the channels are fused as:
$$f_i = f_{\log}^R \cdot \omega_R + f_{\log}^G \cdot \omega_G + f_{\log}^B \cdot \omega_B, \qquad \omega_c = -\sum_{i \in \Omega_c} P\left(C_c(i)\right) \cdot \log P\left(C_c(i)\right)$$
where $c$ denotes the color channel in $\{R, G, B\}$, $\Omega_c$ denotes the image range under channel $c$, $i$ denotes a pixel point, $C_c(i)$ denotes the pixel value of pixel $i$ in channel $c$, $P$ denotes the corresponding color probability, and $\log$ denotes the logarithmic transformation.

In the reconstructed anti-counterfeiting texture feature image $f_m$, the UV anti-counterfeiting information exists only at the boundaries and in the texture parts, so a Gaussian filter is used to fill the anti-counterfeiting texture and filter out image noise, giving the complete anti-counterfeiting texture feature image:
$$f_r = f_m * G$$
where $G$ denotes the Gaussian filter used to fill the anti-counterfeiting region.

The final anti-counterfeiting texture feature image is:
$$f_l = Hist\left(\log\left(norm\left(f_r\right) + 1\right)\right)$$
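The fusion, filling, and final normalization steps can be sketched as follows; the histogram-equalization call stands in for $Hist(\cdot)$, and the Gaussian sigma is an assumed value.

```python
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter

def entropy_weight(ch: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of one channel, used as the fusion weight omega_c."""
    counts, _ = np.histogram(ch, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def texture_feature_image(f_log_rgb: np.ndarray) -> np.ndarray:
    """Entropy-weighted channel fusion, Gaussian filling, and the final
    log-normalized texture map f_l = Hist(log(norm(f_r) + 1))."""
    w = np.array([entropy_weight(f_log_rgb[..., c]) for c in range(3)])
    f_m = np.tensordot(f_log_rgb, w, axes=([2], [0]))   # sum_c f_log^c * omega_c
    f_r = gaussian_filter(f_m, sigma=3.0)               # fill texture: f_r = f_m * G
    f_n = (f_r - f_r.min()) / (np.ptp(f_r) + 1e-8)      # norm(f_r)
    f_l = np.log(f_n + 1)
    return cv2.equalizeHist(np.uint8(255 * f_l / f_l.max()))
```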

Superpixel-based local feature extraction algorithm for UV spectra

In order to suppress texture-rich regions, the total variation (TV) model is introduced to decompose the original passport image and separate the texture of the UV spectral image.

Given a set of $N$ superpixel regions $\Omega = \{\Omega_1, \Omega_2, \ldots, \Omega_N\}$, the information entropy $E_n^c$ of each superpixel region under each color channel is calculated as:
$$E_n^c = -\sum_{i \in \Omega_n} P\left(C_c(i)\right) \cdot \log P\left(C_c(i)\right)$$
where $c$ is the color channel in $\{R, G, B\}$, $n$ denotes the $n$-th superpixel region, $i$ denotes a pixel point within the superpixel region, $C_c(i)$ denotes the pixel value of pixel $i$ in channel $c$ of the cartoon image, and $P$ denotes the probability corresponding to the pixel value. The local feature $E_{\Omega_n}$ of superpixel $\Omega_n$ is:
$$E_{\Omega_n} = E_n^R + E_n^G$$
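A sketch of the per-superpixel entropy features is shown below; SLIC is used as the (unnamed) superpixel method, and the input is assumed to be the 8-bit TV "cartoon" image.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_local_features(cartoon_rgb: np.ndarray, n_segments: int = 300) -> np.ndarray:
    """Information entropy per superpixel, summed over the R and G
    channels as in E_Omega_n = E_n^R + E_n^G (cartoon_rgb: uint8)."""
    labels = slic(cartoon_rgb, n_segments=n_segments, start_label=0)
    features = np.zeros(labels.max() + 1)
    for n in range(labels.max() + 1):
        mask = labels == n
        e = 0.0
        for c in (0, 1):                                   # R and G channels
            counts = np.bincount(cartoon_rgb[..., c][mask], minlength=256)
            p = counts / counts.sum()
            p = p[p > 0]
            e -= float(np.sum(p * np.log(p)))
        features[n] = e
    return features
```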

Global feature extraction

The MSS algorithm evaluates the saliency of a pixel by taking the mean value of the maximal symmetric region centered on that pixel as an estimate of the point, and computing the color distance between it and the pixel value after Gaussian filtering. The core formula of the MSS algorithm is:
$$S = \left\| f_{\text{new}} - \bar{f} \right\|$$
where $f_{\text{new}}$ denotes the pixel value after Gaussian filtering and $\bar{f}$ denotes the mean value of the maximal symmetric region centered on the pixel. The saliency values of the Lab color channels are accumulated and then histogram equalized, and the core formula is modified as:
$$\bar{f}_{\text{new}}^L = \bar{f}^L \cdot \left(1 - norm(w)\right), \qquad G = \left\| f_{\text{new}} - \alpha \cdot \bar{f}_{\text{new}} \right\|$$
where $\bar{f}^L$ denotes the $L$-channel image of $\bar{f}$, $w$ denotes the anti-counterfeiting texture reconstruction map, and $\alpha$ denotes an adjustment parameter.
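The maximum-symmetric-surround mean can be computed efficiently with an integral image; the sketch below does this for one Lab channel (the Gaussian kernel size and the absolute-difference distance are assumptions).

```python
import numpy as np
import cv2

def mss_saliency(channel: np.ndarray) -> np.ndarray:
    """S = |f_new - f_bar|: Gaussian-filtered pixel value minus the mean
    of the maximal symmetric region centered on each pixel."""
    h, w = channel.shape
    f = channel.astype(np.float64)
    f_new = cv2.GaussianBlur(f, (5, 5), 0)
    ii = cv2.integral(f)                         # (h+1, w+1) integral image
    ys, xs = np.mgrid[0:h, 0:w]
    oy = np.minimum(ys, h - 1 - ys)              # maximal symmetric extent
    ox = np.minimum(xs, w - 1 - xs)
    y0, y1 = ys - oy, ys + oy + 1
    x0, x1 = xs - ox, xs + ox + 1
    area = (y1 - y0) * (x1 - x0)
    f_bar = (ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]) / area
    return np.abs(f_new - f_bar)
```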

Feature Fusion Algorithm

The UV spectral feature fusion algorithm suppresses features of the passport background by introducing the filled anti-counterfeiting texture feature map. The preprocessing for the UV spectral local feature map and the anti-counterfeiting texture feature map is:
$$f_L = norm\left(f_L\right), \qquad f_T = \exp\left(f_T / 255 - \beta\right)$$

where $f_L$ is the UV spectral local feature image, $f_T$ is the anti-counterfeiting texture feature image, and $\beta$ is the adjustment deviation.

After preprocessing, the UV spectral feature fusion image of the passport is obtained by fusing the three features:
$$S_p = f_G \cdot f_L \cdot f_T$$

where $f_G$ is the UV spectral global feature image and $S_p$ is the UV spectral feature fusion image.
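The preprocessing and fusion reduce to a few array operations; `beta` and the exact form of the exponential preprocessing follow the reconstruction above and should be treated as assumptions.

```python
import numpy as np

def fuse_features(f_g: np.ndarray, f_l: np.ndarray, f_t: np.ndarray,
                  beta: float = 0.5) -> np.ndarray:
    """S_p = f_G * f_L * f_T after normalizing f_L and exponentially
    rescaling f_T (beta is an assumed value)."""
    f_l = (f_l - f_l.min()) / (np.ptp(f_l) + 1e-8)   # f_L = norm(f_L)
    f_t = np.exp(f_t / 255.0 - beta)                 # assumed reading of the f_T step
    return f_g * f_l * f_t
```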

Anti-counterfeit area detection algorithm

The images are binarized using the ISauvola algorithm, and the pixel values under the $\{a, b\}$ color channels in Lab color space are counted for the two regions after separation:
$$H_f^c = Color\left(\Omega_f\right), \qquad H_b^c = Color\left(\Omega_b\right)$$

where $\max$ denotes taking the maximum value from the statistical color probabilities and $\exp$ denotes the exponential function.

The probability $P(i)$ that a pixel $i$ in the anti-counterfeiting region of the binarized image belongs to the passport background is estimated as:
$$P(i) = E\left(f_a\right) \cdot H_{nor}^a\left(f_a(i)\right) + E\left(f_b\right) \cdot H_{nor}^b\left(f_b(i)\right)$$
where $f_a$ and $f_b$ denote the $\{a, b\}$ color channel images of the UV spectral image, respectively, and $E$ denotes information entropy.
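A simplified sketch of this detection step is given below; scikit-image's Sauvola threshold stands in for the ISauvola variant, and the color statistics are collapsed into a single per-channel histogram rather than separate foreground/background tables.

```python
import numpy as np
from skimage.filters import threshold_sauvola

def background_probability(sp: np.ndarray, f_a: np.ndarray, f_b: np.ndarray) -> np.ndarray:
    """Binarize the fused feature map and score each pixel by
    entropy-weighted a/b-channel color statistics (uint8 channels)."""
    binary = sp > threshold_sauvola(sp, window_size=25)

    def entropy_and_hist(ch: np.ndarray):
        counts = np.bincount(ch[binary], minlength=256)
        p = counts / max(counts.sum(), 1)
        nz = p[p > 0]
        return float(-np.sum(nz * np.log(nz))), p

    e_a, h_a = entropy_and_hist(f_a)
    e_b, h_b = entropy_and_hist(f_b)
    # P(i) = E(f_a) * H_nor^a(f_a(i)) + E(f_b) * H_nor^b(f_b(i))
    return e_a * h_a[f_a] + e_b * h_b[f_b]
```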

Anti-counterfeiting information enhancement algorithm

The image quality of the anti-counterfeiting region is adjusted based on the anti-counterfeiting region detection image and the quality information of that region in the UV spectral feature fusion image. Whether to perform enhancement is decided from the anti-counterfeiting region detection image.

For the passport UV spectral image $f$, whether to stabilize and enhance pixel $i$ is decided from the acquired UV security information separation image $\Omega = \{\Omega_1, \Omega_2\}$:
$$f_{\text{new}}(i) = \begin{cases} Enhance\left(f(i)\right), & i \in \Omega_1 \\ f(i), & \text{otherwise} \end{cases}$$

where $\Omega_1$ is the anti-counterfeiting region, $\Omega_2$ is the passport background region, and $Enhance$ denotes the image stabilization and enhancement method for the anti-counterfeiting region. Based on the quality information contained in the UV spectral feature fusion image $F$, the image quality of the anti-counterfeiting region is stabilized as follows:
$$H_{\Omega_1} = norm\left(F_{\Omega_1}\right)$$
$$w_{\Omega_1} = normProcess\left(H_{\Omega_1} - \alpha \cdot mean\left(H_{\Omega_1}\right)\right)$$
$$normProcess(i) = \begin{cases} H(i) / \max\left(H(i)\right), & H(i) > 0 \\ H(i) / \mathrm{abs}\left(\min\left(H(i)\right)\right), & \text{otherwise} \end{cases}, \quad i \in \Omega_1$$
$$f_{\text{new}}\left(\Omega_1\right) = f\left(\Omega_1\right) \cdot \left(1 + \beta \cdot w_{\Omega_1}\right)$$
where $norm$ denotes the normalization function, $\mathrm{abs}$ denotes the absolute value of the minimum over the anti-counterfeiting region, $mean$ denotes the mean over the anti-counterfeiting region, $\alpha$ is an adjustment parameter for the threshold, and $\beta$ is an adjustment parameter for the original pixel enhancement.
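The stabilization equations translate directly into array code; `alpha` and `beta` below are assumed values for the two adjustment parameters.

```python
import numpy as np

def stabilize_region(f: np.ndarray, F: np.ndarray, omega1: np.ndarray,
                     alpha: float = 1.0, beta: float = 0.3) -> np.ndarray:
    """Quality stabilization of the anti-counterfeiting region:
    normalize F over Omega_1, shift by alpha * mean, apply the
    piecewise normProcess, then rescale f by (1 + beta * w)."""
    H = F[omega1].astype(np.float64)
    H = (H - H.min()) / (np.ptp(H) + 1e-8)        # H = norm(F_Omega1)
    H = H - alpha * H.mean()
    w = np.empty_like(H)
    pos = H > 0
    w[pos] = H[pos] / (H.max() + 1e-8)            # H(i)/max(H) when H(i) > 0
    w[~pos] = H[~pos] / (abs(H.min()) + 1e-8)     # H(i)/|min(H)| otherwise
    f_new = f.astype(np.float64).copy()
    f_new[omega1] = f_new[omega1] * (1 + beta * w)
    return f_new
```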

UV Anti-counterfeit Pattern Matching Recognition Algorithm
Color feature extraction

Color moments characterize the color distribution by computing moments; the first-order and second-order color moments are:
$$\mu_i = \frac{1}{N} \sum_{j=1}^{N} p_{i,j}, \qquad \sigma_i = \left( \frac{1}{N} \sum_{j=1}^{N} \left( p_{i,j} - \mu_i \right)^2 \right)^{1/2}$$
where $p_{i,j}$ is the value of pixel $j$ of the image in color channel $i$, $i \in \{R, G, B\}$, and $N$ is the total number of pixels counted.
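For one channel these two moments are simply a mean and a standard deviation, as the short sketch below shows.

```python
import numpy as np

def color_moments(channel: np.ndarray) -> tuple:
    """First-order (mean) and second-order (standard deviation) color
    moments of a single channel."""
    p = channel.astype(np.float64).ravel()
    mu = p.mean()                               # mu = (1/N) sum p_j
    sigma = np.sqrt(np.mean((p - mu) ** 2))     # sigma = sqrt((1/N) sum (p_j - mu)^2)
    return mu, sigma
```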

To introduce spatial location information into the extracted color features, a grid-based color feature extraction algorithm is proposed as an improvement on color moment features. The pixels of the image are converted to HSV color space for color moment statistics. Each color channel is first normalized:
$$R' = \frac{R}{255}, \qquad G' = \frac{G}{255}, \qquad B' = \frac{B}{255}$$

At the same time, the channel extremes are calculated:
$$C_{\max} = \max\left\{ R', G', B' \right\}, \qquad C_{\min} = \min\left\{ R', G', B' \right\}$$

The pixel values of each channel in HSV color space are then obtained:
$$H = \begin{cases} 0, & C_{\min} = C_{\max} \\ 60 \cdot \frac{G' - R'}{C_{\max} - C_{\min}} + 60, & C_{\min} = B' \\ 60 \cdot \frac{B' - G'}{C_{\max} - C_{\min}} + 180, & C_{\min} = R' \\ 60 \cdot \frac{R' - B'}{C_{\max} - C_{\min}} + 300, & C_{\min} = G' \end{cases}$$
$$S = C_{\max} - C_{\min}, \qquad V = C_{\max}$$

For a single grid region, the extracted color feature descriptor vector $v_c$ is:
$$v_c = \left( \mu_H, \sigma_H, \mu_S, \sigma_S, \mu_V, \sigma_V \right)$$
where $\mu_H$ and $\sigma_H$ denote the first-order and second-order color moments of the pixels in the grid region under the $H$ channel, respectively; the color moments of the other channels are expressed in the same way.

For the whole image, if $M$ and $N$ denote the number of grids along the horizontal and vertical edges of the image, the color feature vectors of all grids form a color feature descriptor matrix $V_c$ according to the grid positions:
$$V_c = \begin{bmatrix} v_{c,1,1} & \cdots & v_{c,M,1} \\ \vdots & \ddots & \vdots \\ v_{c,1,N} & \cdots & v_{c,M,N} \end{bmatrix}$$
where $v_{c,i,j}$ denotes the color feature descriptor vector of the grid region located in row $j$, column $i$ of the image, and $V_c \in \mathbb{F}^{M \times N \times 6}$.
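A sketch of the grid descriptor is shown below; OpenCV's HSV conversion stands in for the per-pixel formulas above (its H range is [0, 180) and S, V are scaled to [0, 255], which differs from the paper's normalization).

```python
import numpy as np
import cv2

def grid_color_descriptor(img_bgr: np.ndarray, M: int = 8, N: int = 8) -> np.ndarray:
    """(N, M, 6) matrix of per-grid HSV color moments
    (mu_H, sigma_H, mu_S, sigma_S, mu_V, sigma_V)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    h, w = hsv.shape[:2]
    V = np.zeros((N, M, 6))
    for j in range(N):
        for i in range(M):
            cell = hsv[j * h // N:(j + 1) * h // N,
                       i * w // M:(i + 1) * w // M]
            for k in range(3):                    # H, S, V channels
                V[j, i, 2 * k] = cell[..., k].mean()      # first-order moment
                V[j, i, 2 * k + 1] = cell[..., k].std()   # second-order moment
    return V
```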

Gradient feature extraction algorithm

In this paper, the channel feature of oriented gradient (CFOG) descriptor is improved, and a gradient feature extraction algorithm based on grid region division is proposed. First, the extracted UV fluorescent anti-counterfeiting pattern is converted to grayscale, and the CFOG descriptor of the image is computed. The templates of the Roberts operator are:
$$G_x = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$

After obtaining the gradients of a pixel in both directions, the angular direction range $[0°, 180°]$ is evenly divided into $n$ directions.

Subsequently, the gradient value of each pixel's gradient vector in each bin direction is calculated as:
$$g_\theta = \cos\theta \times g_x + \sin\theta \times g_y$$
where $g_\theta$ denotes the gradient value of the gradient vector in the bin direction with angle $\theta$, $\theta = 0°, 20°, \ldots, 180°$.

Meanwhile, the gradient of a grid region is characterized by the mean gradient in each bin direction over all pixels in the region, so the gradient feature vector $v_g$ of a single grid region is:
$$v_g = \frac{1}{n} \sum_{i=1}^{n} \hat{g}_{\theta,i}$$
where $n$ denotes the number of pixels in the grid region and $\hat{g}_{\theta,i}$ denotes the CFOG descriptor subcomponent of the gradient vector of pixel $i$ in the grid region in the bin direction with angle $\theta$, $\theta = 0°, 20°, \ldots, 180°$. Finally, the gradient feature vectors of all grid regions are composed into the gradient descriptor matrix $V_g$ of the image according to grid position:
$$V_g = \begin{bmatrix} v_{g,1,1} & \cdots & v_{g,M,1} \\ \vdots & \ddots & \vdots \\ v_{g,1,N} & \cdots & v_{g,M,N} \end{bmatrix}$$
where $v_{g,i,j}$ denotes the gradient feature descriptor vector of the grid region located in row $j$, column $i$ of the image, and $V_g \in \mathbb{F}^{M \times N \times 9}$.
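The grid gradient descriptor can be sketched as follows; the absolute value on the projected gradient and the nine bin centers 0°, 20°, …, 160° are assumptions consistent with the $M \times N \times 9$ descriptor size.

```python
import numpy as np
import cv2

def grid_gradient_descriptor(gray: np.ndarray, M: int = 8, N: int = 8,
                             n_bins: int = 9) -> np.ndarray:
    """(N, M, n_bins) CFOG-style descriptor: Roberts gradients projected
    onto each bin direction, averaged over every grid region."""
    g = gray.astype(np.float64)
    kx = np.array([[1.0, 0.0], [0.0, -1.0]])     # Roberts G_x
    ky = np.array([[0.0, 1.0], [-1.0, 0.0]])     # Roberts G_y
    gx = cv2.filter2D(g, -1, kx)
    gy = cv2.filter2D(g, -1, ky)
    h, w = g.shape
    V = np.zeros((N, M, n_bins))
    for b in range(n_bins):
        t = np.deg2rad(b * 180.0 / n_bins)       # 0, 20, ..., 160 degrees
        g_theta = np.abs(np.cos(t) * gx + np.sin(t) * gy)   # abs is assumed
        for j in range(N):
            for i in range(M):
                cell = g_theta[j * h // N:(j + 1) * h // N,
                               i * w // M:(i + 1) * w // M]
                V[j, i, b] = cell.mean()         # v_g: mean over grid pixels
    return V
```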

Anti-counterfeit pattern matching recognition model

In this paper, matching-based recognition is applied to UV fluorescence anti-counterfeiting patterns: the UV fluorescence anti-counterfeiting pattern extracted from a sample image of a genuine passport is used as a matching template and matched against the pattern extracted from the passport under test, so as to recognize its authenticity.

Color feature matching recognition algorithm

Initial matching recognition is performed on the extracted color features. For a descriptor vector $v_{ct}$ in the color feature descriptor matrix $V_{ct}$ and the descriptor vector $v_{cs}$ at the corresponding position in the color feature descriptor matrix $V_{cs}$, the normalized Euclidean distance $d_{HS}$ over the elements of channels $H$ and $S$ is calculated:
$$d_{HS} = \frac{1}{2} \left[ \left( \mu_H - \hat{\mu}_H \right)^2 + \left( \sigma_H - \hat{\sigma}_H \right)^2 + \left( \mu_S - \hat{\mu}_S \right)^2 + \left( \sigma_S - \hat{\sigma}_S \right)^2 \right]^{1/2}$$

where $d_{HS}$ takes values in $[0, 1]$; $\mu_H$ and $\hat{\mu}_H$ denote the first-order color moments of channel $H$ in descriptor vectors $v_{ct}$ and $v_{cs}$, respectively; $\sigma_H$ and $\hat{\sigma}_H$ denote the second-order color moments of channel $H$ in $v_{ct}$ and $v_{cs}$, respectively; and the color moments of channel $S$ are denoted analogously to those of channel $H$.

For channel $V$, the luminance comparison function and the contrast comparison function of the structural similarity index measure (SSIM) are used as difference measures:
$$l(x, y) = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad C(x, y) = \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}$$

For the $V$-channel color moments in the color feature descriptor vector, the first-order color moments replace the image pixel means in the original functions and the second-order color moments replace the image pixel standard deviations, giving the $V$-channel luminance difference $\hat{l}_V$ and contrast difference $\hat{C}_V$:
$$\hat{l}_V = 1 - \frac{2\mu_V\hat{\mu}_V + C_1}{\mu_V^2 + \hat{\mu}_V^2 + C_1}, \qquad \hat{C}_V = 1 - \frac{2\sigma_V\hat{\sigma}_V + C_2}{\sigma_V^2 + \hat{\sigma}_V^2 + C_2}$$
where $\mu_V$ and $\hat{\mu}_V$ denote the first-order color moments of channel $V$ in descriptor vectors $v_{ct}$ and $v_{cs}$, respectively, and $\sigma_V$ and $\hat{\sigma}_V$ the corresponding second-order color moments; $\hat{l}_V$ and $\hat{C}_V$ take values in $[0, 1]$; $C_1$ is a constant with value 0.0001, and $C_2$ is a constant with value 0.0009.

Multiplying the difference values of the three channels of the color feature descriptor gives:
$$G_c = d_{HS} \times \hat{l}_V \times \hat{C}_V \le T_{cd}$$

where $G_c$ is the difference degree of the color feature descriptor, with values in $[0, 1]$, and $T_{cd}$ is the matching threshold; if $G_c \le T_{cd}$, the color feature descriptors of the corresponding grid region are considered matched.

After the matching results of the individual grid regions are obtained, the ratio of matched descriptor vectors among all descriptor vectors is counted to judge the recognition result:
$$\frac{n}{M \times N} \ge T_c$$

where $T_c$ is the matching threshold, with values in $[0, 1]$, and $n$ is the number of successfully matched grid regions. If the overall matching degree is greater than or equal to $T_c$, the recognition result is genuine; $T_c$ is set to 0.9 in this experiment.
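Putting the color-matching stage together on two descriptor matrices (shape (N, M, 6), moments assumed scaled to [0, 1]); the per-grid threshold $T_{cd}$ is not stated in the paper, so the default below is an assumption.

```python
import numpy as np

def match_color(Vt: np.ndarray, Vs: np.ndarray, Tcd: float = 0.3,
                Tc: float = 0.9, C1: float = 0.0001, C2: float = 0.0009) -> bool:
    """Color matching: d_HS distance, SSIM-style V-channel differences,
    G_c = d_HS * l_V * C_V per grid, then matched-ratio >= T_c."""
    muH, sgH, muS, sgS, muV, sgV = np.moveaxis(Vt, -1, 0)
    muH2, sgH2, muS2, sgS2, muV2, sgV2 = np.moveaxis(Vs, -1, 0)
    d_hs = 0.5 * np.sqrt((muH - muH2) ** 2 + (sgH - sgH2) ** 2 +
                         (muS - muS2) ** 2 + (sgS - sgS2) ** 2)
    l_v = 1 - (2 * muV * muV2 + C1) / (muV ** 2 + muV2 ** 2 + C1)
    c_v = 1 - (2 * sgV * sgV2 + C2) / (sgV ** 2 + sgV2 ** 2 + C2)
    g_c = d_hs * l_v * c_v                     # per-grid difference degree
    return (g_c <= Tcd).mean() >= Tc           # n / (M*N) >= T_c
```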

Gradient feature matching recognition algorithm

The gradient feature matching step performs a second matching recognition on images that have passed color feature matching and produces the final recognition result.

For a descriptor vector $v_{gt}$ in the gradient feature descriptor matrix $V_{gt}$ and the descriptor vector $v_{gs}$ at the corresponding position in the gradient feature descriptor matrix $V_{gs}$, the normalized Euclidean distance $d_g$ is calculated and the matching judgment made:
$$d_g = \frac{1}{3} \left[ \sum_{i=1}^{9} \left( v_{gt,i} - v_{gs,i} \right)^2 \right]^{1/2} \le T_{gd}$$
where $v_{gt,i}$ is the $i$-th element of descriptor vector $v_{gt}$ and $v_{gs,i}$ is the $i$-th element of descriptor vector $v_{gs}$. $T_{gd}$ is the matching threshold; if $d_g \le T_{gd}$, the gradient features of the corresponding grid region are considered matched. In this experiment $T_{gd}$ is set to 0.1, i.e., when the gradient difference $d_g$ is at most 0.1, the gradient features of the corresponding grid region are considered successfully matched.

After the matching results of the individual grid regions are obtained, the ratio of matched descriptor vectors among all descriptor vectors is counted to judge the recognition result:
$$\frac{n}{M \times N} \ge T_g$$

where $T_g$ is the matching threshold, with values in $[0, 1]$, and $n$ is the number of successfully matched grid regions. If the overall matching degree is greater than or equal to $T_g$, the recognition result is genuine; otherwise it is forged. $T_g$ is set to 0.95 in this experiment.
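The gradient stage is analogous and completes the two-stage decision; a sketch over two (N, M, 9) descriptor matrices:

```python
import numpy as np

def match_gradient(Vgt: np.ndarray, Vgs: np.ndarray,
                   Tgd: float = 0.1, Tg: float = 0.95) -> bool:
    """Gradient matching: per-grid normalized Euclidean distance d_g
    against T_gd = 0.1, then matched-ratio >= T_g = 0.95."""
    d_g = np.sqrt(np.sum((Vgt - Vgs) ** 2, axis=-1)) / 3.0
    return (d_g <= Tgd).mean() >= Tg           # n / (M*N) >= T_g
```

An image is judged genuine only if it passes both the color stage and this gradient stage.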

Experimental results and analysis

The proposed passport UV security information enhancement algorithm and UV security pattern matching recognition algorithm are experimentally evaluated and analyzed.

Enhanced Analysis of Ultraviolet Anti-counterfeiting Information
Evaluation methodology

The experiments in this section use an objective evaluation methodology, turning image quality assessment into a numerical comparison of quantified image quality through specific mathematical models. The following common image quality evaluation criteria are used: mean (M), standard deviation (SD), sharpness (S), and information entropy (E).

The mean reflects the overall brightness of the UV anti-counterfeiting information enhanced image: the larger the mean, the higher the overall pixel values. The standard deviation reflects the dispersion of information in the UV-enhanced image: the larger the standard deviation, the more dispersed the image information and the higher the contrast. Sharpness reflects the average gradient of the UV-enhanced image and thus the degree of edge sharpening: the greater the sharpness, the more distinct the edges and the better the visual effect. Information entropy reflects the amount of information contained in the UV-enhanced image: the larger the entropy, the more information the image contains and the richer its detail.

Experimental comparison and analysis

To verify the performance of the proposed UV anti-counterfeiting information enhancement algorithm, it is compared with contrast-limited adaptive histogram equalization (CLAHE), the low-light contrast enhancement algorithm SEF, and the multiscale Retinex (MSR) algorithm. The experiments compare two perspectives: the fluorescent foreground image, i.e., the segmented image containing the UV fluorescent anti-counterfeiting pattern extracted from the enhanced image, and the non-fluorescent background image, i.e., the segmented image with the pattern removed. To quantify the performance gap between the proposed algorithm and the comparison algorithms, the experiments use a homemade dataset containing 300 passport test images.

The experiments apply CLAHE, SEF, MSR, and the proposed algorithm to the original passport security information images, compute the corresponding evaluation indexes (mean, standard deviation, sharpness, information entropy), and compute the change rate relative to the original image (e.g., change rate of the mean = mean of the enhanced image / mean of the original image − 1). All change rates are then averaged to observe how strongly each algorithm enhances the original passport security information image.
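The four indexes and the change rate reduce to a few lines; the gradient-magnitude definition of sharpness and the entropy base are assumptions, since the paper does not spell them out.

```python
import numpy as np
import cv2

def quality_metrics(img: np.ndarray) -> dict:
    """Mean (M), standard deviation (SD), sharpness (S, mean gradient
    magnitude), and information entropy (E) of a grayscale uint8 image."""
    g = img.astype(np.float64)
    gx = cv2.Sobel(g, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(g, cv2.CV_64F, 0, 1)
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return {"M": g.mean(),
            "SD": g.std(),
            "S": np.sqrt(gx ** 2 + gy ** 2).mean(),
            "E": float(-np.sum(p * np.log2(p)))}

def change_rate(enhanced: np.ndarray, original: np.ndarray, key: str) -> float:
    """Change rate of one index, e.g. enhanced mean / original mean - 1."""
    return quality_metrics(enhanced)[key] / quality_metrics(original)[key] - 1
```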

The comparison of foreground image enhancement effect is shown in Fig. 1, and the comparison of background image enhancement effect in Fig. 2. On the foreground images, the proposed algorithm achieves the best results in sharpness (S) and information entropy (E), improving on the original image by 55.75% and 0% in these two indicators, which shows that it preserves image detail well while better enhancing the edge information of the UV fluorescence anti-counterfeiting pattern. Although it is not the best in mean (M) and standard deviation (SD), it is close to the enhancement results of the CLAHE and SEF algorithms, with change rates of the mean and standard deviation of 66.26% and 63.85%. The MSR algorithm performs far worse than the other three algorithms because, for low-illumination images, its color fidelity is poor and over-enhancement causes color distortion, with an information entropy change of −12.71% and loss of detail in the passport security information image.

The proposed algorithm treats the non-fluorescent background specially. Compared with the overall enhancement of the other algorithms, it better suppresses background contrast and increases the difference between the fluorescent foreground and the non-fluorescent background: its change rates in mean (M), standard deviation (SD), sharpness (S), and information entropy (E) are −53.01%, −35.67%, −32.42%, and −22.66%, respectively, all below the values of the original image. The suppression effect is excellent.

Figure 1. Comparison of foreground image enhancement effect

Figure 2. Comparison of background image enhancement effect

Anti-counterfeiting pattern matching recognition analysis
Data sets

The dataset used for passport UV anti-counterfeit pattern matching recognition algorithm training and testing in this paper consists of two parts in total: the real UV passport image dataset Passport-2000 and the fake UV image dataset Fake-2000.

Because forged passport samples are scarce in real scenarios, the forged UV image samples in this experiment were produced by a variety of methods and captured with the laboratory's CIS scanner, flatbed scanner, and other professional imaging equipment. Four kinds of counterfeiting were used to forge passport UV images: color-distorted forged photographs, forgeries with mutilated anti-counterfeiting patterns, forgeries with spliced and tampered anti-counterfeiting features, and forgeries from photo photocopies. Color distortion simulates color shifts in the fluorescent ink material of some forged passports; pattern mutilation simulates missing security features caused by inadequate forging technology; splicing tampering simulates forgeries assembled by splicing and replacing parts of original passports of different versions; and photo copying simulates forgeries made from high-definition photocopies of other people's passport pictures.

Since a forged passport image need not preserve the integrity of the original security feature information, data augmentation is also used in this experiment to expand the captured forged passport samples. Specifically, geometric transformations of the forged samples are applied through affine transformation, translation, rotation, random cropping, and similar operations, and augmentation methods such as Cutout and CutMix are used to alter the internal structure of the forged passport sample images.

Assessment criteria

Two evaluation metrics are used in the passport UV security pattern feature extraction task to assess the proposed model: the mean absolute error (MAE) and $F_\beta^{\max}$. MAE indicates the similarity between the predicted saliency map and the ground-truth label: the smaller the MAE, the more similar the predicted saliency map is to the true label and the better the model. $F_\beta^{\max}$ is a holistic measure computed as a weighted harmonic mean of precision and recall: the larger $F_\beta^{\max}$, the better the performance of the model.

To verify the effectiveness of the proposed model in the passport security pattern matching recognition task, two evaluation indexes are used for the forensic results: the misidentification rate (MDR) and the missed-detection rate (UDR). MDR is the probability that authentic passport images are recognized as forged, and UDR is the probability that forged passport images are incorrectly recognized as authentic.

Feature extraction effect

To investigate the effectiveness of the proposed UV security pattern matching recognition algorithm for passport security pattern feature extraction, the algorithm in this section is compared with currently popular saliency detection algorithms. The comparison results are shown in Fig. 3, where (a) to (d) correspond to the number of parameters, the computation cost, the $F_\beta^{\max}$ value, and the MAE value of each method. The proposed feature extraction method achieves better results than MINet and EGNet while using fewer parameters: the $F_\beta^{\max}$ value improves by 3.3% and 3.6%, and the MAE value decreases by 2.2% and 0.9%, respectively. SAMNet has the smallest number of network parameters, but it performs poorly when facing passport anti-counterfeiting patterns of various scales against complex backgrounds.

Figure 3. Comparison results of different algorithms

Matching Recognition Effect

To verify the effectiveness of the proposed UV anti-counterfeiting pattern matching recognition algorithm, 50 images were selected from the Passport-2000 test set as reference sample images of genuine passports and 30 images as images to be tested, while 10 images each of the color distortion, pattern mutilation, splicing tampering, and photo-copying forgery categories were randomly selected from the Fake-2000 test set as images to be tested for the comparison experiments. In the model prediction stage, each genuine passport sample image and image under test form a sample pair for matching recognition, and the authenticity of the passport is determined according to the set thresholds.

Since there is little related research on forgery identification algorithms based on passport UV spectral images, four traditional feature extraction and matching algorithms, namely the SIFT matching algorithm, the ORB matching algorithm, the HOG algorithm, and the CFOG algorithm, as well as the SiameseNet algorithm, are selected for the comparison experiments. The HOG algorithm uses Euclidean distance to compute the similarity between the test passport image and the genuine passport image, and the CFOG algorithm uses the original SSD measure for matching.

The matching recognition comparison results of the different algorithms are shown in Table 1. Although the SIFT, ORB, HOG, and CFOG algorithms require no training samples, their accuracy is relatively low and unsuited to practical scenarios. Among them, CFOG is the most effective, with a misidentification rate of 16.3%. However, these four methods all perform poorly on the missed-detection rate, which exceeds 32%, indicating a limited ability to identify forged passports.

SiameseNet achieves misidentification and missed-detection rates of 6.1% and 9.2% when the training samples are sufficient. When samples are insufficient, however, it overfits severely, with the rates rising to 16.2% and 22.9%. The proposed UV anti-counterfeiting pattern matching recognition algorithm achieves a misidentification rate of 5.5% and a missed-detection rate of 6.9%, both lower than the comparison methods.

Table 1. Comparison results of matching recognition of different algorithms

Algorithm     Training sample size   MDR     UDR
SIFT          -                      0.262   0.358
ORB           -                      0.196   0.376
HOG           -                      0.171   0.347
CFOG          -                      0.163   0.328
SiameseNet    N=10                   0.162   0.229
SiameseNet    N=20                   0.092   0.221
SiameseNet    N=30                   0.084   0.184
SiameseNet    N=40                   0.075   0.139
SiameseNet    N=50                   0.061   0.092
Our method    -                      0.055   0.069
AI-enhanced passport forgery detection and reading system

Based on the preceding discussion of the passport UV spectral image forensic method, a passport forensic reading system is built. The passport identification and reading system consists of two parts: the image acquisition equipment and the host computer (PC) software. The image acquisition equipment mainly uses multi-spectrum, multi-angle light sources to illuminate the passport so that it shows different anti-counterfeiting features, captures passport images with the camera, and uploads the data to the host computer. At the same time, the image acquisition equipment receives instructions from the host computer to control some hardware modules. The following mainly introduces the software design of the host computer.

Host computer software framework

The host computer software plays the leading role in the whole passport authentication process: it controls the image acquisition equipment for data acquisition and authenticates the passport from the acquired images. The software is designed on the principle of “high cohesion, low coupling” with a layered structure, and each layer is managed modularly. The framework is divided into three layers: the user interaction layer, the functional logic layer, and the hardware abstraction layer.

The layered structure of the host computer software of the passport identification and reading system is shown in Figure 4. The user interaction layer implements the user interface, displaying real-time images, collected images, etc., and provides simple, clear controls for the user. The functional logic layer implements the software function modules, including most hardware function calls, calls to the forensic algorithm, and interactive functions. The hardware abstraction layer controls the image acquisition hardware. Since the hardware-control software development kit (SDK) is low-level, with many parameters, complex control logic, and redundant functions, the hardware abstraction layer encapsulates it into several controller modules to provide simpler and clearer hardware control logic.

Figure 4. Hierarchical structure of the host computer software

Functional Module Design

In the hardware abstraction layer, each hardware module is encapsulated into a controller with more cohesive functions. The camera controller implements camera control functions, including zoom, focus, white balance, shutter, aperture, brightness, exposure, image flip, gamma conversion, and others. The light source controller switches different light sources at different brightness levels by controlling the PWM waves of different interfaces.

The functional logic layer implements several functions by calling the relevant controllers and built-in algorithms, such as image saving, OCR, passport authentication, RFID, real-time display, and camera parameter control. The passport authentication module adopts the designed passport UV spectral image authentication model: after exhaustive collection of passport data through the intelligent device, the designed algorithms enhance the UV anti-counterfeiting information and then match and recognize the UV anti-counterfeiting pattern to realize passport anti-counterfeiting authentication.

The user interaction layer consists of the main interface and several secondary interfaces. The center area of the main interface is the real-time image area, which displays the live image captured by the camera; the image can be dragged with the left mouse button and digitally zoomed with the mouse wheel. The left area is the image preview bar, where captured images are temporarily saved; the user can save or delete images there via the preview-area buttons, shortcut keys, or the right-click menu, and can apply the image parameters from the preview area to reproduce the capture conditions. The lower area is the camera control interface, where buttons or sliders adjust camera brightness, focus, and zoom; brightness and focus adjustment each have automatic and manual modes. The lower right corner is divided into two parts: the left shows the opening and closing status of the device cover, and the right holds the sampling buttons. The upper area is the main function bar, including image mode switching, authentication, image opening, real-time image pause and playback, settings, acquisition settings, and so on. The right side of the main interface is the light source control bar.

There are also several secondary interfaces in the user interaction layer, mainly the system settings interface and the passport authentication interface. After the user clicks to start authentication, the system acquires images, calls the algorithm of this paper to perform forgery detection, and displays and saves the results to the database. Users can view historical authentication results in the database and export the historical data.

Conclusion

Intelligent passport forgery detection helps improve the efficiency and accuracy of passport authentication and is therefore of great research significance. In this paper, passport data are collected exhaustively through intelligent devices, intelligent passport forensic algorithms are studied on the collected passport UV spectral images, passport forensic methods based on a UV spectral image enhancement algorithm and UV anti-counterfeiting pattern matching recognition are designed, and the validity of the proposed methods is verified experimentally. The main results are as follows:

Compared with the original passport image, the UV anti-counterfeiting information enhancement algorithm achieves growth rates on the foreground image of 66.26%, 63.85%, and 55.75% in mean, standard deviation, and sharpness, while every index on the background image changes negatively. The designed algorithm thus effectively enhances the foreground image and suppresses interference from background information in the passport image.

In passport anti-counterfeiting pattern matching recognition, the proposed algorithm outperforms the MINet and EGNet algorithms in image feature extraction, with $F_\beta^{\max}$ values improved by 3.3% and 3.6% and MAE values decreased by 2.2% and 0.9%, respectively. In passport anti-counterfeiting pattern identification, the misidentification rate and missed-detection rate are 5.5% and 6.9%, lower than the comparison methods, showing better passport image forgery identification performance.

An AI-enhanced passport forgery recognition system composed of image acquisition equipment and host computer software is designed, and the proposed passport UV spectral image authentication method is applied in the passport authentication module of the functional logic layer of the host computer software.

The passport UV security image enhancement and forgery detection algorithms studied in this paper solve a series of problems faced by passport forgery detection on UV spectral images and achieve good forensic results. The algorithms still have room for improvement: as counterfeiting technology develops, more accurate passport forgeries will appear, and relying solely on anti-counterfeiting features under the ultraviolet spectrum may not be enough to guarantee detection accuracy. The collection and extraction of other hard-to-counterfeit passport features, such as holographic anti-counterfeiting patterns and microtext, can be studied and applied to the passport authentication algorithm to realize multimodal passport authentication.
