Songklanakarin J. Sci. Technol. 34 (2), 189-194, Mar. - Apr. 2012

Original Article

Challenges at different stages of an iris based biometric system

Sunil Kumar Singla and Parul Sethi*

Department of Electrical and Instrumentation Engineering, Thapar University, Patiala, India.

Received 12 November 2011; Accepted 15 February 2012

Abstract

Iris recognition has been used for authentication for the past few years and is capable of positive/negative authentication of an individual without any physical contact or intervention. The technique is used mainly because of the uniqueness, stability, and reliability of the iris, but an iris-based recognition system still faces many challenges. This paper presents the difficulties faced in the different modules of an iris biometric system: the sensor module, the preprocessing module, the feature extraction module, and the matching module.

Keywords: iris, recognition, uniqueness, biometric, sensor

1. Introduction

Identification of humans is a goal as ancient as humanity itself. As technology and services have developed in the modern world, human activities and transactions have proliferated in ways that require rapid and reliable personal identification. Examples include passport control, computer login control, bank automatic teller machines (ATMs), premises access control, and security systems (Daugman, 1994). Biometric systems rely on physical or behavioral traits, such as fingerprints, iris, face, voice, and hand geometry, to establish the identity of an individual (Jain et al., 2008). Biometric recognition is therefore a rapidly evolving field, with applications ranging from accessing one's computer to gaining entry into a country. The iris is one of the most important biometric features and is used in many applications, such as national border controls, computer login, cell phone and other wireless-device-based authentication, secure access to bank accounts, ticketless travel, premises access control (home, office, laboratory, etc.), driving licenses, and other personal authentication. However, iris recognition cannot be

* Corresponding author. Email address: [email protected]

used for forensic applications, as one does not leave one's iris at the scene of a crime. The iris is popular as a biometric feature because of its uniqueness, stability, permanence, and reliability; in this paper, however, we discuss the different problems encountered in iris recognition. The iris is a protected internal organ whose random texture is highly reliable and stable throughout life, and it can serve as a kind of living password that one need not remember but always carries along. Every iris is distinct: even the two irises of the same individual differ, as do the irises of twins. Iris patterns are formed before birth and do not change over the course of a lifetime (Nanavati et al., 2002). Even medical procedures such as refractive surgery, cataract surgery, and cornea transplants do not affect the recognizable characteristics (Rhodes, 2002). Because of the natural protection of the eyes in the face, and of the iris beneath the cornea, the iris is also resistant to injury, making it highly stable as a recognizable characteristic. Iris recognition is basically divided into the following steps: i) acquiring an image of an eye of the human to be identified; ii) isolating and defining the iris of the eye within the image, which includes defining a circular pupillary boundary between the iris and pupil portions of the image and another circular boundary between the iris and sclera portions; iii) analyzing the iris to



generate a presenting iris code; iv) comparing the presenting code with a previously generated reference iris code to find a measure of similarity; and v) calculating a confidence level for the decision. These steps are shown in Figure 1. In this paper, we focus mainly on the difficulties in the different modules of iris recognition: the sensor module, the preprocessing module, the feature extraction module, and the template matching module.

2. Problems in Different Modules of Iris Recognition

The various problems that occur at the different stages of iris recognition are discussed below.

2.1 Sensor module

The sensor is the first stage of any biometric system, and the success of an iris-based recognition system depends heavily on the quality of the image captured by the sensor. If the captured image is of low quality and contains random specular reflections in and around the iris, the performance of the iris-based biometric system degrades considerably (CY Lab, 2011). In 1996, Sensar Inc. and the David Sarnoff Research Center (Hanna et al., 1996) developed a system that would actively find the eye of the nearest user standing between 1 and 3 feet from its cameras. The system used two wide field-of-view cameras and a cross-correlation-based stereo algorithm to search for the coarse location of the head, and a template-based method to search for the characteristic arrangement of features in the face. A narrow field-of-view (NFOV) camera would then confirm the presence of the eye and acquire the eye image. Two incandescent lights, one on each side of the camera, illuminated the face, and the NFOV camera's eye-finding algorithm searched for the specular reflections of these lights to locate the eye. Park et al.
(2005) proposed an approach for fast acquisition of in-focus iris images, but it exploits the specular reflections that can be expected in the pupil region of iris images. Several researchers have investigated how the working volume of an iris acquisition system can be expanded. Fancourt et al. (2005) demonstrated that it is possible to acquire images at a distance of up to ten meters that are of sufficient quality to support iris biometrics; however, their system required very constrained conditions. He et al. (2006) developed a theoretical framework for the acquisition of in-focus images, discussed the differences between fixed-focus and auto-focus imaging devices, and illustrated the effects of illumination at different near-infrared wavelengths, concluding that "illumination outside 700-900 nm cannot reveal the iris' rich texture". Identity recognition is also impacted significantly when scanned images are imperfect due to lighting, motion, blur, or physical problems such as occluded irises, and

others (Eyetrackingupdate, 2010). There are three main problems of concern: defocus, motion blur, and occlusion. A system that employs a fixed-focus optical lens easily produces defocused iris images. Iris scanners work only when targets are stationary and within very close range, since it is very difficult to capture iris images from moving targets: motion-blurred images are captured by a CCD sensor in interlaced scan mode, where a frame is combined from two fields with an interval of 20 ms or less, and the resulting image shows obvious interlacing lines in the horizontal direction (Ma et al., 2003). An occluded image is one in which most of the iris area is covered by the eyelid and eyelashes; this often happens when the subject blinks while the images are being taken (Wei et al., 2006). Bachoo et al. (2005) approached the detection of eyelash occlusion with the gray-level co-occurrence matrix (GLCM) pattern analysis technique; the challenges for this approach are choosing the correct window size and dealing with windows that contain a mixture of texture types. Figure 2 shows these three problems. Most iris recognition devices can capture only one image of an iris at a time. After each capture, the device user must manually enter several pieces of identifying information, including whether the image is of a left eye or a right eye. Hence, the single capture ability of iris

Figure 1. Various steps of a biometric recognition system.
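The stages of Figure 1 can be sketched as a minimal pipeline. This is an illustrative skeleton only: the segmentation and encoding bodies are placeholders (a real system would fit circular boundaries and apply Gabor-phase coding), and all names, sizes, and values here are assumptions, not the authors' implementation.

```python
import numpy as np

def segment_iris(eye_img):
    """Placeholder for step ii): return pupillary and limbic boundaries
    as (x, y, r) circles. A real system would fit these with, e.g., an
    integro-differential operator or a circular Hough transform."""
    h, w = eye_img.shape
    return (w // 2, h // 2, 30), (w // 2, h // 2, 90)   # dummy circles

def encode_iris(eye_img, pupil, limbus):
    """Placeholder for step iii): produce a binary iris code.
    A fixed pseudo-random code stands in for Gabor-phase bits."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 2, size=2048, dtype=np.uint8)

def match(code_a, code_b):
    """Step iv): fractional Hamming distance between two iris codes
    (0.0 = identical, ~0.5 = statistically independent irises)."""
    return np.count_nonzero(code_a != code_b) / code_a.size

eye = np.zeros((280, 320), dtype=np.uint8)   # step i): dummy 320x280 capture
pupil, limbus = segment_iris(eye)
code = encode_iris(eye, pupil, limbus)
print(match(code, code))                     # 0.0 for identical codes
```

The confidence level of step v) is then derived from how far the resulting distance lies from the impostor distribution.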

Figure 2. Image quality with (a) clear image, (b) defocused image, (c) motion blurred image, and (d) occluded image (Ma et al., 2003).
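Defocus, the first of the three problems shown in Figure 2, is commonly screened for with a simple sharpness measure. The variance-of-Laplacian score below is one standard choice; it is a sketch on synthetic data, not the focus kernel used by any of the cited systems.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def focus_score(img):
    """Variance of the Laplacian response: low values indicate defocus."""
    img = img.astype(np.float64)
    h, w = img.shape
    # 'valid' 2-D correlation with the 3x3 kernel (the kernel is
    # symmetric, so this equals convolution)
    resp = sum(LAPLACIAN[i, j] * img[i:h - 2 + i, j:w - 2 + j]
               for i in range(3) for j in range(3))
    return resp.var()

rng = np.random.default_rng(1)
sharp = rng.integers(0, 256, size=(280, 320)).astype(np.float64)
# crude defocus: replace each pixel by the mean of itself and 4 neighbours
blurred = (sharp[:-2, 1:-1] + sharp[2:, 1:-1] + sharp[1:-1, :-2]
           + sharp[1:-1, 2:] + sharp[1:-1, 1:-1]) / 5.0

print(focus_score(sharp) > focus_score(blurred))   # True
```

Images whose score falls below a calibrated threshold would be rejected before segmentation is attempted.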

recognition devices slows the data collection process and increases the likelihood that iris images will be misidentified and mislabeled (CY Lab, 2011). A further problem is that the optical system may introduce image rotation depending on the eye, camera, or subject position, which is a problem for some algorithms. Daugman (1993, 2003) computes the iris code in a single canonical orientation and compares it with several orientations by scrolling, but other iris algorithms are invariant to rotation (Avila et al., 2005). Because of optical issues, subject motion, and illumination limitations, acquiring a good-quality iris image is very difficult (Jain et al., 2004; Matey et al., 2006; Daugman, 2007). Even when a good camera is used, the result is often useless for iris recognition. Autofocus, if applied, usually concentrates on the face rather than on the iris itself; when autofocus is disabled, the distance between the head and the camera must be kept stable, fixed manually by the camera operator or by the user. This kind of acquisition reduces image quality and is very uncomfortable for the user (Lorenz et al., 2008).

2.2 Preprocessing module

Iris image preprocessing is one of the most important steps in an iris recognition system and affects the accuracy of matching. It includes iris localization and iris image quality evaluation. Iris localization refers to the detection of the inner and outer boundaries of the iris (Pan et al., 2005). The localization algorithm aims for fast and accurate determination of the iris boundaries; in practice, however, accurate algorithms require a long time to locate the iris (Chowhan et al., 2009). The first step in iris localization is to detect the pupil, which is the black circular part surrounded by iris tissue.
As the pupil is the largest black area in the intensity image, its edges can be detected easily from the binarized image by applying a suitable threshold to the intensity image. Binarization becomes problematic, however, for persons with dark irises, and pupil localization fails in such cases (Gupta et al., 2006). To convert the original image to a binary image, a reasonable threshold value must be chosen. First, the histogram of the original iris image is analyzed. Figure 4, the gray-level histogram of Figure 3, has three peaks: the image intensity values in the vicinity of the first peak represent the pupil region, while the values near the second and third peaks represent the iris region and the sclera region, respectively. The intensity value at the dip between the first and second peaks is chosen as the threshold value, and the original iris image is converted to a binary image. In the binary image there are some bright spots in the pupil, generated by the illumination, which reduce the accuracy of localization (Pan et al., 2005). Another important consideration is that the pupil, in most cases, is not a perfect circle. Since it is a


muscle-filled organ (trabeculae), its contraction and dilation movements distort its pseudo-circular outline more and more (Gonzaga et al., 2009). Collectively, the preprocessing of images is a lengthy stage and is therefore difficult to perform manually. The alternative, an automatic procedure, always uses the same naming sequences at the different stages of processing; if a disastrous problem occurs here, it permanently affects the sequences of the input images (Skyimaging, 2011). The second part of iris image preprocessing is image quality evaluation. In practice, the quality of some iris images is so poor that matching errors result (Lei et al., 2003). Image quality assessment plays an important role in automated biometric systems for two reasons: (1) system performance (recognition and segmentation) and (2) interoperability. Low-quality images suffer from poor lighting, defocus blur, off-angle gaze, and heavy occlusion, which have a negative impact on even the best available segmentation algorithms (Kalka et al., 2006). Preprocessing also includes iris normalization, iris image enhancement, and denoising. The irises of different people may be captured at different sizes, and the size of the iris of the same person may change with variations in illumination. Such elastic deformations in iris

Figure 3. An original iris image (size 320 × 280).

Figure 4. Gray level histogram of the original iris image, the intensity value of the dip between the first and the second peaks is chosen to be the threshold value.
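The threshold selection illustrated in Figure 4 (pick the valley between the pupil and iris peaks of the gray-level histogram) can be sketched on a synthetic eye image as follows; the peak-finding heuristic and all intensity values are illustrative assumptions, not the cited algorithm.

```python
import numpy as np

def dip_threshold(img, smooth=5, prominence=0.05):
    """Binarize at the dip between the first two histogram peaks.
    Returns (pupil_mask, threshold). Peak detection is a crude
    heuristic: local maxima above `prominence` * max bin count."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    floor = prominence * hist.max()
    peaks = [i for i in range(1, 255)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]
             and hist[i] > floor]
    p1, p2 = peaks[0], peaks[1]                 # pupil peak, iris peak
    t = p1 + int(np.argmin(hist[p1:p2 + 1]))    # the dip between them
    return img <= t, t

# synthetic 320x280 eye: pupil ~ 20, iris ~ 120, sclera ~ 220 (noisy)
rng = np.random.default_rng(0)
img = np.clip(rng.normal(220, 10, (280, 320)), 0, 255).astype(np.uint8)
yy, xx = np.mgrid[:280, :320]
dist2 = (xx - 160) ** 2 + (yy - 140) ** 2
img[dist2 < 100 ** 2] = np.clip(
    rng.normal(120, 10, (dist2 < 100 ** 2).sum()), 0, 255).astype(np.uint8)
img[dist2 < 40 ** 2] = np.clip(
    rng.normal(20, 5, (dist2 < 40 ** 2).sum()), 0, 255).astype(np.uint8)

mask, t = dip_threshold(img)
true_pupil = dist2 < 40 ** 2
print(t, abs(mask.sum() - true_pupil.sum()) / true_pupil.sum() < 0.05)
```

On a dark iris, the pupil and iris peaks merge and this valley disappears, which is exactly the failure mode reported by Gupta et al. (2006).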



texture affect the results of iris matching. To achieve more accurate recognition results, it is necessary to compensate for these deformations through normalization (Ma et al., 2002). Normalization maps the annular iris region onto a block of fixed dimensions, for example by unwrapping it into a dimensionless pseudo-polar coordinate system, so that irises captured at different sizes and pupil dilations become directly comparable. The problems related to normalization are that the image has low contrast and may have non-uniform illumination caused by the position of the light sources, so image enhancement by histogram equalization and noise removal by filtering with a low-pass Gaussian filter are applied. The problem with histogram equalization is that it can produce undesirable effects when applied to images with low color depth (bits per pixel): for example, applied to an 8-bit image displayed in 8-bit gray scale, it will further reduce the color depth (the number of unique shades of gray) of the image. Histogram equalization works best on images with a much higher color depth than the palette size, such as continuous data or 16-bit grayscale images (Moorthi et al., 2010). Proenca et al. (2006) evaluated four different clustering algorithms for preprocessing the images to enhance image contrast; the fuzzy k-means clustering algorithm applied to the position and intensity feature vector was found to be the best. They compared their segmentation algorithm with those of Daugman (1993), Tuceryan (1994), Wildes (1997), and Camus et al. (2002), testing the methods on the UBIRIS dataset, which contains one session of high-quality images and another of lower-quality images. Wildes' original methodology correctly segmented 98.68% of the images in the good-quality dataset and 96.68% in the poorer-quality dataset. The algorithm by Proenca et al.
(2006) performed second best, with 98.02% accuracy on the good dataset, but it had the smallest performance degradation, with 97.88% accuracy on the poorer-quality dataset. Denoising is done using either the mean filter or the median filter. The main problem with the mean filter is that a single pixel with a very unrepresentative value affects the mean of all the pixels in its neighborhood, and when the filter neighborhood straddles an edge, the filter interpolates new values for the pixels on the edge and blurs that edge; this is a serious problem when a sharp-edged image is required in the output. The median filter avoids this problem and is often better at reducing noise than the mean filter, but it takes longer to compute.

2.3 Feature extraction

The next module is feature extraction. High-dimensional problems are becoming increasingly common, and with high-dimensional data it is difficult to understand the underlying structure (Noh et al., 2005). Additionally, the storage, transmission, and processing of high-dimensional data place great demands on systems. All these are aspects

of the computational and data analysis problems (Chowhan et al., 2009). Iris feature extraction is the crucial stage of the whole iris recognition process for personal identification (Noh et al., 2005). A major approach to iris recognition is to generate feature vectors corresponding to individual iris images and to perform matching based on some distance metric (Daugman, 1993; Ma et al., 2004). One problem with feature-based iris recognition is that matching performance is significantly influenced by the many parameters of the feature extraction process, which may vary with the environmental factors of image acquisition (Miyazawa et al., 2005). The human eye is sensitive to visible light: the pupil contracts and dilates under its effect, and the iris and the sclera reflect strongly within this range. Capturing an image of the human iris under visible light therefore raises the problem of keeping the natural reflections on the globe of the eye, and on the iris and sclera surfaces, from degrading the quality of the digital image. NIR illumination yields images of good resolution and definition, but because NIR light is not visible to the human eye, it does not provide the stimuli that make the pupil perform its contraction and dilation movements; image quality is compromised, making the extraction of features difficult (Gonzaga et al., 2009), and such images do not provide enough quality for dependable biometric recognition.

2.4 Template matching

In this module the template is compared with the templates stored in a database until either a matching template is found and the person is identified, or no match is found and the person remains unidentified. The matching process can be performed with the help of an image pyramid.
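The core comparison in this module can be sketched with the fractional Hamming distance evaluated over several circular shifts, i.e. the rotation "scrolling" mentioned in Section 2.1. The code length and shift range below are illustrative assumptions on synthetic codes, not a particular deployed system.

```python
import numpy as np

def hamming(a, b):
    """Fractional Hamming distance: 0.0 = identical codes,
    ~0.5 = codes from statistically independent irises."""
    return np.count_nonzero(a != b) / a.size

def best_match(probe, template, max_shift=8):
    """Compare at several circular shifts of the probe code and keep
    the smallest distance, compensating for in-plane eye rotation."""
    return min(hamming(np.roll(probe, s), template)
               for s in range(-max_shift, max_shift + 1))

rng = np.random.default_rng(42)
template = rng.integers(0, 2, size=2048, dtype=np.uint8)   # enrolled code
rotated = np.roll(template, 5)        # same iris, eye rotated at capture
impostor = rng.integers(0, 2, size=2048, dtype=np.uint8)   # different iris

print(best_match(rotated, template))   # 0.0 -- a shift of -5 realigns it
print(best_match(impostor, template))  # close to 0.5
```

A decision threshold on this distance sets the trade-off between the false accept and false reject rates discussed below; to speed up searches over many stored templates, such comparisons are often organized coarse-to-fine over an image pyramid.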
This is a series of images at different scales, formed by repeatedly filtering and sub-sampling the original image to generate a sequence of reduced-resolution images (Adelson et al., 1984). More than one template, at different scales and rotations, should be used, since using a single template decreases accuracy. The pyramid improves the execution speed of the comparison; however, the computation time still scales linearly with the size of the template set (Cole et al., 2004). Two error rates need to be taken into consideration. A false reject occurs when the biometric measurement taken from the live subject fails to match that subject's template stored in the biometric system; the frequency of such errors is the false reject rate (FRR). A false accept occurs when the measurement taken from the live subject is so close to another subject's template that a match is declared by mistake; its frequency is the false accept rate (FAR) (Khaw, 2002). Inadequate training of users at the initial enrollment period causes problems both at enrollment and at subsequent authentications.

3. Conclusion

Iris recognition provides one of the most secure methods of authentication because of its unique characteristics, but there are certain hurdles in this biometric method: each of its modules suffers certain kinds of difficulties, which have been discussed in detail in this paper. In conclusion, substantial research work is required at each and every stage of iris-based biometric systems in order to achieve very low false acceptance and false rejection rates.

References

Adelson, E.H., Anderson, C.H., Bergen, J.R., Burt, P.J. and Ogden, J.M. 1984. Pyramid methods in image processing. Radio Corporation of America Engineer. 29, 33-41.

Avila, S.C. and Reillo, S.R. 2005. Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation. Pattern Recognition. 38, 231-240.

Bachoo, A.K. and Tapamo, J.R. 2005. Texture detection for segmentation of iris images. Proceedings of the Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists, South Africa, September 20-22, 2005, 236-243.

Camus, T.A. and Wildes, R.P. 2002. Reliable and fast eye finding in close-up images. Proceedings of the International Conference on Pattern Recognition, Quebec, Canada, August 11-15, 2002, 389-394.

Chowhan, S.S. and Shinde, G.N. 2009. Evaluation of statistical feature encoding techniques on iris images. Proceedings of the World Research Institutes World Congress on Computer Science and Information Engineering, Los Angeles, CA, March 31 - April 2, 2009, 71-75.

Cole, L., Austin, D. and Cole, L. 2004. Visual object recognition using template matching. Proceedings of the Australian Conference on Robotics and Automation, Canberra, Australia, December 6-8, 2004, 1-8.

CY Lab (2011). Iris and face recognition. http://www.cylab. [October 5, 2011]

Daugman, J. 1993. High confidence visual recognition of persons by a test of statistical independence. Institute of Electrical and Electronic Engineers Transactions on Pattern Analysis and Machine Intelligence. 15, 1148-1161.

Daugman, J. 1994. Biometric personal identification system based on iris analysis. United States Patent, March 1, 1994.

Daugman, J. 2003. The importance of being random: Statistical principles of iris recognition. Pattern Recognition. 36, 279-291.

Daugman, J. 2007. New methods in iris recognition. Institute of Electrical and Electronic Engineers Transactions on Systems, Man, and Cybernetics. 37, 1167-1175.

Eyetrackingupdate (2010). Beware of problems with iris recognition. /2010/11/03/beware-problems-iris-recognition/ [November 7, 2011]


Fancourt, C., Bogoni, L., Hanna, K., Guo, Y., Wildes, R., Takahashi, N. and Jain, U. 2005. Iris recognition at a distance. Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication, Hilton Rye Town, New York, U.S.A., July 20-22, 2005, 1-13.

Gonzaga, A. and Dacosta, R.M. 2009. Extraction and selection of dynamic features of the human iris. Proceedings of the XXII Brazilian Symposium on Computer Graphics and Image Processing, Sarajevo, Brazil, October 11-15, 2009, 202-208.

Gupta, P., Mehrotra, H., Rattani, A., Chatterjee, A. and Kaushik, A.K. 2006. Iris recognition using corner detection. Proceedings of the 23rd International Biometric Conference, Montreal, Canada, July 16-21, 2006, 1-5.

Hanna, K., Mandelbaum, R., Mishra, D., Paragano, V. and Wixson, L. 1996. A system for non-intrusive human iris acquisition and identification. Proceedings of the International Association of Pattern Recognition Workshop on Machine Vision Applications, Tokyo, Japan, November 12-14, 1996, 200-203.

He, Y., Cui, J., Tan, T. and Wang, Y. 2006. Key techniques and methods for imaging iris in focus. Proceedings of the 18th International Conference on Pattern Recognition, Hong Kong, September 18, 2006, 557-561.

Jain, A.K., Ross, A. and Prabhakar, S. 2004. An introduction to biometric recognition. Institute of Electrical and Electronic Engineers Transactions on Circuits and Systems for Video Technology. 14, 4-20.

Jain, A.K., Flynn, P. and Ross, A. 2008. Handbook of Biometrics, Springer, New York, U.S.A., pp. 60.

Kalka, N.D., Zuo, J., Schmid, N.A. and Cukic, B. 2006. Image quality assessment for iris biometric. Proceedings of the Society of Photo Optical Instrumentation Engineers, Orlando, FL, U.S.A., April 17, 2006, 1-2.

Khaw, P. 2002. Iris Recognition Technology for Improved Authentication. Sysadmin Audit Networking and Security, Security Essentials Practical Assignment, 1.3, 1-14.

Lei, X. and Fei, S.P. 2003. A quality evaluation method of iris images. Chinese Journal of Stereology and Image Analysis, 7, 108-112.

Lorenz, M.G., Mengibar, L., Liu, J. and Fernandez, B. 2008. User-friendly biometric camera for speeding iris recognition systems. Proceedings of the 42nd Annual Institute of Electrical and Electronic Engineers International Carnahan Conference on Security Technology, Prague, October 13-16, 2008, 241-246.

Ma, L., Tan, T., Wang, Y. and Zhang, D. 2003. Personal identification based on iris texture analysis. Institute of Electrical and Electronic Engineers Transactions on Pattern Analysis and Machine Intelligence. 25, 1519-1533.



Ma, L., Tan, T., Wang, Y. and Zhang, D. 2004. Efficient iris recognition by characterizing key local variations. Institute of Electrical and Electronic Engineers Transactions on Image Processing. 13, 739-750.

Ma, L., Wang, Y. and Tan, T. 2002. Iris recognition based on multichannel Gabor filtering. Proceedings of the 5th Asian Conference on Computer Vision, Melbourne, Australia, January 23-25, 2002, 1-5.

Matey, J.R., Naroditsky, O., Hanna, K., Kolcyznski, R., LoIacono, D.J., Mangru, S., Tinker, M., Zappia, T.M. and Zhao, W.Y. 2006. Iris on the move: Acquisition of images for iris recognition in less constrained environments. Proceedings of the Institute of Electrical and Electronic Engineers. Princeton, U.S.A., November 11, 2006, 1936-1947.

Miyazawa, K., Ito, K., Aoki, T., Kobayashi, K. and Nakajima, H. 2005. An efficient iris recognition algorithm using phase-based image matching. Proceedings of the Institute of Electrical and Electronic Engineers International Conference on Image Processing, Sendai, Japan, September 11-14, 2005, 49-52.

Moorthi, M., Arthanari, M. and Sivakumar, M. 2010. Preprocessing of video image with unconstrained background for drowsy driver detection. International Journal of Computer Science and Information Security, 8, 145-151.

Nanavati, S., Thieme, M. and Nanavati, R. 2002. Biometrics: Identity Verification in a Networked World, John Wiley & Sons, New York, U.S.A., pp. 1-300.

Noh, S., Bae, K., Park, K.R. and Kim, J. 2005. A new iris recognition method using independent component analysis. Institute of Electronics, Information and Communication Engineers Transactions on Information and Systems, E88-D, 2573-2581.

Pan, L. and Xie, M. 2005. Research on iris image preprocessing algorithm. Proceedings of the Institute of Electrical and Electronic Engineers International Symposium on Communications and Information Technology, Chengdu, China, October 12-14, 2005, 161-164.

Park, K.R. and Kim, J. 2005. A real-time focusing algorithm for iris recognition camera. Institute of Electrical and Electronic Engineers Transactions on Systems, Man, and Cybernetics, 35, 441-444.

Proenca, H. and Alexandre, L.A. 2006. Iris segmentation methodology for non-cooperative recognition. Proceedings of the Institute of Electrical and Electronic Engineers on Vision, Image and Signal Processing, Covilha, Portugal, April 6, 2006, 199-205.

Rhodes, K.A. 2002. National Preparedness: Technologies to Secure Federal Buildings. United States General Accounting Office (GAO), April 25, 2002, 1-72.

Sky imaging (2011). Automatic preprocessing of planetary images with Iris. [October 15, 2011]

Tuceryan, M. 1994. Moment based texture segmentation. Pattern Recognition Letters, 15, 659-668.

Wei, Z., Tan, T., Sun, Z. and Cui, J. 2006. Robust and fast assessment of iris image quality. Proceedings of the International Conference on Advances in Biometrics, Hong Kong, China, January 5-7, 2006, 464-470.

Wildes, R.P. 1997. Iris recognition: An emerging biometric technology. Proceedings of the Institute of Electrical and Electronic Engineers (Special Issue on Automated Biometrics), 85, 1348-1363.