
Abstract

This work integrated Internet of Things (IoT) techniques with the Principal Component Analysis (PCA) algorithm to optimize facial recognition. An internet-based, real-time facial recognition system that can detect and track faces was developed using the PCA algorithm; a name label indicating each identified face was also incorporated so that a face can be tracked easily when multiple faces are identified at the same time. A router was used to set up a wireless connection with an internet protocol (IP) camera via the IP camera firmware. As the router broadcast this connection, a link was set up with a personal computer through the Network and Sharing Centre on the computer, thereby creating a wireless connection between the computer and the camera over the internet. A graphical interface designed in MATLAB was used to access the feed from the camera, which the PCA algorithm used to detect and track faces in real time. Security features such as a timestamp and a database were also integrated into the developed system.

Keywords: IoT, IP camera, MATLAB, PCA, Real-time, Router.

Received: 17 July 2019 / Revised: 19 August 2019 / Accepted: 23 September 2019 / Published: 30 October 2019

Contribution/ Originality

This study integrates the PCA algorithm with a GUI designed in MATLAB to detect, recognize and match faces in real time. Most importantly, a label was created to easily identify each face when multiple faces are detected concurrently.


1. INTRODUCTION

Across the globe, security challenges have been on the rise with recent happenings, from the mass shootings and terrorist attacks in America, England and Germany [1-3] to the piracy activities off the Indian Ocean coast of Somalia, Africa [4]. Nigeria has had her share of the crises, with major terrorist attacks dating back to 2009 [5] and sharp rises in banditry, kidnapping and the so-called ‘herdsmen’ attacks across the country. These crises have caused many challenges in Nigeria, resulting in a reduction in the free flow of goods and services and the suspension of oil exploration in some parts of the country [6].

Recently, the Nigerian government declared its intention to deploy closed-circuit television (CCTV) cameras across the states of the country to improve the security situation. By providing real-time feeds of flashpoints and other areas prone to violence, such cameras allow close monitoring, deter crime and improve overall safety. It is a good strategy to have functional CCTV systems across the country, but it is better to have one that incorporates a facial recognition system. The advantages include automated object identification, integration of artificial intelligence (AI) and big data analysis, access control and improved security.

A facial recognition system is simply a system that obtains, processes and matches a facial image or video data against pre-defined data [7] for recognition, based on attributes associated with the face. These attributes include facial geometry, facial expressions, hair, eyes and eyebrows, heavy make-up, cheekbones and the lighting conditions of the captured image/video, among others, and they determine the overall performance of a facial recognition system. They are independent of each other but can be strongly related, as seen when heavy make-up and lipstick are used to classify a face as female and a goatee as male. Deep learning and machine learning approaches have also been developed to extract face features [8, 9]. Regardless of the method used, the process involved in a facial recognition system can be simplified into the blocks in Figure 1.

Figure-1. Schematic block of a facial recognition system.

Face detection simply locates a face in an image or a video; normalization converts the image/video data into an appropriate size and format. Facial classification and representation involve the extraction of key features such as the eyes, the distance between the eyes and chin, lip thickness, the forehead and several others. These extracted features form a unique signature for the face and are used to represent its identity. Face matching compares the extracted face features with existing ones; if there is a match, a user-defined function is triggered [10, 11]. In summary, the objective is to extract the important information in a face image, encode it as efficiently as possible, and compare the encoded data with a database of models encoded in the same way.
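As a rough illustration of the detection-and-labelling stage (a minimal sketch, not the authors' implementation), the following MATLAB fragment uses the Viola-Jones cascade detector from the Computer Vision Toolbox as a stand-in detector; the image file name and label format are assumptions for illustration only.

% Illustrative detection-and-labelling stage (Computer Vision Toolbox assumed).
% 'sample_frame.jpg' is a hypothetical captured frame, not taken from the paper.
faceDetector = vision.CascadeObjectDetector();       % Viola-Jones face detector
frame  = imread('sample_frame.jpg');
bboxes = step(faceDetector, frame);                  % one [x y width height] row per face
for k = 1:size(bboxes, 1)
    % Annotate each face with an index label so that several faces
    % detected at the same time can be tracked individually.
    frame = insertObjectAnnotation(frame, 'rectangle', bboxes(k, :), sprintf('Face %d', k));
end
imshow(frame);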

Many methods are available for facial recognition [12-15]; one of the most prevalent is the Principal Component Analysis (PCA) technique, which relies on the functionality and simplicity of linear algebra and matrix algebra. PCA generates a set of eigenvectors (eigenfaces) by performing a mathematical procedure on a set of images showing human faces, and these are used in identifying the faces. PCA is a statistical technique for image recognition that identifies patterns in a dataset and expresses the data in a way that highlights their similarities and differences. The technique reduces the dimensionality of a data set containing many mutually correlated variables, whether strongly or marginally correlated, while retaining the variations present in the data. Equation 1 summarizes how the principal components are formed by taking the transpose of the feature vector and left-multiplying it with the transpose of the scaled version of the original dataset [16, 17].
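Equation 1 is not reproduced legibly here; based on the description above and the standard PCA formulation [16, 17], it presumably takes a form similar to the following, where FeatureVector holds the selected eigenvectors as columns and ScaledData is the mean-adjusted data set (the symbol names are illustrative, not the authors'):

\text{FinalData} = \text{FeatureVector}^{T} \times \text{ScaledData}^{T} \qquad (1)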

The PCA technique has the advantages of a low memory requirement, low computational complexity and short execution time. This makes it ideal for a real-time facial recognition system, where tight synchronization between capture and recognition is required.

2. RELATED WORKS

A real-time face recognition system reported by Susheel Kumar, et al. [18] integrates several approaches: AdaBoost with Haar cascades is used together with a simple, fast PCA and Linear Discriminant Analysis (LDA). These were used for face detection, recognition and matching to boost performance. The developed system was used to take attendance in the laboratory with good accuracy. Another work, by Lakhina, et al. [19] used PCA to diagnose anomalies in network traffic. The approach was based on separating the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions, a separation achieved with Principal Component Analysis; an important use of PCA there was to explore the intrinsic dimensionality of a set of data points. The work was able to accurately sense when a volume anomaly occurred, correctly identify the underlying source-destination pair of the anomaly, and accurately estimate the amount of traffic involved in the anomalous source-destination flow. Abdel-Qader, et al. [20] used a PCA-based algorithm to extract cracks in concrete bridge decks with the aim of automating inspection rather than relying on human inspectors. PCA was used to identify clusters in a database of bridge images. Experimental results showed enhanced crack detection with the PCA-based images and increased the overall correct identification of cracks to 73%.

3. METHODOLOGY

The work involved connecting a wireless IP camera to a router and interfacing it with a Graphical User Interface (GUI) developed in the MATLAB environment. The system uses the PCA algorithm to detect and recognize faces from the video feed via the GUI in real time. Prior to face recognition, the system has a trained classifier and the ability to add new images, which can then be detected and recognized once saved to the developed system. Figure 2 shows the block diagram of the developed facial recognition system.

Figure-2. External and internal components of the system.

The system was divided into two modules, an external and an internal module; the external module acts as the input to the internal one, with the IP camera connecting to the router wirelessly. The signal broadcast at 2.4 GHz by the router acts as the input to the internal module components. The internal components can be sub-grouped into two: the PCA stage (face detection, face extraction and database) and the GUI.

Face database formation is achieved by acquiring and pre-processing the face images that will be added to the face database. Face images are stored in a face library on the system running the developed application. The face database is called a face library because, at this stage, it does not have the properties of a relational database. Operations such as training set and eigenface formation are performed on this face library. To start the face recognition process, this initially empty face library is filled with face images; once face images have been added, the system is ready to perform the training set and eigenface formation, and those face images form the training set for the entire face library. The face library entries are normalized, and the eigenfaces are formed and stored for later use. Figure 3 and Figure 4 show the system behaviour and the flow chart of the developed facial recognition system.
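A minimal MATLAB sketch of this library-and-eigenface formation stage is given below; it is not the authors' code, and the folder name, image size and number of retained components are assumptions for illustration (Image Processing Toolbox assumed available).

% Build the face library: one vectorised, grey-scale face per column of X.
imgFiles = dir(fullfile('face_library', '*.jpg'));   % hypothetical face library folder
numImgs  = numel(imgFiles);
imgSize  = [100 100];                                % assumed normalised face size
X = zeros(prod(imgSize), numImgs);
for k = 1:numImgs
    im = imread(fullfile('face_library', imgFiles(k).name));
    if size(im, 3) == 3, im = rgb2gray(im); end
    X(:, k) = double(reshape(imresize(im, imgSize), [], 1));
end

% Mean-subtract and compute the eigenfaces (principal components).
meanFace   = mean(X, 2);
A          = X - meanFace;                 % mean-adjusted data
[U, ~, ~]  = svd(A, 'econ');               % columns of U are the eigenfaces
eigenfaces = U(:, 1:min(20, numImgs));     % keep the leading components (20 assumed)

% Project every library face into the eigenface space for later matching.
trainWeights = eigenfaces' * A;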

Figure-3. Use case diagram illustrating the system’s behavior.

Figure-4. Flowchart of the system.

MATLAB was used to create the GUI that enables interaction between the IP camera and the user of the system, as shown in Figure 5:

Figure-5. GUI of the facial recognition system.

The first scheme consists of only two buttons: one allows the user to upload a new image, and the other requests the system to perform the classification, which results in displaying the name of the predicted label. The second scheme is the activation of the camera, which can also be disabled if need be. Activating the camera connects the IP camera to the GUI, and the live feed is received from the camera into the GUI. New faces detected by the camera can also be captured, trained and labelled for subsequent matching whenever those faces are detected in the feed again.
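A minimal sketch of how such a live-feed recognition loop might look in MATLAB is given below; the stream URL is a placeholder, ipcam requires the MATLAB Support Package for IP Cameras, and eigenfaces, meanFace, trainWeights and labels are assumed to come from a training stage such as the one sketched earlier (none of this is the authors' code).

% Illustrative live-feed recognition loop behind the "activate camera" control.
cam = ipcam('http://192.168.1.100/video.mjpg');      % hypothetical IP camera stream URL
faceDetector = vision.CascadeObjectDetector();
for t = 1:200                                        % arbitrary number of frames
    frame  = snapshot(cam);
    bboxes = step(faceDetector, frame);
    for k = 1:size(bboxes, 1)
        % Crop, normalise and project the face into the eigenface space.
        face = imcrop(rgb2gray(frame), bboxes(k, :));
        v    = double(reshape(imresize(face, [100 100]), [], 1)) - meanFace;
        w    = eigenfaces' * v;
        % Nearest-neighbour match against the stored library weights;
        % 'labels' is an assumed cell array of names built during training.
        [~, idx] = min(vecnorm(trainWeights - w));
        frame = insertObjectAnnotation(frame, 'rectangle', bboxes(k, :), labels{idx});
    end
    imshow(frame); drawnow;
end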

4. RESULTS AND DISCUSSION

The facial recognition system was connected to an internet network via the router. The performance of the system was tested based on the maximum range at which the 2.4 GHz signal could be received, the proximity of faces to the IP camera, the influence of light on the camera, and the number of faces that could be detected simultaneously. When the distance was varied, the IP camera signal was successfully received up to a maximum of 85 metres. The proximity of the face to the IP camera, with and without the use of light, was tested, and the results are shown in Table 1 and Table 2 respectively.

Table-1. Testing the range of camera for object detection with the use of light.

Table-2.  Testing the range of camera for object detection without the use of light.

Tables 1 and 2 show that the developed system successfully detected, recognized and matched faces when the distance from the IP camera was varied from 0 to 300 cm, both with and without the support of additional light.

Table-3. Multiple face detection results.

As illustrated in Table 3, the developed facial recognition system detected multiple faces simultaneously. This is important because, after one face is detected, the system keeps scanning in order to detect additional faces.

5. CONCLUSION

In this work, a fast, efficient, secure and reliable facial recognition system was developed to replace manual, unreliable monitoring. The system is time-saving and will reduce the work done by security agents in manually scanning surveillance videos. Furthermore, the need for specialized hardware has been eliminated, as the system uses only a computer, a wireless IP camera and a wireless router. Since the system operates on a live camera feed, the camera was tested for good image quality and real-time performance, which ensured proper functioning of the system. The system can be deployed for permission-based scenarios and secure access authentication in access management, personal security, home (video) surveillance and crime control.

Funding: This study received no specific financial support.   
Competing Interests: The authors declare that they have no competing interests. 
Contributors/Acknowledgement: All authors contributed equally to the conception and design of the study.

REFERENCES

[1]          M. Follman, G. Aronsen, and D. Pan, A guide to mass shootings in America, 2nd ed. Washington: Mojo Readers, 2012.

[2]          J. Fox and M. DeLateur, "Mass shootings in America," Homicide Studies, vol. 18, pp. 125-145, 2013.

[3]          A. Lankford, "Public mass shooters and firearms: A cross-national study of 171 countries," Violence and Victims, vol. 31, pp. 187-199, 2016.

[4]          R. Middleton, Piracy in Somalia threatening global trade, feeding local wars. London: Chatham House, 2018.

[5]          H. Onapajo and U. Uzodike, "Boko Haram terrorism in Nigeria," African Security Review, vol. 21, pp. 24-39, 2012.

[6]          B. Obichie, "Oil exploration in Chad Basin: NNPC seeks collaboration with military.Legit.ng - Nigeria news." Available: https://www.legit.ng/1253066-oil-exploration-chad-basin-nnpc-seeks-collaboration-military.html [Accessed 19 Aug. 2019], 2019.

[7]          M. A. O. Vasilescu and D. Terzopoulos, "Multilinear image analysis for facial recognition. In Object recognition supported by user interaction for service robots," IEEE, vol. 2, pp. 511-514, 2002.

[8]          E. M. Hand and R. Chellappa, "Attributes for improved attributes: A multi-task network utilizing implicit and explicit relationships for facial attribute classification," in AAAI-17. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017, pp. 1-3.

[9]          A. Jadhav, V. Namboodiri, and K. Venkatesh, Deep attributes for one-shot face recognition, 1st ed.: 2-4, n.d.

[10]        C. Gurel and A. Erden, "Design of a face recognition system," in International Conference on Machine Design and Production. [online] Denizli: Researcher Gate, 2012, pp. 1-20.

[11]        H. Kanchwala and V. Vaidyanathan, "Facial recognition: Definition, history, working, and applications." Science ABC. Available at: https://www.scienceabc.com/innovation/facial-recognition-works.html [Accessed 4 Aug. 2019], 2019.

[12]        Z. Liu, Z. You, A. Jain, and Y. Wang, "Face detection and facial feature extraction in color image," in International Conference on Computational Intelligence and Multimedia Applications, 2003, pp. 126-130.

[13]        C. Lin, "Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network," Pattern Recognition Letters, vol. 28, pp. 2190-2200, 2007. Available at: https://doi.org/10.1016/j.patrec.2007.07.003.

[14]        Q.-X. Ye, J.-B. Jiao, and S.-Q. Jiang, "Fast and robust pedestrian detection algorithm with multi-scale orientation features," Ruanjian Xuebao Journal of Software, vol. 22, pp. 3004-3014, 2011. Available at: https://doi.org/10.3724/sp.j.1001.2011.03987.

[15]        S.-H. Lin, S.-Y. Kung, and L.-J. Lin, "Face recognition/detection by probabilistic decision-based neural network," IEEE Transactions on Neural Networks, vol. 8, pp. 114-132, 1997. Available at: https://doi.org/10.1109/72.554196.

[16]        S. Ufldl, "PCA - Ufldl." Available: http://ufldl.stanford.edu/wiki/index.php/PCA, 2018.

[17]        Dezyre, "Principal component analysis tutorial. Available: https://www.dezyre.com/data-science-in-python-tutorial/principal-component-analysis-tutorial," n.d.

[18]        K. Susheel Kumar, S. Prasad, V. Bhaskar Semwal, and R. Tripathi, "Real time face recognition using Ada Boost improved fast PCA algorithm," International Journal of Artificial Intelligence & Applications, vol. 2, pp. 45-58, 2011. Available at: https://doi.org/10.5121/ijaia.2011.2305.

[19]        A. Lakhina, M. Crovella, and C. Diot, "Diagnosing network-wide traffic anomalies. In ACM SIGCOMM computer communication review," Association for Computing Machinery, vol. 34, pp. 219-230, 2004.

[20]        I. Abdel-Qader, S. Pashaie-Rad, O. Abudayyeh, and S. Yehia, "PCA-based algorithm for unsupervised bridge crack detection," Advances in Engineering Software, vol. 37, pp. 771-778, 2006. Available at: https://doi.org/10.1016/j.advengsoft.2006.06.002.

Views and opinions expressed in this article are the views and opinions of the author(s), Review of Computer Engineering Research shall not be responsible or answerable for any loss, damage or liability etc. caused in relation to/arising out of the use of the content.