SmartKYC
DOI:
https://doi.org/10.3126/injet.v2i2.78620

Keywords:
KYC, Text Extraction, Face Verification, Liveliness Detection, Machine Learning

Abstract
This paper demonstrates an automated system that streamlines the KYC (Know Your Customer) process using machine learning. It integrates document validation, image pre-processing, text extraction, automated form filling, face detection, liveliness detection, and face verification. Tesseract, which combines LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) models, extracts text from smart driving licenses to auto-fill digital KYC forms, reducing manual entry errors. MTCNN (Multi-task Cascaded Convolutional Networks) handles face detection, OpenCV-based checks ensure liveliness, and FaceNet (Inception-ResNet-v1) verifies the selfie against the ID photo for security. Successful verification confirms the customer's identity; otherwise, the user is prompted to resolve the flagged issues. The system achieved 86.43% training accuracy and 86.25% validation accuracy, improving efficiency, accuracy, and user experience.
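
As a rough illustration of the extraction and verification steps described in the abstract, the sketch below combines Tesseract OCR with MTCNN face detection and FaceNet (Inception-ResNet-v1) embeddings. It is not the authors' implementation: it assumes the pytesseract, Pillow, and facenet-pytorch packages, and the helper names and distance threshold are illustrative choices that would need tuning on a validation set.

```python
# Minimal sketch of OCR-based form filling plus face verification.
# Assumptions: pytesseract, Pillow, torch, and facenet-pytorch are installed;
# threshold and helper names are illustrative, not from the paper.
import pytesseract
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

# --- Text extraction (Tesseract) ---
def extract_id_text(id_image_path: str) -> str:
    # Returns the raw text of the ID image; field parsing for auto-fill
    # would be applied on top of this output.
    return pytesseract.image_to_string(Image.open(id_image_path))

# --- Face detection and embedding (MTCNN + FaceNet) ---
mtcnn = MTCNN(image_size=160)                              # detect and crop the face
resnet = InceptionResnetV1(pretrained='vggface2').eval()   # Inception-ResNet-v1 embedder

def embed_face(image_path: str) -> torch.Tensor:
    face = mtcnn(Image.open(image_path).convert('RGB'))    # cropped face tensor, or None
    if face is None:
        raise ValueError(f"No face detected in {image_path}")
    with torch.no_grad():
        return resnet(face.unsqueeze(0))[0]                # 512-dimensional embedding

def same_person(id_photo_path: str, selfie_path: str, threshold: float = 1.0) -> bool:
    # Compare the selfie against the ID photo by embedding distance.
    # The threshold of 1.0 is an assumed value for illustration only.
    distance = (embed_face(id_photo_path) - embed_face(selfie_path)).norm().item()
    return distance < threshold
```

A liveliness check, as described in the abstract, would sit before the comparison step (for example, requiring motion or a blink across webcam frames) so that a printed photo cannot pass verification; that logic is omitted here because the paper's specific OpenCV-based procedure is not detailed in the abstract.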
License
Copyright (c) 2025 International Journal on Engineering Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.