
In iOS 11, Apple integrated a library called Vision. This library uses algorithms to perform a series of tasks on images and video (text detection, barcode detection, etc.). Now, with iOS 13, Apple has published a new library, VisionKit, that allows you to use the document scanner of the system itself (the same one the Notes application uses). Let's see how you can develop your own OCR in iOS 13 with VisionKit.
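As a first taste, here is a minimal sketch (the class and method names are illustrative, not the post's own) of how the VisionKit scanner is presented:

```swift
import UIKit
import VisionKit

final class ScanViewController: UIViewController, VNDocumentCameraViewControllerDelegate {

    // Present the system document scanner (the same one the Notes app uses).
    @objc private func scanDocument() {
        // The scanner is not available everywhere (e.g. on the simulator).
        guard VNDocumentCameraViewController.isSupported else { return }
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }
}
```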

In order to check how we can scan a document and recognize its content, we create a project in Xcode 11 (remember that VisionKit only works on iOS 13+). The complete project can be found on GitHub. As we are going to use the camera of the device to scan the documents, the operating system will show a message asking the user for permission to use that camera. If we do not want an error to occur and the application to be closed, we must notify the application that we will need the camera. To do this, in the Info.plist file we add the key ‘Privacy – Camera Usage Description‘, along with the text that will be displayed to the user when permission is requested (for example: “To be able to scan documents you must allow the use of the camera.“).
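In the raw XML of Info.plist, this entry is stored under the key NSCameraUsageDescription (Xcode shows it under the friendlier name above):

```xml
<key>NSCameraUsageDescription</key>
<string>To be able to scan documents you must allow the use of the camera.</string>
```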

In the view controller we declare the user interface components (a ScanImageView to show the scanned page, an OcrTextView for the recognized text, and a ScanButton to launch the scanner) and lay them out with Auto Layout constraints, as sketched below.
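A sketch of those declarations and constraints, assuming the custom ScanImageView, ScanButton and OcrTextView subclasses defined elsewhere in the post (the padding value and the anchors not quoted here are assumptions):

```swift
private let padding: CGFloat = 16  // assumed value

// Each subclass is assumed to set translatesAutoresizingMaskIntoConstraints = false.
private var scanImageView = ScanImageView(frame: .zero)
private var scanButton = ScanButton(frame: .zero)
private var ocrTextView = OcrTextView(frame: .zero, textContainer: nil)

private func configure() {
    view.addSubview(scanImageView)
    view.addSubview(ocrTextView)
    view.addSubview(scanButton)

    NSLayoutConstraint.activate([
        // Stack the image view above the text view above the button:
        scanImageView.bottomAnchor.constraint(equalTo: ocrTextView.topAnchor, constant: -padding),
        ocrTextView.bottomAnchor.constraint(equalTo: scanButton.topAnchor, constant: -padding)
        // Leading/trailing/top/height anchors omitted for brevity.
    ])
}
```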
To know when the user has finished scanning, we adopt the VNDocumentCameraViewControllerDelegate protocol. The first method, documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan), is called when we have scanned one or more pages and saved them (tapping Keep Scan first and then Save).
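A sketch of that delegate method, using a hypothetical processImage(_:) helper to stand in for the recognition step:

```swift
func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                  didFinishWith scan: VNDocumentCameraScan) {
    // The scan holds one image per saved page.
    for pageIndex in 0 ..< scan.pageCount {
        let image = scan.imageOfPage(at: pageIndex)
        processImage(image)  // hypothetical helper that runs the text recognition
    }
    controller.dismiss(animated: true)
}
```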

If you like this article, consider buying me a coffee! 😉
