iOS

BCC Face is a library meant to be integrated into an iOS application from a .framework file.

It uses the device’s camera to take a picture of a face for biometric purposes. It provides a simple active liveness test, requiring the person to smile for about a second and/or look to the right or left. The liveness test includes an option to speak the instructions aloud, facilitating the workflow for users. Additionally, it provides a passive liveness test that can be used to check whether the photo was taken of a real person, without requiring user interaction.

This manual is updated for BCC Face Mobile iOS version 4.8.0.

Requirements

Installation

Installing Dependencies

1 - Add the following pod to the application's dependencies in the Podfile:

pod 'GoogleMLKit/FaceDetection'

If the application does not have a Podfile, it can be created in the root folder of your Xcode project by running pod init in the terminal. The Podfile will not appear directly in Xcode; it is created in the project directory.

It is preferable to use dynamic frameworks. This can be enabled by adding the use_frameworks! flag to the Podfile.

A Podfile example with a target called BCCs-Sample is shown below:

platform :ios, '15.0'

target 'BCCs-Sample' do
	use_frameworks!

	pod 'GoogleMLKit/FaceDetection', '~> 6.0.0'
	pod 'lottie-ios', '~> 4.5.0'
end

post_install do |installer|
	installer.pods_project.targets.each do |target|
		target.build_configurations.each do |config|
			config.build_settings['DEVELOPMENT_TEAM'] = "YOUR_DEVELOPMENT_TEAM_KEY"
			config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
		end
	end
end

2 - Close the Xcode project, open a terminal in the folder containing the Podfile, and run:

pod install

After the execution finishes, a file with the .xcworkspace extension will be created in the same folder.

3 - Open the new .xcworkspace file.

Importing and Configuring

Importing the Project

  • Open the project using the .xcworkspace file.

  • Add the BCCFace.framework file to the project, then add it to the framework list of your application.

    • Move the .framework file to the project file tree.

      If there’s already a framework folder, it is recommended to move the file there.

    • Open the project settings.

    • Go to the General tab.

    • Drag the .framework into the section Frameworks, Libraries, and Embedded Content.

  • Change the BCCFace.framework embed setting from Do Not Embed to Embed & Sign.

  • Change the deployment target of your project to a minimum of iOS 15.

It is recommended to disable iPad as a target.

Initial Configuration

This version has no dependency on Firebase, nor does it require an initial configuration call from the AppDelegate. The only initial configuration needed is that the application must request camera usage permission. To do so, add the following key to the Info.plist file, under the Information Property List:

Key    :  Privacy - Camera Usage Description
Value  :  Allow access to camera

The key's value is the message shown to the user when camera permission is requested. It can be left blank or filled with a custom message.
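
If you edit Info.plist as source code instead of through Xcode's property list editor, the same entry corresponds to the raw key NSCameraUsageDescription:

<key>NSCameraUsageDescription</key>
<string>Allow access to camera</string>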

Usage

Parameters and Constructor

Using the BCC Face library properly requires a few parameters.

A simple library usage example is shown below:

BCCFaceBuilder(self, delegate: self).initializeCapture()

The BCCFaceBuilder class constructor receives the following parameters:

  • hostVC: UIViewController - View controller that calls the capture screen.

  • delegate: BCCFaceDelegate - Interface responsible for notifying capture events (e.g. failure or success).


The initializeCapture method also accepts an optional parameter, as shown below:

public func initializeCapture(
	_ navController: UINavigationController? = nil
) {
	// ...
}

If you want the navigation to run through a navigation controller, you must provide it when calling the method.


The BCCFaceBuilder class is responsible for handling the usage configuration for BCCFace. The following parameters are accepted for configuring biometric capture and software behavior:

  • buildSmileCheck(with smileProbability: ClosedRange<Float> = 0.5...1.0) - Adds the smile check to the liveness test and defines the acceptance threshold range. This feature is enabled by default.

  • removeSmileCheck() - Removes the smile check from the liveness test.

  • buildRotationCheck(_ rotationChecks: [HeadRotationCheck], headRotationAngle: ClosedRange<Float> = -6.0...6.0) - Defines the list of head rotation liveness checks and the maximum rotation angle range. This feature is enabled by default. The head rotation options are:

    enum HeadRotationCheck {
    	case randomRotation
    	case leftRotation
    	case rightRotation
    }
  • removeHeadRotation() - Removes the head rotation check from the liveness test.

  • addPassiveLiveness() - Adds the passive liveness test. This feature is used to check if the captured photo is from a real person without requiring any user interaction. To use this feature, you MUST disable the active liveness checks (removeSmileCheck() and removeHeadRotation()) that are added by default.

  • buildSpeechSettings(_ speechSettings: SpeechSettings) - Defines the criteria for accessibility speech, using the following parameters:

    class SpeechSettings {
    	public let volume: Float
    	public let startsMuted: Bool
    	public let pitch: Float
    	public let speed: Float
    }
    • volume - The audio volume between 0.0 and 1.0.

    • startsMuted - Defines whether the instructions start muted or not (true for muted).

    • pitch - Defines the voice pitch for the instructions between 0.5 (low) and 2.0 (high).

    • speed - Defines the voice speed for the instructions. This value must be positive.

    The pre-defined values can be accessed through the static variable:

    public static let defaultSpeechSettings = SpeechSettings(
    	volume: 1.0,
    	startsMuted: true,
    	pitch: 1.0,
    	speed: 0.5
    )
  • removeSpeech() - Removes accessibility speech.

  • setReviewEnable(_ enable: Bool) - Defines whether the biometric capture review screen is enabled or disabled.

  • setInstructionEnable(_ enable: Bool) - Defines whether the instruction screen is enabled or disabled.

  • forceLanguage(_ language: BCCLanguages?) - Forces the instructions to be shown in a single language. If the device language is not supported, English will be used. The supported languages are:

    public enum BCCLanguages: String {
    	case ptBR = "pt-BR"
    	case enUS = "en"
    	case esMX = "es"
    	case deviceLanguage = "deviceLanguage"
    }
  • removeLanguage() - Removes the forced language.

For reference, here is the full list of parameters and their default values (an example configuration is shown below):

var smileProbability: ClosedRange<Float> = 0.5...1.0
var headRotationAngle: ClosedRange<Float> = -6.0...6.0
var openEyesProbability: ClosedRange<Float> = 0.8...1.1
var livenessChecks: [LivenessChecks] = [.smileDetection, .headRotationRandom]
var speechSettings: SpeechSettings? = .defaultSpeechSettings
var language: BCCLanguages? = nil
var showPhotoReview: Bool = false
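
As an illustration, here is a sketch of a capture configured with the active liveness checks, speech, the review screen, and a forced language. The threshold values shown are simply the documented defaults, and the chained-builder style mirrors the passive liveness example below; any of these calls can be omitted to keep the default behavior.

// From the desired ViewController...

// Create the builder
let faceBuilder = BCCFaceBuilder(self, delegate: self)

// Configure active liveness: smile check plus a random head rotation
faceBuilder
	.buildSmileCheck(with: 0.5...1.0)
	.buildRotationCheck([.randomRotation], headRotationAngle: -6.0...6.0)
	.buildSpeechSettings(.defaultSpeechSettings)
	.setReviewEnable(true)
	.forceLanguage(.enUS)

// Initialize the capture from the builder
faceBuilder.initializeCapture(self.navigationController)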

Here is a code snippet to initialize a capture using only the passive liveness test:

// From the desired ViewController...

// Create the builder
let faceBuilder = BCCFaceBuilder(self, delegate: self)

// Setup the builder
faceBuilder
	.removeSmileCheck()
	.removeHeadRotation()
	.addPassiveLiveness()

// Initialize the capture from the builder
faceBuilder.initializeCapture(self.navigationController)

Return Values

The results from the last facial capture can be retrieved using the faceCaptureDidFinish method from the BCCFaceDelegate interface:

func faceCaptureDidFinish(
	data: BCCFaceReturnData,
	analytics: BCCFaceReturnAnalytics
)

The data object contains the images captured during the process:

public struct BCCFaceReturnData {
	// Previously 'photo', renamed to conform to the same standard as Android
	public internal(set) var originalPhoto: UIImage
	public internal(set) var croppedPhoto: UIImage?

	// Passive Liveness Result
	public internal(set) var passiveResult: Data?

	// LEGACY API: same as 'originalPhoto'
	public var photo: UIImage { self.originalPhoto }
}

The returned properties are:

  • originalPhoto (image) - The original photo taken by the camera.

  • croppedPhoto (image) - The cropped photo, which is the face image cropped from the original photo.

  • passiveResult (Data) - The result of the passive liveness collection, returned as JPEG data. It is present only if the capture succeeded. This data can be saved directly to a file or sent over the network (for example, encoded as a Base64 string), as sketched below.
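
A minimal sketch of handling the returned data inside faceCaptureDidFinish, placed in the view controller's BCCFaceDelegate conformance; the file name, compression quality, and what you do with the Base64 string are illustrative:

func faceCaptureDidFinish(
	data: BCCFaceReturnData,
	analytics: BCCFaceReturnAnalytics
) {
	// Save the original photo as a JPEG file (file name is illustrative)
	if let jpeg = data.originalPhoto.jpegData(compressionQuality: 0.9) {
		let url = FileManager.default.temporaryDirectory
			.appendingPathComponent("face_original.jpg")
		try? jpeg.write(to: url)
	}

	// Export the passive liveness data, e.g. as a Base64 string for an API request
	if let passive = data.passiveResult {
		let base64 = passive.base64EncodedString()
		// Send 'base64' to your backend here...
	}
}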

If the user aborts the capture by closing the screen before the biometrics are captured, the method faceCaptureDidAbort will be called. You can implement this method to handle this scenario.

Sample Project

This is a functional sample project for a face capture using BCC Face Mobile iOS:

import UIKit
import BCCFace

class ViewController: UIViewController {
	override func viewDidLoad() {
		super.viewDidLoad()
	}

	@IBAction func startCapture(_ sender: UIButton) {
		BCCFaceBuilder(self, delegate: self)
			.removeSmileCheck()
			.removeHeadRotation()
			.addPassiveLiveness()
			.buildCameraSettings(.frontSwitchable)
			.enableFlashButton(true)
			.initializeCapture(navigationController)
	}
}

extension ViewController: BCCFaceDelegate {
	func faceCaptureDidFinish(
		data: BCCFace.BCCFaceReturnData,
		analytics: BCCFace.BCCFaceReturnAnalytics
	) {
		// ...
	}

	func faceCaptureDidAbort(
		analytics: BCCFace.BCCFaceReturnAnalytics
	) {
		// ...
	}
}
