The BlinkID Android SDK lets you build a fantastic onboarding experience in your Android app.
With a fast scan, your users can extract information from their identity cards, passports, driver's licenses and virtually any other government-issued ID.
BlinkID is:
To see all these features at work, download our free demo app:
Ready to get cracking on the integration? First, make sure we support your document type in the full list. Then follow the guidelines below.
In your build.gradle, add the BlinkID Maven repository to the list of repositories:
repositories {
maven { url 'https://maven.microblink.com' }
}
Add BlinkID as a dependency and make sure that transitive is set to true:
dependencies {
implementation('com.microblink:blinkid:6.12.0@aar') {
transitive = true
}
}
Android Studio should automatically import Javadoc from the Maven dependency. If that does not happen, you can do it manually by following these steps:
1. Open External Libraries (usually it's the last entry in the project view).
2. Find blinkid-6.12.0, right-click it and select Library Properties...
3. A Library Properties pop-up window will appear.
4. Click the + button in the bottom-left corner of the window (the one that contains + with a little globe).
5. Enter the URL https://blinkid.github.io/blinkid-android/ and click OK.

A valid license key is required to initialize scanning. You can request a free trial license key, after registering, at the Microblink Developer Hub. The license is bound to the package name of your application, so make sure you enter the correct package name when asked.
Download your license file and put it in the assets folder of your application. Make sure you set the license key before using any other classes from the SDK, otherwise you will get a runtime exception.
We recommend extending the Android Application class and setting the license in the onCreate callback like this:
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", this);
    }
}

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", this)
    }
}

In your main activity, define and create an ActivityResultLauncher object (this replaces overriding the onActivityResult method). OneSideDocumentScan and TwoSideDocumentScan can be used interchangeably with no difference in implementation. The only functional difference is that OneSideDocumentScan scans a single side of the document while TwoSideDocumentScan scans more than one side of the document.
ActivityResultLauncher<Void> resultLauncher = registerForActivityResult(
    new TwoSideDocumentScan(),
    twoSideScanResult -> {
        ResultStatus resultScanStatus = twoSideScanResult.getResultStatus();
        if (resultScanStatus == ResultStatus.FINISHED) {
            // code after a successful scan
            // use twoSideScanResult.getResult() for fetching results, for example:
            String firstName = twoSideScanResult.getResult().getFirstName().value();
        } else if (resultScanStatus == ResultStatus.CANCELLED) {
            // code after a cancelled scan
        } else if (resultScanStatus == ResultStatus.EXCEPTION) {
            // code after a failed scan
        }
    }
);

private val resultLauncher =
    registerForActivityResult(TwoSideDocumentScan()) { twoSideScanResult: TwoSideScanResult ->
        when (twoSideScanResult.resultStatus) {
            ResultStatus.FINISHED -> {
                // code after a successful scan
                // use twoSideScanResult.result for fetching results, for example:
                val firstName = twoSideScanResult.result?.firstName?.value()
            }
            ResultStatus.CANCELLED -> {
                // code after a cancelled scan
            }
            ResultStatus.EXCEPTION -> {
                // code after a failed scan
            }
            else -> {}
        }
    }

@Composable
fun createLauncher(): ActivityResultLauncher<Void?> {
    return rememberLauncherForActivityResult(TwoSideDocumentScan()) { twoSideScanResult: TwoSideScanResult ->
        when (twoSideScanResult.resultStatus) {
            ResultStatus.FINISHED -> {
                // code after a successful scan
                // use twoSideScanResult.result for fetching results, for example:
                val firstName = twoSideScanResult.result?.firstName?.value()
            }
            ResultStatus.CANCELLED -> {
                // code after a cancelled scan
            }
            ResultStatus.EXCEPTION -> {
                // code after a failed scan
            }
            else -> {}
        }
    }
}

After a scan, the result, which is an instance of OneSideScanResult or TwoSideScanResult, will be updated. You can define what happens with the data in the onActivityResult override (the Kotlin code also overrides this function, but implicitly). The results are accessible via the twoSideScanResult.getResult() method (twoSideScanResult.result in Kotlin).
Start the scanning process by calling ActivityResultLauncher.launch:
// method within MyActivity from previous step
public void startScanning() {
    // Start scanning
    resultLauncher.launch(null);
}

// method within MyActivity from previous step
fun startScanning() {
    // Start scanning
    resultLauncher.launch()
}

// within @Composable function or setContent block
val resultLauncher = createLauncher()
resultLauncher.launch()

The results will be available in the callbacks defined in the ActivityResultLauncher that was set up in the previous step.
BlinkID requires Android API level 21 or newer.
The camera video preview resolution also matters. In order to perform successful scans, the camera preview resolution must be at least 720p. Note that the camera preview resolution is not the same as the video recording resolution.
BlinkID is distributed with ARMv7 and ARM64 native library binaries.
BlinkID is a native library, written in C++ and available for multiple platforms. Because of this, BlinkID cannot work on devices with obscure hardware architectures. We have compiled BlinkID's native code only for the most popular Android ABIs.
Even before setting the license key, you should check whether BlinkID is supported on the current device (see the next section: Checking for compatibility). Attempting to call any method from the SDK that relies on native code, such as the license check, on a device with an unsupported CPU architecture will crash your app.
If you are combining the BlinkID library with other libraries that contain native code in your application, make sure you match the architectures of all native libraries.
For more information, see the Processor architecture considerations section.
Here is how you can check whether BlinkID is supported on the device:
// check if BlinkID is supported on the device
RecognizerCompatibilityStatus status = RecognizerCompatibility.getRecognizerCompatibilityStatus(this);
if (status == RecognizerCompatibilityStatus.RECOGNIZER_SUPPORTED) {
    Toast.makeText(this, "BlinkID is supported!", Toast.LENGTH_LONG).show();
} else if (status == RecognizerCompatibilityStatus.NO_CAMERA) {
    Toast.makeText(this, "BlinkID is supported only via Direct API!", Toast.LENGTH_LONG).show();
} else if (status == RecognizerCompatibilityStatus.PROCESSOR_ARCHITECTURE_NOT_SUPPORTED) {
    Toast.makeText(this, "BlinkID is not supported on current processor architecture!", Toast.LENGTH_LONG).show();
} else {
    Toast.makeText(this, "BlinkID is not supported! Reason: " + status.name(), Toast.LENGTH_LONG).show();
}

// check if BlinkID is supported on the device
when (val status = RecognizerCompatibility.getRecognizerCompatibilityStatus(this)) {
    RecognizerCompatibilityStatus.RECOGNIZER_SUPPORTED -> {
        Toast.makeText(this, "BlinkID is supported!", Toast.LENGTH_LONG).show()
    }
    RecognizerCompatibilityStatus.NO_CAMERA -> {
        Toast.makeText(this, "BlinkID is supported only via Direct API!", Toast.LENGTH_LONG).show()
    }
    RecognizerCompatibilityStatus.PROCESSOR_ARCHITECTURE_NOT_SUPPORTED -> {
        Toast.makeText(this, "BlinkID is not supported on current processor architecture!", Toast.LENGTH_LONG).show()
    }
    else -> {
        Toast.makeText(this, "BlinkID is not supported! Reason: " + status.name, Toast.LENGTH_LONG).show()
    }
}

Some recognizers require a camera with autofocus. If you attempt to use them on a device that does not support autofocus, you will get an error. To prevent that, you can check whether a recognizer requires autofocus by calling its requiresAutofocus method.
If you already have an array of recognizers, you can easily filter out the recognizers that require autofocus using the following code snippet:
Recognizer[] recArray = ...;
if (!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
    recArray = RecognizerUtils.filterOutRecognizersThatRequireAutofocus(recArray);
}

var recArray: Array<Recognizer> = ...
if (!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
    recArray = RecognizerUtils.filterOutRecognizersThatRequireAutofocus(recArray)
}

You can integrate BlinkID into your app in five different ways, depending on your use case and customization needs:
- OneSideDocumentScan and TwoSideDocumentScan - the SDK handles everything; you only need to start our built-in activity and handle the result; no customization options
- UISettings - the SDK handles most of the work; you only need to define a recognizer and settings, start our built-in activity and handle the result; customization options are limited
- RecognizerRunnerFragment - reuse the scanning UX of our built-in activities inside your own activity
- RecognizerRunnerView - the SDK handles camera management while you implement a fully custom scanning UX
- Direct API (RecognizerRunner) - the SDK handles only the recognition, while you feed it images, either from the camera or from a file

OneSideDocumentScan and TwoSideDocumentScan are classes that contain all the settings definitions needed to quickly start the SDK's built-in scanning activities. They let the user skip all the configuration steps, such as UISettings and RecognizerBundle, and go straight to scanning.
As shown in the first scan section, your first scan only requires defining a result listener, which determines what happens with the scan results, and calling the actual scan function.
UISettings is a class that contains all the settings needed for the SDK's built-in scanning activities. It configures the scanning activity's behavior, strings, icons and other UI elements. You should use ActivityRunner to start the scanning activity configured by UISettings, as shown in the example below.
We provide several specialized UISettings classes for different scanning scenarios. Each UISettings object has properties that can be changed via the appropriate setter methods. For example, you can customize the camera settings with the setCameraSettings method.
All available UISettings classes are listed here.
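For illustration, here is a hedged sketch of tweaking a UISettings object before launching the built-in activity. setCameraSettings is the setter mentioned above, but the CameraSettings builder shown here is an assumed shape that may differ between SDK versions:

// a sketch only: CameraSettings and its Builder are assumed names, so check
// your SDK version's API reference before relying on them
val uiSettings = BlinkIdUISettings(mRecognizerBundle)
uiSettings.setCameraSettings(
    CameraSettings.Builder().build() // default camera behavior; tweak builder properties as needed
)
ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, uiSettings)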
In your main activity, create the recognizer objects that will perform the image recognition, configure them and put them into a RecognizerBundle object. You can find more information about the available recognizers and RecognizerBundle here.
For example, to scan a supported document, configure your recognizer like this:
public class MyActivity extends Activity {
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle bundle) {
        super.onCreate(bundle);
        // setup views, as you would normally do in onCreate callback

        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);
    }
}

class MyActivity : Activity() {
    private lateinit var mRecognizer: BlinkIdMultiSideRecognizer
    private lateinit var mRecognizerBundle: RecognizerBundle

    override fun onCreate(bundle: Bundle?) {
        super.onCreate(bundle)
        // setup views, as you would normally do in onCreate callback

        // create BlinkIdMultiSideRecognizer
        mRecognizer = BlinkIdMultiSideRecognizer()

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = RecognizerBundle(mRecognizer)
    }
}

Start the recognition process by creating BlinkIdUISettings and calling ActivityRunner.startActivityForResult:
// method within MyActivity from previous step
public void startScanning() {
    // Settings for BlinkIdActivity
    BlinkIdUISettings settings = new BlinkIdUISettings(mRecognizerBundle);

    // tweak settings as you wish

    // Start activity
    ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, settings);
}

// method within MyActivity from previous step
fun startScanning() {
    // Settings for BlinkIdActivity
    val settings = BlinkIdUISettings(mRecognizerBundle)

    // tweak settings as you wish

    // Start activity
    ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, settings)
}

onActivityResult will be called in your activity once scanning is finished; here you can obtain the scanning results.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == MY_REQUEST_CODE) {
        if (resultCode == Activity.RESULT_OK && data != null) {
            // load the data into all recognizers bundled within your RecognizerBundle
            mRecognizerBundle.loadFromIntent(data);

            // now every recognizer object that was bundled within RecognizerBundle
            // has been updated with results obtained during scanning session

            // you can get the result by invoking getResult on recognizer
            BlinkIdMultiSideRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
        }
    }
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == MY_REQUEST_CODE) {
        if (resultCode == Activity.RESULT_OK && data != null) {
            // load the data into all recognizers bundled within your RecognizerBundle
            mRecognizerBundle.loadFromIntent(data)

            // now every recognizer object that was bundled within RecognizerBundle
            // has been updated with results obtained during scanning session

            // you can get the result by invoking getResult on recognizer
            val result = mRecognizer.result
            if (result.resultState == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
        }
    }
}

For more information about the available recognizers and RecognizerBundle, see the RecognizerBundle and available recognizers section.
If you want to reuse the UX of our built-in activity inside your own activity, use RecognizerRunnerFragment. The activity hosting the RecognizerRunnerFragment must implement the ScanningOverlayBinder interface. Attempting to add RecognizerRunnerFragment to an activity that does not implement that interface will result in a ClassCastException.
The ScanningOverlayBinder is responsible for returning a non-null implementation of ScanningOverlay - the class that manages the UI on top of RecognizerRunnerFragment. Creating your own ScanningOverlay implementation is not recommended; use one of our implementations listed here instead.
Here is a minimal example of an activity hosting the RecognizerRunnerFragment:
public class MyActivity extends AppCompatActivity implements RecognizerRunnerFragment.ScanningOverlayBinder {
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;
    private BlinkIdOverlayController mScanOverlay;
    private RecognizerRunnerFragment mRecognizerRunnerFragment;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my_activity);
        mScanOverlay = createOverlay();

        if (null == savedInstanceState) {
            // create fragment transaction to replace R.id.recognizer_runner_view_container with RecognizerRunnerFragment
            mRecognizerRunnerFragment = new RecognizerRunnerFragment();
            FragmentTransaction fragmentTransaction = getSupportFragmentManager().beginTransaction();
            fragmentTransaction.replace(R.id.recognizer_runner_view_container, mRecognizerRunnerFragment);
            fragmentTransaction.commit();
        } else {
            // obtain reference to fragment restored by Android within super.onCreate() call
            mRecognizerRunnerFragment = (RecognizerRunnerFragment) getSupportFragmentManager().findFragmentById(R.id.recognizer_runner_view_container);
        }
    }

    @Override
    @NonNull
    public ScanningOverlay getScanningOverlay() {
        return mScanOverlay;
    }

    private BlinkIdOverlayController createOverlay() {
        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        BlinkIdUISettings settings = new BlinkIdUISettings(mRecognizerBundle);
        return settings.createOverlayController(this, mScanResultListener);
    }

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // pause scanning to prevent new results while fragment is being removed
            mRecognizerRunnerFragment.getRecognizerRunnerView().pauseScanning();

            // now you can remove the RecognizerRunnerFragment with new fragment transaction
            // and use result within mRecognizer safely without the need for making a copy of it

            // if not paused, as soon as this method ends, RecognizerRunnerFragment continues
            // scanning. Note that this can happen even if you created fragment transaction for
            // removal of RecognizerRunnerFragment - in the time between end of this method
            // and beginning of execution of the transaction. So to ensure result within mRecognizer
            // does not get mutated, ensure calling pauseScanning() as shown above.
        }

        @Override
        public void onUnrecoverableError(@NonNull Throwable throwable) {
        }
    };
}
package com.microblink.blinkid

class MainActivity : AppCompatActivity(), RecognizerRunnerFragment.ScanningOverlayBinder {
    private lateinit var mRecognizer: BlinkIdMultiSideRecognizer
    private lateinit var mRecognizerRunnerFragment: RecognizerRunnerFragment
    private lateinit var mRecognizerBundle: RecognizerBundle
    private lateinit var mScanOverlay: BlinkIdOverlayController

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (!::mScanOverlay.isInitialized) {
            mScanOverlay = createOverlay()
        }
        setContent {
            this.run {
                // viewBinding has to be set to 'true' in buildFeatures block of the build.gradle file
                AndroidViewBinding(RecognizerRunnerLayoutBinding::inflate) {
                    mRecognizerRunnerFragment =
                        fragmentContainerView.getFragment<RecognizerRunnerFragment>()
                }
            }
        }
    }

    override fun getScanningOverlay(): ScanningOverlay {
        return mScanOverlay
    }

    private fun createOverlay(): BlinkIdOverlayController {
        // create BlinkIdMultiSideRecognizer
        mRecognizer = BlinkIdMultiSideRecognizer()

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = RecognizerBundle(mRecognizer)

        val settings = BlinkIdUISettings(mRecognizerBundle)
        return settings.createOverlayController(this, mScanResultListener)
    }

    private val mScanResultListener: ScanResultListener = object : ScanResultListener {
        override fun onScanningDone(recognitionSuccessType: RecognitionSuccessType) {
            // pause scanning to prevent new results while fragment is being removed
            mRecognizerRunnerFragment.recognizerRunnerView?.pauseScanning()

            // now you can remove the RecognizerRunnerFragment with new fragment transaction
            // and use result within mRecognizer safely without the need for making a copy of it

            // if not paused, as soon as this method ends, RecognizerRunnerFragment continues
            // scanning. Note that this can happen even if you created fragment transaction for
            // removal of RecognizerRunnerFragment - in the time between end of this method
            // and beginning of execution of the transaction. So to ensure result within mRecognizer
            // does not get mutated, ensure calling pauseScanning() as shown above.
        }

        override fun onUnrecoverableError(throwable: Throwable) {
        }
    }
}

Please refer to the sample apps provided with the SDK for a more detailed example, and make sure your host activity's orientation is set to nosensor or that configuration changes are handled by the activity (i.e. the activity is not restarted when a configuration change occurs). For more information, check the Scan orientation section.
This section explains how to integrate RecognizerRunnerView into your scanning activity and perform a scan.

1. First, create an activity that will host the RecognizerRunnerView. Keep the RecognizerRunnerView as a member field of your activity; this is required because you will need to pass all of the activity's lifecycle events to the RecognizerRunnerView.
2. It is recommended to lock your scanning activity into one orientation, such as portrait or landscape. Setting sensor as the scanning activity's orientation triggers a full restart of the activity whenever the device orientation changes. This gives a very poor user experience because both the camera and the BlinkID native library have to be restarted every time. There are measures against this behavior, discussed later.
3. In your activity's onCreate, create a new RecognizerRunnerView, set the RecognizerBundle containing the recognizers that the view will use, define a CameraEventsListener that handles the mandatory camera events, define a ScanResultListener that receives a call when recognition completes, and then call its create method. After that, add the views that should be laid out on top of the camera view.
4. Use setLifecycle to enable automatic handling of lifecycle events.

Here is a minimal example of integrating RecognizerRunnerView as the only view of your activity:
public class MyScanActivity extends AppCompatActivity {
    private static final int PERMISSION_CAMERA_REQUEST_CODE = 42;
    private RecognizerRunnerView mRecognizerRunnerView;
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        // create RecognizerRunnerView
        mRecognizerRunnerView = new RecognizerRunnerView(this);

        // set lifecycle to automatically call recognizer runner view lifecycle methods
        mRecognizerRunnerView.setLifecycle(getLifecycle());

        // associate RecognizerBundle with RecognizerRunnerView
        mRecognizerRunnerView.setRecognizerBundle(mRecognizerBundle);

        // scan result listener will be notified when scanning is complete
        mRecognizerRunnerView.setScanResultListener(mScanResultListener);

        // camera events listener will be notified about camera lifecycle and errors
        mRecognizerRunnerView.setCameraEventsListener(mCameraEventsListener);

        setContentView(mRecognizerRunnerView);
    }

    @Override
    public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        // changeConfiguration is not handled by lifecycle events so call it manually
        mRecognizerRunnerView.changeConfiguration(newConfig);
    }

    private final CameraEventsListener mCameraEventsListener = new CameraEventsListener() {
        @Override
        public void onCameraPreviewStarted() {
            // this method is from CameraEventsListener and will be called when camera preview starts
        }

        @Override
        public void onCameraPreviewStopped() {
            // this method is from CameraEventsListener and will be called when camera preview stops
        }

        @Override
        public void onError(Throwable exc) {
            /**
             * This method is from CameraEventsListener and will be called when
             * opening of camera resulted in exception or recognition process
             * encountered an error. The error details will be given in exc
             * parameter.
             */
        }

        @Override
        @TargetApi(23)
        public void onCameraPermissionDenied() {
            /**
             * Called in Android 6.0 and newer if camera permission is not given
             * by user. You should request permission from user to access camera.
             */
            requestPermissions(new String[]{Manifest.permission.CAMERA}, PERMISSION_CAMERA_REQUEST_CODE);
            /**
             * Please note that user might have not given permission to use
             * camera. In that case, you have to explain to user that without
             * camera permissions scanning will not work.
             * For more information about requesting permissions at runtime, check
             * this article:
             * https://developer.android.com/training/permissions/requesting.html
             */
        }

        @Override
        public void onAutofocusFailed() {
            /**
             * This method is from CameraEventsListener and will be called when camera focusing has failed.
             * Camera manager usually tries different focusing strategies and this method is called when all
             * those strategies fail to indicate that either object on which camera is being focused is too
             * close or ambient light conditions are poor.
             */
        }

        @Override
        public void onAutofocusStarted(Rect[] areas) {
            /**
             * This method is from CameraEventsListener and will be called when camera focusing has started.
             * You can utilize this method to draw focusing animation on UI.
             * Areas parameter is array of rectangles where focus is being measured.
             * It can be null on devices that do not support fine-grained camera control.
             */
        }

        @Override
        public void onAutofocusStopped(Rect[] areas) {
            /**
             * This method is from CameraEventsListener and will be called when camera focusing has stopped.
             * You can utilize this method to remove focusing animation on UI.
             * Areas parameter is array of rectangles where focus is being measured.
             * It can be null on devices that do not support fine-grained camera control.
             */
        }
    };

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // this method is from ScanResultListener and will be called when scanning completes
            // you can obtain scanning result by calling getResult on each
            // recognizer that you bundled into RecognizerBundle.
            // for example:
            BlinkIdMultiSideRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }

            // Note that mRecognizer is stateful object and that as soon as
            // scanning either resumes or its state is reset
            // the result object within mRecognizer will be changed. If you
            // need to create an immutable copy of the result, you can do that
            // by calling clone() on it, for example:
            BlinkIdMultiSideRecognizer.Result immutableCopy = result.clone();

            // After this method ends, scanning will be resumed and recognition
            // state will be retained. If you want to prevent that, then
            // you should call:
            mRecognizerRunnerView.resetRecognitionState();
            // Note that resetting recognition state will clear internal result
            // objects of all recognizers that are bundled in RecognizerBundle
            // associated with RecognizerRunnerView.

            // If you want to pause scanning to prevent receiving recognition
            // results or mutating result, you should call:
            mRecognizerRunnerView.pauseScanning();
            // if scanning is paused at the end of this method, it is guaranteed
            // that result within mRecognizer will not be mutated, therefore you
            // can avoid creating a copy as described above

            // After scanning is paused, you will have to resume it with:
            mRecognizerRunnerView.resumeScanning(true);
            // boolean in resumeScanning method indicates whether recognition
            // state should be automatically reset when resuming scanning - this
            // includes clearing result of mRecognizer
        }
    };
}

If the activity's screenOrientation property in AndroidManifest.xml is set to sensor, fullSensor or similar, the activity will be restarted every time the device changes orientation from portrait to landscape and vice versa. While restarting the activity, its onPause, onStop and onDestroy methods will be called and then a new activity will be created from scratch. This is a potential problem for the scanning activity because during its lifetime it controls both the camera and the native library - restarting the activity triggers a restart of both the camera and the native library. This is a problem because switching orientation from landscape to portrait and vice versa will be very slow, degrading the user experience. We do not recommend such a setting.
For that reason, we recommend locking your scanning activity to either portrait or landscape mode and handling device orientation changes manually. To help you with that, RecognizerRunnerView supports adding child views that will be rotated regardless of the activity's screenOrientation. You add a view you wish to be rotated (such as the view that contains buttons, status messages, etc.) to the RecognizerRunnerView with the addChildView method. The second parameter of the method is a boolean that defines whether the view you are adding will be rotated with the device. To define the allowed orientations, implement the OrientationAllowedListener interface and add it to the RecognizerRunnerView with the setOrientationAllowedListener method. This is the recommended way of rotating the camera overlay.
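As a hedged illustration of that approach (it assumes OrientationAllowedListener is a single-method interface receiving the new Orientation, and ORIENTATION_PORTRAIT is an assumed enum constant):

// statusView is a hypothetical overlay view; 'true' makes it rotate with the device
mRecognizerRunnerView.addChildView(statusView, true)
// allow only portrait orientation; the listener shape is an assumption
mRecognizerRunnerView.setOrientationAllowedListener { orientation ->
    orientation == Orientation.ORIENTATION_PORTRAIT
}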
However, if you really want to set the screenOrientation property to sensor or similar and want Android to handle the orientation changes of your scanning activity, we recommend setting your activity's configChanges property to orientation|screenSize. This tells Android not to restart your activity when the device orientation changes. Instead, the activity's onConfigurationChanged method will be called so the activity can be notified of the configuration change. In your implementation of this method, you should call the changeConfiguration method of RecognizerRunnerView so it can adapt its camera surface and child views to the new configuration.
This section describes how to use the Direct API to recognize Android Bitmaps without the need for a camera. You can use the Direct API anywhere in your app, not just in activities.
Image recognition performance depends heavily on the quality of the input images. When our camera management is used (scanning from a camera), we do our best to obtain camera frames with the best possible quality for the device in use. On the other hand, when the Direct API is used, you have to provide high-quality images, without blur and glare, for successful recognition.
The Direct API accepts several kinds of input:

- Android Bitmaps, still images obtained e.g. from the gallery. Use recognizeBitmap or recognizeBitmapWithRecognizers.
- Video images, built from custom video frames, e.g. when using your own or third-party camera management. Recognition will be optimized for speed and will rely on the temporal redundancy between consecutive video frames in order to give the best possible recognition result. Use recognizeVideoImage or recognizeVideoImageWithRecognizers.
- Still images, when you need an in-depth analysis of a single image, or of a few images that are not part of a video stream, and you want the best possible result from a single InputImage. The InputImage type comes from our SDK, or it can be created using ImageBuilder. Use recognizeStillImage or recognizeStillImageWithRecognizers.

Here is a minimal example of using the Direct API for recognition of an Android Bitmap:
public class DirectAPIActivity extends Activity {
    private RecognizerRunner mRecognizerRunner;
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // initialize your activity here

        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        try {
            mRecognizerRunner = RecognizerRunner.getSingletonInstance();
        } catch (FeatureNotSupportedException e) {
            Toast.makeText(this, "Feature not supported! Reason: " + e.getReason().getDescription(), Toast.LENGTH_LONG).show();
            finish();
            return;
        }

        mRecognizerRunner.initialize(this, mRecognizerBundle, new DirectApiErrorListener() {
            @Override
            public void onRecognizerError(Throwable t) {
                Toast.makeText(DirectAPIActivity.this, "There was an error in initialization of Recognizer: " + t.getMessage(), Toast.LENGTH_SHORT).show();
                finish();
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();
        // start recognition
        Bitmap bitmap = BitmapFactory.decodeFile("/path/to/some/file.jpg");
        mRecognizerRunner.recognizeBitmap(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, mScanResultListener);
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        mRecognizerRunner.terminate();
    }

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // this method is from ScanResultListener and will be called
            // when scanning completes
            // you can obtain scanning result by calling getResult on each
            // recognizer that you bundled into RecognizerBundle.
            // for example:
            BlinkIdMultiSideRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
        }
    };
}

The ScanResultListener.onScanningDone method is called for every input image you send for recognition. You can call a RecognizerRunner.recognize* method multiple times with different images of the same document for better reading accuracy, until you get a successful result in the listener's onScanningDone method. This is useful when you are using your own or third-party camera management.
Some recognizers support recognition from a String. They can be used through the Direct API to parse a given String and return data just like when they are used on an input image. When recognition is performed on a String, there is no need for OCR; the input String is used in the same way the OCR output is used when an image is recognized.
Recognition from a String can be performed in the same way as recognition from an image, described in the previous section.
The only difference is that one of the RecognizerRunner singleton's String-recognition methods has to be called:
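For example, parsing a string could look like the following sketch; the recognizeString name follows the recognize* naming used above, but should be treated as an assumption and checked against your SDK version's API reference:

// the bundled recognizer must support String recognition
mRecognizerRunner.recognizeString(
    "P<GBRDOE<<JOHN<<<<",  // hypothetical MRZ-like input, used in place of OCR output
    mScanResultListener    // same listener type as for image recognition
)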
The Direct API's RecognizerRunner singleton is a state machine that can be in one of three states: OFFLINE, READY and WORKING.
- When you obtain a reference to the RecognizerRunner singleton, it will be in the OFFLINE state.
- You initialize the RecognizerRunner by calling its initialize method. If you call initialize while the RecognizerRunner is not in the OFFLINE state, you will get an IllegalStateException.
- After initialization, the RecognizerRunner moves to the READY state. You can now call any of the recognize* methods.
- When you call a recognize* method, the RecognizerRunner moves to the WORKING state. If you attempt to call these methods while the RecognizerRunner is not in the READY state, you will get an IllegalStateException.
- It is safe to call the RecognizerRunner's methods from the UI thread.
- After recognition completes, the RecognizerRunner first moves back to the READY state and then calls the onScanningDone method of the provided ScanResultListener.
- The onScanningDone method of the ScanResultListener will be called on the background processing thread, so make sure you do not perform UI operations in this callback. Also note that until the onScanningDone method finishes, the RecognizerRunner will not perform recognition of another image or string, even if one of the recognize* methods was called right after the transition to the READY state. This guarantees that the results of the recognizers bundled within the RecognizerBundle associated with the RecognizerRunner are not modified while being used in the onScanningDone method.
- By calling the terminate method, the RecognizerRunner singleton releases all of its internal resources. Note that even after calling terminate you might receive an onScanningDone event if there was work in progress when terminate was called.
- The terminate method can be called from any state of the RecognizerRunner singleton.
- You can observe the RecognizerRunner singleton's state with the getCurrentState method.

Both RecognizerRunnerView and RecognizerRunner use the same internal singleton that manages the native code. This singleton handles the initialization and termination of the native library and the propagation of recognizers to the native library. It is possible to use RecognizerRunnerView and RecognizerRunner together, as the internal singleton makes sure that correct synchronization and correct recognition settings are used. If you run into problems while using RecognizerRunner in combination with RecognizerRunnerView, let us know!
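As a small sketch of working with this state machine (the State enum's exact type name is an assumption; the state values match the list above):

// only start recognition when the runner is READY; recognize* throws
// IllegalStateException in any other state
if (mRecognizerRunner.getCurrentState() == RecognizerRunner.State.READY) {
    mRecognizerRunner.recognizeBitmap(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, mScanResultListener)
}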
When you use a combined recognizer and images of both document sides are needed, you have to call RecognizerRunner.recognize* multiple times. Call it first with images of the first side of the document, until it has been read, and then with images of the second side. The combined recognizer automatically switches to second-side scanning after it has successfully read the first side. To be notified when first-side scanning finishes, you have to set a FirstSideRecognitionCallback through MetadataCallbacks. If you don't need this information, for example when you have only one image for each document side, don't set the FirstSideRecognitionCallback and check the RecognitionSuccessType in ScanResultListener.onScanningDone after the second-side image has been processed.
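A minimal sketch of that flow, assuming MetadataCallbacks exposes a setFirstSideRecognitionCallback setter and that FirstSideRecognitionCallback is a single-method interface (both assumptions, following the names above):

val metadataCallbacks = MetadataCallbacks()
metadataCallbacks.setFirstSideRecognitionCallback {
    // the first side has been read; start sending images of the second side
}
mRecognizerRunner.setMetadataCallbacks(metadataCallbacks)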
BlinkIdOverlayController implements a new UI for scanning identity documents, optimally designed to be used with the new BlinkIdMultiSideRecognizer and BlinkIdSingleSideRecognizer. It implements several new features:
The new UI lets the user scan the document at an angle, in any orientation. We recommend forcing landscape orientation when scanning barcodes on the back side, because the success rate is higher in that orientation.
To launch a built-in activity that uses BlinkIdOverlayController, use BlinkIdUISettings.
To customize the overlay, provide your custom style resource via BlinkIdUISettings.setOverlayViewStyle() or via the ReticleOverlayView constructor. You can customize the elements labeled in the screenshots above by providing the following attributes in your style:
exit
- mb_exitScanDrawable - Drawable icon
- to hide the button: BlinkIdUISettings.setShowCancelButton(false)

torch
- mb_torchOnDrawable - Drawable icon shown when the torch is enabled
- mb_torchOffDrawable - Drawable icon shown when the torch is disabled
- to hide the button: BlinkIdUISettings.setShowTorchButton(false)

instructions
- mb_instructionsTextAppearance - style used as android:textAppearance
- mb_instructionsBackgroundDrawable - Drawable used for the background
- mb_instructionsBackgroundColor - color used for the background

flashlight warning
- mb_flashlightWarningTextAppearance - style used as android:textAppearance
- mb_flashlightWarningBackgroundDrawable - Drawable used for the background
- to hide the warning: BlinkIdUISettings.setShowFlashlightWarning(false)

card icon
- mb_cardFrontDrawable - Drawable icon shown during the card flip animation, representing the front side of the card
- mb_cardBackDrawable - Drawable icon shown during the card flip animation, representing the back side of the card

reticle
- mb_reticleDefaultDrawable - Drawable shown when the reticle is in the neutral state
- mb_reticleSuccessDrawable - Drawable shown when the reticle is in the success state (scanning was successful)
- mb_reticleErrorDrawable - Drawable shown when the reticle is in the error state
- mb_reticleColor - color used for the rotating reticle element
- mb_reticleDefaultColor - color used for the reticle in the neutral state
- mb_reticleErrorColor - color used for the reticle in the error state
- mb_successFlashColor - color used for the flash effect on a successful scan

To customize the visibility and style of these two dialogs, use the methods provided in BlinkIdUISettings.
The method controlling the introduction dialog's visibility is BlinkIdUISettings.setShowIntroductionDialog(boolean showIntroductionDialog), and it is set to true by default, meaning the introduction dialog will be shown.
The method controlling the onboarding dialog's visibility is BlinkIdUISettings.setShowOnboardingInfo(boolean showOnboardingInfo), and it is set to true by default, meaning the onboarding dialog will be shown.
There is also a method for controlling the delay of the "Need help?" tooltip that is shown above the help button. The button itself is shown if the aforementioned onboarding method is set to true. The method for setting the tooltip delay is BlinkIdUISettings.setShowTooltipTimeIntervalMs(long showTooltipTimeIntervalMs). The time parameter is given in milliseconds.
The default delay is 12 seconds (12000 milliseconds).
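Putting these three setters together (all three methods are quoted above; only the chosen values are illustrative):

val uiSettings = BlinkIdUISettings(mRecognizerBundle)
uiSettings.setShowIntroductionDialog(false)    // skip the introduction dialog
uiSettings.setShowOnboardingInfo(true)         // keep the help button and onboarding screens
uiSettings.setShowTooltipTimeIntervalMs(8000L) // show the "Need help?" tooltip after 8 s instead of 12 s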
Customization and theming of these introduction and onboarding elements can be done in the same way as in the previous chapter, by providing the following attributes:

help button
- mb_helpButtonDrawable - Drawable shown for the help button
- mb_helpButtonBackgroundColor - color used for the help button background
- mb_helpButtonQuestionmarkColor - color used for the help button's question mark

help tooltip
- mb_helpTooltipBackground - Drawable shown as the tooltip background
- mb_helpTooltipColor - color used for the help tooltip
- mb_helpTooltipTextAppearance - style used as android:textAppearance

introduction dialog
- mb_introductionBackgroundColor - color used for the introduction dialog background
- mb_introductionTitleTextAppearance - style used as android:textAppearance
- mb_introductionMessageTextAppearance - style used as android:textAppearance
- mb_introductionButtonTextAppearance - style used as android:textAppearance
- to hide the dialog: BlinkIdUISettings.setShowIntroductionDialog(false)

onboarding dialog
- mb_onboardingBackgroundColor - color used for the onboarding screens
- mb_onboardingPageIndicatorColor - color used for the circular page indicators in the onboarding screens
- mb_onboardingTitleTextAppearance - style used as android:textAppearance
- mb_onboardingMessageTextAppearance - style used as android:textAppearance
- mb_onboardingButtonTextAppearance - style used as android:textAppearance
- to hide the screens: BlinkIdUISettings.setShowOnboardingInfo(false)

The alert dialogs invoked by the SDK have their own set of properties that can be modified in styles.xml.
MB_alert_dialog is a theme that extends the Theme.AppCompat.Light.Dialog.Alert theme and uses the default colors of the application theme. In order to change the attributes of these alert dialogs without changing other attributes in the user's application, the MB_alert_dialog theme should be overridden.
<style name="MB_alert_dialog" parent="Theme.AppCompat.Light.Dialog.Alert">
    <item name="android:textSize">TEXT_SIZE</item>
    <item name="android:background">COLOR</item>
    <item name="android:textColorPrimary">COLOR</item>
    <item name="colorAccent">COLOR</item>
</style>

Attributes that are not overridden will use the application theme's default colors and sizes.
The colorAccent attribute is used to change the color of the alert dialog's buttons. If the application theme's colorAccent attribute is changed elsewhere, this alert dialog color will change as well. However, overriding the MB_alert_dialog theme and this attribute inside it ensures that only the button color in the Microblink SDK's alert dialogs changes. Note that if the application theme extends a MaterialComponents theme (e.g. Theme.MaterialComponents.Light.NoActionBar), color attributes such as colorOnPrimary may take precedence over colorAccent.
DocumentUISettings launches an activity that uses BlinkIdOverlayController with an alternative UI. It is best suited for single-side scanning of various card documents, and it should not be used with combined recognizers because it provides no user instruction about when to flip the document to the back side.
LegacyDocumentVerificationUISettings launches an activity that uses BlinkIdOverlayController with an alternative UI. It is best suited for combined recognizers because it handles scanning of multiple document sides within a single camera opening and guides the user through the scanning process. It can also be used for single-side scanning of ID cards, passports, driver's licenses, etc.
Strings used within the built-in activities and overlays can be localized to any language. If you are using RecognizerRunnerView (see this chapter for more information) in your custom scanning activity or fragment, you should handle localization as in any other Android app. RecognizerRunnerView does not use strings or drawables; it only uses assets from the assets/microblink folder. Those assets must not be touched, as they are required for recognition to work correctly.
However, if you use our built-in activities or overlays, they will use resources packaged within LibBlinkID.aar to display strings and images on top of the camera view. We have already prepared strings for several languages which you can use out of the box. You can also modify those strings, or you can add your own language.
To use a language, you have to enable it from the code:
To enable a certain language, at application startup, before opening any UI component from the SDK, call the LanguageUtils.setLanguageAndCountry(language, country, context) method. For example, you can set the language to Croatian like this:
// define BlinkID language
LanguageUtils.setLanguageAndCountry("hr", "", this);

BlinkID can easily be translated into other languages. The res folder in the LibBlinkID.aar archive has a values folder containing strings.xml - this file contains the English strings. In order to make e.g. a Croatian translation, create a folder values-hr in your project and put a copy of strings.xml inside it (you might need to extract the LibBlinkID.aar archive to access those files). Then, open that file and translate the strings from English into Croatian.
To modify an existing string, the best approach would be to:
1. Open strings.xml in the folder res/values-hr of the LibBlinkID.aar archive.
2. Find the string you want to change, e.g. <string name="MBBack">Back</string>.
3. In your project, create the file strings.xml in the folder res/values-hr, if it doesn't already exist.
4. Add an entry with your new value, e.g. <string name="MBBack">Natrag</string>.

Processing events, also known as metadata callbacks, are purely intended for giving processing feedback on the UI or for capturing some debug information during development of your app with the BlinkID SDK. For that reason, built-in activities and fragments handle those events internally. If you need to handle those events yourself, you need to use either RecognizerRunnerView or RecognizerRunner.
Callbacks for all events are bundled into the MetadataCallbacks object. Both RecognizerRunner and RecognizerRunnerView have methods which allow you to set all your callbacks.
We suggest that you check the javadoc of the MetadataCallbacks class for more information about the available callbacks and the events you can handle.
Please note that both of those methods need to pass information about the available callbacks to the native code, and for efficiency reasons this is done at the time the setMetadataCallbacks method is called, not every time a change occurs within the MetadataCallbacks object. This means that if you, for example, set a QuadDetectionCallback on MetadataCallbacks after you have already called the setMetadataCallbacks method, the QuadDetectionCallback will not be registered with the native code and you will not receive its events.
Similarly, if you, for example, remove the QuadDetectionCallback from the MetadataCallbacks object after you have already called the setMetadataCallbacks method, your app will crash with a NullPointerException when our processing code attempts to invoke the method on the removed callback (which is now set to null). We deliberately do not perform a null check here because a null callback that is still registered with the native code is an illegal state of your program, and it should therefore crash. Remember, each time you make changes to the MetadataCallbacks object, you need to apply those changes to your RecognizerRunner or RecognizerRunnerView by calling its setMetadataCallbacks method.
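For illustration, here is a hedged sketch of registering a callback and re-applying the whole object, as required above (the exact QuadDetectionCallback signature is an assumption):

val metadataCallbacks = MetadataCallbacks()
metadataCallbacks.setQuadDetectionCallback { displayableQuad ->
    // e.g. draw the detected document quadrilateral on your UI
}
// re-apply the whole object so the native side learns about the new callback
mRecognizerRunnerView.setMetadataCallbacks(metadataCallbacks)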
This section will first describe what a Recognizer is and how it should be used to perform recognition of images, videos and the camera stream. Next, we will describe how RecognizerBundle can be used to tweak the recognition procedure and to transfer Recognizer objects between activities.
RecognizerBundle is an object which wraps the Recognizers and defines settings about how recognition should be performed. Besides that, RecognizerBundle makes it possible to transfer Recognizer objects between different activities, which is required when using built-in activities to perform scanning, as described in the first scan section, but is also handy when you need to pass Recognizer objects between your activities.
A list of all available Recognizer objects, with a brief description of each Recognizer, its purpose and recommendations how it should be used to get the best performance and user experience, can be found here.
The Recognizer is the basic unit of processing within the BlinkID SDK. Its main purpose is to process the image and extract meaningful information from it. As you will see later, the BlinkID SDK has lots of different Recognizer objects that serve various purposes.
Each Recognizer has a Result object, which contains the data that was extracted from the image. The Result object is a member of corresponding Recognizer object and its lifetime is bound to the lifetime of its parent Recognizer object. If you need your Result object to outlive its parent Recognizer object, you must make a copy of it by calling its method clone() .
Every Recognizer is a stateful object that can be in two states: idle state and working state. While in idle state, you can tweak the Recognizer object's properties via its getters and setters. After you bundle it into a RecognizerBundle and use either RecognizerRunner or RecognizerRunnerView to run the processing with all Recognizer objects bundled within the RecognizerBundle, it changes to working state, in which the Recognizer object is being used for processing. While in working state, you cannot tweak the Recognizer object's properties. If you need to, you have to create a copy of the Recognizer object by calling its clone(), then tweak that copy, bundle it into a new RecognizerBundle and use reconfigureRecognizers to ensure the new bundle gets used on the processing thread.
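A minimal sketch of that clone-and-rebundle procedure, using only the clone() and reconfigureRecognizers calls named above (the view's reconfigureRecognizers overload is assumed here):

val tweakedRecognizer = mRecognizer.clone() // the copy starts in idle state, so its setters may be used
// ... tweak tweakedRecognizer's properties via its setters here ...
mRecognizerRunnerView.reconfigureRecognizers(RecognizerBundle(tweakedRecognizer))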
While Recognizer object works, it changes its internal state and its result. The Recognizer object's Result always starts in Empty state. When corresponding Recognizer object performs the recognition of given image, its Result can either stay in Empty state (in case Recognizer failed to perform recognition), move to Uncertain state (in case Recognizer performed the recognition, but not all mandatory information was extracted), move to StageValid state (in case Recognizer successfully scanned one part/side of the document and there are more fields to extract) or move to Valid state (in case Recognizer performed recognition and all mandatory information was successfully extracted from the image).
As soon as one Recognizer object's Result within the RecognizerBundle given to RecognizerRunner or RecognizerRunnerView changes to Valid state, the onScanningDone callback will be invoked on the same thread that performs the background processing, and you will have the opportunity to inspect each of your Recognizer objects' Results to see which one has moved to Valid state.
As already stated in the section about RecognizerRunnerView, as soon as the onScanningDone method ends, the RecognizerRunnerView will continue processing new camera frames with the same Recognizer objects, unless paused. Continuation of processing or resetting recognition will modify or reset all Recognizer objects' Results. When using built-in activities, as soon as onScanningDone is invoked, the built-in activity pauses the RecognizerRunnerView and starts finishing the activity, while saving the RecognizerBundle with the active Recognizer objects into the Intent so they can be transferred back to the calling activities.
The RecognizerBundle is a wrapper around Recognizer objects that can be used to transfer Recognizer objects between activities and to give Recognizer objects to RecognizerRunner or RecognizerRunnerView for processing.
The RecognizerBundle is always constructed with array of Recognizer objects that need to be prepared for recognition (ie their properties must be tweaked already). The varargs constructor makes it easier to pass Recognizer objects to it, without the need of creating a temporary array.
The RecognizerBundle manages a chain of Recognizer objects within the recognition process. When a new image arrives, it is processed by the first Recognizer in chain, then by the second and so on, iterating until a Recognizer object's Result changes its state to Valid or all of the Recognizer objects in chain were invoked (none getting a Valid result state). If you want to invoke all Recognizers in the chain, regardless of whether some Recognizer object's Result in chain has changed its state to Valid or not, you can allow returning of multiple results on a single image.
You cannot change the order of the Recognizer objects within the chain - no matter the order in which you give Recognizer objects to the RecognizerBundle , they are internally ordered in a way that provides the best possible performance and accuracy. Also, in order for the BlinkID SDK to be able to order Recognizer objects in the recognition chain in the best possible way, it is not allowed to have multiple instances of Recognizer objects of the same type within the chain. Attempting to do so will crash your application.
Passing Recognizer objects between activities

Besides managing the chain of Recognizer objects, RecognizerBundle also manages transferring bundled Recognizer objects between different activities within your app. Although each Recognizer object, and each of its Result objects, implements the Parcelable interface, it is not so straightforward to put those objects into an Intent and pass them around between your activities and services, for two main reasons:
- the Result object is tied to its Recognizer object, which manages the lifetime of the native Result object;
- the Result object often contains large data blocks, such as images, which cannot be transferred via Intent because of Android's Intent transaction data limit.

Although the first problem can easily be worked around by making a copy of the Result and transferring it independently, the second problem is much tougher to cope with. This is where RecognizerBundle's methods saveToIntent and loadFromIntent come to help, as they ensure the safe passing of Recognizer objects bundled within a RecognizerBundle between activities, according to the policy defined with the method setIntentDataTransferMode :
- If you use STANDARD mode, the Recognizer objects will be passed via Intent using the normal Intent transaction mechanism , which is limited by Android's Intent transaction data limit. This is the same as manually putting Recognizer objects into the Intent and is OK as long as you do not use Recognizer objects that produce images or other large objects in their Results .
- If you use OPTIMISED mode, the Recognizer objects will be passed via an internal singleton object and no serialization will take place. This means that there is no limit to the size of data being passed. This is also the fastest transfer method, but it has a serious drawback - if Android kills your app to save memory for other apps and later restarts it and redelivers the Intent that should contain the Recognizer objects, the internal singleton that should contain the saved Recognizer objects will be empty and the data being sent will be lost. You can easily provoke that condition by choosing No background processes under Limit background processes in your device's Developer options , and then switching from your app to another app and back to your app.
- If you use PERSISTED_OPTIMISED mode, the Recognizer objects will be passed via the internal singleton object (just like in OPTIMISED mode) and will additionally be serialized into a file in your application's private folder. In case Android restarts your app and the internal singleton is empty after re-delivery of the Intent , the data will be loaded from the file and nothing will be lost. The files will be automatically cleaned up when data reading takes place. Just like OPTIMISED , this mode does not have a limit to the size of data being passed and does not have the drawback that OPTIMISED mode has, but some users might be concerned about the files to which data is being written.

If your activity can be restarted, you should also save the bundle's state in your activity's onSaveInstanceState and save the bundle back to file by calling its saveState method. Also, after saving state, you should ensure that you clear the saved state in your onResume , as onCreate may not be called if the activity is not restarted, while onSaveInstanceState may be called as soon as your activity goes to background (before onStop ), even though the activity may not be killed at a later time.

If you are not comfortable with files being written, you can either use OPTIMISED mode to transfer large data and images between activities or create your own mechanism for data transfer. Note that your application's private folder is only accessible by your application and your application alone, unless the end-user's device is rooted.
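To illustrate, a minimal sketch of the transfer flow under PERSISTED_OPTIMISED mode; ResultActivity is a hypothetical receiving activity, and the method names are those described above:

// in the scanning activity, before launching the result activity
MicroblinkSDK.setIntentDataTransferMode(IntentDataTransferMode.PERSISTED_OPTIMISED);
Intent intent = new Intent(this, ResultActivity.class);
recognizerBundle.saveToIntent(intent);
startActivity(intent);

// in ResultActivity.onCreate, load results back into a bundle created
// with the same Recognizer objects
recognizerBundle.loadFromIntent(getIntent());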
This section gives a list of all Recognizer objects that are available within the BlinkID SDK, their purpose, and recommendations on how they should be used to get the best performance and user experience.

The FrameGrabberRecognizer is the simplest recognizer in the BlinkID SDK: it does not perform any processing on the given image; instead, it just returns that image back to its FrameCallback . Its Result never changes state from Empty.
This recognizer is best for easy capturing of camera frames with RecognizerRunnerView . Note that Image objects sent to onFrameAvailable are temporary and their internal buffers are valid only while the onFrameAvailable method is executing - as soon as the method ends, all internal buffers of the Image object are disposed. If you need to store the Image object for later use, you must create a copy of it by calling clone .
Also note that FrameCallback interface extends Parcelable interface, which means that when implementing FrameCallback interface, you must also implement Parcelable interface.
This is especially important if you plan to transfer FrameGrabberRecognizer between activities - in that case, keep in mind that the instance of your object may not be the same as the instance on which the onFrameAvailable method gets called - the instance that receives onFrameAvailable calls is the one created within the activity that is performing the scan.
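A minimal sketch of such a callback, assuming a callback signature of onFrameAvailable(Image, boolean, double); the class is intentionally stateless, so its Parcelable implementation is trivial (SDK imports omitted; android.os imports shown):

import android.os.Parcel;
import android.os.Parcelable;

public class MyFrameCallback implements FrameCallback {
    @Override
    public void onFrameAvailable(Image cameraFrame, boolean successful, double frameQuality) {
        // cameraFrame's internal buffers are valid only during this call,
        // so clone the image if it must outlive the callback
        Image copy = cameraFrame.clone();
        // ... store or process `copy` here, and dispose it when no longer needed
    }

    // FrameCallback extends Parcelable; a stateless callback has nothing to write
    @Override
    public int describeContents() { return 0; }

    @Override
    public void writeToParcel(Parcel dest, int flags) { }

    public static final Parcelable.Creator<MyFrameCallback> CREATOR = new Parcelable.Creator<MyFrameCallback>() {
        @Override
        public MyFrameCallback createFromParcel(Parcel in) { return new MyFrameCallback(); }

        @Override
        public MyFrameCallback[] newArray(int size) { return new MyFrameCallback[size]; }
    };
}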
The SuccessFrameGrabberRecognizer is a special Recognizer that wraps some other Recognizer and impersonates it while processing the image. However, when the Recognizer being impersonated changes its Result into Valid state, the SuccessFrameGrabberRecognizer captures the image and saves it into its own Result object.
Since SuccessFrameGrabberRecognizer impersonates its slave Recognizer object, it is not possible to give both the concrete Recognizer object and the SuccessFrameGrabberRecognizer that wraps it to the same RecognizerBundle - doing so will have the same result as if you had given two instances of the same Recognizer type to the RecognizerBundle : it will crash your application.
This recognizer is best for use cases in which you need to capture the exact image that was being processed by some other Recognizer object at the time its Result became Valid . When that happens, the SuccessFrameGrabber's Result will also become Valid and will contain the described image. That image can then be retrieved with the getSuccessFrame() method.
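A minimal sketch, assuming a BlinkIdSingleSideRecognizer as the slave recognizer (imports omitted):

// wrap the slave recognizer; only the wrapper goes into the bundle
BlinkIdSingleSideRecognizer slaveRecognizer = new BlinkIdSingleSideRecognizer();
SuccessFrameGrabberRecognizer successFrameGrabber = new SuccessFrameGrabberRecognizer(slaveRecognizer);
RecognizerBundle recognizerBundle = new RecognizerBundle(successFrameGrabber);

// after scanning completes, read the captured frame alongside the slave's data
if (successFrameGrabber.getResult().getResultState() == Recognizer.Result.State.Valid) {
    Image successFrame = successFrameGrabber.getResult().getSuccessFrame();
}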
Unless stated otherwise for a concrete recognizer, the single side BlinkID recognizers from this list can be used in any context, but they work best with BlinkIdUISettings and DocumentScanUISettings , whose UIs are best suited for document scanning.

Combined recognizers should be used with BlinkIdUISettings . They manage scanning of multiple document sides in a single camera opening and guide the user through the scanning process. Some combined recognizers support scanning of multiple document types, but only one document type can be scanned at a time.
The BlinkIdSingleSideRecognizer scans and extracts data from a single side of a supported document. You can find the list of currently supported documents here. We will continue expanding this recognizer by adding support for new document types in the future. Star this repo to stay updated.
The BlinkIdSingleSideRecognizer works best with the BlinkIdUISettings and BlinkIdOverlayController .
Use BlinkIdMultiSideRecognizer for scanning both sides of a supported document. First, it scans and extracts data from the front, then scans and extracts data from the back, and finally combines the results from both sides. The BlinkIdMultiSideRecognizer also performs data matching and returns a flag indicating whether the data extracted from the front side matches the data from the back. You can find the list of currently supported documents here. We will continue expanding this recognizer by adding support for new document types in the future. Star this repo to stay updated.
The BlinkIdMultiSideRecognizer works best with the BlinkIdUISettings and BlinkIdOverlayController .
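A minimal launch sketch using the built-in scanning activity; MY_REQUEST_CODE is a hypothetical request code of your choosing, and imports are omitted:

BlinkIdMultiSideRecognizer recognizer = new BlinkIdMultiSideRecognizer();
RecognizerBundle recognizerBundle = new RecognizerBundle(recognizer);

// configure and launch the built-in scanning activity
BlinkIdUISettings uiSettings = new BlinkIdUISettings(recognizerBundle);
ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, uiSettings);

// later, in onActivityResult, restore results into the same recognizer instance
// (`data` is the Intent delivered to onActivityResult)
recognizerBundle.loadFromIntent(data);
BlinkIdMultiSideRecognizer.Result result = recognizer.getResult();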
The MrtdRecognizer is used for scanning and data extraction from the Machine Readable Zone (MRZ) of various Machine Readable Travel Documents (MRTDs), such as ID cards and passports. This recognizer is not bound to a specific country, and it can be configured to only return data that matches criteria defined by the MrzFilter (see the sketch below).
You can find information about usage context at the beginning of this section.
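A sketch of reading common MRZ fields after a successful scan; the getter names follow the SDK's MrzResult API, so verify them against the Javadoc:

MrtdRecognizer.Result result = mrtdRecognizer.getResult();
if (result.getResultState() == Recognizer.Result.State.Valid) {
    MrzResult mrz = result.getMrzResult();
    String documentNumber = mrz.getDocumentNumber();
    String primaryId = mrz.getPrimaryId();     // typically the holder's surname
    String secondaryId = mrz.getSecondaryId(); // typically the given names
}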
The MrtdCombinedRecognizer scans the Machine Readable Zone (MRZ) after scanning the full document image and face image (usually the MRZ is on the back side, and the face image is on the front side of the document). Internally, it uses DocumentFaceRecognizer to obtain the full document image and face image as the first step, and then MrtdRecognizer to scan the MRZ.
You can find information about usage context at the beginning of this section.
The PassportRecognizer is used for scanning and data extraction from the Machine Readable Zone (MRZ) of various passport documents. This recognizer also returns the face image from the passport.
You can find information about usage context at the beginning of this section.
The VisaRecognizer is used for scanning and data extraction from the Machine Readable Zone (MRZ) of various visa documents. This recognizer also returns the face image from the visa document.
You can find information about usage context at the beginning of this section.
The IdBarcodeRecognizer is used for scanning barcodes from various ID cards. Check this document to see the list of supported document types.
You can find information about usage context at the beginning of this section.
The DocumentFaceRecognizer is a special type of recognizer that only returns face image and full document image of the scanned document. It does not extract document fields like first name, last name, etc. This generic recognizer can be used to obtain document images in cases when specific support for some document type is not available.
You can find information about usage context at the beginning of this section.
You need to ensure that the final app gets all resources required by BlinkID . At the time of writing this documentation, Android does not support combining multiple AAR libraries into a single fat AAR. The problem is that resource merging is done while building the application, not while building the AAR, so the application must be aware of all its dependencies. There is no official Android way of "hiding" a third party AAR within your AAR.
This problem is usually solved with transitive Maven dependencies, i.e. when publishing your AAR to Maven, you specify the dependencies of your AAR so they are automatically referenced by the app using your AAR. Besides this, there are also several other approaches you can try:
- Use custom UI integration (with RecognizerRunnerView ). You can perform custom UI integration while taking care that all resources (strings, layouts, images, ...) used are solely from your AAR, not from BlinkID . Then, in your AAR, do not reference LibBlinkID.aar as a gradle dependency; instead, unzip it and copy its assets to your AAR's assets folder, its classes.jar to your AAR's lib folder (which should be referenced by gradle as a jar dependency), and the contents of its jni folder to your AAR's src/main/jniLibs folder.

BlinkID is distributed with ARMv7 and ARM64 native library binaries.
The ARMv7 architecture gives the ability to take advantage of hardware accelerated floating point operations and SIMD processing with NEON. This gives BlinkID a huge performance boost on devices with ARMv7 processors. Most newer devices (all since 2012) have an ARMv7 processor, so it makes little sense not to take advantage of the performance boost those processors can give. Also note that some devices with ARMv7 processors do not support the NEON and VFPv4 instruction sets, most popular being those based on NVIDIA Tegra 2, ARM Cortex A9 and older. Since these devices are old by today's standards, BlinkID does not support them. For the same reason, BlinkID does not support devices with the ARMv5 ( armeabi ) architecture.
ARM64 is the new processor architecture that most new devices use. ARM64 processors are very powerful and also have the possibility to take advantage of new NEON64 SIMD instruction set to quickly process multiple pixels with a single instruction.
There are some issues to be considered:
LibBlinkID.aar archive contains ARMv7 and ARM64 builds of the native library. By default, when you integrate BlinkID into your app, your app will contain native builds for all these processor architectures. Thus, BlinkID will work on ARMv7 and ARM64 devices and will use ARMv7 features on ARMv7 devices and ARM64 features on ARM64 devices. However, the size of your application will be rather large.
We recommend that you distribute your app using App Bundle. This will defer apk generation to Google Play, allowing it to generate minimal APK for each specific device that downloads your app, including only required processor architecture support.
If you are unable to use App Bundle, you can create multiple flavors of your app - one flavor for each architecture. With gradle and Android Studio this is very easy - just add the following code to the build.gradle file of your app:
android {
...
splits {
abi {
enable true
reset()
include 'armeabi-v7a', 'arm64-v8a'
universalApk true
}
}
}
With these build instructions, gradle will build three different APK files for your app: one for each of the two processor architectures, and a universal APK containing all of them. In order for Google Play to accept multiple APKs of the same app, you need to ensure that each APK has a different version code. This can easily be done by defining a version code prefix that depends on the architecture and adding the real version code number to it in the following gradle script:
import com.android.build.OutputFile

// map each ABI to a version code prefix
def abiVersionCodes = ['armeabi-v7a': 1, 'arm64-v8a': 2]

android.applicationVariants.all { variant ->
    // assign a different version code to each ABI-specific output
    variant.outputs.each { output ->
        def filter = output.getFilter(OutputFile.ABI)
        if (filter != null) {
            output.versionCodeOverride = abiVersionCodes.get(filter) * 1000000 + android.defaultConfig.versionCode
        }
    }
}
For more information about creating APK splits with gradle, check this article from Google.
After generating multiple APKs, you need to upload them to Google Play. For the tutorial and rules about uploading multiple APKs to Google Play, please read the official Google article about multiple APKs.
If you won't be distributing your app via Google Play, or for some other reason want to have a single APK of smaller size, you can completely remove support for a certain CPU architecture from your APK. This is not recommended because of the consequences described below.
To keep only some CPU architectures, for example armeabi-v7a and arm64-v8a , add the following statement to your android block inside build.gradle :
android {
...
ndk {
// Tells Gradle to package the following ABIs into your application
abiFilters 'armeabi-v7a', 'arm64-v8a'
}
}
This will remove other architecture builds for all native libraries used by the application.
To remove support for a certain CPU architecture only for BlinkID , add the following statement to your android block inside build.gradle :
android {
...
packagingOptions {
exclude 'lib/<ABI>/libBlinkID.so'
}
}
where <ABI> represents the CPU architecture you want to remove:
- exclude 'lib/armeabi-v7a/libBlinkID.so'
- exclude 'lib/arm64-v8a/libBlinkID.so'

You can also remove multiple processor architectures by specifying the exclude directive multiple times. Just bear in mind that removing a processor architecture will have side effects on the performance and stability of your app. Please read this for more information.
Google decided that, as of August 2019, all apps on Google Play that contain native code need to have native support for 64-bit processors (this includes ARM64 and x86_64). This means that you cannot upload an application to the Google Play Console that supports only a 32-bit ABI without also supporting the corresponding 64-bit ABI.
By removing ARMv7 support, BlinkID will not work on devices that have ARMv7 processors.
By removing ARM64 support, BlinkID will not use ARM64 features on ARM64 devices.
If you are combining the BlinkID library with other libraries that contain native code in your application, make sure you match the architectures of all native libraries. For example, if a third party library ships only an ARMv7 version, you must use exactly the ARMv7 version of BlinkID with that library, not ARM64. Mixing architectures will crash your app at the initialization step, because the JVM will try to load all of its native dependencies in the same preferred architecture and will fail with an UnsatisfiedLinkError .
libc++_shared.so

BlinkID contains native code that depends on the C++ runtime. This runtime is provided by libc++_shared.so , which needs to be available in any app that uses BlinkID . However, the same file is also used by various other libraries that contain native components. If you happen to integrate such a library together with BlinkID in your app, your build will fail with an error similar to this one:
* What went wrong:
Execution failed for task ':app:mergeDebugNativeLibs'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.MergeJavaResWorkAction
> 2 files found with path 'lib/arm64-v8a/libc++_shared.so' from inputs:
- <path>/.gradle/caches/transforms-3/3d428f9141586beb8805ce57f97bedda/transformed/jetified-opencv-4.5.3.0/jni/arm64-v8a/libc++_shared.so
- <path>/.gradle/caches/transforms-3/609476a082a81bd7af00fd16a991ee43/transformed/jetified-blinkid-6.12.0/jni/arm64-v8a/libc++_shared.so
If you are using jniLibs and CMake IMPORTED targets, see
https://developer.android.com/r/tools/jniLibs-vs-imported-targets
The error states that multiple different dependencies provide the same file lib/arm64-v8a/libc++_shared.so (in this case, OpenCV and BlinkID).
You can resolve this issue by making sure that the dependency that uses the newer version of libc++_shared.so is listed first in your dependency list, and then simply adding the following to your build.gradle :
android {
packaging {
jniLibs {
pickFirsts.add("lib/armeabi-v7a/libc++_shared.so")
pickFirsts.add("lib/arm64-v8a/libc++_shared.so")
}
}
}
IMPORTANT NOTE
The code above will always select the first libc++_shared.so from your dependency list, so make sure that the dependency that uses the latest version of libc++_shared.so is listed first. This is because libc++_shared.so is backward-compatible, but not forward-compatible. This means that, eg libBlinkID.so built against libc++_shared.so from NDK r24 will work without problems when you package it together with libc++_shared.so from NDK r26, but will crash when you package it together with libc++_shared.so from NDK r21. This is true for all your native dependencies.
In case of problems with SDK integration, first make sure that you have followed integration instructions. If you're still having problems, please contact us at help.microblink.com.
If you are getting an "invalid license key" error or having other license-related problems (eg some feature that should be enabled is not, or there is a watermark on top of the camera), first check the ADB logcat. All license-related problems are logged to the error log, so it is easy to determine what went wrong.
If you cannot determine what the license-related problem is, or you simply do not understand the log, contact us at help.microblink.com. When contacting us, please make sure you provide the following information:
- the exact package name of your app (from your AndroidManifest.xml and/or your build.gradle file)

Keep in mind: versions 5.8.0 and above require an internet connection to work under our new License Management Program.
We're only asking you to do this so we can validate your trial license key. Data extraction still happens offline, on the device itself. Once the validation is complete, you can continue using the SDK in offline mode (or over a private network) until the next check.
If you are having problems with scanning certain items, undesired behaviour on specific device(s), crashes inside BlinkID or anything unmentioned, please do as follows:
enable logging to see what the library is doing. To enable logging, put this line in your application:
com.microblink.blinkid.util.Log.setLogLevel(com.microblink.blinkid.util.Log.LogLevel.LOG_VERBOSE);

After this line, the library will display as much information about its work as possible. Please save the entire log of the scanning session to a file that you will send to us. It is important to send the entire log, not just the part where the crash occurred, because crashes are sometimes caused by unexpected behaviour in an early stage of library initialization.
Contact us at help.microblink.com describing your problem and provide the following information:
I get InvalidLicenseKeyException when I construct a specific Recognizer object

Each license key contains information about which features are allowed to be used and which are not. This exception indicates that your production license does not allow the use of that specific Recognizer object. You should contact support to check whether the provided license is OK and that it really contains all the features that you have purchased.
I get InvalidLicenseKeyException with my trial license key

Whenever you construct any Recognizer object, or any other object that derives from Entity , a check is performed as to whether the license allows using that object. If the license is not set prior to constructing that object, you will get InvalidLicenseKeyException . We recommend setting the license as early as possible in your app, ideally in the onCreate callback of your Application singleton.
My app crashes with ClassNotFoundException

This usually happens when you perform integration into an Eclipse project and forget to add resources or native libraries into the project. You must always take care that the same versions of resources, assets, the java library and the native libraries are used in combination. Combining different versions of resources, assets, java and native libraries will trigger a crash in the SDK. This problem can also occur when you have performed an improper integration of the BlinkID SDK into your own SDK. Please read how to embed BlinkID inside another SDK.
My app crashes with UnsatisfiedLinkError

This error happens when the JVM fails to load some native method from the native library. If you are integrating via Android Studio and this error happens, make sure that you have correctly combined the BlinkID SDK with third party SDKs that contain native code, especially if you needed to resolve a conflict over libc++_shared.so . If this error also happens in our integration sample apps, it may indicate a bug in the SDK that manifests on a specific device. Please report that to our support team.
My build fails with a conflict over libc++_shared.so

Please consult the section about resolving the libc++_shared.so conflict.
I've added my callback to the MetadataCallbacks object, but it is not being called

Make sure that after adding your callback to MetadataCallbacks you have applied the changes to RecognizerRunnerView or RecognizerRunner as described in this section.
I've removed my callback from the MetadataCallbacks object, and now the app is crashing with NullPointerException

Make sure that after removing your callback from MetadataCallbacks you have applied the changes to RecognizerRunnerView or RecognizerRunner as described in this section.
In my onScanningDone callback I have the result inside my Recognizer , but when the scanning activity finishes, the result is gone

This usually happens when using RecognizerRunnerView and forgetting to pause the RecognizerRunnerView in your onScanningDone callback. Then, as soon as onScanningDone happens, the result is mutated or reset by additional processing that the Recognizer performs in the time between the end of your onScanningDone callback and the actual finishing of the scanning activity. For more information about the statefulness of the Recognizer objects, check this section.
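A minimal sketch of the fix inside a ScanResultListener, assuming recognizerRunnerView is your view instance:

@Override
public void onScanningDone(RecognitionSuccessType successType) {
    // pause immediately so further frames cannot mutate or reset the results
    recognizerRunnerView.pauseScanning();
    // read or copy the Recognizer results here, then finish the activity
}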
I am getting an IllegalStateException stating Data cannot be saved to intent because its size exceeds intent limit

This usually happens when you use a Recognizer that produces an image or a similarly large object inside its Result , and that object exceeds the Android intent transaction limit. You should enable a different intent data transfer mode. For more information about this, check this section. Also, instead of using a built-in activity, you can use RecognizerRunnerFragment with a built-in scanning overlay.
My app freezes or gets into a restart loop after scanning finishes

This usually happens when you attempt to transfer a standalone Result that contains images or similar large objects via Intent , and the size of the object exceeds the Android intent transaction limit. Depending on the device, you will either get a TransactionTooLargeException, see a simple message BINDER TRANSACTION FAILED in the log while your app freezes, or your app will get into a restart loop. We recommend that you use RecognizerBundle and its API for sending Recognizer objects via Intent in a safer manner (check this section for more information). However, if you really need to transfer a standalone Result object (eg a Result object obtained by cloning the Result object owned by a specific Recognizer object), you need to do that using global variables or singletons within your application. Sending large objects via Intent is not supported by Android.
I am using the Direct API and getting poorer scanning results than with camera scanning

When automatic scanning of camera frames with our camera management is used (provided camera overlays or direct usage of RecognizerRunnerView ), we use a stream of video frames and send multiple images to recognition to boost reading accuracy. We also perform frame quality analysis and combine scanning results from multiple camera frames. On the other hand, when you use the Direct API with a single image per document side, we cannot combine multiple images; we do our best to extract as much information as possible from that single image. In some cases, when the quality of the input image is not good enough, for example when the image is blurred or glare is present, we are not able to successfully read the document.
Online trial licenses require public network access for validation purposes. See Licensing issues.
The onOcrResult() method in my OcrCallback is never invoked and all Result objects always return null in their OCR result getters

In order to obtain the raw OCR result, which contains the locations of each character, its value and its alternatives, you need a license that allows it. By default, licenses do not allow exposing raw OCR results in the public API. If you really need that, please contact us and explain your use case.
You can find BlinkID SDK size report for all supported ABIs here.
Complete API reference can be found in Javadoc.
For any other questions, feel free to contact us at help.microblink.com.