The BlinkID Android SDK lets you build a fantastic onboarding experience in your Android app.
With a single quick scan, your users can extract information from their identity cards, passports, driver's licenses, and virtually any other government-issued ID.
Here is what BlinkID offers:
To see all of these features at work, download our free demo app.
Ready to dive into the integration? First, make sure your document type is supported (see the full list). Then follow the instructions below.
This guide covers the following topics:

- Scanning with built-in activities (UISettings)
- Using RecognizerRunnerFragment
- Custom UX with RecognizerRunnerView
- Using Direct API for String recognition (parsing)
- BlinkIdUISettings and BlinkIdOverlayController
- DocumentUISettings
- LegacyDocumentVerificationUISettings
- Handling processing events with RecognizerRunner and RecognizerRunnerView
- The Recognizer concept and RecognizerBundle
- Passing Recognizer objects
- Resolving conflicts with libc++_shared.so

The quick start below shows how to use RecognizerRunnerFragment and the camera overlay controller. Add the BlinkID Maven repository to the repositories list in your build.gradle:
repositories {
maven { url 'https://maven.microblink.com' }
}
Add BlinkID as a dependency and make sure transitive is set to true:
dependencies {
implementation('com.microblink:blinkid:6.12.0@aar') {
transitive = true
}
}
Android Studio should automatically import the Javadoc from the Maven dependency. If it does not, you can do it manually by following these steps:
1. In the Project view, expand the External Libraries entry (it is usually the last entry in the project view).
2. Locate the blinkid-6.12.0 entry, right-click it, and select Library Properties...
3. A Library Properties pop-up window will appear.
4. Click the second + button (the one containing a + with a little globe).
5. Enter https://blinkid.github.io/blinkid-android/ and click OK.

To initialize scanning, you need a valid license key. You can request a free trial license key after registering at the Microblink Developer Hub. The license is bound to the package name of your app, so make sure you enter the correct package name when asked.
Download your license file and put it in your application's assets folder. Set the license key before using any other classes from the SDK; otherwise you will get a runtime exception.
We recommend extending the Android Application class and setting the license in its onCreate callback:
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", this);
    }
}

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", this)
    }
}

In your main activity, define and register an ActivityResultLauncher object. OneSideDocumentScan and TwoSideDocumentScan are interchangeable, with no difference in implementation; the only functional difference is that OneSideDocumentScan scans just one side of a document, while TwoSideDocumentScan scans more than one side.
ActivityResultLauncher<Void> resultLauncher = registerForActivityResult(
        new TwoSideDocumentScan(),
        twoSideScanResult -> {
            ResultStatus resultScanStatus = twoSideScanResult.getResultStatus();
            if (resultScanStatus == ResultStatus.FINISHED) {
                // code after a successful scan
                // use twoSideScanResult.getResult() for fetching results, for example:
                String firstName = twoSideScanResult.getResult().getFirstName().value();
            } else if (resultScanStatus == ResultStatus.CANCELLED) {
                // code after a cancelled scan
            } else if (resultScanStatus == ResultStatus.EXCEPTION) {
                // code after a failed scan
            }
        }
);

private val resultLauncher =
    registerForActivityResult(TwoSideDocumentScan()) { twoSideScanResult: TwoSideScanResult ->
        when (twoSideScanResult.resultStatus) {
            ResultStatus.FINISHED -> {
                // code after a successful scan
                // use twoSideScanResult.result for fetching results, for example:
                val firstName = twoSideScanResult.result?.firstName?.value()
            }
            ResultStatus.CANCELLED -> {
                // code after a cancelled scan
            }
            ResultStatus.EXCEPTION -> {
                // code after a failed scan
            }
            else -> {}
        }
    }

@Composable
fun createLauncher(): ActivityResultLauncher<Void?> {
    return rememberLauncherForActivityResult(TwoSideDocumentScan()) { twoSideScanResult: TwoSideScanResult ->
        when (twoSideScanResult.resultStatus) {
            ResultStatus.FINISHED -> {
                // code after a successful scan
                // use twoSideScanResult.result for fetching results, for example:
                val firstName = twoSideScanResult.result?.firstName?.value()
            }
            ResultStatus.CANCELLED -> {
                // code after a cancelled scan
            }
            ResultStatus.EXCEPTION -> {
                // code after a failed scan
            }
            else -> {}
        }
    }
}

After scanning, the result, an instance of OneSideScanResult or TwoSideScanResult, is updated. You define what happens with the data in the result callback (in the Kotlin code the callback is implicit). The result can be accessed via the twoSideScanResult.getResult() method (twoSideScanResult.result in Kotlin).
Start the scanning process by calling ActivityResultLauncher.launch:
// method within MyActivity from previous step
public void startScanning() {
    // Start scanning
    resultLauncher.launch(null);
}

// method within MyActivity from previous step
fun startScanning() {
    // Start scanning
    resultLauncher.launch()
}

// within @Composable function or setContent block
val resultLauncher = createLauncher()
resultLauncher.launch()

The results are then available in the callback you defined in the previous step.
BlinkID requires Android API level 21 or newer.
Camera video preview resolution also matters: to perform a successful scan, the camera preview resolution must be at least 720p. Note that the camera preview resolution is not the same as the video recording resolution.
BlinkID is distributed with ARMv7 and ARM64 native library binaries.
BlinkID is a native library written in C++ and is available for multiple platforms. Because of this, BlinkID cannot work on devices with obscure hardware architectures. We have compiled the BlinkID native code only for the most popular Android ABIs.
Even before setting the license key, you should check whether BlinkID is supported on the current device (see the next section, Compatibility check). Trying to call any method from the SDK that relies on native code, such as the license check, on a device with an unsupported CPU architecture will crash your app.
If you are combining the BlinkID library with other libraries that contain native code in your application, make sure the architectures of all native libraries match.
For more information, see the Processor architecture considerations section.
Here is how to check whether BlinkID is supported on the device:
// check if BlinkID is supported on the device
RecognizerCompatibilityStatus status = RecognizerCompatibility.getRecognizerCompatibilityStatus(this);
if (status == RecognizerCompatibilityStatus.RECOGNIZER_SUPPORTED) {
    Toast.makeText(this, "BlinkID is supported!", Toast.LENGTH_LONG).show();
} else if (status == RecognizerCompatibilityStatus.NO_CAMERA) {
    Toast.makeText(this, "BlinkID is supported only via Direct API!", Toast.LENGTH_LONG).show();
} else if (status == RecognizerCompatibilityStatus.PROCESSOR_ARCHITECTURE_NOT_SUPPORTED) {
    Toast.makeText(this, "BlinkID is not supported on current processor architecture!", Toast.LENGTH_LONG).show();
} else {
    Toast.makeText(this, "BlinkID is not supported! Reason: " + status.name(), Toast.LENGTH_LONG).show();
}

// check if BlinkID is supported on the device
when (val status = RecognizerCompatibility.getRecognizerCompatibilityStatus(this)) {
    RecognizerCompatibilityStatus.RECOGNIZER_SUPPORTED -> {
        Toast.makeText(this, "BlinkID is supported!", Toast.LENGTH_LONG).show()
    }
    RecognizerCompatibilityStatus.NO_CAMERA -> {
        Toast.makeText(this, "BlinkID is supported only via Direct API!", Toast.LENGTH_LONG).show()
    }
    RecognizerCompatibilityStatus.PROCESSOR_ARCHITECTURE_NOT_SUPPORTED -> {
        Toast.makeText(this, "BlinkID is not supported on current processor architecture!", Toast.LENGTH_LONG).show()
    }
    else -> {
        Toast.makeText(this, "BlinkID is not supported! Reason: " + status.name, Toast.LENGTH_LONG).show()
    }
}

Some recognizers require a camera with autofocus. If you try to use them on a device that does not support autofocus, you will get an error. To prevent this, you can check whether a recognizer requires autofocus by calling its requiresAutofocus method.
If you already have an array of recognizers, you can easily filter out the recognizers that require autofocus with the following code snippet:
Recognizer[] recArray = ...;
if (!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
    recArray = RecognizerUtils.filterOutRecognizersThatRequireAutofocus(recArray);
}

var recArray: Array<Recognizer> = ...
if (!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
    recArray = RecognizerUtils.filterOutRecognizersThatRequireAutofocus(recArray)
}

Depending on your use case and customization needs, you can integrate BlinkID into your app in five ways:
- Document scan activities (OneSideDocumentScan and TwoSideDocumentScan) - the SDK handles everything; you launch the built-in activity and handle the result. No customization options.
- Built-in activities (UISettings) - the SDK handles most of the work; you set up the recognizers and settings, launch the built-in activity, and handle the result. Limited customization options.
- Built-in fragment (RecognizerRunnerFragment) - reuse the UX of the built-in activities within your own activity.
- Custom UX (RecognizerRunnerView) - you have to implement a fully custom scanning UX, while the SDK handles camera management.
- Direct API (RecognizerRunner) - the SDK handles only the recognition, while you have to provide the images, either from the camera or from files.

Document scan activities (OneSideDocumentScan and TwoSideDocumentScan)

OneSideDocumentScan and TwoSideDocumentScan are classes that contain all the setup definitions needed to quickly launch the SDK's built-in scan activity. They let you skip all the setup steps, such as UISettings and RecognizerBundle, and go straight to scanning.
As shown in Performing your first scan, you only need to define a result listener specifying what happens with the scan results, and call the actual scan function.
Built-in activities (UISettings)

UISettings is a class that contains all the settings needed for the SDK's built-in scan activities. It configures scan activity behavior, strings, icons, and other UI elements. You launch a scan activity configured with UISettings by using ActivityRunner, as shown in the example below.
We provide several UISettings classes specialized for different scanning scenarios. Each UISettings object has properties that can be changed via the appropriate setter methods. For example, you can customize the camera settings with the setCameraSettings method.
All available UISettings classes are listed here.
In your main activity, create the Recognizer objects that will perform image recognition, configure them, and put them into a RecognizerBundle object. For more information about available recognizers and RecognizerBundle, see here.
For example, to scan a supported document, configure your recognizer like this:
public class MyActivity extends Activity {
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle bundle) {
        super.onCreate(bundle);
        // setup views, as you would normally do in onCreate callback

        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);
    }
}

class MyActivity : Activity() {
    private lateinit var mRecognizer: BlinkIdMultiSideRecognizer
    private lateinit var mRecognizerBundle: RecognizerBundle

    override fun onCreate(bundle: Bundle?) {
        super.onCreate(bundle)
        // setup views, as you would normally do in onCreate callback

        // create BlinkIdMultiSideRecognizer
        mRecognizer = BlinkIdMultiSideRecognizer()

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = RecognizerBundle(mRecognizer)
    }
}

Start the recognition process by creating BlinkIdUISettings and calling ActivityRunner.startActivityForResult:
// method within MyActivity from previous step
public void startScanning() {
    // Settings for BlinkIdActivity
    BlinkIdUISettings settings = new BlinkIdUISettings(mRecognizerBundle);
    // tweak settings as you wish

    // Start activity
    ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, settings);
}

// method within MyActivity from previous step
fun startScanning() {
    // Settings for BlinkIdActivity
    val settings = BlinkIdUISettings(mRecognizerBundle)
    // tweak settings as you wish

    // Start activity
    ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, settings)
}

After scanning completes, onActivityResult will be called in your activity; you can obtain the scanning results there.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == MY_REQUEST_CODE) {
        if (resultCode == Activity.RESULT_OK && data != null) {
            // load the data into all recognizers bundled within your RecognizerBundle
            mRecognizerBundle.loadFromIntent(data);

            // now every recognizer object that was bundled within RecognizerBundle
            // has been updated with results obtained during scanning session

            // you can get the result by invoking getResult on recognizer
            BlinkIdMultiSideRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
        }
    }
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == MY_REQUEST_CODE) {
        if (resultCode == Activity.RESULT_OK && data != null) {
            // load the data into all recognizers bundled within your RecognizerBundle
            mRecognizerBundle.loadFromIntent(data)

            // now every recognizer object that was bundled within RecognizerBundle
            // has been updated with results obtained during scanning session

            // you can get the result by accessing result on recognizer
            val result = mRecognizer.result
            if (result.resultState == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
        }
    }
}

For more information about available recognizers and RecognizerBundle, see RecognizerBundle and available recognizers.
Built-in fragment (RecognizerRunnerFragment)

To reuse the UX of the built-in activities within your own activity, use RecognizerRunnerFragment. The activity hosting the RecognizerRunnerFragment must implement the ScanningOverlayBinder interface. Attempting to add a RecognizerRunnerFragment to an activity that does not implement that interface will result in a ClassCastException.
The ScanningOverlayBinder is responsible for returning a non-null implementation of ScanningOverlay, the class that manages the UI on top of the RecognizerRunnerFragment. Writing your own ScanningOverlay implementation is not recommended; use one of the listed implementations instead.
Here is a minimal example of an activity hosting a RecognizerRunnerFragment:
public class MyActivity extends AppCompatActivity implements RecognizerRunnerFragment.ScanningOverlayBinder {
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;
    private BlinkIdOverlayController mScanOverlay;
    private RecognizerRunnerFragment mRecognizerRunnerFragment;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my_activity);
        mScanOverlay = createOverlay();

        if (null == savedInstanceState) {
            // create fragment transaction to replace R.id.recognizer_runner_view_container with RecognizerRunnerFragment
            mRecognizerRunnerFragment = new RecognizerRunnerFragment();
            FragmentTransaction fragmentTransaction = getSupportFragmentManager().beginTransaction();
            fragmentTransaction.replace(R.id.recognizer_runner_view_container, mRecognizerRunnerFragment);
            fragmentTransaction.commit();
        } else {
            // obtain reference to fragment restored by Android within super.onCreate() call
            mRecognizerRunnerFragment = (RecognizerRunnerFragment) getSupportFragmentManager().findFragmentById(R.id.recognizer_runner_view_container);
        }
    }

    @Override
    @NonNull
    public ScanningOverlay getScanningOverlay() {
        return mScanOverlay;
    }

    private BlinkIdOverlayController createOverlay() {
        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        BlinkIdUISettings settings = new BlinkIdUISettings(mRecognizerBundle);
        return settings.createOverlayController(this, mScanResultListener);
    }

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // pause scanning to prevent new results while fragment is being removed
            mRecognizerRunnerFragment.getRecognizerRunnerView().pauseScanning();

            // now you can remove the RecognizerRunnerFragment with new fragment transaction
            // and use result within mRecognizer safely without the need for making a copy of it

            // if not paused, as soon as this method ends, RecognizerRunnerFragment continues
            // scanning. Note that this can happen even if you created fragment transaction for
            // removal of RecognizerRunnerFragment - in the time between end of this method
            // and beginning of execution of the transaction. So to ensure result within mRecognizer
            // does not get mutated, ensure calling pauseScanning() as shown above.
        }

        @Override
        public void onUnrecoverableError(@NonNull Throwable throwable) {
        }
    };
}

package com.microblink.blinkid
class MainActivity : AppCompatActivity(), RecognizerRunnerFragment.ScanningOverlayBinder {
    private lateinit var mRecognizer: BlinkIdMultiSideRecognizer
    private lateinit var mRecognizerRunnerFragment: RecognizerRunnerFragment
    private lateinit var mRecognizerBundle: RecognizerBundle
    private lateinit var mScanOverlay: BlinkIdOverlayController

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (!::mScanOverlay.isInitialized) {
            mScanOverlay = createOverlayController()
        }
        setContent {
            this.run {
                // viewBinding has to be set to 'true' in buildFeatures block of the build.gradle file
                AndroidViewBinding(RecognizerRunnerLayoutBinding::inflate) {
                    mRecognizerRunnerFragment =
                        fragmentContainerView.getFragment<RecognizerRunnerFragment>()
                }
            }
        }
    }

    override fun getScanningOverlay(): ScanningOverlay {
        return mScanOverlay
    }

    private fun createOverlayController(): BlinkIdOverlayController {
        // create BlinkIdMultiSideRecognizer
        mRecognizer = BlinkIdMultiSideRecognizer()

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = RecognizerBundle(mRecognizer)

        val settings = BlinkIdUISettings(mRecognizerBundle)
        return settings.createOverlayController(this, mScanResultListener)
    }

    private val mScanResultListener: ScanResultListener = object : ScanResultListener {
        override fun onScanningDone(recognitionSuccessType: RecognitionSuccessType) {
            // pause scanning to prevent new results while fragment is being removed
            mRecognizerRunnerFragment.recognizerRunnerView?.pauseScanning()

            // now you can remove the RecognizerRunnerFragment with new fragment transaction
            // and use result within mRecognizer safely without the need for making a copy of it

            // if not paused, as soon as this method ends, RecognizerRunnerFragment continues
            // scanning. Note that this can happen even if you created fragment transaction for
            // removal of RecognizerRunnerFragment - in the time between end of this method
            // and beginning of execution of the transaction. So to ensure result within mRecognizer
            // does not get mutated, ensure calling pauseScanning() as shown above.
        }

        override fun onUnrecoverableError(throwable: Throwable) {
        }
    }
}

For a more detailed example, see the sample apps provided with the SDK, and make sure your host activity's orientation is set to nosensor or that it handles configuration changes itself (i.e. it is not restarted when a configuration change occurs). For more information, check the Scan orientation section.
Custom UX with RecognizerRunnerView

This section discusses how to embed RecognizerRunnerView into your scan activity and perform scanning.
1. Make sure the RecognizerRunnerView is a member field of your activity. This is required because you need to pass all of the activity's lifecycle events to the RecognizerRunnerView.
2. We recommend keeping your activity in a single orientation, such as portrait or landscape. Setting the scan activity's orientation to sensor causes a complete activity restart whenever the device orientation changes. This degrades the user experience significantly, because the camera and the BlinkID native library have to be restarted every time. Measures against this behavior are discussed later.
3. In the onCreate method, create a new RecognizerRunnerView, bundle the recognizers the view will use, define a CameraEventsListener that will handle the mandatory camera events, define a ScanResultListener that will receive a call when recognition completes, and then call its create method. After that, add any views that should be laid out on top of the camera view.
4. Pass your activity's lifecycle to the setLifecycle method to enable automatic handling of lifecycle events.

Here is a minimal example of integrating RecognizerRunnerView as the only view in your activity:
public class MyScanActivity extends AppCompatActivity {
    private static final int PERMISSION_CAMERA_REQUEST_CODE = 42;
    private RecognizerRunnerView mRecognizerRunnerView;
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        // create RecognizerRunnerView
        mRecognizerRunnerView = new RecognizerRunnerView(this);

        // set lifecycle to automatically call recognizer runner view lifecycle methods
        mRecognizerRunnerView.setLifecycle(getLifecycle());

        // associate RecognizerBundle with RecognizerRunnerView
        mRecognizerRunnerView.setRecognizerBundle(mRecognizerBundle);

        // scan result listener will be notified when scanning is complete
        mRecognizerRunnerView.setScanResultListener(mScanResultListener);

        // camera events listener will be notified about camera lifecycle and errors
        mRecognizerRunnerView.setCameraEventsListener(mCameraEventsListener);

        setContentView(mRecognizerRunnerView);
    }

    @Override
    public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        // changeConfiguration is not handled by lifecycle events so call it manually
        mRecognizerRunnerView.changeConfiguration(newConfig);
    }

    private final CameraEventsListener mCameraEventsListener = new CameraEventsListener() {
        @Override
        public void onCameraPreviewStarted() {
            // this method is from CameraEventsListener and will be called when camera preview starts
        }

        @Override
        public void onCameraPreviewStopped() {
            // this method is from CameraEventsListener and will be called when camera preview stops
        }

        @Override
        public void onError(Throwable exc) {
            /**
             * This method is from CameraEventsListener and will be called when
             * opening of camera resulted in exception or recognition process
             * encountered an error. The error details will be given in exc
             * parameter.
             */
        }

        @Override
        @TargetApi(23)
        public void onCameraPermissionDenied() {
            /**
             * Called in Android 6.0 and newer if camera permission is not given
             * by user. You should request permission from user to access camera.
             */
            requestPermissions(new String[]{Manifest.permission.CAMERA}, PERMISSION_CAMERA_REQUEST_CODE);
            /**
             * Please note that user might have not given permission to use
             * camera. In that case, you have to explain to user that without
             * camera permissions scanning will not work.
             * For more information about requesting permissions at runtime, check
             * this article:
             * https://developer.android.com/training/permissions/requesting.html
             */
        }

        @Override
        public void onAutofocusFailed() {
            /**
             * This method is from CameraEventsListener and will be called when camera focusing has failed.
             * Camera manager usually tries different focusing strategies and this method is called when all
             * those strategies fail to indicate that either object on which camera is being focused is too
             * close or ambient light conditions are poor.
             */
        }

        @Override
        public void onAutofocusStarted(Rect[] areas) {
            /**
             * This method is from CameraEventsListener and will be called when camera focusing has started.
             * You can utilize this method to draw focusing animation on UI.
             * Areas parameter is array of rectangles where focus is being measured.
             * It can be null on devices that do not support fine-grained camera control.
             */
        }

        @Override
        public void onAutofocusStopped(Rect[] areas) {
            /**
             * This method is from CameraEventsListener and will be called when camera focusing has stopped.
             * You can utilize this method to remove focusing animation on UI.
             * Areas parameter is array of rectangles where focus is being measured.
             * It can be null on devices that do not support fine-grained camera control.
             */
        }
    };

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // this method is from ScanResultListener and will be called when scanning completes
            // you can obtain scanning result by calling getResult on each
            // recognizer that you bundled into RecognizerBundle.
            // for example:
            BlinkIdMultiSideRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }

            // Note that mRecognizer is stateful object and that as soon as
            // scanning either resumes or its state is reset
            // the result object within mRecognizer will be changed. If you
            // need to create an immutable copy of the result, you can do that
            // by calling clone() on it, for example:
            BlinkIdMultiSideRecognizer.Result immutableCopy = result.clone();

            // After this method ends, scanning will be resumed and recognition
            // state will be retained. If you want to prevent that, then
            // you should call:
            mRecognizerRunnerView.resetRecognitionState();
            // Note that resetting recognition state will clear internal result
            // objects of all recognizers that are bundled in RecognizerBundle
            // associated with RecognizerRunnerView.

            // If you want to pause scanning to prevent receiving recognition
            // results or mutating result, you should call:
            mRecognizerRunnerView.pauseScanning();
            // if scanning is paused at the end of this method, it is guaranteed
            // that result within mRecognizer will not be mutated, therefore you
            // can avoid creating a copy as described above

            // After scanning is paused, you will have to resume it with:
            mRecognizerRunnerView.resumeScanning(true);
            // boolean in resumeScanning method indicates whether recognition
            // state should be automatically reset when resuming scanning - this
            // includes clearing result of mRecognizer
        }

        @Override
        public void onUnrecoverableError(@NonNull Throwable throwable) {
        }
    };
}

If the screenOrientation attribute of your activity in AndroidManifest.xml is set to sensor, fullSensor, or similar, the activity will be restarted every time the device changes orientation between portrait and landscape. While the activity is being restarted, its onPause, onStop, and onDestroy methods will be called and a new activity will be created anew. This is a potential problem for a scan activity because both the camera and the native library are controlled within its lifecycle, so an activity restart means restarting both the camera and the native library. This is a problem because restarting on every orientation change is very slow and degrades the user experience. We do not recommend such a setting.
To avoid this problem, we recommend setting your scan activity to portrait or landscape mode and handling device orientation changes manually. To help with this, you can add any view you wish to rotate (for example, a view containing buttons, status messages, etc.) to the RecognizerRunnerView with the addChildView method. The second parameter of this method is a boolean that defines whether the view you are adding will rotate with the device. To define the allowed orientations, implement the OrientationAllowedListener interface and add it to the RecognizerRunnerView with the setOrientationAllowedListener method. This is the recommended way of rotating the camera overlay.
However, if you want to set the screenOrientation attribute to sensor or similar and let Android handle the orientation changes of your scan activity, we recommend setting the configChanges attribute of your activity to orientation|screenSize. This will prevent Android from restarting your activity when the device orientation changes. Instead, the activity's onConfigurationChanged method will be called so that the activity is notified of the configuration change. When implementing this method, you should call the changeConfiguration method of RecognizerRunnerView so it can adapt its camera surface and child views to the new configuration.
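As an illustration of this second approach, the manifest entry for a scan activity could look like the sketch below (the activity name is a placeholder, not part of the SDK):

```xml
<!-- AndroidManifest.xml - example only; ".MyScanActivity" is a placeholder name -->
<activity
    android:name=".MyScanActivity"
    android:screenOrientation="sensor"
    android:configChanges="orientation|screenSize" />
```

With this entry, an orientation change triggers onConfigurationChanged instead of an activity restart, which is where you forward the new Configuration to changeConfiguration as described above.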
This section discusses how to use the Direct API to recognize Android Bitmaps without camera management. You can use the Direct API anywhere in your application, not only in activities.
Image recognition performance depends heavily on the quality of the input image. When camera management is used (scanning from the camera), we do our best to obtain camera frames with the best possible quality for the device used. On the other hand, when using the Direct API, you need to provide high-quality images, without blur and glare, for successful recognition.
There are two ways of providing images to the recognition process:

- Recognition of video frames: when the Bitmaps you provide come from a video stream, recognition is optimized for speed and relies on time redundancy between consecutive video frames to produce the best recognition result. Use recognizeBitmap or recognizeVideoImage for such images.
- Recognition of still images: when you need to thoroughly scan a single image that is not part of a video stream, the SDK tries to get the best possible result from that single InputImage. InputImage instances are provided by the SDK or can be created using ImageBuilder. Use recognizeStillImage for such images.

Here is a minimal example of using the Direct API to recognize an Android Bitmap:
public class DirectAPIActivity extends Activity {
    private RecognizerRunner mRecognizerRunner;
    private BlinkIdMultiSideRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // initialize your activity here

        // create BlinkIdMultiSideRecognizer
        mRecognizer = new BlinkIdMultiSideRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        try {
            mRecognizerRunner = RecognizerRunner.getSingletonInstance();
        } catch (FeatureNotSupportedException e) {
            Toast.makeText(this, "Feature not supported! Reason: " + e.getReason().getDescription(), Toast.LENGTH_LONG).show();
            finish();
            return;
        }

        mRecognizerRunner.initialize(this, mRecognizerBundle, new DirectApiErrorListener() {
            @Override
            public void onRecognizerError(Throwable t) {
                Toast.makeText(DirectAPIActivity.this, "There was an error in initialization of Recognizer: " + t.getMessage(), Toast.LENGTH_SHORT).show();
                finish();
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();
        // start recognition
        Bitmap bitmap = BitmapFactory.decodeFile("/path/to/some/file.jpg");
        mRecognizerRunner.recognizeBitmap(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, mScanResultListener);
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        mRecognizerRunner.terminate();
    }

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // this method is from ScanResultListener and will be called
            // when scanning completes
            // you can obtain scanning result by calling getResult on each
            // recognizer that you bundled into RecognizerBundle.
            // for example:
            BlinkIdMultiSideRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
        }

        @Override
        public void onUnrecoverableError(@NonNull Throwable throwable) {
        }
    };
}

The ScanResultListener.onScanningDone method will be called for each input image you send to recognition. You can call the RecognizerRunner.recognize* methods multiple times with different images of the same document to get better reading accuracy, until you obtain a successful result in the listener's onScanningDone method. This is useful when you are using your own or a third-party camera management.
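As an illustration of that retry flow, here is a minimal sketch. The queue of images and the overall structure are hypothetical; the BlinkID calls are the ones shown in the example above, and it respects the rule (described below) that the runner only accepts a new image once the previous onScanningDone callback has completed:

```java
// Hypothetical retry flow: after each onScanningDone, send the next shot of the
// same document until a recognizer reports a Valid result. Assumes mRecognizerRunner
// and mRecognizer are initialized as in the example above.
private final Queue<Bitmap> mShots = new ArrayDeque<>(); // filled by your own camera code

private final ScanResultListener mRetryListener = new ScanResultListener() {
    @Override
    public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
        if (mRecognizer.getResult().getResultState() == Recognizer.Result.State.Valid) {
            // success - use the result
        } else if (!mShots.isEmpty()) {
            // the RecognizerRunner is back in the READY state here, so it is
            // legal to feed it the next image of the same document
            mRecognizerRunner.recognizeBitmap(mShots.poll(), Orientation.ORIENTATION_LANDSCAPE_RIGHT, this);
        }
    }

    @Override
    public void onUnrecoverableError(@NonNull Throwable throwable) {
    }
};
```

Note that processing of the next image does not actually begin until this onScanningDone callback returns, so the loop never overlaps two recognitions.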
Using Direct API for String recognition (parsing)

Some recognizers support recognition from a String. With the Direct API they can parse a given String and return data, just like when used on an input image. When recognition is performed on a String, no OCR is needed; the input String is used in the same way the OCR output is used when an image is recognized.
Recognition from a String can be performed in the same way as recognition from an image, described in the previous section.
The only difference is that you should call one of the RecognizerRunner singleton's methods intended for recognition from a string.
The RecognizerRunner singleton used by the Direct API is a state machine that can be in one of three states: OFFLINE, READY, and WORKING.

- When you obtain a reference to the RecognizerRunner singleton, it is in the OFFLINE state.
- You can initialize the RecognizerRunner by calling its initialize method. If you call initialize while the RecognizerRunner is not in the OFFLINE state, you will get an IllegalStateException.
- After initialization, the RecognizerRunner moves to the READY state, and you can call the recognize* methods.
- When you start recognition with a recognize* method, the RecognizerRunner moves to the WORKING state. If you attempt to call these methods while the RecognizerRunner is not in the READY state, you will get an IllegalStateException.
- Recognition is performed on a background processing thread, so it is safe to call the RecognizerRunner's methods from the UI thread.
- After recognition completes, the RecognizerRunner first moves back to the READY state and then calls the onScanningDone method of the provided ScanResultListener.
- The ScanResultListener's onScanningDone method is called on the background processing thread, so make sure you do not perform UI operations in this callback. Also note that until the onScanningDone method completes, the RecognizerRunner will not recognize another image or string, even if one of the recognize* methods was called right after it transitioned to the READY state. This ensures that the results of the recognizers bundled within the RecognizerBundle associated with the RecognizerRunner are not modified while they may still be used within the onScanningDone method.
- By calling the terminate method, the RecognizerRunner singleton releases all of its internal resources. Note that even after calling terminate you may receive an onScanningDone event if there was work in progress when terminate was called.
- The terminate method can be called from any state of the RecognizerRunner singleton.
- You can observe the state of the RecognizerRunner singleton with its getCurrentState method.

Both RecognizerRunnerView and RecognizerRunner use the same internal singleton that manages the native code. This singleton handles the initialization and termination of the native library and propagates the recognizers to it. The internal singleton takes care of correct synchronization and correct recognition settings, so you can use RecognizerRunnerView and RecognizerRunner together. If you run into problems while using RecognizerRunner together with RecognizerRunnerView, please let us know.
When you use a combined recognizer and images of both document sides are needed, you have to call RecognizerRunner.recognize* multiple times: first with images of the first side of the document, until it is read, and then with images of the second side. The combined recognizer automatically switches to scanning the second side after the first side has been read successfully. To be notified when scanning of the first side finishes, you have to set a FirstSideRecognitionCallback through MetadataCallbacks. If you don't need that information (e.g. when you have only one image for each document side), you don't have to set the FirstSideRecognitionCallback; ScanResultListener.onScanningDone is called with the recognition outcome after the second-side image has been processed.
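A minimal sketch of that two-side flow is shown below. It assumes the MetadataCallbacks and FirstSideRecognitionCallback API named above (check the SDK reference for the exact signatures); frontImage and mScanResultListener are placeholders for your own image source and listener:

```java
// Assumption: MetadataCallbacks with setFirstSideRecognitionCallback, as named in the text above.
MetadataCallbacks metadataCallbacks = new MetadataCallbacks();
metadataCallbacks.setFirstSideRecognitionCallback(() -> {
    // the first side has been read successfully; images sent from now on
    // are treated as the second document side
});
mRecognizerRunner.setMetadataCallbacks(metadataCallbacks);

// start with an image of the front side; once onScanningDone reports that
// processing has finished (the RecognizerRunner is READY again), send the
// back-side image with another recognizeBitmap call
mRecognizerRunner.recognizeBitmap(frontImage, Orientation.ORIENTATION_LANDSCAPE_RIGHT, mScanResultListener);
```

The key design point is that the second side must only be sent after the first side completes; sending both images back-to-back would attempt a recognize* call in the WORKING state and throw an IllegalStateException.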
BlinkIdUISettings and BlinkIdOverlayController

BlinkIdOverlayController implements the new UI for scanning identity documents, optimally designed to be used with the new BlinkIdMultiSideRecognizer and BlinkIdSingleSideRecognizer. It implements several new features:
The new UI allows the user to scan a document at any angle, in any orientation. We recommend forcing landscape orientation when scanning barcodes from the back side, because the success rate is higher in that orientation.
To launch the built-in activity that uses BlinkIdOverlayController, use BlinkIdUISettings.
To customize the overlay, provide your custom style resource via the BlinkIdUISettings.setOverlayViewStyle() method or the ReticleOverlayView constructor. You can customize the elements shown in the screenshot above by providing the following attributes in your style:
Exit
- mb_exitScanDrawable - icon drawable
- You can disable this element with BlinkIdUISettings.setShowCancelButton(false).

Torch
- mb_torchOnDrawable - icon drawable shown when the torch is enabled
- mb_torchOffDrawable - icon drawable shown when the torch is disabled
- You can disable this element with BlinkIdUISettings.setShowTorchButton(false).

Instructions
- mb_instructionsTextAppearance - style used as android:textAppearance
- mb_instructionsBackgroundDrawable - drawable used for the background
- mb_instructionsBackgroundColor - color used for the background

Flashlight warning
- mb_flashlightWarningTextAppearance - style used as android:textAppearance
- mb_flashlightWarningBackgroundDrawable - drawable used for the background
- You can disable this element with BlinkIdUISettings.setShowFlashlightWarning(false).

Card icons
- mb_cardFrontDrawable - icon drawable shown during the card flip animation, representing the front side of the card
- mb_cardBackDrawable - icon drawable shown during the card flip animation, representing the back side of the card

Reticle
- mb_reticleDefaultDrawable - drawable shown when the reticle is in the neutral state
- mb_reticleSuccessDrawable - drawable shown when the reticle is in the success state (scan successful)
- mb_reticleErrorDrawable - drawable shown when the reticle is in the error state
- mb_reticleColor - color used for the rotating reticle element
- mb_reticleDefaultColor - color used for the reticle in the neutral state
- mb_reticleErrorColor - color used for the reticle in the error state
- mb_successFlashColor - color used for the flash effect on a successful scan

To customize the visibility and style of the two dialogs, use the methods provided on BlinkIdUISettings.
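As a sketch, a custom style resource setting a few of the attributes listed above might look like this (the style name and resource references are placeholders; check the SDK documentation for the expected parent style):

```xml
<!-- res/values/styles.xml - example only; "MyBlinkIdOverlayStyle" and the
     @drawable/@color references are placeholders from your own project -->
<style name="MyBlinkIdOverlayStyle">
    <item name="mb_exitScanDrawable">@drawable/my_exit_icon</item>
    <item name="mb_instructionsBackgroundColor">@color/my_instructions_bg</item>
    <item name="mb_reticleColor">@color/my_reticle</item>
    <item name="mb_successFlashColor">@color/my_success_flash</item>
</style>
```

You would then pass this style to the overlay via the BlinkIdUISettings.setOverlayViewStyle() method mentioned above.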
The visibility of the introduction dialog is controlled with BlinkIdUISettings.setShowIntroductionDialog(boolean showIntroductionDialog). It is set to true by default, so the introduction dialog is shown.
The visibility of the onboarding dialog is controlled with BlinkIdUISettings.setShowOnboardingInfo(boolean showOnboardingInfo). It is set to true by default, so the onboarding dialog is shown.
There is also a method for controlling the delay of the "Show Help" tooltip displayed above the help button. The button itself is shown if the aforementioned method for showing the onboarding info is set to true. The method for setting the tooltip delay is BlinkIdUISettings.setShowTooltipTimeIntervalMs(long showTooltipTimeIntervalMs). The time parameter is set in milliseconds.
The default delay is 12 seconds (12000 milliseconds).
Customizing and theming these introduction and onboarding elements can be done in the same manner as described in the previous chapter, by providing the following attributes:
Help button
mb_helpButtonDrawable - drawable shown when the help button is enabled
mb_helpButtonBackgroundColor - color used for the help button background
mb_helpButtonQuestionmarkColor - color used for the help button foreground

Help tooltip
mb_helpTooltipBackground - drawable shown as the background when the help tooltip pops up
mb_helpTooltipColor - color used for the help tooltip background
mb_helpTooltipTextAppearance - style used as android:textAppearance

Introduction dialog
mb_introductionBackgroundColor - color used for the introduction screen background
mb_introductionTitleTextAppearance - style used as android:textAppearance
mb_introductionMessageTextAppearance - style used as android:textAppearance
mb_introductionButtonTextAppearance - style used as android:textAppearance
This element can be disabled with BlinkIdUISettings.setShowIntroductionDialog(false)

Onboarding dialog
mb_onboardingBackgroundColor - color used for the onboarding screen background
mb_onboardingPageIndicatorColor - color used for the circular page indicators on the onboarding screens
mb_onboardingTitleTextAppearance - style used as android:textAppearance
mb_onboardingMessageTextAppearance - style used as android:textAppearance
mb_onboardingButtonTextAppearance - style used as android:textAppearance
This element can be disabled with BlinkIdUISettings.setShowOnboardingInfo(false)

Alert dialogs shown by the SDK have their own set of attributes that can be modified via styles.xml.
MB_alert_dialog is a theme which extends Theme.AppCompat.Light.Dialog.Alert. To change the attributes of these alert dialogs without changing other attributes in your application, you should override the MB_alert_dialog theme.
<style name="MB_alert_dialog" parent="Theme.AppCompat.Light.Dialog.Alert">
    <item name="android:textSize">TEXT_SIZE</item>
    <item name="android:background">COLOR</item>
    <item name="android:textColorPrimary">COLOR</item>
    <item name="colorAccent">COLOR</item>
</style>

Attributes that are not overridden will use the default colors and sizes from the application theme.
The colorAccent attribute is used to change the color of the alert dialog buttons. If the colorAccent attribute of the application theme is changed somewhere else, this alert dialog button color will change with it. However, overriding the MB_alert_dialog theme and this attribute ensures that only the button colors of the MicroBlink SDK's alert dialogs are changed. If the application theme extends a theme from the MaterialComponents set (e.g. Theme.MaterialComponents.Light.NoActionBar), the aforementioned button colors can be changed by overriding the colorOnPrimary attribute instead of the colorAccent attribute.
DocumentUISettings
DocumentUISettings launches an activity that uses BlinkIdOverlayController with an alternative UI. It is best suited for scanning a single document side of various card documents. It should not be used with combined recognizers, because it does not provide user instructions about when to move to the back side.
LegacyDocumentVerificationUISettings
LegacyDocumentVerificationUISettings launches an activity that uses BlinkIdOverlayController with an alternative UI. It is best suited for combined recognizers, because it manages scanning of multiple document sides in a single camera opening and guides the user through the scanning process. It can also be used for single-side scanning of ID cards, passports, driver's licenses, etc.
Strings used within the built-in activities and overlays can be localized to any language. If you are using RecognizerRunnerView in your custom scan activity or fragment, note that RecognizerRunnerView does not use strings or drawables; it only uses assets from the assets/microblink folder. Those assets must not be touched, as they are required for recognition to work correctly.
However, if you use the built-in activities or overlays, they will use resources packed within LibBlinkID.aar to display strings and images on top of the camera view. We have already prepared strings for several languages which you can use out of the box. You can modify those strings, or you can add your own language.
To use a language, you have to enable it from the code:
To use a specific language from application startup, you need to call the method LanguageUtils.setLanguageAndCountry(language, country, context) before opening any UI component from the SDK. For example, you can set the language to Croatian like this:
// define BlinkID language
LanguageUtils.setLanguageAndCountry("hr", "", this);

BlinkID can easily be translated to other languages. The res folder in the LibBlinkID.aar archive has a folder values which contains strings.xml; this file contains the English strings. In order to make e.g. a Croatian translation, create a folder values-hr in your project and put a copy of strings.xml inside it (you might need to extract the LibBlinkID.aar archive to access those files). Then, open that file and translate the strings from English into Croatian.
To modify an existing string, the best approach would be to:
1. find strings.xml in folder res/values-hr of the LibBlinkID.aar archive
2. choose the string you want to change, e.g. <string name="MBBack">Back</string>
3. in your project, create strings.xml in the folder res/values-hr, if it doesn't already exist
4. add the translated entry, e.g. <string name="MBBack">Natrag</string>

RecognizerRunner and RecognizerRunnerView
Processing events, also known as metadata callbacks, are purely intended for giving processing feedback on the UI or for capturing some debug information during development of your app using the BlinkID SDK. For that reason, built-in activities and fragments handle those events internally. If you need to handle those events yourself, you need to use either RecognizerRunnerView or RecognizerRunner.
Callbacks for all events are bundled into the MetadataCallbacks object. Both RecognizerRunner and RecognizerRunnerView have methods which allow you to set all your callbacks.
For more information about the available callbacks and the events you can handle, we suggest checking the javadoc for the MetadataCallbacks class.
Please note that both those methods need to pass information about the available callbacks to the native code, and for efficiency reasons this is done at the time the setMetadataCallbacks method is called, not every time a change occurs within the MetadataCallbacks object. This means that if you, for example, set a QuadDetectionCallback on MetadataCallbacks after you have already called the setMetadataCallbacks method, the QuadDetectionCallback will not be registered with the native code and you will not receive its events.
Similarly, if you, for example, remove the QuadDetectionCallback from MetadataCallbacks object after you already called setMetadataCallbacks method, your app will crash with NullPointerException when our processing code attempts to invoke the method on removed callback (which is now set to null ). We deliberately do not perform null check here because of two reasons:
having a null callback, while it is still registered with the native code, is an illegal state of your program, and it should therefore crash.

Remember: each time you make changes to the MetadataCallbacks object, you need to apply those changes to your RecognizerRunner or RecognizerRunnerView by calling its setMetadataCallbacks method.
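The snapshot-at-set-time behavior described above can be sketched in plain Java. The classes below are illustrative stand-ins, not the SDK's real types; they only show why a callback added after setMetadataCallbacks is invisible until the method is called again:

```java
// Sketch of why changes to MetadataCallbacks after setMetadataCallbacks()
// are not seen by the processing side. Class names are illustrative.
public class CallbackSnapshotSketch {
    public interface QuadDetectionCallback { void onQuadDetected(); }

    public static class MetadataCallbacksSketch {
        public QuadDetectionCallback quadCallback; // may be null
    }

    public static class RunnerSketch {
        private QuadDetectionCallback registered; // snapshot taken at set-time

        public void setMetadataCallbacks(MetadataCallbacksSketch c) {
            // The "native registration" happens here, once. Later changes to
            // 'c' are invisible until setMetadataCallbacks is called again.
            registered = c.quadCallback;
        }

        public boolean quadCallbackRegistered() { return registered != null; }
    }

    public static void main(String[] args) {
        MetadataCallbacksSketch callbacks = new MetadataCallbacksSketch();
        RunnerSketch runner = new RunnerSketch();
        runner.setMetadataCallbacks(callbacks);   // snapshot: quadCallback is null
        callbacks.quadCallback = () -> {};        // too late, not registered
        System.out.println(runner.quadCallbackRegistered()); // false
        runner.setMetadataCallbacks(callbacks);   // re-apply the changes
        System.out.println(runner.quadCallbackRegistered()); // true
    }
}
```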
Recognizer concept and RecognizerBundle
This section will first describe what a Recognizer is and how it should be used to perform recognition of images, videos and camera streams. Next, we will describe how RecognizerBundle can be used to tweak the recognition procedure and to transfer Recognizer objects between activities.
RecognizerBundle is an object which wraps the Recognizers and defines settings about how recognition should be performed. Besides that, RecognizerBundle makes it possible to transfer Recognizer objects between different activities, which is required when using built-in activities to perform scanning, as described in the first scan section, but is also handy when you need to pass Recognizer objects between your own activities.
List of all available Recognizer objects, with a brief description of each Recognizer , its purpose and recommendations how it should be used to get best performance and user experience, can be found here .
Recognizer concept The Recognizer is the basic unit of processing within the BlinkID SDK. Its main purpose is to process the image and extract meaningful information from it. As you will see later, the BlinkID SDK has lots of different Recognizer objects that have various purposes.
Each Recognizer has a Result object, which contains the data that was extracted from the image. The Result object is a member of corresponding Recognizer object and its lifetime is bound to the lifetime of its parent Recognizer object. If you need your Result object to outlive its parent Recognizer object, you must make a copy of it by calling its method clone() .
Every Recognizer is a stateful object, that can be in two states: idle state and working state . While in idle state , you can tweak Recognizer object's properties via its getters and setters. After you bundle it into a RecognizerBundle and use either RecognizerRunner or RecognizerRunnerView to run the processing with all Recognizer objects bundled within RecognizerBundle , it will change to working state where the Recognizer object is being used for processing. While being in working state , you cannot tweak Recognizer object's properties. If you need to, you have to create a copy of the Recognizer object by calling its clone() , then tweak that copy, bundle it into a new RecognizerBundle and use reconfigureRecognizers to ensure new bundle gets used on processing thread.
While Recognizer object works, it changes its internal state and its result. The Recognizer object's Result always starts in Empty state. When corresponding Recognizer object performs the recognition of given image, its Result can either stay in Empty state (in case Recognizer failed to perform recognition), move to Uncertain state (in case Recognizer performed the recognition, but not all mandatory information was extracted), move to StageValid state (in case Recognizer successfully scanned one part/side of the document and there are more fields to extract) or move to Valid state (in case Recognizer performed recognition and all mandatory information was successfully extracted from the image).
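The four Result states described above (Empty, Uncertain, StageValid, Valid) can be modeled with a small decision function. The scoring logic and field names below are invented for illustration; only the states themselves come from the SDK:

```java
// Toy model of the Recognizer Result states described above. The decision
// logic and field names ("documentNumber", etc.) are illustrative only.
import java.util.Set;

public class ResultStateSketch {
    public enum State { EMPTY, UNCERTAIN, STAGE_VALID, VALID }

    /** Decide the result state from what a hypothetical scan extracted. */
    public static State stateAfterScan(boolean recognized,
                                       Set<String> extracted,
                                       Set<String> mandatory,
                                       boolean moreSidesToScan) {
        if (!recognized) return State.EMPTY;               // recognition failed
        if (!extracted.containsAll(mandatory)) {
            return State.UNCERTAIN;                        // mandatory data missing
        }
        // One side read successfully; more fields on the other side(s)?
        return moreSidesToScan ? State.STAGE_VALID : State.VALID;
    }

    public static void main(String[] args) {
        Set<String> mandatory = Set.of("documentNumber", "dateOfBirth");
        System.out.println(stateAfterScan(false, Set.of(), mandatory, false));                // EMPTY
        System.out.println(stateAfterScan(true, Set.of("documentNumber"), mandatory, false)); // UNCERTAIN
        System.out.println(stateAfterScan(true, mandatory, mandatory, true));                 // STAGE_VALID
        System.out.println(stateAfterScan(true, mandatory, mandatory, false));                // VALID
    }
}
```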
As soon as one Recognizer object's Result within RecognizerBundle given to RecognizerRunner or RecognizerRunnerView changes to Valid state, the onScanningDone callback will be invoked on same thread that performs the background processing and you will have the opportunity to inspect each of your Recognizer objects' Results to see which one has moved to Valid state.
As already stated in the section about RecognizerRunnerView, as soon as the onScanningDone method ends, the RecognizerRunnerView will continue processing new camera frames with the same Recognizer objects, unless paused. Continuation of processing or resetting recognition will modify or reset all Recognizer objects' Results. When using built-in activities, as soon as onScanningDone is invoked, the built-in activity pauses the RecognizerRunnerView and starts finishing the activity, while saving the RecognizerBundle with active Recognizer objects into the Intent so they can be transferred back to the calling activities.
RecognizerBundle
The RecognizerBundle is a wrapper around Recognizer objects that can be used to transfer Recognizer objects between activities and to give Recognizer objects to RecognizerRunner or RecognizerRunnerView for processing.
The RecognizerBundle is always constructed with an array of Recognizer objects that need to be prepared for recognition (i.e. their properties must already be tweaked). The varargs constructor makes it easier to pass Recognizer objects to it, without the need to create a temporary array.
The RecognizerBundle manages a chain of Recognizer objects within the recognition process. When a new image arrives, it is processed by the first Recognizer in chain, then by the second and so on, iterating until a Recognizer object's Result changes its state to Valid or all of the Recognizer objects in chain were invoked (none getting a Valid result state). If you want to invoke all Recognizers in the chain, regardless of whether some Recognizer object's Result in chain has changed its state to Valid or not, you can allow returning of multiple results on a single image.
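The chain behavior above can be sketched as a simple loop. The types and recognizer names here are placeholders for illustration, not the SDK's API:

```java
// Sketch of the recognition chain: each frame goes through the recognizers
// in order until one produces a Valid result, unless multiple results on a
// single image are allowed. Types and names are illustrative only.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class RecognizerChainSketch {
    /** Returns names of recognizers whose result became Valid for the image. */
    public static List<String> process(List<String> chain,
                                       Predicate<String> becomesValid,
                                       boolean allowMultipleResults) {
        List<String> valid = new ArrayList<>();
        for (String recognizer : chain) {
            if (becomesValid.test(recognizer)) {
                valid.add(recognizer);
                if (!allowMultipleResults) break; // stop at the first Valid result
            }
        }
        return valid;
    }

    public static void main(String[] args) {
        List<String> chain = List.of("mrtd", "barcode", "documentFace");
        Predicate<String> validOn = r -> !r.equals("mrtd"); // pretend MRZ reading failed
        System.out.println(process(chain, validOn, false)); // [barcode]
        System.out.println(process(chain, validOn, true));  // [barcode, documentFace]
    }
}
```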
You cannot change the order of the Recognizer objects within the chain - no matter the order in which you give Recognizer objects to RecognizerBundle , they are internally ordered in a way that provides best possible performance and accuracy. Also, in order for BlinkID SDK to be able to order Recognizer objects in recognition chain in the best way possible, it is not allowed to have multiple instances of Recognizer objects of the same type within the chain. Attempting to do so will crash your application.
Transferring Recognizer objects between activities
Besides managing the chain of Recognizer objects, RecognizerBundle also manages transferring bundled Recognizer objects between different activities within your app. Although each Recognizer object, and each of its Result objects, implements the Parcelable interface, it is not so straightforward to put those objects into an Intent and pass them around between your activities and services, for two main reasons:
The Result object is tied to its Recognizer object, which manages the lifetime of the native Result object.
The Result object often contains large data blocks, such as images, which cannot be transferred via Intent because of Android's Intent transaction data limit.

Although the first problem can easily be worked around by making a copy of the Result and transferring it independently, the second problem is much tougher to cope with. This is where RecognizerBundle's methods saveToIntent and loadFromIntent come to help, as they ensure the safe passing of Recognizer objects bundled within RecognizerBundle between activities, according to the policy defined with the method setIntentDataTransferMode:

STANDARD: the Recognizer objects will be passed via Intent using the normal Intent transaction mechanism, which is limited by Android's Intent transaction data limit. This is the same as manually putting Recognizer objects into the Intent, and is OK as long as you do not use Recognizer objects that produce images or other large objects in their Results.

OPTIMISED: the Recognizer objects will be passed via an internal singleton object and no serialization will take place. This means there is no limit to the size of the data being passed. This is also the fastest transfer method, but it has a serious drawback: if Android kills your app to save memory for other apps and later restarts it and redelivers the Intent that should contain the Recognizer objects, the internal singleton that should contain the saved Recognizer objects will be empty and the data being sent will be lost. You can easily provoke that condition by choosing No background processes under Limit background processes in your device's Developer options, then switching from your app to another app and back to your app.

PERSISTED_OPTIMISED: the Recognizer objects will be passed via an internal singleton object (just like in OPTIMISED mode) and will additionally be serialized into a file in your application's private folder. In case Android restarts your app and the internal singleton is empty after re-delivery of the Intent, the data will be loaded from the file and nothing will be lost. The files are automatically cleaned up when data reading takes place. Just like OPTIMISED, this mode has no limit on the size of the data being passed and does not have the drawback that OPTIMISED mode has, but some users might be concerned about the files to which the data is written.

onSaveInstanceState and save the bundle back to file by calling its saveState method.
Also, after saving state, you should ensure that you clear the saved state in your onResume, as onCreate may not be called if the activity is not restarted, while onSaveInstanceState may be called as soon as your activity goes to the background (before onStop), even though the activity may not be killed later.

OPTIMISED mode to transfer large data and images between activities, or create your own mechanism for data transfer. Note that your application's private folder is accessible only by your application and your application alone, unless the end-user's device is rooted.

This section will give a list of all Recognizer objects that are available within the BlinkID SDK, their purpose, and recommendations on how they should be used to get the best performance and user experience.
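The practical difference between the transfer modes is what survives a process restart. The simulation below uses plain maps as stand-ins for the SDK's internal singleton and private-folder file; all names are illustrative:

```java
// Simulation of the intent data transfer modes and what survives a process
// restart. The "singleton" and "privateFile" maps stand in for the SDK's
// internal mechanisms; this is not the real implementation.
import java.util.HashMap;
import java.util.Map;

public class TransferModeSketch {
    public enum Mode { STANDARD, OPTIMISED, PERSISTED_OPTIMISED }

    static final Map<String, String> singleton = new HashMap<>();   // in-memory only
    static final Map<String, String> privateFile = new HashMap<>(); // survives restart

    public static void saveToIntent(Mode mode, String key, String data) {
        if (mode == Mode.OPTIMISED || mode == Mode.PERSISTED_OPTIMISED) {
            singleton.put(key, data);
        }
        if (mode == Mode.PERSISTED_OPTIMISED) {
            privateFile.put(key, data); // additionally serialized to a file
        }
        // STANDARD would put the data into the Intent itself (size-limited).
    }

    public static String loadFromIntent(String key) {
        String data = singleton.get(key);
        if (data == null) data = privateFile.get(key); // fall back to the file
        return data;
    }

    public static void simulateProcessRestart() { singleton.clear(); }

    public static void main(String[] args) {
        saveToIntent(Mode.OPTIMISED, "optimised", "bundle-A");
        saveToIntent(Mode.PERSISTED_OPTIMISED, "persisted", "bundle-B");
        simulateProcessRestart(); // Android killed and restarted the app
        System.out.println(loadFromIntent("optimised")); // null - data lost
        System.out.println(loadFromIntent("persisted")); // bundle-B - recovered
    }
}
```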
The FrameGrabberRecognizer is the simplest recognizer in BlinkID SDK, as it does not perform any processing on the given image, instead it just returns that image back to its FrameCallback . Its Result never changes state from Empty.
This recognizer is best for easy capturing of camera frames with RecognizerRunnerView. Note that Images sent to onFrameAvailable are temporary and their internal buffers are valid only while the onFrameAvailable method is executing; as soon as the method ends, all internal buffers of the Image object are disposed. If you need to store the Image object for later use, you must create a copy of it by calling clone.
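The buffer-lifetime rule above can be demonstrated with a toy stand-in for the SDK's Image class (ImageSketch below is hypothetical; only the dispose-after-callback behavior mirrors the SDK):

```java
// Sketch of why Image objects received in onFrameAvailable must be cloned
// before the callback returns: internal buffers are disposed afterwards.
public class ImageLifetimeSketch {
    public static class ImageSketch {
        private byte[] buffer;
        public ImageSketch(byte[] buffer) { this.buffer = buffer; }
        public ImageSketch clone_() { return new ImageSketch(buffer.clone()); }
        public void dispose() { buffer = null; } // what happens after the callback
        public boolean isValid() { return buffer != null; }
    }

    public static void main(String[] args) {
        ImageSketch frame = new ImageSketch(new byte[] {1, 2, 3});
        // Inside onFrameAvailable: keeping a reference is NOT enough,
        // you must make a copy before the method returns.
        ImageSketch kept = frame;
        ImageSketch copy = frame.clone_();
        frame.dispose(); // callback returned; the SDK reclaims the buffer
        System.out.println(kept.isValid()); // false - stale reference
        System.out.println(copy.isValid()); // true - the clone survives
    }
}
```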
Also note that FrameCallback interface extends Parcelable interface, which means that when implementing FrameCallback interface, you must also implement Parcelable interface.
This is especially important if you plan to transfer FrameGrabberRecognizer between activities - in that case, keep in mind that the instance of your object may not be the same as the instance on which onFrameAvailable method gets called - the instance that receives onFrameAvailable calls is the one that is created within activity that is performing the scan.
The SuccessFrameGrabberRecognizer is a special Recognizer that wraps some other Recognizer and impersonates it while processing the image. However, when the Recognizer being impersonated changes its Result into Valid state, the SuccessFrameGrabberRecognizer captures the image and saves it into its own Result object.
Since SuccessFrameGrabberRecognizer impersonates its slave Recognizer object, it is not possible to give both concrete Recognizer object and SuccessFrameGrabberRecognizer that wraps it to same RecognizerBundle - doing so will have the same result as if you have given two instances of same Recognizer type to the RecognizerBundle - it will crash your application.
This recognizer is best for use cases when you need to capture the exact image that was being processed by some other Recognizer object at the time its Result became Valid . When that happens, SuccessFrameGrabber's Result will also become Valid and will contain described image. That image can then be retrieved with getSuccessFrame() method.
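The wrap-and-capture behavior can be sketched as a simple delegating wrapper. The interfaces below are illustrative placeholders, not the SDK's real types:

```java
// Sketch of the SuccessFrameGrabberRecognizer idea: a wrapper that delegates
// processing to an inner ("slave") recognizer and captures the frame the
// moment the inner result becomes valid. Types are illustrative only.
public class SuccessFrameSketch {
    public interface RecognizerSketch { boolean processImage(String image); }

    public static class SuccessFrameGrabberSketch implements RecognizerSketch {
        private final RecognizerSketch slave;
        private String successFrame; // captured frame, like getSuccessFrame()

        public SuccessFrameGrabberSketch(RecognizerSketch slave) { this.slave = slave; }

        @Override public boolean processImage(String image) {
            boolean valid = slave.processImage(image); // impersonate the slave
            if (valid) successFrame = image;           // capture on success
            return valid;
        }

        public String getSuccessFrame() { return successFrame; }
    }

    public static void main(String[] args) {
        // The inner recognizer "succeeds" only on the sharp frame.
        SuccessFrameGrabberSketch grabber =
                new SuccessFrameGrabberSketch(img -> img.contains("sharp"));
        grabber.processImage("blurry-frame");
        grabber.processImage("sharp-frame");
        System.out.println(grabber.getSuccessFrame()); // sharp-frame
    }
}
```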
Unless stated otherwise for a concrete recognizer, single side BlinkID recognizers from this list can be used in any context, but they work best with BlinkIdUISettings and DocumentUISettings, whose UIs are best suited for document scanning.
Combined recognizers should be used with BlinkIdUISettings . They manage scanning of multiple document sides in the single camera opening and guide the user through the scanning process. Some combined recognizers support scanning of multiple document types, but only one document type can be scanned at a time.
The BlinkIdSingleSideRecognizer scans and extracts data from the single side of the supported document. You can find the list of the currently supported documents here. We will continue expanding this recognizer by adding support for new document types in the future. Star this repo to stay updated.
The BlinkIdSingleSideRecognizer works best with the BlinkIdUISettings and BlinkIdOverlayController .
Use BlinkIdMultiSideRecognizer for scanning both sides of the supported document. First, it scans and extracts data from the front, then scans and extracts data from the back, and finally, combines results from both sides. The BlinkIdMultiSideRecognizer also performs data matching and returns a flag if the extracted data captured from the front side matches the data from the back. You can find the list of the currently supported documents here. We will continue expanding this recognizer by adding support for new document types in the future. Star this repo to stay updated.
The BlinkIdMultiSideRecognizer works best with the BlinkIdUISettings and BlinkIdOverlayController .
The MrtdRecognizer is used for scanning and data extraction from the Machine Readable Zone (MRZ) of the various Machine Readable Travel Documents (MRTDs) like ID cards and passports. This recognizer is not bound to the specific country, but it can be configured to only return data that match some criteria defined by the MrzFilter .
You can find information about usage context at the beginning of this section.
The MrtdCombinedRecognizer scans Machine Readable Zone (MRZ) after scanning the full document image and face image (usually MRZ is on the back side and face image is on the front side of the document). Internally, it uses DocumentFaceRecognizer for obtaining full document image and face image as the first step and then MrtdRecognizer for scanning the MRZ.
You can find information about usage context at the beginning of this section.
The PassportRecognizer is used for scanning and data extraction from the Machine Readable Zone (MRZ) of the various passport documents. This recognizer also returns face image from the passport.
You can find information about usage context at the beginning of this section.
The VisaRecognizer is used for scanning and data extraction from the Machine Readable Zone (MRZ) of the various visa documents. This recognizer also returns face image from the visa document.
You can find information about usage context at the beginning of this section.
The IdBarcodeRecognizer is used for scanning barcodes from various ID cards. Check this document to see the list of supported document types.
You can find information about usage context at the beginning of this section.
The DocumentFaceRecognizer is a special type of recognizer that only returns face image and full document image of the scanned document. It does not extract document fields like first name, last name, etc. This generic recognizer can be used to obtain document images in cases when specific support for some document type is not available.
You can find information about usage context at the beginning of this section.
You need to ensure that the final app gets all resources required by BlinkID . At the time of writing this documentation, Android does not have support for combining multiple AAR libraries into single fat AAR. The problem is that resource merging is done while building application, not while building AAR, so application must be aware of all its dependencies. There is no official Android way of "hiding" third party AAR within your AAR.
This problem is usually solved with transitive Maven dependencies, ie when publishing your AAR to Maven you specify dependencies of your AAR so they are automatically referenced by app using your AAR. Besides this, there are also several other approaches you can try:
RecognizerRunnerView). You can perform custom UI integration while taking care that all resources (strings, layouts, images, ...) used are solely from your AAR, not from BlinkID. Then, in your AAR, you should not reference LibBlinkID.aar as a gradle dependency; instead you should unzip it and copy its assets to your AAR's assets folder, its classes.jar to your AAR's lib folder (which should be referenced by gradle as a jar dependency), and the contents of its jni folder to your AAR's src/main/jniLibs folder.

BlinkID is distributed with ARMv7 and ARM64 native library binaries.
The ARMv7 architecture gives the ability to take advantage of hardware accelerated floating point operations and SIMD processing with NEON. This gives BlinkID a huge performance boost on devices that have ARMv7 processors. Most new devices (all since 2012) have an ARMv7 processor, so it makes little sense not to take advantage of the performance boosts those processors can give. Also note that some devices with ARMv7 processors do not support the NEON and VFPv4 instruction sets, the most popular being those based on NVIDIA Tegra 2, ARM Cortex A9 and older. Since these devices are old by today's standards, BlinkID does not support them. For the same reason, BlinkID does not support devices with the ARMv5 ( armeabi ) architecture.
ARM64 is the new processor architecture that most new devices use. ARM64 processors are very powerful and also have the possibility to take advantage of new NEON64 SIMD instruction set to quickly process multiple pixels with a single instruction.
There are some issues to be considered:
LibBlinkID.aar archive contains ARMv7 and ARM64 builds of the native library. By default, when you integrate BlinkID into your app, your app will contain native builds for all these processor architectures. Thus, BlinkID will work on ARMv7 and ARM64 devices and will use ARMv7 features on ARMv7 devices and ARM64 features on ARM64 devices. However, the size of your application will be rather large.
We recommend that you distribute your app using App Bundle. This will defer apk generation to Google Play, allowing it to generate minimal APK for each specific device that downloads your app, including only required processor architecture support.
If you are unable to use App Bundle, you can create multiple flavors of your app - one flavor for each architecture. With gradle and Android studio this is very easy - just add the following code to build.gradle file of your app:
android {
...
splits {
abi {
enable true
reset()
include 'armeabi-v7a', 'arm64-v8a'
universalApk true
}
}
}
With these build instructions, gradle will build two different APK files for your app. Each APK will contain only the native library for one processor architecture, and one APK will contain all architectures. In order for Google Play to accept multiple APKs of the same app, you need to ensure that each APK has a different version code. This can easily be done by defining a version code prefix that is dependent on the architecture and adding the real version code number to it in the following gradle script:
// map for the version code
def abiVersionCodes = ['armeabi-v7a':1, 'arm64-v8a':2]
import com.android.build.OutputFile
android.applicationVariants.all { variant ->
// assign different version code for each output
variant.outputs.each { output ->
def filter = output.getFilter(OutputFile.ABI)
if(filter != null) {
output.versionCodeOverride = abiVersionCodes.get(output.getFilter(OutputFile.ABI)) * 1000000 + android.defaultConfig.versionCode
}
}
}
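The version code arithmetic from the gradle snippet above, shown in plain Java: each ABI gets a prefix multiplied by 1,000,000, and the real version code is added to it, so every APK ends up with a distinct version code while preserving ordering.

```java
// The ABI version code scheme from the gradle snippet above, in plain Java.
import java.util.Map;

public class AbiVersionCodeSketch {
    static final Map<String, Integer> ABI_CODES =
            Map.of("armeabi-v7a", 1, "arm64-v8a", 2);

    /** versionCodeOverride = abiPrefix * 1,000,000 + real version code. */
    public static int versionCodeOverride(String abi, int versionCode) {
        return ABI_CODES.get(abi) * 1_000_000 + versionCode;
    }

    public static void main(String[] args) {
        System.out.println(versionCodeOverride("armeabi-v7a", 42)); // 1000042
        System.out.println(versionCodeOverride("arm64-v8a", 42));   // 2000042
    }
}
```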
For more information about creating APK splits with gradle, check this article from Google.
After generating multiple APK's, you need to upload them to Google Play. For tutorial and rules about uploading multiple APK's to Google Play, please read the official Google article about multiple APKs.
If you won't be distributing your app via Google Play, or for some other reason want to have a single APK of smaller size, you can completely remove support for a certain CPU architecture from your APK. This is not recommended due to the consequences described below.
To keep only some CPU architectures, for example armeabi-v7a and arm64-v8a , add the following statement to your android block inside build.gradle :
android {
...
ndk {
// Tells Gradle to package the following ABIs into your application
abiFilters 'armeabi-v7a', 'arm64-v8a'
}
}
This will remove other architecture builds for all native libraries used by the application.
To remove support for a certain CPU architecture only for BlinkID , add the following statement to your android block inside build.gradle :
android {
...
packagingOptions {
exclude 'lib/<ABI>/libBlinkID.so'
}
}
where <ABI> represents the CPU architecture you want to remove:
exclude 'lib/armeabi-v7a/libBlinkID.so'
exclude 'lib/arm64-v8a/libBlinkID.so'

You can also remove multiple processor architectures by specifying the exclude directive multiple times. Just bear in mind that removing a processor architecture will have side effects on the performance and stability of your app. Please read this for more information.
Google decided that as of August 2019 all apps on Google Play that contain native code need to have native support for 64-bit processors (this includes ARM64 and x86_64). This means that you cannot upload application to Google Play Console that supports only 32-bit ABI and does not support corresponding 64-bit ABI.
By removing ARMv7 support, BlinkID will not work on devices that have ARMv7 processors.
By removing ARM64 support, BlinkID will not use ARM64 features on ARM64 devices.
If you are combining the BlinkID library with other libraries that contain native code in your application, make sure you match the architectures of all native libraries. For example, if a third party library ships only an ARMv7 version, you must use exactly the ARMv7 version of BlinkID with that library, not ARM64. Mixing architectures will crash your app at the initialization step, because the JVM will try to load all its native dependencies in the same preferred architecture and will fail with UnsatisfiedLinkError.
Resolving the libc++_shared.so conflict
BlinkID contains native code that depends on the C++ runtime. This runtime is provided by libc++_shared.so, which needs to be available in your app that is using BlinkID. However, the same file is also used by various other libraries that contain native components. If you happen to integrate such a library together with BlinkID in your app, your build will fail with an error similar to this one:
* What went wrong:
Execution failed for task ':app:mergeDebugNativeLibs'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.MergeJavaResWorkAction
> 2 files found with path 'lib/arm64-v8a/libc++_shared.so' from inputs:
- <path>/.gradle/caches/transforms-3/3d428f9141586beb8805ce57f97bedda/transformed/jetified-opencv-4.5.3.0/jni/arm64-v8a/libc++_shared.so
- <path>/.gradle/caches/transforms-3/609476a082a81bd7af00fd16a991ee43/transformed/jetified-blinkid-6.12.0/jni/arm64-v8a/libc++_shared.so
If you are using jniLibs and CMake IMPORTED targets, see
https://developer.android.com/r/tools/jniLibs-vs-imported-targets
The error states that multiple different dependencies provide the same file lib/arm64-v8a/libc++_shared.so (in this case, OpenCV and BlinkID).
You can resolve this issue by making sure that the dependency that uses the newer version of libc++_shared.so is listed first in your dependency list, and then simply adding the following to your build.gradle:
android {
packaging {
jniLibs {
pickFirsts.add("lib/armeabi-v7a/libc++_shared.so")
pickFirsts.add("lib/arm64-v8a/libc++_shared.so")
}
}
}
Important note
The code above will always select the first libc++_shared.so from your dependency list, so make sure that the dependency that uses the latest version of libc++_shared.so is listed first. This is because libc++_shared.so is backward-compatible, but not forward-compatible. This means that, eg libBlinkID.so built against libc++_shared.so from NDK r24 will work without problems when you package it together with libc++_shared.so from NDK r26, but will crash when you package it together with libc++_shared.so from NDK r21. This is true for all your native dependencies.
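The backward-but-not-forward compatibility rule above reduces to a simple predicate (NDK release numbers here are just the examples from the text):

```java
// The libc++_shared.so compatibility rule as a predicate: a library built
// against some NDK's runtime works with the same or a newer packaged
// runtime, but not with an older one.
public class LibcxxCompatSketch {
    /** true if a library built against ndkBuilt runs with ndkPackaged. */
    public static boolean runtimeCompatible(int ndkBuilt, int ndkPackaged) {
        return ndkPackaged >= ndkBuilt; // backward-, not forward-compatible
    }

    public static void main(String[] args) {
        System.out.println(runtimeCompatible(24, 26)); // true  - newer runtime is fine
        System.out.println(runtimeCompatible(24, 21)); // false - older runtime crashes
    }
}
```

This is why the dependency with the newest libc++_shared.so must win the pickFirsts selection: it satisfies every other native dependency's minimum requirement.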
In case of problems with SDK integration, first make sure that you have followed integration instructions. If you're still having problems, please contact us at help.microblink.com.
If you are getting an "invalid license key" error or having other license-related problems (e.g. a feature that should be enabled is not, or there is a watermark on top of the camera), first check the ADB logcat. All license-related problems are logged to the error log, so it is easy to determine what went wrong.

If you cannot determine the license-related problem or you simply do not understand the log, contact us at help.microblink.com. When contacting us, please make sure you provide the following information:
AndroidManifest.xml and/or your build.gradle file)

Keep in mind: versions 5.8.0 and above require an internet connection to work under our new License Management Program.
We're only asking you to do this so we can validate your trial license key. Data extraction still happens offline, on the device itself. Once the validation is complete, you can continue using the SDK in offline mode (or over a private network) until the next check.
If you are having problems with scanning certain items, undesired behaviour on specific device(s), crashes inside BlinkID or anything unmentioned, please do as follows:
enable logging to get the ability to see what is library doing. To enable logging, put this line in your application:
com.microblink.blinkid.util.Log.setLogLevel(com.microblink.blinkid.util.Log.LogLevel.LOG_VERBOSE);

After this line, the library will display as much information about its work as possible. Please save the entire log of the scanning session to a file that you will send to us. It is important to send the entire log, not just the part where the crash occurred, because crashes are sometimes caused by unexpected behaviour in the early stages of library initialization.
Contact us at help.microblink.com describing your problem and provide the following information:
I am getting InvalidLicenseKeyException when I construct a specific Recognizer object

Each license key contains information about which features may be used and which may not. This exception indicates that your production license does not allow the use of a specific Recognizer object. You should contact support to check that the provided license is valid and that it really contains all the features you have purchased.
I am getting InvalidLicenseKeyException with a trial license key

Whenever you construct any Recognizer object, or any other object deriving from Entity, a check is performed to verify that your license allows using that object. If the license is not set before constructing that object, you will get an InvalidLicenseKeyException. We recommend setting the license as early as possible in your app, ideally in the onCreate callback of your Application singleton.
I am getting ClassNotFoundException

This usually happens when you perform the integration into an Eclipse project and forget to add resources or native libraries to the project. You must always make sure that the same versions of resources, assets, the Java library and the native libraries are used in combination. Combining different versions of resources, assets, Java and native libraries will trigger a crash in the SDK. This problem can also occur when you have performed an improper integration of the BlinkID SDK into your own SDK. Please read how to embed BlinkID inside another SDK.
My app crashes with UnsatisfiedLinkError

This error happens when the JVM fails to load a native method from the native library. If you are integrating with Android Studio and this error happens, make sure that you have correctly combined the BlinkID SDK with third-party SDKs that contain native code, especially if you need to resolve a conflict over libc++_shared.so. If this error also happens in our integration sample apps, it may indicate a bug in the SDK that manifests on a specific device. Please report that to our support team.
My app crashes with a message mentioning libc++_shared.so

Please consult the section about resolving the libc++_shared.so conflict.
I have added my callback to the MetadataCallbacks object, but it is not being called

Make sure that after adding your callback to MetadataCallbacks, you have applied the changes to RecognizerRunnerView or RecognizerRunner, as described in this section.
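A minimal sketch of the pattern, assuming you hold references to both the MetadataCallbacks instance and the RecognizerRunnerView (the variable names here are illustrative, not SDK-mandated):

```java
MetadataCallbacks metadataCallbacks = new MetadataCallbacks();
// Register the callback you are interested in.
metadataCallbacks.setOcrCallback(myOcrCallback);
// Re-apply the callbacks object; without this call the newly added
// callback is never picked up by the running view.
recognizerRunnerView.setMetadataCallbacks(metadataCallbacks);
```

The same re-apply step is needed after removing a callback.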
I have removed my callback from the MetadataCallbacks object, and now my app is crashing with NullPointerException

Make sure that after removing your callback from MetadataCallbacks, you have applied the changes to RecognizerRunnerView or RecognizerRunner, as described in this section.
In my onScanningDone callback I have the result inside my Recognizer, but when the scanning activity finishes, the result is gone

This usually happens when you use RecognizerRunnerView and forget to pause the RecognizerRunnerView in your onScanningDone callback. Then, as soon as onScanningDone returns, the result is mutated or reset by the additional processing the Recognizer performs between the end of your onScanningDone callback and the actual finishing of the scanning activity. For more information about the statefulness of Recognizer objects, check this section.
I am getting an IllegalStateException stating Data cannot be saved to intent because its size exceeds intent limit

This usually happens when you use a Recognizer that produces an image or a similarly large object inside its Result, and that object exceeds the Android intent transaction limit. You should enable a different intent data transfer mode. For more information about this, check this section. Also, instead of using the built-in activity, you can use RecognizerRunnerFragment with the built-in scanning overlay.
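As a sketch, assuming the IntentDataTransferMode API available in recent BlinkID SDK versions, switching the transfer mode could look like this (set it once, e.g. right after setting the license):

```java
// Persist large results to a file instead of packing them into the
// Intent, avoiding the intent transaction size limit.
MicroblinkSDK.setIntentDataTransferMode(IntentDataTransferMode.PERSISTED_OPTIMISED);
```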
This usually happens when you attempt to transfer a standalone Result that contains images or similar large objects via Intent, and the size of the object exceeds the Android intent transaction limit. Depending on the device, you will either get a TransactionTooLargeException, or a simple message BINDER TRANSACTION FAILED in the log while your app freezes, or your app will get into a restart loop. We recommend that you use RecognizerBundle and its API for sending Recognizer objects via Intent in a safer manner (check this section for more information). However, if you really need to transfer a standalone Result object (e.g. a Result object obtained by cloning the Result object owned by a specific Recognizer), you need to do that using global variables or singletons within your application. Sending large objects via Intent is not supported by Android.
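If you go the singleton route, a minimal sketch in plain Java could look like the following; the class name ScanResultHolder is illustrative and not part of the SDK:

```java
// Process-wide holder used to pass a large scan result between
// activities without putting it into an Intent.
final class ScanResultHolder {

    private static Object result;

    private ScanResultHolder() { }

    // Store the result before starting the next activity.
    static synchronized void store(Object r) {
        result = r;
    }

    // Retrieve the result exactly once; the reference is cleared
    // afterwards to avoid leaking the large object.
    static synchronized Object consume() {
        Object r = result;
        result = null;
        return r;
    }
}
```

Call store() right before starting the next activity and consume() in that activity's onCreate; keep in mind that a plain static reference does not survive process death.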
Why does the Direct API give worse scanning results than scanning from the camera?

When automatic scanning of camera frames with our camera management is used (either the provided camera overlays or direct usage of RecognizerRunnerView), we use a stream of video frames and send multiple images to recognition to boost reading accuracy. We also perform frame quality analysis and combine scanning results from multiple camera frames. On the other hand, when you use the Direct API with a single image per document side, we cannot combine multiple images; we do our best to extract as much information as possible from that single image. In some cases, when the quality of the input image is not good enough, for example when the image is blurred or glare is present, we are not able to successfully read the document.
Online trial licenses require public network access for validation purposes. See Licensing issues.
The onOcrResult() method in my OcrCallback is never invoked, and all Result objects always return null in their OCR result getters

In order to obtain the raw OCR result, which contains the location of each character, its value and its alternatives, you need a license that allows it. By default, licenses do not allow exposing raw OCR results in the public API. If you really need that, please contact us and explain your use case.
You can find BlinkID SDK size report for all supported ABIs here.
Complete API reference can be found in Javadoc.
For any other questions, feel free to contact us at help.microblink.com.