I am currently porting an application built in Swift for iOS. In Swift, one makes a Rekognition call the following way.
First, initialize the client after adding the AWS SDK pod to your Podfile and importing it:
rekognitionClient = AWSRekognition.default()
Then create a face request to call the service and see whether a face in your collection matches the image you sent:
guard let faceRequest = AWSRekognitionSearchFacesByImageRequest() else {
    puts("Unable to initialize AWSRekognitionSearchFacesByImageRequest.")
    return
}

faceRequest.collectionId = "MY_COLLECTION_NAME"
faceRequest.faceMatchThreshold = 75
faceRequest.maxFaces = 2

let faceSourceImage = capturedImage
let faceImage = AWSRekognitionImage()
faceImage!.bytes = UIImageJPEGRepresentation(faceSourceImage!, 0.7)
faceRequest.image = faceImage

rekognitionClient.searchFaces(byImage: faceRequest) { (response: AWSRekognitionSearchFacesByImageResponse?, error: Error?) in
    if error == nil {
        //print(response!)
        for faceMatch in (response?.faceMatches)! {
            // do something
        }
    }
}
I am looking to convert this to Kotlin and am having issues with the syntax and with making the request. I have an image as a Bitmap that is ready to be sent to the service.
Here is one sample of what I have been trying:
fun doRekognitionRequest(bitmap: Bitmap) {
    // this says AmazonRekognitionClient has been deprecated
    val rekognitionClient = AmazonRekognitionClient()
    // unresolved reference
    val facesImageRequest = facesByImageRequest()
}
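From what I can tell from the AWS Android SDK, the deprecation warning is because the no-argument constructor is deprecated and the client should be constructed with a credentials provider instead. Here is my best guess at the initialization (the identity pool ID and region are placeholders, I'm assuming this runs inside a Fragment, and I haven't verified any of it):

// Sketch only: assumes a Cognito identity pool; pool ID and region are placeholders
val credentialsProvider = CognitoCachingCredentialsProvider(
    requireContext().applicationContext,
    "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder identity pool ID
    Regions.US_EAST_1                                  // placeholder region
)
val rekognitionClient = AmazonRekognitionClient(credentialsProvider)

// The request class appears to be SearchFacesByImageRequest, not facesByImageRequest
val facesImageRequest = SearchFacesByImageRequest()

This would also need com.amazonaws.auth.CognitoCachingCredentialsProvider and com.amazonaws.regions.Regions imported.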
Here are my imports:
import android.Manifest
import android.app.Activity
import android.content.Context
import android.content.ContextWrapper
import android.content.Intent
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.SurfaceTexture
import android.graphics.drawable.BitmapDrawable
import android.hardware.camera2.*
import android.net.Uri
import android.os.*
import android.util.Log
import android.view.*
import android.widget.Toast
import androidx.core.content.ContextCompat
import androidx.fragment.app.Fragment
import androidx.fragment.app.FragmentTransaction
import kotlinx.android.synthetic.main.camera_layout.*
import pub.devrel.easypermissions.AfterPermissionGranted
import pub.devrel.easypermissions.EasyPermissions
import java.io.*
import java.io.File
import java.util.*
import android.provider.MediaStore
import androidx.core.content.FileProvider
import com.amazonaws.services.rekognition.AmazonRekognition
import com.amazonaws.services.rekognition.AmazonRekognitionClient
import com.amazonaws.services.rekognition.model.FaceMatch
import com.amazonaws.services.rekognition.model.Image
import com.amazonaws.services.rekognition.model.S3Object
import com.amazonaws.services.rekognition.model.SearchFacesByImageRequest
import com.amazonaws.services.rekognition.model.SearchFacesByImageResult
I am trying to build off of what this guy has done: https://github.com/awslabs/serverless-photo-recognition/blob/master/src/main/kotlin/com/budilov/rekognition/RekognitionService.kt
I am also trying to convert the Java sample code from the Rekognition documentation to Kotlin, with no luck:
https://docs.aws.amazon.com/rekognition/latest/dg/search-face-with-image-procedure.html
AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
ObjectMapper objectMapper = new ObjectMapper();

// Get an image object from S3 bucket.
Image image = new Image()
        .withS3Object(new S3Object()
                .withBucket(bucket)
                .withName(photo));

// Search collection for faces similar to the largest face in the image.
SearchFacesByImageRequest searchFacesByImageRequest = new SearchFacesByImageRequest()
        .withCollectionId(collectionId)
        .withImage(image)
        .withFaceMatchThreshold(70F)
        .withMaxFaces(2);

SearchFacesByImageResult searchFacesByImageResult =
        rekognitionClient.searchFacesByImage(searchFacesByImageRequest);

System.out.println("Faces matching largest face in image from" + photo);
List<FaceMatch> faceImageMatches = searchFacesByImageResult.getFaceMatches();
for (FaceMatch face : faceImageMatches) {
    System.out.println(objectMapper.writerWithDefaultPrettyPrinter()
            .writeValueAsString(face));
    System.out.println();
}
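My best guess at a line-for-line Kotlin translation of that Java sample (assuming the client is set up and that bucket, photo, and collectionId are defined, and leaving out the Jackson ObjectMapper) is something like the following, though I haven't gotten it working:

// Rough Kotlin translation of the AWS Java sample (untested)
val image = Image()
    .withS3Object(S3Object()
        .withBucket(bucket)
        .withName(photo))

val request = SearchFacesByImageRequest()
    .withCollectionId(collectionId)
    .withImage(image)
    .withFaceMatchThreshold(70F)
    .withMaxFaces(2)

// searchFacesByImage is a blocking call, so presumably it cannot run on the main thread
val result = rekognitionClient.searchFacesByImage(request)
println("Faces matching largest face in image from $photo")
for (face in result.faceMatches) {
    println(face)
}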
Kotlin and Android are both very new to me; I come from a C# and Swift background, so any help would be greatly appreciated. Cheers!
Edit:
I managed to get the SearchFacesByImageRequest constructor recognized by the compiler. Now I am stuck on converting a Bitmap to an Image:
val facesImageRequest = SearchFacesByImageRequest()
facesImageRequest.collectionId = "MY_COLLECTION_NAME"
facesImageRequest.maxFaces = 2
facesImageRequest.faceMatchThreshold = 75.0F
facesImageRequest.image = Image(bitmap)
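Looking at the Rekognition model classes, Image does not seem to have a Bitmap constructor; it appears to take raw bytes via withBytes(ByteBuffer). So my current guess for the conversion (untested, and it needs java.nio.ByteBuffer imported) is:

// Compress the Bitmap to JPEG and wrap the bytes for Rekognition (sketch, untested)
val stream = ByteArrayOutputStream()
bitmap.compress(Bitmap.CompressFormat.JPEG, 70, stream)
val imageBytes = ByteBuffer.wrap(stream.toByteArray())

facesImageRequest.image = Image().withBytes(imageBytes)

Is this the right way to get a Bitmap into the format Rekognition expects?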