3
votes

I am trying to upload an image captured from my webcam to the Microsoft Azure Face API. I get the image from canvas.toDataURL('image/png'), which returns a data URI. When I set the Content-Type to application/octet-stream and attach the data URI to the POST request, I get a 400 Bad Request: Invalid Face Image. If I attach a Blob instead, the errors stop, but I only get back an empty array instead of a JSON object. I would really appreciate any pointers in the right direction.

Thanks!

Comments:

The result of toDataURL is already a valid URL. Try using application/json and sending the url property with that result. – Maria Ines Parnisari

When I just attach what I get from toDataURL and change the Content-Type to application/json, I get a JSON parsing error. – pengcheng95

You need to send JSON: { "url": "<your-data-url-here>" } – Maria Ines Parnisari

I am still getting a JSON parse error. – pengcheng95

I switched it to a blob format and it worked. Pass the data you get from canvas.toDataURL() into fetch(data).then(res => res.blob()).then(blobData => { /* send blobData to the API */ }) – pengcheng95

4 Answers

4
votes

Oh, you're in luck: I've just (successfully!) attempted this two days ago.

Sending base64-encoded JPEGs to the Face API is seriously inefficient: the ratio of encoded output bytes to input bytes is 4:3, i.e. about 33% overhead. Just send a byte array instead; it works, and the docs mention it briefly.

send-jpeg-as-octet-stream
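To put a number on that overhead: base64 emits 4 output characters for every 3 input bytes, so a back-of-the-envelope size estimate is simple arithmetic (a sketch; the real encoder also pads the final group with `=` characters, which the rounding below accounts for):

```javascript
// Estimated base64-encoded size of an n-byte payload:
// every 3-byte group becomes 4 output characters (the last group is padded).
function base64Length(nBytes) {
  return Math.ceil(nBytes / 3) * 4;
}

console.log(base64Length(300 * 1024)); // 409600 -- a 300 KiB JPEG becomes 400 KiB of text
```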

And read it as JPEG, not PNG; PNG just wastes bandwidth for webcam footage.

    function canvasToByteArray(canvas, format) {
        // e.g. format = 'jpeg' -> 'data:image/jpeg;base64,<payload>'
        var dataUri = canvas.toDataURL('image/' + format);
        var data = dataUri.split(',')[1];               // base64 payload
        var mimeType = dataUri.split(';')[0].slice(5);  // e.g. 'image/jpeg'

        // decode base64 into raw bytes
        var bytes = window.atob(data);
        var buf = new ArrayBuffer(bytes.length);
        var byteArr = new Uint8Array(buf);

        for (var i = 0; i < bytes.length; i++) {
            byteArr[i] = bytes.charCodeAt(i);
        }

        return byteArr;
    }

Now use byteArr as your payload (data:) in $.ajax() for jQuery or iDontUnderStandHowWeGotHereAsAPeople() in any other hipster JS framework people use these days.

The reverse-hipster way of doing it is:

var payload = byteArr;

var xhr = new XMLHttpRequest();
xhr.open('POST', 'https://SERVICE_URL');
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
xhr.send(payload);
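If you want to sanity-check the decode loop without a canvas, the same atob() approach works on any base64 string (a plain-JS sketch; atob/btoa are globals in browsers and in Node 16+):

```javascript
// Decode a known base64 string exactly like the snippet above and inspect the bytes.
var data = btoa('JPEG');   // "SlBFRw==" -- base64 of a known 4-byte string
var bytes = atob(data);    // back to a binary string

var byteArr = new Uint8Array(bytes.length);
for (var i = 0; i < bytes.length; i++) {
  byteArr[i] = bytes.charCodeAt(i);
}
// byteArr now holds [74, 80, 69, 71], the ASCII codes of "JPEG"
```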
2
votes

To extend Dalvor's answer: this is the AJAX call that works for me:

fetch(data)
  .then(res => res.blob())
  .then(blobData => {
    $.post({
        url: "https://westus.api.cognitive.microsoft.com/face/v1.0/detect",
        contentType: "application/octet-stream",
        headers: {
          'Ocp-Apim-Subscription-Key': '<YOUR-KEY-HERE>'
        },
        processData: false,
        data: blobData
      })
      .done(function(data) {
        $("#results").text(JSON.stringify(data));
      })
      .fail(function(err) {
        $("#results").text(JSON.stringify(err));
      });
  });

Full demo code here: https://jsfiddle.net/miparnisari/b1zzpvye/

1
votes

To save someone six hours, I'm posting the code that worked for me. I hope it helps.

Tools

- React (TypeScript)
- react-webcam
- axios

Code

index.tsx

Constants and ref

/**
 * Constants
 */
const videoConstraints = {
  width: 1280,
  height: 720,
  facingMode: 'user',
};
/**
 * Refs
 */
const webcamRef = React.useRef<Webcam>(null);

Callback function

const capture = React.useCallback(() => {
  const base64Str = webcamRef.current!.getScreenshot() || '';
  const s = base64Str.split(',');
  const blob = b64toBlob(s[1]);
  callCognitiveApi(blob);
}, [webcamRef]);

In render

<Webcam audio={false} ref={webcamRef} screenshotFormat="image/jpeg" videoConstraints={videoConstraints} />
<button onClick={capture}>Capture photo</button>

b64toBlob

Thanks to creating-a-blob-from-a-base64-string-in-javascript

export const b64toBlob = (b64DataStr: string, contentType = '', sliceSize = 512) => {
  const byteCharacters = atob(b64DataStr);
  const byteArrays = [];

  for (let offset = 0; offset < byteCharacters.length; offset += sliceSize) {
    const slice = byteCharacters.slice(offset, offset + sliceSize);

    const byteNumbers = new Array(slice.length);
    for (let i = 0; i < slice.length; i++) {
      byteNumbers[i] = slice.charCodeAt(i);
    }

    const byteArray = new Uint8Array(byteNumbers);
    byteArrays.push(byteArray);
  }

  const blob = new Blob(byteArrays, { type: contentType });
  return blob;
};
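A quick way to sanity-check the helper outside React is a plain-JS version (the same logic with the TypeScript annotations dropped; Blob, atob, and btoa are globals in browsers and in Node 18+):

```javascript
// Plain-JS equivalent of b64toBlob, fed a known base64 string.
function b64toBlob(b64DataStr, contentType = '', sliceSize = 512) {
  const byteCharacters = atob(b64DataStr);
  const byteArrays = [];
  for (let offset = 0; offset < byteCharacters.length; offset += sliceSize) {
    const slice = byteCharacters.slice(offset, offset + sliceSize);
    const byteNumbers = new Array(slice.length);
    for (let i = 0; i < slice.length; i++) {
      byteNumbers[i] = slice.charCodeAt(i);
    }
    byteArrays.push(new Uint8Array(byteNumbers));
  }
  return new Blob(byteArrays, { type: contentType });
}

const blob = b64toBlob(btoa('hello'), 'text/plain');
// blob.size === 5, blob.type === 'text/plain'
```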

callCognitiveApi

import axios from 'axios';

const subscriptionKey: string = 'This_is_your_subscription_key';
const url: string = 'https://this-is-your-site.cognitiveservices.azure.com/face/v1.0/detect';
export const callCognitiveApi = (data: any) => {
  const config = {
    headers: { 'content-type': 'application/octet-stream', 'Ocp-Apim-Subscription-Key': subscriptionKey },
  };
  return axios
    .post(url, data, config)
    .then((res) => {
      console.log(res);
    })
    .catch((error) => {
      console.error(error);
    });
};

Result

Result screenshot

0
votes

So I finally got it working by sending the image as a Blob object. First, grab the image from the canvas with:

let data = canvas.toDataURL('image/jpeg');

Afterwards, you can convert it to a Blob object by running:

fetch(data)
  .then(res => res.blob())
  .then(blobData => {
    // attach blobData as the data for the POST request
  });

You will also need to set the Content-Type of the POST request to "application/octet-stream".
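This round-trip works because fetch() accepts data: URIs directly and returns a Response you can read as a Blob. A minimal sketch you can try outside the webcam flow (modern browsers, or Node 18+):

```javascript
// fetch() a tiny data: URI and read it back as a Blob.
const dataUri = 'data:text/plain;base64,' + btoa('hi');

fetch(dataUri)
  .then(res => res.blob())
  .then(blobData => {
    console.log(blobData.size); // 2 -- the two bytes of "hi"
  });
```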