24 votes

I want to allow a user of my web app to post multiple objects to their timeline from one page (main_page).

I already have the user's access token stored.

These are the Open Graph tags on the page I am trying to submit (its URL is page_url):

<meta property="fb:app_id"      content="my_app_id" /> 
<meta property="og:type"        content="my_namespace:my_object" /> 
<meta property="og:title"       content="some string" /> 
<meta property="og:description" content="some other string" /> 
<meta property="og:image"       content="some_image_url" />
<meta property="og:locale"      content="en_US" />
<meta property="og:url"         content="page_url" />   

Rails code to submit the url, triggered from main_page:

begin
    # Publish the custom Open Graph action against the object URL
    fb_post = RestClient.post 'https://graph.facebook.com/me/my_namespace:do',
                              :access_token => user.get_facebook_auth_token,
                              :my_object    => "page_url"
rescue StandardError => e
    p 'e.response is'
    p e.response
end

Output

2011-11-02T02:42:14+00:00 app[web.1]: "e.response is"
2011-11-02T02:42:14+00:00 app[web.1]: "{\"error\":{\"message\":\"(#3502) Object at URL page_url has og:type of 'website'. The property 'my_object' requires an object of og:type 'my_namespace:my_object'.\",\"type\":\"OAuthException\"}}"

The really weird thing is that, after getting this error, if I test the page_url in the Object Debugger it passes without any errors or warnings and the og:type is the correct type (not 'website'), and after that, running the same Rails code as above works fine.

I have tried it without the og:url tag and the same thing happens.

UPDATE:

As per Igy's answer, I tried separating the object-scraping process from the action-creating process. So, before the action was submitted for a brand-new object, I ran an update on the object with scrape=true.

begin
    p 'doing fb_update'
    # Ask Facebook to (re-)scrape the object URL before posting the action
    fb_update = RestClient.post 'https://graph.facebook.com', :id => page_url, :scrape => true
    p 'fb_update is'
    p fb_update
rescue StandardError => e
    p 'e.response is'
    p e.response
end

Output

2011-11-05T13:27:40+00:00 app[web.1]: "doing fb_update"
2011-11-05T13:27:50+00:00 app[web.1]: "fb_update is"
2011-11-05T13:27:50+00:00 app[web.1]: "{\"url\":\"page_url\",\"type\":\"website\",\"title\":\"page_url\",\"updated_time\":\"2011-11-05T13:27:50+0000\",\"id\":\"id_here\"}"

The odd thing is that the type is 'website' and the title is the page's URL. Again, I have checked both the HTML and the Facebook debugger, and the type and title are correct in both.


3 Answers

26 votes

I'm running into the same issue.

The only way I've been able to successfully publish actions for custom object types that I've defined is to manually test the object URL with the Object Debugger first, and then post the action on that object through my application.

Even using the linter API -- which Facebook suggests here -- gives me an error.

curl -X POST \
     -F "id=my_custom_object_url" \
     -F "scrape=true" \
     "https://graph.facebook.com"

Only the debugger tool seems to actually scrape the page correctly.

Note that I didn't have this problem when using a pre-defined object type, such as "website":

<meta property="og:type" content="website" />

This problem only seems to affect custom object types for some reason.

UPDATE (WITH SOLUTION):

I finally figured this out. The problem actually arose from my application's inability to handle two simultaneous HTTP requests. (FYI: I'm using Heroku to deploy my Rails application.)

When you make a request to the Facebook API to publish an action on an object URL (request #1), Facebook will immediately attempt to scrape the object URL you specified (request #2), and based on what it is able to scrape successfully, it returns a response to the original request. If I'm running request #1 synchronously, that will tie up my web process on Heroku, making it impossible for my application to handle request #2 at the same time. In other words, Facebook can't successfully access the object URL that it needs to scrape; instead, it returns some default values, including the object type "website".

Interestingly, this occurred even when I fired up multiple web processes on Heroku. The application was intent on using the same web process to handle both requests.

I solved the problem by handling all Facebook API requests as background jobs (using delayed_job). On Heroku, this requires firing up at least one web process and one worker process. If you can do it, running API requests in the background is a good idea anyway, since it doesn't tie up your website and leave users waiting several seconds before they can do anything.

By the way, I recommend running two background jobs. The first one should simply scrape the object URL by POSTing to: https://graph.facebook.com?id={object_url}&scrape=true

Once the first job has completed successfully, fire up another background job to POST an action to the timeline: https://graph.facebook.com/me/{app_namespace}:{action_name}?access_token={user_access_token}
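
For reference, here's a rough sketch of what those two background jobs might look like with delayed_job and RestClient. The job class names, the arguments, and the way the first job chains into the second are just illustrative; adapt them to your own app:

# Job 1: force Facebook to scrape (or re-scrape) the object URL.
class ScrapeObjectJob < Struct.new(:page_url, :user_id)
  def perform
    RestClient.post 'https://graph.facebook.com', :id => page_url, :scrape => true
    # Only queue the action once the scrape has succeeded.
    Delayed::Job.enqueue PublishActionJob.new(page_url, user_id)
  end
end

# Job 2: publish the timeline action against the now-scraped object.
class PublishActionJob < Struct.new(:page_url, :user_id)
  def perform
    user = User.find(user_id)
    RestClient.post 'https://graph.facebook.com/me/my_namespace:do',
                    :access_token => user.get_facebook_auth_token,
                    :my_object    => page_url
  end
end

# Kick things off from a controller or model:
# Delayed::Job.enqueue ScrapeObjectJob.new(page_url, current_user.id)

The key point is simply that both Graph API calls run in a worker process, so the web process stays free to serve Facebook's scraper.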

MORE RECENT UPDATE:

Per the suggestion in the comments, using Unicorn will also do the trick without the need for delayed_job. See more here if you're using Heroku:
http://blog.railsonfire.com/2012/05/06/Unicorn-on-Heroku.html
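
If you go the Unicorn route, a minimal config/unicorn.rb along these lines is the idea; the worker count, timeout, and ActiveRecord hooks below are assumptions you should tune for your own dynos:

# config/unicorn.rb
worker_processes 3   # more than one worker per dyno, so one can serve
                     # Facebook's scrape request while another is blocked
                     # on the Graph API call
timeout 30
preload_app true

before_fork do |server, worker|
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end

# Procfile:
# web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb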

6 votes

The Object creation documents say it should scrape an object the first time you create an action against it but also say

In some hosting and development platforms where you create an object and publish to Facebook simultaneously, you may get an error saying that the object does not exist. This is due to a race condition that exists in some systems.

We recommend that you (a) verify the object is replicated before you post an action or (b) introduce a small delay to account for replication lag (e.g., 15-30 seconds).

Based on that, I think you need to add &scrape=true to the initial call in order to force an immediate scrape, then try to create the action a while later. (I believe the error message you're getting is probably because the page hasn't been cached/scraped yet.)
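
To make that concrete, a minimal synchronous sketch of the flow would be: scrape first, wait out the replication lag, then publish. The 20-second sleep is an arbitrary value inside the 15-30 second range quoted above, and in a real app you'd want to do this in a background job rather than sleeping in a web request:

# Step 1: force an immediate scrape of the object URL
RestClient.post 'https://graph.facebook.com', :id => page_url, :scrape => true

# Step 2: give Facebook time to replicate the scraped object (assumed delay)
sleep 20

# Step 3: now create the action against the object
RestClient.post 'https://graph.facebook.com/me/my_namespace:do',
                :access_token => user.get_facebook_auth_token,
                :my_object    => page_url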

5 votes

From what I've seen, the Facebook Debugger page is the best (and, for most practical purposes, only) way to force Facebook's caches of a given page's Open Graph information to refresh. Otherwise, you'll spend up to a week waiting for their cached information about pages they've already scraped to expire.

Basically, you should

  1. Rewrite your pages to act as you desire
  2. Pass the relevant URLs to the Debugger page (to see that they validate, and to refresh the caches), and then
  3. Allow the pages to be served up "normally" to see your changes in action.

There may be other ways to force the Facebook caches to expire; see this Stackoverflow page for some possible solutions. I haven't tried them yet, but they may be helpful.