I've looked through all the questions describing similar problems, but I found no solution, so here's yet another one.
The page in question is this one: https://attanasioscrive.it/cipolle/; you'll notice all the meta tags inside <head>:
<meta property="og:title" content="Cipolle e altre disgrazie" />
<meta property="og:description" content="Un libro per chi non ha pazienza per i libri, una ricca collezione di storie cazzute.
Dai un'occhiata senza impegno e guarda cos'ha da offrire." />
<meta property="og:url" content="https://www.attanasioscrive.it/" />
<meta property="og:site_name" content="AttanasioScrive" />
<meta property="og:locale" content="it_IT" />
<meta property="og:type" content="book" />
<meta property="og:image" content="/static/blog/img/cipolle_fb.png" />
<meta property="og:image:alt" content="Copertina del libro Cipolle e altre disgrazie" />
<meta property="og:image:type" content="image/png" />
<meta property="og:image:width" content="1200" />
<meta property="og:image:height" content="600" />
<meta property="twitter:title" content="Cipolle e altre disgrazie" />
<meta property="twitter:description" content="Un libro per chi non ha pazienza per i libri, una ricca collezione di storie cazzute.
Dai un'occhiata senza impegno e guarda cos'ha da offrire." />
<meta property="twitter:site" content="AttanasioScrive" />
<meta property="twitter:card" content="product" />
<meta property="twitter:image" content="/static/blog/img/cipolle_tw.png" />
<meta property="twitter:image:alt" content="Copertina del libro Cipolle e altre disgrazie" />
Unfortunately, Facebook's debugger seems to think none of those tags exist at all, no matter how many times I click the "Scrape Again" button, which, according to a Facebook support page, should invalidate the scraper's cache and pick up recent changes.
Among the debugger's warnings there's an "SSL Error", even though my SSL certificate is in order. That makes me suspect their scraper discriminates against Let's Encrypt; more importantly, the error could be preventing the scraper from reading the page at all, through no fault of my own. I've read somewhere around the web that Facebook once had trouble scraping https URLs, and I hope that's no longer true: I don't want to serve insecure http just for Facebook's (and possibly Twitter's) sake.
UPDATE: it turns out part of the problem was my nginx configuration file not pointing to the full-chain certificate. Correcting that allowed Facebook's and Twitter's debuggers to see the site correctly.
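For anyone else hitting the same warning, here is a minimal sketch of the relevant nginx directives, assuming the default Let's Encrypt/certbot paths (your domain and paths may differ):

```nginx
server {
    listen 443 ssl;
    server_name attanasioscrive.it www.attanasioscrive.it;

    # fullchain.pem bundles the leaf certificate AND the intermediate;
    # pointing this at cert.pem serves an incomplete chain
    ssl_certificate     /etc/letsencrypt/live/attanasioscrive.it/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/attanasioscrive.it/privkey.pem;

    # ... rest of the site configuration
}
```

Browsers often paper over a missing intermediate certificate (they cache it or fetch it themselves), which is why the site can look fine in a browser while stricter clients like Facebook's scraper report an SSL error.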
However, running Facebook's debugger again, I noticed it now picks up some properties, but not all of them: og:url, og:type, og:title, og:image and og:description are the ones it lists. Notably, it also complains that the content of og:url doesn't match the page's URL, so something is clearly amiss here.
From the "See exactly what our scraper sees for your URL" feature, I can clearly see that the HTML the scraper receives is the one from my home page, not from the specific URL I supplied (see the URL above), but I want specific output for specific pages. Should I correct og:url to point to the specific page I want to link to? And will that also fix the other tags not being read correctly?
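If it helps, this is roughly what I expect the corrected tags would look like. As I understand the Open Graph documentation, og:url should be the canonical URL of the specific page being shared (the scraper treats it as canonical and may fetch that URL instead), and og:image should be an absolute URL rather than a relative path; both assumptions based on my reading, so corrections welcome:

```html
<!-- canonical URL of this specific page, not the site root -->
<meta property="og:url" content="https://attanasioscrive.it/cipolle/" />
<!-- an absolute URL; relative paths are reportedly ignored by scrapers -->
<meta property="og:image" content="https://attanasioscrive.it/static/blog/img/cipolle_fb.png" />
```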