Lately there's been a rash of bot submissions to our forms whose only purpose is to drop links (backlink bots).
We've added reCAPTCHA, only to find it's been cracked and the bots can figure out the proper response.
We've added a honeypot field, but these submissions appear to be posted without using the form at all. It looks like they first catalog the form's required fields, submit only those, crack the captcha, and post their spam into a comment field.
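For context, the honeypot check is essentially the usual pattern, simplified here as a sketch ("website" is a placeholder field name, not necessarily what we use):

```php
<?php
// Simplified sketch of a honeypot check. The "website" field is hidden from real
// users, so any value in it means a bot filled out the rendered form.
if (!empty($_POST['website'])) {
    http_response_code(403);
    exit('Submission rejected.');
}
```

Since these bots only submit the fields they cataloged, the hidden field never comes back at all, so a check like this never fires.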
I'm sure they're blindly posting to any form they find, assuming it will end up as a blog comment, so our contact-us forms are getting caught up in the mess.
The next step is to block the submission if its contents contain a link. This seems like the most logical approach, since it uses the spammers' own goal (posting a link) as the trigger for the block.
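Roughly what I have in mind is something like this (sketch only; the regex and the idea of scanning every POST field are just my first pass, not a finished implementation):

```php
<?php
// Sketch: reject a submission if any posted field looks like it contains a link.

function contains_link(string $value): bool
{
    // Match obvious URL markers: an http/https scheme, a www. prefix, or an anchor tag.
    return (bool) preg_match('~(https?://|www\.|<a\s)~i', $value);
}

$blocked = false;
foreach ($_POST as $field => $value) {
    if (is_string($value) && contains_link($value)) {
        $blocked = true;
        break;
    }
}

if ($blocked) {
    http_response_code(403);
    exit('Submission rejected.');
}
```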
Question: Would scanning for URL characters like this be the best way to isolate a spam submission, or could FILTER_SANITIZE_URL be used as the hook that triggers the denial?
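To be concrete, this is roughly what I pictured for the FILTER_SANITIZE_URL idea (untested sketch; the field name is a placeholder, and I'm not sure the filter is really meant for this, since my understanding is it only strips characters that aren't legal in a URL rather than detecting URLs):

```php
<?php
// Untested sketch of the FILTER_SANITIZE_URL idea. "comments" is a placeholder field name.

$comment = $_POST['comments'] ?? '';

// FILTER_SANITIZE_URL strips characters that are not allowed in a URL (e.g. spaces).
$sanitized = filter_var($comment, FILTER_SANITIZE_URL);

// If nothing was stripped, the whole value consists of URL-legal characters,
// which may or may not be a useful spam signal on its own.
$looks_like_url = ($sanitized === $comment);

var_dump($looks_like_url);
```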