As @ChrisHaas already explained, it's hard to be specific without code and PDF samples.
First of all, saying iTextSharp manages to find roughly 50% of the occurrences of a given word is a bit misleading: iText(Sharp) does not directly expose methods for finding a specific text in a PDF and, therefore, actually finds 0%. It merely provides a framework and some simple examples for text extraction.
Using this framework to seriously search for a given word requires more than applying those simple sample usages (provided by the SimpleTextExtractionStrategy and the LocationTextExtractionStrategy, the latter of which also works under the hood of PdfTextExtractor.GetTextFromPage(myReader, pageNum)) in combination with some Contains(word) call. You have to:
create a better text extraction strategy which
has a better algorithm for recognizing which glyphs belong to which line; the sample strategies, e.g., can fail utterly for scanned pages with OCR'ed text whose lines are not perfectly horizontal but minimally ascending or descending;
recognizes poor man's bold (the same glyph drawn twice with a very small offset to create the impression of a bold character style) and similar constructs, and transforms them accordingly;
create a text normalization which
you apply to both the extracted text and your search term in the same way, and only search after that.
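To illustrate the line-recognition point: a slope-tolerant grouping can compare each glyph's baseline to the previous glyph on the same line, instead of requiring one fixed baseline for the whole line. The following is only a minimal sketch in Java (iText's original language); the Glyph class and all names here are made up for illustration — in a real strategy you would feed the data from the TextRenderInfo objects your render listener receives:

```java
import java.util.*;

// Hypothetical minimal glyph model; in a real extraction strategy this
// data would come from iText's TextRenderInfo callbacks.
class Glyph {
    final String text;
    final float x, y; // y is the baseline coordinate
    Glyph(String text, float x, float y) { this.text = text; this.x = x; this.y = y; }
}

class SlopeTolerantLineGrouper {
    // Groups glyphs into lines while tolerating slightly tilted (OCR'ed)
    // baselines: a glyph continues an existing line if its baseline is
    // within `tolerance` of the PREVIOUS glyph on that line, so a line may
    // drift upward by more than `tolerance` in total without being split.
    static List<String> groupIntoLines(List<Glyph> glyphs, float tolerance) {
        List<Glyph> sorted = new ArrayList<>(glyphs);
        sorted.sort(Comparator.comparingDouble(g -> g.x));
        List<List<Glyph>> lines = new ArrayList<>();
        for (Glyph g : sorted) {
            List<Glyph> target = null;
            for (List<Glyph> line : lines) {
                Glyph last = line.get(line.size() - 1);
                if (Math.abs(g.y - last.y) <= tolerance) { target = line; break; }
            }
            if (target == null) { target = new ArrayList<>(); lines.add(target); }
            target.add(g);
        }
        List<String> result = new ArrayList<>();
        for (List<Glyph> line : lines) {
            StringBuilder sb = new StringBuilder();
            for (Glyph glyph : line) sb.append(glyph.text);
            result.add(sb.toString());
        }
        return result;
    }
}
```

With a tolerance of 1.0, a "Hello" whose baseline rises by 0.4 units per glyph (1.6 units in total) still comes out as a single line, whereas a comparison against the first glyph's baseline would have split it in two.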
Furthermore, as @ChrisHaas mentioned, special attention has to be paid to spaces in the text.
If you create an iText-based text search with those criteria in mind, you'll surely get an acceptable hit rate. Getting as good as Adobe Reader, though, is quite a task, as Adobe has already invested considerable resources into this feature.
For completeness' sake: you should not only search the page content (and everything referenced from it) but also the annotations, which can carry quite some text content, too, and may even appear as if they were part of the page, e.g. in the case of free text annotations.