
I'm new to the SharePoint 2013 .NET Client API. I want to programmatically crawl an entire SharePoint site and fully extract lists, documents, pages — everything!

Ideally I want to start with the root of the website and crawl everything from there.

Can someone give a high-level overview of the basic steps involved? For example, do I need to create a catalog, or can I simply crawl if I have the admin credentials?

I'm using C#, .NET 4.0, and the Client Object Model (CSOM), not REST.


2 Answers


Some links that were helpful for me:

1. Crawling with the REST API or PowerShell — Start a crawl manually via SOAP or REST WebService
2. Forcing a recrawl from code — http://sebastian.expert/force-web-whole-list-library-re-crawled-search-sharepoint-2013-using-api/
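For the first point, a full crawl can also be started from the server with the SharePoint 2013 Management Shell. This is a sketch; the service-application name and content-source name are placeholders that depend on your farm's configuration:

```powershell
# Get the Search service application (the name is a placeholder for your farm).
$ssa = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"

# Get the content source to crawl (default name shown; yours may differ).
$cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
      -Identity "Local SharePoint sites"

# Kick off a full crawl of that content source.
$cs.StartFullCrawl()
```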


I believe that everything in SharePoint lives under a List. Essentially, I fetch the Lists belonging to a Web and then fetch all of the ListItems from each of those lists. I ignore the Folder and File collections, since they duplicate content that the lists already expose.
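A minimal CSOM sketch of that approach (the site URL is a placeholder, and authentication is omitted — you would set `ctx.Credentials` for your environment; large lists would also need paging via `ListItemCollectionPosition` rather than a single query):

```csharp
using System;
using Microsoft.SharePoint.Client; // SharePoint 2013 Client Object Model

class SiteCrawler
{
    static void Main()
    {
        // Placeholder URL -- replace with your site.
        var siteUrl = "https://server/sites/yoursite";

        using (var ctx = new ClientContext(siteUrl))
        {
            // Load the web's lists in one round trip.
            Web web = ctx.Web;
            ctx.Load(web.Lists, lists => lists.Include(l => l.Title, l => l.ItemCount));
            ctx.ExecuteQuery();

            foreach (List list in web.Lists)
            {
                // Fetch every item in the list.
                CamlQuery query = CamlQuery.CreateAllItemsQuery();
                ListItemCollection items = list.GetItems(query);
                ctx.Load(items);
                ctx.ExecuteQuery();

                foreach (ListItem item in items)
                    Console.WriteLine("{0}: item {1}", list.Title, item.Id);
            }

            // To crawl the whole site collection, load web.Webs here
            // and repeat the same steps recursively for each subsite.
        }
    }
}
```

Starting from the root web and recursing through `web.Webs` gives you the "crawl everything from the root" behavior the question asks about.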