1 vote

I am trying to extract the meta tags of HTML files and index them into Solr with the Tika integration. I am not able to extract those meta tags with Tika, and they are not displayed in Solr.

My HTML file looks like this:

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="product_id" content="11"/>
<meta name="assetid" content="10001"/>
<meta name="title" content="title of the article"/>
<meta name="type" content="0xyzb"/>
<meta name="category" content="article category"/>
<meta name="first" content="details of the article"/>

<h4>title of the article</h4>
<p class="link"><a href="#link">How cite the Article</a></p>
<p class="list">
  <span class="listterm">Length: </span>13 to 15 feet<br>
  <span class="listterm">Height to Top of Head: </span>up to 18 feet<br>
  <span class="listterm">Weight: </span>1,200 to 4,300 pounds<br>
  <span class="listterm">Diet: </span>leaves and branches of trees<br>
  <span class="listterm">Number of Young: </span>1<br>
  <span class="listterm">Home: </span>Sahara<br>
</p>

My data-config.xml file looks like this:

<dataConfig>
<dataSource name="bin" type="BinFileDataSource" />
    <document>   
    <entity name="f" dataSource="null" rootEntity="false"
        processor="FileListEntityProcessor"
        baseDir="/path/to/html/files/" 
        fileName=".*html|xml" onError="skip"
        recursive="false">

        <field column="fileAbsolutePath" name="path" />
        <field column="fileSize" name="size"/>
        <field column="file" name="filename"/>

        <entity name="tika-test" dataSource="bin" processor="TikaEntityProcessor" 
        url="${f.fileAbsolutePath}" format="text" onError="skip">

        <field column="product_id" name="product_id" meta="true"/>
        <field column="assetid" name="assetid" meta="true"/>
        <field column="title" name="title" meta="true"/>
        <field column="type" name="type" meta="true"/>
        <field column="first" name="first" meta="true"/>
        <field column="category" name="category" meta="true"/>      
        </entity>
    </entity>
</document>
</dataConfig>

In my schema.xml file I have added the following fields:

<field name="product_id" type="string" indexed="true" stored="true"/>
<field name="assetid" type="string" indexed="true" stored="true" />
<field name="title" type="string" indexed="true" stored="true"/>
<field name="type" type="string" indexed="true" stored="true"/>
<field name="category" type="string" indexed="true" stored="true"/>
<field name="first" type="text_general" indexed="true" stored="true"/>

In my solrconfig.xml file I have added the following code:

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">/path/to/data-config.xml</str>
  </lst>
</requestHandler>

Does anyone know how to extract those meta tags from the HTML files and index them in Solr with Tika? Your help would be appreciated.


3 Answers

1 vote

I don't think meta="true" means what you think it means. It refers to metadata about the file rather than its content, so things like Content-Type; possibly the http-equiv value gets mapped as well.

Other than that, you need to extract the actual content. You can do that by using format="xml" and then putting an inner entity with XPathEntityProcessor and mapping the paths there. Even then you are limited, because AFAIK DIH uses Tika's DefaultHtmlMapper, which is extremely restrictive in what it lets through and skips most 'class' and 'id' attributes and even elements like 'div'. You can read the list of allowed elements and attributes for yourself in the source code.
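
For illustration, here is a rough sketch of that nested-entity wiring, assuming a FieldReaderDataSource to feed Tika's XHTML output into the inner entity. The entity names and XPath expressions are illustrative and will likely need adjusting, since XPathEntityProcessor only supports a limited XPath subset and DefaultHtmlMapper decides what actually survives:

<dataConfig>
  <dataSource name="bin" type="BinFileDataSource"/>
  <dataSource name="fld" type="FieldReaderDataSource"/>
  <document>
    <entity name="f" processor="FileListEntityProcessor" dataSource="null"
            baseDir="/path/to/html/files/" fileName=".*html" rootEntity="false">
      <entity name="tika" processor="TikaEntityProcessor" dataSource="bin"
              url="${f.fileAbsolutePath}" format="xml" onError="skip">
        <!-- inner entity parses the XHTML that Tika writes to the 'text' column -->
        <entity name="content" processor="XPathEntityProcessor" dataSource="fld"
                dataField="tika.text" forEach="/html">
          <field column="title" xpath="/html/head/title"/>
          <field column="first" xpath="/html/body/p"/>
          <!-- the custom <meta name="..."> tags may not survive DefaultHtmlMapper at all -->
        </entity>
      </entity>
    </entity>
  </document>
</dataConfig>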

Frankly, your easier path is to have a SolrJ client and manage Tika yourself. Then you can set it to use IdentityHtmlMapper, which does not muck about with the HTML.
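
A minimal sketch of that approach, assuming Tika 1.x and a reasonably recent SolrJ (the core URL, file path, and field names are illustrative, not the asker's exact setup):

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.html.HtmlMapper;
import org.apache.tika.parser.html.HtmlParser;
import org.apache.tika.parser.html.IdentityHtmlMapper;
import org.apache.tika.sax.BodyContentHandler;

public class HtmlMetaIndexer {
    public static void main(String[] args) throws Exception {
        HtmlParser parser = new HtmlParser();
        Metadata metadata = new Metadata();
        BodyContentHandler handler = new BodyContentHandler(-1);   // -1 = no write limit
        ParseContext context = new ParseContext();
        // use IdentityHtmlMapper so Tika does not strip elements/attributes
        context.set(HtmlMapper.class, IdentityHtmlMapper.INSTANCE);

        try (InputStream in = new FileInputStream("/path/to/html/files/article.html")) {
            parser.parse(in, handler, metadata, context);
        }

        // <meta name="..." content="..."> pairs end up in the Tika Metadata object
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", metadata.get("assetid"));
        doc.addField("product_id", metadata.get("product_id"));
        doc.addField("title", metadata.get("title"));
        doc.addField("category", metadata.get("category"));
        doc.addField("first", metadata.get("first"));
        doc.addField("content", handler.toString());   // extracted body text

        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            solr.add(doc);
            solr.commit();
        }
    }
}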

1 vote

Which version of Solr are you using? If you are using Solr 4.0 or above, then Tika is embedded in it. Tika communicates with Solr through Solr Cell's ExtractingRequestHandler class, which is configured in solrconfig.xml as follows:

      <!-- Solr Cell Update Request Handler

       http://wiki.apache.org/solr/ExtractingRequestHandler 

    -->
  <requestHandler name="/update/extract" 
                  startup="lazy"
                  class="solr.extraction.ExtractingRequestHandler" >
    <lst name="defaults">
      <str name="lowernames">true</str>
      <str name="uprefix">ignored_</str>

      <!-- capture link hrefs but ignore div attributes -->
      <str name="captureAttr">true</str>
      <str name="fmap.a">links</str>
      <str name="fmap.div">ignored_</str>
    </lst>
  </requestHandler>

Now, as you can see in the above configuration, any field extracted from the HTML document that is not declared in schema.xml is by default prefixed with 'ignored_', i.e. it is mapped to the 'ignored_*' dynamic field in schema.xml. The default schema.xml reads as follows:

       <!-- some trie-coded dynamic fields for faster range queries -->
   <dynamicField name="*_ti" type="tint"    indexed="true"  stored="true"/>
   <dynamicField name="*_tl" type="tlong"   indexed="true"  stored="true"/>
   <dynamicField name="*_tf" type="tfloat"  indexed="true"  stored="true"/>
   <dynamicField name="*_td" type="tdouble" indexed="true"  stored="true"/>
   <dynamicField name="*_tdt" type="tdate"  indexed="true"  stored="true"/>

   <dynamicField name="*_pi"  type="pint"    indexed="true"  stored="true"/>
   <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>

   <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
   <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>

   <dynamicField name="random_*" type="random" />

   <!-- uncomment the following to ignore any fields that don't already match an existing 
        field name or dynamic field, rather than reporting them as an error. 
        alternately, change the type="ignored" to some other type e.g. "text" if you want 
        unknown fields indexed and/or stored by default --> 
   <!--dynamicField name="*" type="ignored" multiValued="true" /-->

 </fields>

And the following is how the 'ignored' field type is defined:

<!-- since fields of this type are by default not stored or indexed,
     any data added to them will be ignored outright.  --> 
<fieldtype name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />

So, the metadata extracted by Tika is by default put into 'ignored_*' fields by Solr Cell, and that is why it is neither indexed nor stored. Therefore, to index and store the metadata, either change the uprefix to "attr_" or create specific fields (or dynamic fields) for the metadata you know about and treat them as you want.

So, here is the corrected solrconfig.xml:

  <!-- Solr Cell Update Request Handler

       http://wiki.apache.org/solr/ExtractingRequestHandler 

    -->
  <requestHandler name="/update/extract" 
                  startup="lazy"
                  class="solr.extraction.ExtractingRequestHandler" >
    <lst name="defaults">
      <str name="lowernames">true</str>
      <str name="uprefix">attr_</str>

      <!-- capture link hrefs but ignore div attributes -->
      <str name="captureAttr">true</str>
      <str name="fmap.a">links</str>
      <str name="fmap.div">ignored_</str>
    </lst>
  </requestHandler>
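
With that change in place, posting a file to the handler puts any extracted metadata that is not declared in the schema into attr_* fields, while metadata whose (lowercased) name matches a schema field, such as product_id or title, is indexed into that field directly. For example (the core name "mycore" is a placeholder):

curl "http://localhost:8983/solr/mycore/update/extract?literal.id=doc1&commit=true" \
     -F "myfile=@/path/to/html/files/article.html"
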
0 votes

Although this is an older question, I am replying because:

  1. I recently asked a similar question (no replies or comments after several days) that I sorted out myself, and which is relevant to this question.

  2. Solr has changed much over the years, and the existing documentation (where it exists) on this topic is both confusing and sometimes erroneous.

  3. While lengthy, this reply provides a solution to the question with an example and documentation.

Briefly, my now-deleted StackOverflow question was "Extracting custom (e.g. <my_id></my_id>) tagged text from HTML using Apache Solr." Ancillary to that task was how to index HTML pages, including custom HTML elements and attributes.

The short answer is that while it is relatively easy to index "standard" HTML elements (a; div; h1; h2; li; meta; p; title; ... https://www.w3.org/TR/2005/WD-xhtml2-20050527/elements.html), it is challenging to include custom tagsets without either the rigid use of properly formatted XML files and Solr's update handlers (see, e.g., https://lucene.apache.org/solr/guide/6_6/uploading-data-with-index-handlers.html#uploading-data-with-index-handlers), or the use of the captureAttr parameter with Apache Tika (native to Solr via the ExtractingRequestHandler, described below), or other tools such as Apache Nutch.

Standard HTML elements such as <title>Solr HTML Indexing Tests</title> are easily indexed; however, non-standard elements like <my_id>bt-ic8eew2u</my_id> are ignored.

While you could apply XML-based solutions such as <field name="my_id">bt-ic8eew2u</field>, I prefer a facile HTML-based solution -- hence, the HTML metadata approach.


Environment: Arch Linux (x86_64) command-line; Apache Solr 8.7.0; Solr Admin UI (http://localhost:8983/solr/#/gettingstarted/query) in Firefox 83.0

Test file (solr_test9.html):

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-us">
<head>
  <meta charset="UTF-8" />
  <title>Solr HTML Indexing Tests</title>
  <meta name="date_created" content="2019-11-01" />
  <meta name="source_url" content="/mnt/Vancouver/programming/datasci/solr/test/solr_test9.html" />
  <!-- <my_id>bt-ic8eew2u</my_id> -->
  <meta name="doc_id" content="bt-ic8eeW2U" />
  <meta name="date_pub" content="2020-11-16" />
</head>

<body>
<h1>Apples</h1>
<p>I like apples.</p>

<h2>Bananas</h2>
<p>I also like bananas.</p>

<p><div id="div1">This text is located in div element 1.</div></p>
<p><div id="div2">This text is located in div element 2.</div></p>

<br/>
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
<br/>

<p>Suspendisse efficitur pulvinar elementum.</p>

<p>My website is <a href="https://buriedtruth.com/">BuriedTruth.com</a>.</p>

<h1>Nova Scotia</h1>
<p>Nova Scotia is a province on the east coast of Canada.</p>

<h2>Capital of Nova Scotia</h2>
<p>Halifax is the capital of N.S.</p>
<p>Halifax is also N.S.'s largest city.</p>

<h1>British Columbia</h1>
<h2>Capital of British Columbia</h2>
<p>Victoria is the capital of B.C.</p>
<p>Vancouver is the largest city in B.C., however.</p>

<p>Non-terminated sentence (missing period)</p>

<meta name="date_current" content="2020-11-17" />
<!-- Comments like these are not indexed. -->
<p>Current date: 2020-11-17</p>

</body>
</html>

solrconfig.xml

Here are the relevant additions to my solrconfig.xml file.

  <!-- SOLR CELL PLUGINS: -->
  <lib dir="${solr.install.dir:../../..}/contrib/extraction/lib" regex=".*\.jar" />
  <lib dir="${solr.install.dir:../../..}/dist/" regex="solr-cell-\d.*\.jar" />

  <!-- https://lucene.472066.n3.nabble.com/Prons-an-Cons-of-Startup-Lazy-a-Handler-td4059111.html -->
  <requestHandler name="/update/extract"
    class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
    <lst name="defaults">
      <str name="lowernames">true</str>
      <str name="uprefix">ignored_</str>
      <str name="capture">div</str>
      <str name="fmap.div">div</str>
      <str name="capture">h1</str>
      <str name="fmap.h1">h1</str>
      <str name="capture">h2</str>
      <str name="fmap.h2">h2_t</str>
      <str name="capture">p</str>
      <!-- <str name="fmap.p">p_t</str> -->
      <str name="fmap.p">p</str>
      <!-- COMMENT: note that the entries above refer to standard -->
      <!-- HTML elements.  As long as you have <meta/> (metadata) -->
      <!-- entries ("doc-id", "date_pub" ...) in your schema then -->
      <!-- Solr will automatically pick them up when indexing ... -->
      <!-- (hence no need to include those, here!).               -->
    </lst>
  </requestHandler>

  <!-- https://doc.lucidworks.com/fusion-server/5.2/reference/solr-reference-guide/7.7.2/update-request-processors.html -->
  <!-- The update.autoCreateFields property can be turned to false to disable schemaless mode -->
  <updateRequestProcessorChain name="add-unknown-fields-to-the-schema" default="${update.autoCreateFields:true}"
           processor="uuid,remove-blank,field-name-mutating,parse-boolean,parse-long,parse-double,parse-date,add-schema-fields">
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.DistributedUpdateProcessorFactory"/>
    <!-- ======================================== -->
    <!-- https://lucene.apache.org/solr/7_4_0/solr-core/org/apache/solr/update/processor/RegexReplaceProcessorFactory.html -->
    <processor class="solr.RegexReplaceProcessorFactory">
      <str name="fieldName">content</str>
      <str name="fieldName">title</str>
      <str name="fieldName">p</str>
      <!-- Case-sensitive; only one pattern:replacement pair is allowed, -->
      <!-- so use as many copies of this processor as needed. -->
      <str name="pattern">\s+</str>
      <str name="replacement"> </str>
      <bool name="literalReplacement">true</bool>
    </processor>

    <!-- Solr bug? URLs parse as "rect https..."  Managed-schema (Admin UI): defined p as text_general -->
    <!-- but did not parse. Looking at content | title: text_general copied to string, so added  -->
    <!-- copyfield of p (text_general) as p_str ... regex below now works! -->
    <!-- https://stackguides.com/questions/22178700/solr-extractingrequesthandler-extracting-rect-in-links/64882751#64882751 -->
      <processor class="solr.RegexReplaceProcessorFactory">
      <str name="fieldName">content</str>
      <str name="fieldName">title</str>
      <str name="fieldName">p</str>
      <!-- Case-sensitive; only one pattern:replacement pair is allowed, -->
      <!-- so use as many copies of this processor as needed. -->
      <str name="pattern">rect http</str>
      <str name="replacement">http</str>
      <bool name="literalReplacement">true</bool>
    </processor>
    <!-- ======================================== -->
    <!-- This needs to be last (may need to clear documents and re-index to see changes, e.g. Solr Admin UI): -->
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>

managed-schema (schema.xml):

I edited the Solr schema via the Admin UI. Basically, for whatever HTML metadata you want to index, add a similarly-named field (of the appropriate type: e.g., text_general | string | pdate | ...).

For example, to capture the "doc_id" and "date_pub" metadata I created the following (respective) schema entries:

<field name="doc_id" type="string" uninvertible="true" indexed="true" stored="true"/>
<field name="date_pub" type="pdate" uninvertible="true" indexed="true" stored="true"/>
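
If you prefer the command line to the Admin UI, the same fields can be added via the Schema API; this sketch assumes the "gettingstarted" core used below. (The Schema API only works while Solr is using the managed schema; after switching to the classic schema.xml, described in the UPDATE below, edit schema.xml directly.)

curl -X POST -H 'Content-type:application/json' \
  -d '{"add-field": {"name":"doc_id", "type":"string", "indexed":true, "stored":true}}' \
  http://localhost:8983/solr/gettingstarted/schema

curl -X POST -H 'Content-type:application/json' \
  -d '{"add-field": {"name":"date_pub", "type":"pdate", "indexed":true, "stored":true}}' \
  http://localhost:8983/solr/gettingstarted/schema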

indexing

Here's how I indexed that HTML test file:

[victoria@victoria solr-8.7.0]$ date; pwd; ls -l; echo; ls -l server/solr/gettingstarted/conf/

Tue Nov 17 02:18:12 PM PST 2020

/mnt/Vancouver/apps/solr/solr-8.7.0
total 1792
drwxr-xr-x  3 victoria victoria   4096 Nov 17 13:26 bin
-rw-r--r--  1 victoria victoria 946955 Oct 28 02:40 CHANGES.txt
drwxr-xr-x 12 victoria victoria   4096 Oct 29 07:09 contrib
drwxr-xr-x  4 victoria victoria   4096 Nov 15 12:33 dist
drwxr-xr-x  3 victoria victoria   4096 Nov 15 12:33 docs
drwxr-xr-x  6 victoria victoria   4096 Oct 28 02:40 example
drwxr-xr-x  2 victoria victoria  36864 Oct 28 02:40 licenses
-rw-r--r--  1 victoria victoria  12646 Oct 28 02:21 LICENSE.txt
-rw-r--r--  1 victoria victoria 766662 Oct 28 02:40 LUCENE_CHANGES.txt
-rw-r--r--  1 victoria victoria  27540 Oct 28 02:21 NOTICE.txt
-rw-r--r--  1 victoria victoria   7490 Oct 28 02:40 README.txt
drwxr-xr-x 11 victoria victoria   4096 Nov 15 12:40 server

total 208
drwxr-xr-x 2 victoria victoria  4096 Oct 28 02:21 lang
-rw-r--r-- 1 victoria victoria 33888 Nov 17 13:20 managed-schema
-rw-r--r-- 1 victoria victoria   873 Oct 28 02:21 protwords.txt
-rw-r--r-- 1 victoria victoria 33788 Nov 17 11:36 schema.xml.2020-11-17.13:01
-rw-r--r-- 1 victoria victoria 59248 Nov 17 13:16 solrconfig.xml
-rw-r--r-- 1 victoria victoria 59151 Nov 17 12:59 solrconfig.xml.2020-11-17.13:01
-rw-r--r-- 1 victoria victoria   781 Oct 28 02:21 stopwords.txt
-rw-r--r-- 1 victoria victoria  1124 Oct 28 02:21 synonyms.txt

[victoria@victoria solr-8.7.0]$ solr restart; sleep 1; post -c gettingstarted /mnt/Vancouver/programming/datasci/solr/test/solr_test9.html

Sending stop command to Solr running on port 8983 ... waiting up to 180 seconds to allow Jetty process 3511453 to stop gracefully.
Waiting up to 180 seconds to see Solr running on port 8983 [|]  
Started Solr server on port 8983 (pid=3572520). Happy searching!

/usr/lib/jvm/java-8-openjdk/jre//bin/java -classpath /mnt/Vancouver/apps/solr/solr-8.7.0/dist/solr-core-8.7.0.jar -Dauto=yes -Dc=gettingstarted -Ddata=files org.apache.solr.util.SimplePostTool /mnt/Vancouver/programming/datasci/solr/test/solr_test9.html
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file solr_test9.html (text/html) to [base]/extract
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:00:00.755

[victoria@victoria solr-8.7.0]$ 

... and here is the result (Solr Admin UI: http://localhost:8983/solr/#/gettingstarted/query)

http://localhost:8983/solr/gettingstarted/select?q=*%3A*

{
  "responseHeader":{
    "status":0,
    "QTime":0,
    "params":{
      "q":"*:*",
      "_":"1605651674401"}},
  "response":{"numFound":1,"start":0,"numFoundExact":true,"docs":[
      {
        "id":"/mnt/Vancouver/programming/datasci/solr/test/solr_test9.html",
        "stream_size":[1428],
        "x_parsed_by":["org.apache.tika.parser.DefaultParser",
          "org.apache.tika.parser.html.HtmlParser"],
        "stream_content_type":["text/html"],
        "date_created":"2019-11-01T00:00:00Z",
        "date_current":["2020-11-17"],
        "resourcename":["/mnt/Vancouver/programming/datasci/solr/test/solr_test9.html"],
        "title":["Solr HTML Indexing Tests"],
        "date_pub":"2020-11-16T00:00:00Z",
        "doc_id":"bt-ic8eeW2U",
        "source_url":"/mnt/Vancouver/programming/datasci/solr/test/solr_test9.html",
        "dc_title":["Solr HTML Indexing Tests"],
        "content_encoding":["UTF-8"],
        "content_type":["application/xhtml+xml; charset=UTF-8"],
        "content":[" en-us stream_size 1428 X-Parsed-By org.apache.tika.parser.DefaultParser X-Parsed-By org.apache.tika.parser.html.HtmlParser stream_content_type text/html date_created 2019-11-01 resourceName /mnt/Vancouver/programming/datasci/solr/test/solr_test9.html date_pub 2020-11-16 doc_id bt-ic8eeW2U source_url /mnt/Vancouver/programming/datasci/solr/test/solr_test9.html dc:title Solr HTML Indexing Tests Content-Encoding UTF-8 Content-Language en-us Content-Type application/xhtml+xml; charset=UTF-8 Solr HTML Indexing Tests Lorem ipsum dolor sit amet, consectetur adipiscing elit. "],
        "div":[" div1 This text is located in div element 1. div2 This text is located in div element 2."],
        "p":[" I like apples. I also like bananas. Suspendisse efficitur pulvinar elementum. My website is https://buriedtruth.com/ BuriedTruth.com . Nova Scotia is a province on the east coast of Canada. Halifax is the capital of N.S. Halifax is also N.S.'s largest city. Victoria is the capital of B.C. Vancouver is the largest city in B.C., however. Non-terminated sentence (missing period) Current date: 2020-11-17"],
        "h1":[" Apples Nova Scotia British Columbia"],
        "h2_t":" Bananas Capital of Nova Scotia Capital of British Columbia",
        "_version_":1683647678197530624}]
  }}

UPDATE -- managed-schema >> schema.xml peculiarities:

While not related to the original question, the following content is related to my answer (above) -- specifically, peculiarities associated with switching from Solr's managed-schema to the classic (user-managed) schema.xml. It is included here to provide a complete solution.

First, add

<schemaFactory class="ClassicIndexSchemaFactory"/>

to your solrconfig.xml file.

Then edit this:

<updateRequestProcessorChain
  name="add-unknown-fields-to-the-schema"
  default="${update.autoCreateFields:true}"
  processor="uuid,remove-blank,field-name-mutating,parse-boolean,
             parse-long,parse-double,parse-date,add-schema-fields">

... to this:

<updateRequestProcessorChain
  processor="uuid,remove-blank,field-name-mutating,parse-boolean,
             parse-long,parse-double,parse-date">

i.e., delete

  name="add-unknown-fields-to-the-schema"
  default="${update.autoCreateFields:true}"
  add-schema-fields

Rename managed-schema to schema.xml, and restart Solr or reload the core to effect the changes.
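
For example (paths as in the directory listing above; assumes a standalone core named "gettingstarted"; alternatively, simply restart Solr as shown earlier):

cd /mnt/Vancouver/apps/solr/solr-8.7.0/server/solr/gettingstarted/conf
mv managed-schema schema.xml
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=gettingstarted"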

To further extend my example (above), here is a sample <updateRequestProcessorChain/> and its output, run on the HTML test file provided above.

solrconfig.xml (part):

<updateRequestProcessorChain
  processor="uuid,remove-blank,field-name-mutating,parse-boolean,parse-long,parse-double,parse-date">
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">content</str>
    <str name="fieldName">title</str>
    <str name="fieldName">p</str>
    <!-- Case-sensitive; only one pattern:replacement pair is allowed, -->
    <!-- so use as many copies of this processor as needed. -->
    <str name="pattern">\s+</str>
    <str name="replacement"> </str>
    <bool name="literalReplacement">true</bool>
  </processor>

  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">content</str>
    <str name="fieldName">title</str>
    <str name="fieldName">p</str>
    <!-- Case-sensitive; only one pattern:replacement pair is allowed, -->
    <!-- so use as many copies of this processor as needed. -->
    <str name="pattern">rect http</str>
    <str name="replacement">http</str>
    <bool name="literalReplacement">true</bool>
  </processor>

  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">content</str>
    <str name="fieldName">title</str>
    <str name="pattern">[sS]olr</str>
    <str name="replacement">APPLE</str>
    <bool name="literalReplacement">true</bool>
  </processor>

  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">content</str>
    <str name="fieldName">title</str>
    <str name="pattern">HTML</str>
    <str name="replacement">BANANA</str>
    <bool name="literalReplacement">true</bool>
  </processor>

  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

output

{
  "responseHeader":{
    "status":0,
    "QTime":32,
    "params":{
      "q":"*:*",
      "_":"1605767164812"}},
  "response":{"numFound":1,"start":0,"numFoundExact":true,"docs":[
      {
        "id":"/mnt/Vancouver/programming/datasci/solr/test/solr_test9.html",
        "stream_size":[1628],
        "x_parsed_by":["org.apache.tika.parser.DefaultParser",
          "org.apache.tika.parser.html.HtmlParser"],
        "stream_content_type":["text/html"],
        "date_created":"2020-11-11T21:36:38Z",
        "date_current":["2020-11-17"],
        "resourcename":["/mnt/Vancouver/programming/datasci/solr/test/solr_test9.html"],
        "title":["APPLE BANANA Indexing Tests"],
        "date_pub":"2020-11-16T21:37:18Z",
        "doc_id":"bt-ic8eeW2U",
        "source_url":"/mnt/Vancouver/programming/datasci/solr/test/solr_test9.html",
        "dc_title":["Solr HTML Indexing Tests"],
        "content_encoding":["UTF-8"],
        "content_type":["application/xhtml+xml; charset=UTF-8"],
        "content":[" en-us stream_size 1628 X-Parsed-By org.apache.tika.parser.DefaultParser X-Parsed-By org.apache.tika.parser.html.HtmlParser stream_content_type text/html date_created 2020-11-11T21:36:38Z resourceName /mnt/Vancouver/programming/datasci/APPLE/test/APPLE_test9.html date_pub 2020-11-16T21:37:18Z doc_id bt-ic8eeW2U source_url /mnt/Vancouver/programming/datasci/APPLE/test/APPLE_test9.html dc:title APPLE BANANA Indexing Tests Content-Encoding UTF-8 Content-Language en-us Content-Type application/xhtml+xml; charset=UTF-8 APPLE BANANA Indexing Tests Lorem ipsum dolor sit amet, consectetur adipiscing elit. "],
        "div":[" div1 This text is located in div element 1. div2 This text is located in div element 2. apple This text is located in the \"apple\" (class) div element. banana This text is located in the \"banana\" (class) div element."],
        "p":[" I like apples. I also like bananas. Suspendisse efficitur pulvinar elementum. My website is https://buriedtruth.com/ BuriedTruth.com . Nova Scotia is a province on the east coast of Canada. Halifax is the capital of N.S. Halifax is also N.S.'s largest city. Victoria is the capital of B.C. Vancouver is the largest city in B.C., however. Non-terminated sentence (missing period) Current date: 2020-11-17"],
        "h1":[" Apples Nova Scotia British Columbia"],
        "h2_t":" Bananas Capital of Nova Scotia Capital of British Columbia",
        "_version_":1683814668971278336}]
  }}