
We have a cluster (Ignite v2.7) with 2 data nodes and a distributed cache.

We loaded data into this cache and started massive read/write operations. The cluster works perfectly; according to JMX, the StripedExecutor queue is empty.

Then we enabled backups on this cache, loaded the data again, and started the same massive read/write operations. According to JMX, the StripedExecutor queue constantly grows on one node. The sys-stripe threads consume CPU, but the StripedExecutor works slowly.

We use three kinds of read operations:

  1. distributed SQL from a client node: select from xxx where ...

  2. Ignite compute from a client node:
    Collection<OfferSearchResult> offerSearchResults = ignite.compute(ignite.cluster().forServers()).broadcast(new GetProductOfferJob(), computeTaskData); GetProductOfferJob uses cache.get

  3. near cache from a client node: cache.get
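For reference, operation 2 looks roughly like the sketch below. This is only an outline under assumptions: `GetProductOfferJob`, `ComputeTaskData`, and `OfferSearchResult` are our own classes, and the stubbed bodies here exist just so the sketch is self-contained.

```java
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteClosure;

// Hypothetical stand-ins for our own types, shown only so the sketch compiles.
class ComputeTaskData implements java.io.Serializable { }
class OfferSearchResult implements java.io.Serializable { }

// The job is broadcast to every server node; inside it we read via cache.get().
class GetProductOfferJob implements IgniteClosure<ComputeTaskData, OfferSearchResult> {
    @Override public OfferSearchResult apply(ComputeTaskData data) {
        Ignite ignite = Ignition.localIgnite();  // node-local Ignite instance
        Object value = ignite.cache("ATTR_VALUE").get(data /* key derived from data */);
        return new OfferSearchResult();          // result built from 'value' in real code
    }
}
```

The client-side call then collects one result per server node:

```java
Collection<OfferSearchResult> results =
    ignite.compute(ignite.cluster().forServers())
          .broadcast(new GetProductOfferJob(), computeTaskData);
```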

Is this a bug in the backup internals?

Data region configuration:

<property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="systemRegionInitialSize" value="#{100 * 1024 * 1024}"/>
            <property name="pageSize" value="16384"/>
            <property name="walMode" value="LOG_ONLY"/>
            <property name="writeThrottlingEnabled" value="true"/>
            <property name="dataRegionConfigurations">
                <list>
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="default_data_region"/>
                        <property name="initialSize" value="#{10L * 1024 * 1024 * 1024}"/>
                        <property name="maxSize" value="#{50L * 1024 * 1024 * 1024}"/>
                        <property name="metricsEnabled" value="false"/>
                        <property name="persistenceEnabled" value="true"/>
                    </bean>
                </list>
            </property>                
        </bean>
    </property>

Cache configuration:

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="ATTR_VALUE"/>
    <property name="dataRegionName" value="default_data_region"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="backups" value="1"/>    
    <property name="sqlSchema" value="ATTR_VALUE"/>
    <property name="onheapCacheEnabled" value="true"/>
    <property name="copyOnRead" value="false"/>
    <property name="keyConfiguration">
        <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
            <property name="typeName" value="entity.key.AttributeValueKey"/>
            <property name="affinityKeyFieldName" value="segId"/>
        </bean>
    </property>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="entity.key.AttributeValueKey"/>
                <property name="valueType" value="entity.AttributeValue"/>
                <property name="fields">
                    <map>
                        <entry key="segId" value="java.lang.String"/>
                        <entry key="value" value="java.lang.String"/>
                        <entry key="attrId" value="java.lang.Long"/>
                        <entry key="entityObjectId" value="java.lang.Integer"/>
                    </map>
                </property>
                <property name="keyFields">
                    <set>
                        <value>segId</value>
                        <value>value</value>
                        <value>attrId</value>
                        <value>entityObjectId</value>
                    </set>
                </property>
            </bean>
        </list>
    </property>
</bean>
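If the sys-stripe pool turns out to be the bottleneck, one knob worth checking (an assumption to verify against your workload, not a confirmed fix) is `stripedPoolSize` on `IgniteConfiguration`; it defaults to the number of available CPU cores, and the striped pool is what processes cache messages, including backup updates:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Defaults to the number of available cores; the value 16 is illustrative. -->
    <property name="stripedPoolSize" value="16"/>
</bean>
```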
Could you please show the code that reads data from the cache using compute? - Pavel Vinokurov
We use three kinds of read operations: 1. distributed SQL from a client node, select from xxx where ... 2. Ignite compute from a client node, Collection<OfferSearchResult> offerSearchResults = ignite.compute(ignite.cluster().forServers()).broadcast(new GetProductOfferJob(), computeTaskData); GetProductOfferJob uses cache.get 3. near cache from a client node, cache.get - Andrey Dolmatov

2 Answers

0
votes

When you enabled backups on the cache, you doubled the write load on the cluster.

With 0 backups, each write is one operation on one node.

With 1 backup, each write is two operations: one on each node.

With the doubled load, the cluster seems to have choked. I assume you need to add nodes to handle this amount of load.
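The arithmetic above can be sketched as a simplified model (an assumption for illustration, not a description of Ignite internals): each put touches the primary partition plus one copy per configured backup.

```java
// Simplified write-amplification model: 1 primary write + 1 write per backup copy.
public class WriteAmplification {
    /** Number of per-node write operations a single cache put triggers. */
    static int operationsPerWrite(int backups) {
        return 1 + backups;
    }

    public static void main(String[] args) {
        System.out.println("backups=0 -> " + operationsPerWrite(0) + " op(s) per put");
        System.out.println("backups=1 -> " + operationsPerWrite(1) + " op(s) per put");
    }
}
```

With two data nodes and one backup, every partition has a copy on both nodes, so every put now does work on both machines instead of one.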

-1
votes

It happens because of the large volume of logs flooding the system, even if there are no log appenders in your configuration. You might set a higher log level in the Ignite logging configuration.