I have a requirement in my project to cache 9 million records from an Oracle database in Hazelcast. But apparently Hazelcast is consuming more heap space than it should. I have allotted 8GB of heap space to the app, but I am still getting an out-of-memory error.
Below is my data loader class.
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

import com.hazelcast.core.MapLoader;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class CustomerProfileLoader implements ApplicationContextAware, MapLoader<Long, CustomerProfile> {

    private static CustomerProfileRepository customerProfileRepository;

    @Override
    public CustomerProfile load(Long key) {
        log.info("load({})", key);
        // orElse(null) so a missing row is skipped in loadAll instead of throwing
        return customerProfileRepository.findById(key).orElse(null);
    }

    @Override
    public Map<Long, CustomerProfile> loadAll(Collection<Long> keys) {
        log.info("load all in loader executed");
        Map<Long, CustomerProfile> result = new HashMap<>();
        // One database query per key
        for (Long key : keys) {
            CustomerProfile customerProfile = this.load(key);
            if (customerProfile != null) {
                result.put(key, customerProfile);
            }
        }
        return result;
    }

    @Override
    public Iterable<Long> loadAllKeys() {
        log.info("Find all keys in loader executed");
        return customerProfileRepository.findAllId();
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        customerProfileRepository = applicationContext.getBean(CustomerProfileRepository.class);
    }
}
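One thing I am wondering about: loadAll above issues one SELECT per key. I am considering a batched variant like the sketch below (it assumes CustomerProfileRepository extends a Spring Data 2.x CrudRepository, so findAllById is available, and that the entity exposes getId()); would that at least help with load time?

    @Override
    public Map<Long, CustomerProfile> loadAll(Collection<Long> keys) {
        log.info("load all in loader executed, {} keys", keys.size());
        Map<Long, CustomerProfile> result = new HashMap<>();
        // Single query per batch of keys instead of one query per key
        customerProfileRepository.findAllById(keys)
                .forEach(profile -> result.put(profile.getId(), profile));
        return result;
    }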
Below is the repository query. If I change the query so that it is limited to, say, 2 million rows, then everything works fine.
@Query("SELECT b.id FROM CustomerProfile b ")
Iterable<Long> findAllId();
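For illustration, the limited test variant can be expressed as a paged query (findIds is a hypothetical name, and PageRequest.of assumes Spring Data 2.x):

    // import org.springframework.data.domain.Page;
    // import org.springframework.data.domain.Pageable;
    // import org.springframework.data.domain.PageRequest;

    // Same JPQL, but paged so the caller can cap the result set for testing
    @Query("SELECT b.id FROM CustomerProfile b")
    Page<Long> findIds(Pageable pageable);

    // usage: repository.findIds(PageRequest.of(0, 2_000_000)).getContent()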
Below is my map configuration in the hazelcast.xml file. Here I set the backup count to zero; before it was 1, but that didn't make any difference.
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               http://www.hazelcast.com/schema/config/hazelcast-config-3.11.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <!-- Use port 5701 and upwards on this machine for cluster members -->
    <network>
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <interface>127.0.0.1</interface>
            </tcp-ip>
        </join>
    </network>

    <map name="com.sample.hazelcast.domain.CustomerProfile">
        <indexes>
            <!-- unordered index on the postalCode attribute -->
            <index ordered="false">postalCode</index>
        </indexes>
        <backup-count>0</backup-count>
        <map-store enabled="true" initial-mode="EAGER">
            <class-name>com.sample.hazelcast.CustomerProfileLoader</class-name>
        </map-store>
    </map>
</hazelcast>
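I also wondered whether I should cap the map so that a single member cannot exhaust the heap. Something like the sketch below is what I had in mind (against the 3.x schema, as far as I understand it; the 2,000,000 cap is just a number to experiment with, and it would of course mean not all 9 million entries stay cached):

    <map name="com.sample.hazelcast.domain.CustomerProfile">
        <!-- BINARY is the default; OBJECT would keep deserialized Java
             objects on the heap, which is heavier for this use case -->
        <in-memory-format>BINARY</in-memory-format>
        <!-- Evict least-recently-used entries once the cap is reached -->
        <eviction-policy>LRU</eviction-policy>
        <!-- Cap the entry count held by this member -->
        <max-size policy="PER_NODE">2000000</max-size>
        <!-- indexes and map-store as above -->
    </map>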
Database table structure:
ID NOT NULL NUMBER(19)
LOGIN_ID NOT NULL VARCHAR2(32 CHAR)
FIRSTNAME VARCHAR2(50 CHAR)
LASTNAME VARCHAR2(50 CHAR)
ADDRESS_LINE1 VARCHAR2(50 CHAR)
ADDRESS_LINE2 VARCHAR2(50 CHAR)
CITY VARCHAR2(30 CHAR)
postal_code VARCHAR2(20 CHAR)
COUNTRY VARCHAR2(30 CHAR)
CREATION_DATE NOT NULL DATE
UPDATED_DATE NOT NULL DATE
REGISTER_NUM NOT NULL VARCHAR2(10 CHAR)
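By my rough math the raw data alone should fit comfortably: the VARCHAR2 columns add up to at most ~322 characters per row, plus the NUMBER(19) id and the two dates, so call it roughly 350 bytes of raw data per record. For 9 million records that is only on the order of 3GB. I assume the rest is overhead: Java object headers, String and field indirection, and Hazelcast's per-entry bookkeeping can multiply the raw size several times over, but I did not expect it to blow past 8GB.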
Other points:
- I have only one instance of the Hazelcast server running now, with 8GB of heap space allocated (JAVA_OPTS=-Xmx8192m). Before it was 4GB, but when I got the heap space error I increased it to 8GB; no luck.
- For the time being, the MapLoader is executed when the map is accessed for the first time.
- The table (customer_profile) has no binary columns; it holds just basic values like first name and last name.
- The Hazelcast version used is 3.8.
The problems I now face are:
- I am getting a heap space error (java.lang.OutOfMemoryError: Java heap space) when it fetches all the data and loads it into the map. The table now has 9 million rows in it.
- It is also taking a lot of time to load the data; I could probably fix that by running multiple Hazelcast server instances (see the sketch below).
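If multiple members are the way to go, am I right that each extra JVM just needs the same hazelcast.xml on its classpath and it will join over the configured tcp-ip list, partitioning the map across members? A minimal sketch of what I would run per extra member (ExtraMember is a hypothetical class name):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class ExtraMember {
        public static void main(String[] args) {
            // With no explicit Config, Hazelcast 3.x loads hazelcast.xml from
            // the classpath; the new member joins via the configured tcp-ip
            // members and takes ownership of a share of the map's partitions.
            HazelcastInstance member = Hazelcast.newHazelcastInstance();
        }
    }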
I am a newbie with Hazelcast, so any help would be greatly appreciated. :)