24 votes

I'm learning Spring WebFlux and, while writing a sample application, I ran into a concern related to reactive types (Mono/Flux) combined with Spring Cache.

Consider the following code-snippet (in Kotlin):

@Repository
interface TaskRepository : ReactiveMongoRepository<Task, String>

@Service
class TaskService(val taskRepository: TaskRepository) {

    @Cacheable("tasks")
    fun get(id: String): Mono<Task> = taskRepository.findById(id)
}

Is this a valid and safe way of caching method calls that return Mono or Flux? Or are there other principles for doing this?

The code above works with SimpleCacheResolver, but by default it fails with Redis because Mono is not Serializable. To make it work, a serializer such as Kryo needs to be used.
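For illustration, here is a rough sketch of what swapping the Redis value serializer for a Kryo-based one might look like. The KryoRedisSerializer class and the configuration class below are assumptions made for this example, not library classes, and a single Kryo instance is not thread-safe, so a real implementation would pool instances:

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.Input
import com.esotericsoftware.kryo.io.Output
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.data.redis.cache.RedisCacheConfiguration
import org.springframework.data.redis.serializer.RedisSerializationContext
import org.springframework.data.redis.serializer.RedisSerializer

// Hypothetical Kryo-backed serializer for Redis cache values.
class KryoRedisSerializer : RedisSerializer<Any> {

    private val kryo = Kryo().apply {
        isRegistrationRequired = false  // accept arbitrary classes for this sketch
    }

    override fun serialize(t: Any?): ByteArray? {
        if (t == null) return ByteArray(0)
        val output = Output(4096, -1)          // growable, unbounded buffer
        kryo.writeClassAndObject(output, t)
        return output.toBytes()
    }

    override fun deserialize(bytes: ByteArray?): Any? {
        if (bytes == null || bytes.isEmpty()) return null
        return kryo.readClassAndObject(Input(bytes))
    }
}

// Spring Boot's Redis cache auto-configuration picks up a RedisCacheConfiguration bean
// (assumption: Spring Boot 2.x with spring-boot-starter-data-redis on the classpath).
@Configuration
class RedisCacheConfig {

    @Bean
    fun redisCacheConfiguration(): RedisCacheConfiguration =
        RedisCacheConfiguration.defaultCacheConfig()
            .serializeValuesWith(
                RedisSerializationContext.SerializationPair.fromSerializer(KryoRedisSerializer()))
}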


3 Answers

37 votes

Hack way

For now, there is no fluent integration of @Cacheable with Reactor 3. However, you can bypass that by adding the .cache() operator to the returned Mono:

@Repository
interface TaskRepository : ReactiveMongoRepository<Task, String>

@Service
class TaskService(val taskRepository: TaskRepository) {

    @Cacheable("tasks")
    fun get(id: String): Mono<Task> = taskRepository.findById(id).cache()
}

That hack caches and shares the data returned from taskRepository. In turn, Spring's @Cacheable will cache a reference to the returned Mono and then return that same reference. In other words, it is a cache of a Mono which itself holds the cache :).
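To make that effect concrete, here is a small, hypothetical usage snippet (assuming taskService is a Spring-managed bean so the @Cacheable proxy is active). The repository is queried only once; later calls and subscriptions are served from memory:

val first: Mono<Task> = taskService.get("42")    // miss: builds the Mono, stores its reference in "tasks"
val second: Mono<Task> = taskService.get("42")   // hit: the very same Mono instance is returned

first.subscribe { println("first subscriber: $it") }    // triggers the MongoDB query once
second.subscribe { println("second subscriber: $it") }  // replayed by .cache(), no second query
println(first === second)  // true, because @Cacheable cached the Mono reference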

Reactor Addons Way

There is an addon for Reactor 3 which allows fluent integration with modern in-memory caches like Caffeine, JCache, etc. Using that technique you will be able to cache your data easily:

@Repository
interface TaskRepository : ReactiveMongoRepository<Task, String>

@Service
class TaskService(val taskRepository: TaskRepository) {

    @Autowired
    lateinit var manager: CacheManager

    fun get(id: String): Mono<Task> =
        CacheMono.lookup(reader(), id)
                 .onCacheMissResume { taskRepository.findById(id) }
                 .andWriteWith(writer())

    // Reads a previously cached Signal<Task> from the "tasks" cache; empty on a miss
    fun reader(): CacheMono.MonoCacheReader<String, Task> =
        CacheMono.MonoCacheReader<String, Task> { key ->
            Mono.justOrEmpty(manager.getCache("tasks")?.get(key)?.get() as? Signal<Task>)
        }

    // Writes the emitted Signal<Task> into the "tasks" cache
    fun writer(): CacheMono.MonoCacheWriter<String, Task> =
        CacheMono.MonoCacheWriter<String, Task> { key, value ->
            Mono.fromRunnable<Void> { manager.getCache("tasks")?.put(key, value) }
        }
} 

Note: the Reactor addons cache uses its own abstraction for cached values, which is Signal<T>, so do not worry about that and just follow that convention.
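For completeness, a minimal sketch of the same lookup backed by a plain ConcurrentHashMap instead of a Spring CacheManager (assuming the reactor-extra artifact io.projectreactor.addons:reactor-extra is on the classpath; MapCachedTaskService is just an illustrative name):

import org.springframework.stereotype.Service
import reactor.cache.CacheMono
import reactor.core.publisher.Mono
import reactor.core.publisher.Signal
import java.util.concurrent.ConcurrentHashMap

@Service
class MapCachedTaskService(val taskRepository: TaskRepository) {

    // the addon stores cached values as Signal<Task> entries in this map
    private val cache = ConcurrentHashMap<String, Signal<out Task>>()

    fun get(id: String): Mono<Task> =
        CacheMono.lookup<String, Task>(cache, id)                    // read a Signal<Task> from the map
                 .onCacheMissResume { taskRepository.findById(id) }  // query Mongo on a miss, result is written back
}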

3 votes

I have used Oleh Dokuka's hacky solution and it worked great, but there is a catch: you must use a longer duration in the reactive cache() than your Spring cache's time-to-live value. If you don't pass a duration to cache(), it will never be invalidated (the Flux documentation says: "Turn this Flux into a hot source and cache last emitted signals for further Subscriber."). So making the reactive cache 2 minutes and the time-to-live 30 seconds can be a valid configuration. If the Ehcache timeout occurs first, a new cached Flux/Mono reference is generated and used.
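In code, that combination could look like this (the 2-minute / 30-second values are just the example figures from this answer; the 30-second time-to-live itself is configured in the backing cache, e.g. Ehcache):

import java.time.Duration

@Cacheable("tasks")   // backing cache entry configured with e.g. timeToLive = 30 seconds
fun get(id: String): Mono<Task> =
    taskRepository.findById(id)
        .cache(Duration.ofMinutes(2))   // replayed value is kept longer than the cache entry's TTL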

1 vote

// In a Facade:

public Mono<HybrisResponse> getProducts(HybrisRequest request) {
    return Mono.just(HybrisResponse.builder().build());
}

// In a service layer:

// On a cache hit Spring returns the cached response without executing the body;
// on a miss the body runs and simply returns null.
@Cacheable(cacheNames = "embarkations")
public HybrisResponse cacheable(HybrisRequest request) {
    LOGGER.info("executing cacheable");
    return null;
}

// Fetches the response (blocking on the Mono) and puts it into the cache.
@CachePut(cacheNames = "embarkations")
public HybrisResponse cachePut(HybrisRequest request) {
    LOGGER.info("executing cachePut");
    return hybrisFacade.getProducts(request).block();
}

// In a Controller:

// get from cache (returns null when the value is not cached yet)
HybrisResponse hybrisResponse = productFeederService.cacheable(request);

if (hybrisResponse == null) {
    // not in cache yet, so fetch it and put it into the cache
    hybrisResponse = productFeederService.cachePut(request);
}

return Mono.just(hybrisResponse)
    .map(result -> ResponseBody.<HybrisResponse>builder()
        .payload(result).build())
    .map(ResponseEntity::ok);