
I am new to Lagom and to the concepts of Persistent Entities and persistence databases.

I am building a streaming analytics engine. Each analysis runs as an independent microservice, and according to its design philosophy each microservice stores its results in its own database (in my case Cassandra). I am using Flink and Spark for the streaming analysis, and the results are sunk to Cassandra using Phantom for Flink (a Scala driver for Cassandra). I am struggling with the following challenges in the Lagom framework:

  1. To store the analytics results, do I still need to implement a Persistent Entity (P.E.), or should I bypass it and write directly to Cassandra? My application supports neither delete nor update, only inserts used to visualize the results. Flink and Spark already provide fault tolerance.

  2. How can I get access to the Cassandra session without Persistent Entities?

  3. If I use the Phantom driver in Lagom, it conflicts with Lagom's embedded Cassandra, and the service cannot be registered with the Service Locator.

Can you please suggest how I should proceed in this situation? In other words, each microservice's architecture is based on the Kappa architecture.

Thanks


1 Answer


If you have a stream of events, each microservice consuming it can either keep a copy of all events or maintain a materialized view. An example of such a microservice can be seen in the search-service of the online-auction sample app. In the linked code there is a class consuming two different streams (in this case Kafka topics) and storing the data into an Elasticsearch index. The same could be achieved with Cassandra or another database.
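As a rough sketch of that pattern (not the actual search-service code), a consumer could subscribe to the upstream topic and insert every event into its own store. The `AnalyticsResult` type, the `ResultRepository` trait, and the way the topic is obtained are assumptions for illustration only:

```scala
import akka.Done
import akka.stream.scaladsl.Flow
import com.lightbend.lagom.scaladsl.api.broker.Topic
import scala.concurrent.Future

// Hypothetical event and repository types, here only for illustration.
final case class AnalyticsResult(id: String, value: Double)

trait ResultRepository {
  def insert(result: AnalyticsResult): Future[Done]
}

// Consumer of the upstream topic: every event is stored, insert-only,
// mirroring the "materialized view" approach used by the search-service.
class AnalyticsConsumer(
    resultsTopic: Topic[AnalyticsResult], // obtained from the upstream service client
    repository: ResultRepository
) {
  resultsTopic.subscribe.atLeastOnce(
    Flow[AnalyticsResult].mapAsync(parallelism = 1)(repository.insert)
  )
}
```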

You may face further problems if you try to pull in a Cassandra driver on top of what's provided by Lagom. In that case I would suggest that you either (1) don't depend on any lagom-persistence-xxx module, so that only your driver is used, or (2) use the CassandraSession provided by Lagom's lagomScaladslPersistenceCassandra module (see the Lagom Persistence docs). If you choose the second option, you have to add CassandraSession to the constructor of your class, and the dependency injection in your Loader will make sure the appropriate instance is provided. See how in the linked code there are three arguments in the constructor and the Loader uses Macwire to inject them. Note that you will have to mix in the ReadSideCassandraPersistenceComponents trait so that CassandraSession can be injected.
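A minimal sketch of option (2) follows, assuming a hypothetical `ResultRepository` and table/column names; the repository receives Lagom's CassandraSession through its constructor and the application cake mixes in ReadSideCassandraPersistenceComponents so Macwire can wire it:

```scala
import akka.Done
import com.lightbend.lagom.scaladsl.persistence.cassandra.{CassandraSession, ReadSideCassandraPersistenceComponents}
import com.lightbend.lagom.scaladsl.server.{LagomApplication, LagomApplicationContext}
import com.softwaremill.macwire._
import play.api.libs.ws.ahc.AhcWSComponents
import scala.concurrent.{ExecutionContext, Future}

// Repository backed by Lagom's CassandraSession. Table and column names
// (analytics_results, id, value) are illustrative assumptions.
class ResultRepository(session: CassandraSession)(implicit ec: ExecutionContext) {
  def insert(id: String, value: Double): Future[Done] =
    session.executeWrite(
      "INSERT INTO analytics_results (id, value) VALUES (?, ?)",
      id, java.lang.Double.valueOf(value)
    )
}

// Mixing in ReadSideCassandraPersistenceComponents exposes cassandraSession,
// so wire[ResultRepository] can inject it, as the Loader does in the sample app.
abstract class AnalyticsApplication(context: LagomApplicationContext)
    extends LagomApplication(context)
    with ReadSideCassandraPersistenceComponents
    with AhcWSComponents {

  lazy val resultRepository: ResultRepository = wire[ResultRepository]

  // lagomServer would normally bind your concrete service implementation, e.g.:
  // override lazy val lagomServer = serverFor[AnalyticsService](wire[AnalyticsServiceImpl])
}
```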