Do you know Akka-HTTP Cache?

 


Today we are going to talk about a very important aspect of reactive web application development. It is not from an alien world; we talk about it all the time, but we rarely want to deal with it. It's the cache. In today's mammoth scalable architectures we are mostly occupied with big design issues, and here and there we neglect the benefit of a very useful concept: caching.


Here the superman of the reactive world, Lightbend Inc., gave us a solution to this problem as well: Akka-HTTP caching. It is built on top of Caffeine, a highly efficient caching library based on Java 8. It gives us the ability to implement caching in a highly concurrent, asynchronous environment, which makes it special, since scale and concurrency are inseparable when building a robust application that can handle zillions of users.

Akka-HTTP provides caching in two different forms: request-response caching (via caching directives) and object caching. In this post, we will discuss object caching in Akka-HTTP.

Many a time we need to cache objects that are expensive to compute but are served to many client requests. In such cases the cache saves us from recomputing those objects again and again; instead we serve requests directly from the value that was stored in the cache when the first request arrived.

This is handled very well by the Akka-HTTP caching solution, backed by Caffeine under the hood.

Let’s roll up our sleeves and write some real code to explain it better.

This is the driver object of our Akka-HTTP application containing the main method. We instantiate the cache here with LfuCache, the implementation of the Least Frequently Used cache strategy. This is a frequency-based strategy where eviction of a cached object depends on how often it is accessed. Internally, an access counter is maintained and incremented on each access to a key. The counter saturates at a value of 15, and once it would grow beyond that, all counters are periodically halved (downsampled). In effect the access frequency is tracked over a small time window, which keeps each counter at only 4 bits and saves storage space. For its admission policy it follows TinyLFU (based on Bloom-filter-style sketches), a very efficient, window-based approach for deciding which keys are admitted to and evicted from the cache. For further reading, you can refer to TinyLFU.
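The original driver object is not reproduced here, so as a minimal sketch of how such a cache could be instantiated with the Scala API (assuming the akka-http-caching module is on the classpath; the capacity and expiry numbers are only placeholders):

```scala
import akka.actor.ActorSystem
import akka.http.caching.LfuCache
import akka.http.caching.scaladsl.{Cache, CachingSettings}

import scala.concurrent.duration._

object CacheDemo extends App {
  implicit val system: ActorSystem = ActorSystem("cache-demo")

  // Start from the default settings and tune the LFU-specific knobs.
  val defaultSettings = CachingSettings(system)
  val lfuSettings = defaultSettings.lfuCacheSettings
    .withInitialCapacity(16)        // placeholder values
    .withMaxCapacity(512)
    .withTimeToLive(30.minutes)
    .withTimeToIdle(10.minutes)

  // A frequency-based (LFU) cache from product id to its computed price.
  val productPriceCache: Cache[String, Double] =
    LfuCache(defaultSettings.withLfuCacheSettings(lfuSettings))
}
```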

Akka-HTTP played a smart move here: instead of providing separate set and get methods for the cache, it gives us the higher-order function getOrLoad(key, loadValue), where loadValue is a function that computes the value corresponding to the key and returns it wrapped in a Future.
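Continuing inside the same driver object sketched above, a rough example of how getOrLoad might be used; computeProductPrice is a hypothetical expensive computation, not part of the API:

```scala
import scala.concurrent.Future
import system.dispatcher // ExecutionContext for the Futures below

// Hypothetical heavy computation, e.g. a slow database query or remote call.
def computeProductPrice(productId: String): Future[Double] =
  Future {
    Thread.sleep(500) // simulate the expensive part
    99.99
  }

// The first call for "p-42" runs computeProductPrice and caches the result;
// every later call for "p-42" is answered straight from the cache.
val price: Future[Double] = productPriceCache.getOrLoad("p-42", computeProductPrice)
```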

It's not over yet; superheroes should take care of supervillains too. Here the villain is concurrency: what happens when concurrent requests all want to cache the same object at the same time? How do we handle this situation?

Solution

The whole process is asynchronous, and it avoids caching multiple copies of the same object. On the first access of a key, the (not yet completed) Future itself is put into the cache; subsequent requests for that key get the same Future back from the cache, either already completed with the value or still in flight, so the computation runs only once.
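A small sketch of that behaviour, reusing the cache setup from the earlier snippets; the counter exists only to show that the expensive loader runs once even under many concurrent requests:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.Future

val computations = new AtomicInteger(0)

def slowLoad(key: String): Future[Int] = Future {
  computations.incrementAndGet() // counts how many times we really compute
  Thread.sleep(200)
  key.length
}

val sharedCache: Cache[String, Int] = LfuCache(CachingSettings(system))

// 100 concurrent requests for the same key...
val all: Future[Seq[Int]] =
  Future.sequence((1 to 100).map(_ => sharedCache.getOrLoad("same-key", slowLoad)))

// ...all complete with the same value, while `computations` stays at 1:
// the first caller stored the in-flight Future and everyone else reused it.
```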

 

Here, every time a product is added to the cart, the total cart value is recomputed using the price of the newly added product, and the cache key CART_VALUE is refreshed accordingly. But whenever the cart value is merely read, we do not recompute the total; we simply look up CART_VALUE in the cache and return the cached value.

Here, computing the cart value represents a heavy computation that we do not want to run for every single request; instead we serve it from the cache to avoid redundant work. In a real-life scenario it could be a call to some third-party API that we want to skip whenever the business use case tells us the response has not changed.
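A minimal, self-contained sketch of that idea; the Item type and the in-memory cart storage are made up for illustration, only the cache calls are the Akka-HTTP API, and the write path here simply invalidates the key so the next read refreshes it:

```scala
import akka.actor.ActorSystem
import akka.http.caching.LfuCache
import akka.http.caching.scaladsl.{Cache, CachingSettings}

import scala.concurrent.Future

object CartCacheSketch {
  implicit val system: ActorSystem = ActorSystem("cart-demo")
  import system.dispatcher

  final case class Item(name: String, price: BigDecimal)
  @volatile private var cartItems: List[Item] = Nil // stand-in for real cart storage

  private val CartValueKey = "CART_VALUE"
  private val cartCache: Cache[String, BigDecimal] = LfuCache(CachingSettings(system))

  // The "heavy computation": in real life a database hit or a third-party API call.
  private def computeCartValue(key: String): Future[BigDecimal] =
    Future(cartItems.map(_.price).sum)

  // Read path: the first call computes and caches the total, later calls hit the cache.
  def cartValue: Future[BigDecimal] = cartCache.getOrLoad(CartValueKey, computeCartValue)

  // Write path: adding a product invalidates CART_VALUE, so the next read refreshes it.
  def addToCart(item: Item): Unit = {
    cartItems = item :: cartItems
    cartCache.remove(CartValueKey)
  }
}
```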

For the complete application with details, please click here.

For further reading, Lightbend's official Akka-HTTP documentation is available here.

Hope this will help you to build even better Reactive Applications to serve the world ♥


Amazon Rekognition: Welcome to Visual Recognition

We have always been captivated ❤ and mesmerized 😮 by movies like James Bond, dreaming that those technologies would one day become reality, and here we are, living in a world of Artificial Intelligence and Deep Learning at the edge of a technology-driven ecosystem. Visual recognition has been around for a long time, but no product handled its use cases in a robust manner until, after a decade of research and analysis, Amazon launched "Rekognition", a deep-learning-based image detection and analysis product.

I have tried to give you a brief overview of Amazon Rekognition through a video tutorial, which covers setting up the AWS command line interface, creating a Maven project, and a demo of how to build an application using AWS Rekognition.
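For a flavour of the kind of call the demo builds up to, here is a rough detect-labels sketch against the AWS SDK for Java (written in Scala to stay consistent with the other snippets in this article); the bucket and image names are placeholders, and credentials and region are assumed to be configured via `aws configure`:

```scala
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder
import com.amazonaws.services.rekognition.model.{DetectLabelsRequest, Image, S3Object}

import scala.collection.JavaConverters._

object RekognitionDemo extends App {
  // Picks up the credentials and region configured through the AWS CLI.
  val rekognition = AmazonRekognitionClientBuilder.defaultClient()

  val request = new DetectLabelsRequest()
    .withImage(new Image().withS3Object(
      new S3Object().withBucket("my-demo-bucket").withName("photo.jpg"))) // placeholders
    .withMaxLabels(10)
    .withMinConfidence(75f)

  // Print every label Rekognition detects in the image, with its confidence score.
  rekognition.detectLabels(request).getLabels.asScala
    .foreach(label => println(s"${label.getName}: ${label.getConfidence}"))
}
```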

Keep adding leaves to the n-ary tree of knowledge, but don't let it get skewed 🙂

Lagom: Let’s Create Micro Services Application

In the previous post we introduced you to our new hero, microservices; now let's create our first application. Lightbend Inc. created the Lagom framework for building reactive microservices. It is really easy to build microservice applications with Lagom because of the reduced boilerplate tasks and code.

Here is an architecture overview of a Lagom application. Our application is basically divided into two components:

  1. application-api (here we define the entity objects and the API endpoints)
  2. application-impl (here we provide the implementation of our services)

Prerequisites

  • Knowledge of core Java and the basics of Java 8 functional programming
  • An overview of the Lagom framework for microservices
  • Any IDE supporting Java development (we are using IntelliJ IDEA)
  • Cassandra (basic CQL queries; we are using the embedded Cassandra provided by Lagom). Kafka is used internally by the Lagom framework for message publishing.
  • A basic idea of Lombok, which we use for automated creation of immutable objects (just to keep the code cleaner and more concise)
  • JUnit testing framework

Architecture Overview

A Lagom application works in a read/write-segregated mode: read-side operations only read from the persistent storage system and never write to it, while write-side operations make changes to the persistent storage (here we are using the embedded Cassandra provided by the Lagom framework).

  1. First we create the service interface, which contains the entity objects and the abstract methods that define the service endpoints exposed to the user, along with their mappings.
  2. Next we create the service implementation corresponding to the abstract methods in the service-api. This is the part to focus on, since this is where we define our services and get our hands dirty with code.

Below is a snapshot of the Lagom project; this is how the project structure looks.


Book.java is the entity class; here we declare the entity that is to be persisted into the database (other data stores can be used, but we are using the embedded Cassandra). You might be curious why we have not defined getter and setter methods, but notice the Lombok annotations such as @Builder: Lombok generates the builder and accessor methods on the fly.

BookService.java extends the Service interface of the Lagom framework; here we declare an abstract method for each service call. Each method returns a ServiceCall, which describes how the service is invoked by the Lagom framework and carries the request and response type parameters. Inside the descriptor we specify the mapping of call identifiers to the methods to be invoked; here we use the restCall identifier.
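The post's code uses Lagom's Java API with Lombok; purely as an illustration of the same shape, here is a rough sketch of the entity and the service descriptor written against Lagom's Scala API (the service name, paths, and Book fields are made up):

```scala
import akka.{Done, NotUsed}
import com.lightbend.lagom.scaladsl.api.transport.Method
import com.lightbend.lagom.scaladsl.api.{Descriptor, Service, ServiceCall}
import play.api.libs.json.{Format, Json}

// The entity to persist; the JSON Format plays the role Lombok/Jackson play on the Java side.
final case class Book(id: String, title: String, author: String)
object Book {
  implicit val format: Format[Book] = Json.format[Book]
}

trait BookService extends Service {
  def addBook(): ServiceCall[Book, Done]
  def getBook(id: String): ServiceCall[NotUsed, Book]

  // The descriptor maps call identifiers to the service methods.
  override final def descriptor: Descriptor = {
    import Service._
    named("bookservice")
      .withCalls(
        restCall(Method.POST, "/api/books", addBook _),
        restCall(Method.GET, "/api/books/:id", getBook _)
      )
      .withAutoAcl(true)
  }
}
```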

There are three types of call identifiers (see the sketch after this list):

  1. Named call: we map a name (the method name itself or any alternative name) to the corresponding service method.
  2. Path call: as the name suggests, we specify the URI path of the resource mapped to the service method.
  3. Rest call: we provide a URL pattern to be mapped to the service method, along with the HTTP method (GET, POST, PUT, DELETE, etc.).
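As a rough Scala-API sketch of the three styles side by side (sayHello, getBook and addBook are hypothetical service calls used only to illustrate the identifiers):

```scala
import akka.{Done, NotUsed}
import com.lightbend.lagom.scaladsl.api.transport.Method
import com.lightbend.lagom.scaladsl.api.{Descriptor, Service, ServiceCall}

trait CallIdentifierExamples extends Service {
  def sayHello(): ServiceCall[String, String]           // hypothetical calls, for illustration only
  def getBook(id: String): ServiceCall[NotUsed, String]
  def addBook(): ServiceCall[String, Done]

  override final def descriptor: Descriptor = {
    import Service._
    named("identifier-demo").withCalls(
      namedCall("hello", sayHello _),                   // 1. named call: just a name
      pathCall("/api/books/:id", getBook _),            // 2. path call: a URI path with parameters
      restCall(Method.POST, "/api/books", addBook _)    // 3. rest call: URL pattern + HTTP method
    )
  }
}
```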

Here goes BookServiceImpl.java, where we implement the abstract methods of the service API. The Lagom framework creates and provides the Cassandra instance; we use a persistent entity registry and a Cassandra session, which let us read from and persist data to Cassandra.
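Again sticking with the Scala-API illustration rather than the post's Java code, a trimmed sketch of such an implementation (for the BookService trait sketched earlier) might look like this; BookEntity and the AddBook command are hypothetical write-side classes that are not defined here, while the CassandraSession query shows the read path:

```scala
import akka.{Done, NotUsed}
import com.lightbend.lagom.scaladsl.api.ServiceCall
import com.lightbend.lagom.scaladsl.api.transport.NotFound
import com.lightbend.lagom.scaladsl.persistence.PersistentEntityRegistry
import com.lightbend.lagom.scaladsl.persistence.cassandra.CassandraSession

import scala.concurrent.ExecutionContext

class BookServiceImpl(registry: PersistentEntityRegistry, session: CassandraSession)
                     (implicit ec: ExecutionContext) extends BookService {

  // Write side: hand the command to the persistent entity for this book id.
  // BookEntity and AddBook are hypothetical and would live in application-impl.
  override def addBook(): ServiceCall[Book, Done] = ServiceCall { book =>
    registry.refFor[BookEntity](book.id).ask(AddBook(book))
  }

  // Read side: query the (embedded) Cassandra store directly.
  override def getBook(id: String): ServiceCall[NotUsed, Book] = ServiceCall { _ =>
    session
      .selectOne("SELECT id, title, author FROM books WHERE id = ?", id)
      .map {
        case Some(row) => Book(row.getString("id"), row.getString("title"), row.getString("author"))
        case None      => throw NotFound(s"Book $id not found")
      }
  }
}
```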


Let’s get started with Reactive Micro Services

 

"Unity is Strength" is what we usually take as a de facto truth, but that does not hold when it comes to the behaviour of web services. Monolithic applications (tightly coupled, highly interdependent applications) are what existed earlier, but in recent times the need arose for something called microservices, which allow large enterprise applications to be broken down into independent components that communicate with each other whenever they need interdependent information.

The need for microservices arises from today's fast-paced, dynamic requirements. Let's take a practical scenario: xyz.com plans to run a sale on electronics items for just two days, but xyz.com hosts many categories of products on its system, e.g. clothing, groceries, beauty products, household products, accessories, etc. On the sale days there will be huge traffic, mostly requesting electronics items, so logically we should be able to scale up only the electronics category for those two days, and this is where a monolithic application fails 😦 . Because monolithic applications are tightly coupled, we cannot scale a particular component up or down, and segregating (re-engineering) the different components is costlier than building an entire application. Here comes our Batman, the saviour: microservices 🙂


 

REACTIVE in Reactive micro services

Reactive is the feather in the cap of the microservices architecture; it basically stands for a bunch of additional qualities: scalable, robust, distributed, resilient, and responsive. Let's take these one by one:

  1. Scalable: one of the core reasons microservices exist; the application should scale up and down as needed.
  2. Robust: without robustness (consistent performance) no additional functionality counts. Microservices are designed so that their downtime is almost zero.
  3. Distributed: microservices are distributed in nature, which means there is no single point of failure.
  4. Resilient: because of their distributed nature, microservices are resilient, i.e. they can recover from failure in almost no time. The microservices architecture also lets the application be eventually consistent by using event logs (the stable state is reached once the events are processed).
  5. Responsive: even the best system, if it is not fast and consistent, will probably not survive for long. Microservices are designed to keep the overall system fast and consistent.

In the next post we will meet this new superhero on the battlefield and try to save the world from monolithic applications ♥

References and Further Reading

https://inform.tmforum.org/nfv-it-transformation/2017/02/what-are-microservices-and-why-should-you-care/

http://downloads.lightbend.com/website/reactive-microservices-architecture/Reactive_Microservices_Architecture.pdf