December 31, 2019

ET Rising Indian Anusha Srinivasan Iyer Wins Big At Perfect Achievers Award 2019

Anusha Srinivasan Iyer, a Writer-Director-Media Strategist, TEDx Speaker, Life Coach, Social Entrepreneur, and Gender, Animal and Environment Activist, won big at the Perfect Achievers Award 2019, a one-of-a-kind award function recognizing and empowering women.

Shruti Jain bagged the title 'Perfect Miss 2019 Of India'

Mr Gurubhai Thakkar, Dr. Khooshi Gurubhai and Dr. Geet S Thakkar, on behalf of Perfect Woman Pvt. Ltd, organised the 'Perfect Miss 2019 of India' pageant, where 15 top finalists from all over India walked the ramp to win the crown.

December 26, 2019

December 16, 2019

#GoodWitch: Season 3-Episode 12: I Do dialogue

They're about to say, 'I do'.
Three little letters, two little words. It's the simplest part of the day.
But there's nothing simple about what will remain unsaid.

December 12, 2019

BARC TV Ratings- Week 49 2019

Top 10 Hindi GEC Channel
  1. Dangal : 1161909
  2. STAR Plus : 677895
  3. Zee TV : 612462
  4. SONY SAB : 602767
  5. Colors : 571235
  6. Big Magic : 463489
  7. Sony Entertainment Television : 437339
  8. STAR Bharat : 299048
  9. STAR Utsav : 215556
  10. Colors Rishtey : 175374

Priyaank Sharma relives fond childhood memories with cousins Shraddha Kapoor and Siddhanth Kapoor

Actress Padmini Kolhapure's son Priyaank Sharma is all set to make his Bollywood debut with Karan Vishwanath Kashyap's Sab Kushal Mangal. While the budding actor is gearing up for the release of his upcoming rom-com alongside newbie Riva Kishan and Akshaye Khanna, the fact that he is a star kid has never bothered him.

December 08, 2019

What is an API Gateway?


What is the API Gateway pattern?
Let's start with a use case. Assume we are developing an application where users can purchase products. Here is the list of web services available (a minimal gateway sketch follows the list):

  • /home
  • /productdetails
  • /productdetails/add
  • /productdetails/delete
  • /cartdetails/get
  • /cartdetails/add
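
In an API gateway setup, clients call a single entry point and the gateway routes each request to the service that owns it. Below is a minimal sketch using Spring Cloud Gateway; the downstream hosts and ports are assumptions for illustration, not part of the original example.

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // Route /productdetails/** and /cartdetails/** through the gateway
    // to the (assumed) downstream services that own those resources.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("products", r -> r.path("/productdetails/**")
                        .uri("http://localhost:8082")) // assumed product service
                .route("cart", r -> r.path("/cartdetails/**")
                        .uri("http://localhost:8083")) // assumed cart service
                .build();
    }
}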


HATEOAS and Richardson Maturity Model

The Richardson Maturity Model (RMM), proposed by Leonard Richardson, helps organize your REST APIs into four levels. Here are the four levels of RMM (a Level 3 sketch follows the list):

  • Level 0: The Swamp of POX
  • Level 1: Resources
  • Level 2: HTTP Verbs
  • Level 3: Hypermedia Control
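
Level 3 (hypermedia controls, i.e. HATEOAS) means a response carries links to related resources that the client can follow next. Here is a minimal sketch with Spring HATEOAS; the Product type and the inline lookup are hypothetical stand-ins.

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

import org.springframework.hateoas.EntityModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProductController {

    public static class Product { // hypothetical resource type
        public long id;
        public String name;
    }

    @GetMapping("/productdetails/{id}")
    public EntityModel<Product> getProduct(@PathVariable("id") long id) {
        Product product = new Product(); // stand-in for a repository lookup
        product.id = id;
        product.name = "sample";
        // Level 3: the representation carries hypermedia links.
        return EntityModel.of(product,
                linkTo(methodOn(ProductController.class).getProduct(id)).withSelfRel());
    }
}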

December 07, 2019

Part 17: Microservices (CQRS and Event Sourcing)

Event Sourcing
A shared database is not recommended in a microservices-based approach because, if there is a change in one data model, other services are also impacted. As part of microservices best practices, each microservice should have its own database.

Let's say you have Customer and Order microservices running in their separate containers. While the Order service takes care of creating, deleting, updating, and retrieving order data, the Customer service works with customer data.
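
Event sourcing pairs naturally with this setup: instead of storing only the current state, a service persists every state change as an immutable event and rebuilds state by replaying them. A minimal, framework-free sketch (all names hypothetical):

import java.util.ArrayList;
import java.util.List;

public class OrderEventSourcingDemo {

    // Each state change to an Order is captured as an event.
    interface OrderEvent { }
    static class OrderCreated implements OrderEvent { }
    static class OrderShipped implements OrderEvent { }

    // The event store only ever appends; nothing is updated in place.
    static class OrderEventStore {
        private final List<OrderEvent> log = new ArrayList<>();
        void append(OrderEvent event) { log.add(event); }
        List<OrderEvent> events() { return log; }
    }

    public static void main(String[] args) {
        OrderEventStore store = new OrderEventStore();
        store.append(new OrderCreated());
        store.append(new OrderShipped());

        // Current state is derived by replaying the events in order.
        String status = "NONE";
        for (OrderEvent event : store.events()) {
            if (event instanceof OrderCreated) status = "CREATED";
            if (event instanceof OrderShipped) status = "SHIPPED";
        }
        System.out.println("Order status: " + status); // prints SHIPPED
    }
}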

December 06, 2019

Part 16: Microservices (Implementing Circuit Breaker and Bulkhead patterns using Resilience4j)

Circuit Breaker Pattern
The circuit breaker is a resilience pattern that helps prevent cascading failures. A circuit breaker is used to isolate a faulty service: it wraps a fragile function call (or an integration point with an external service) in a special (circuit breaker) object, which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all.
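
A minimal Resilience4j sketch of wrapping a fragile call; the threshold values, breaker name and failing backend below are assumptions for illustration:

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

import java.time.Duration;
import java.util.function.Supplier;

public class CircuitBreakerDemo {
    public static void main(String[] args) {
        // Trip the circuit when 50% of recent calls fail; stay open for 30s.
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50)
                .waitDurationInOpenState(Duration.ofSeconds(30))
                .build();
        CircuitBreaker breaker = CircuitBreakerRegistry.of(config)
                .circuitBreaker("departmentService"); // assumed breaker name

        // Wrap the fragile call; once the breaker is open, calls fail fast
        // without the protected call being made at all.
        Supplier<String> decorated = CircuitBreaker.decorateSupplier(
                breaker, CircuitBreakerDemo::callDepartmentService);

        try {
            System.out.println(decorated.get());
        } catch (Exception e) {
            System.out.println("fallback response"); // manual fallback
        }
    }

    static String callDepartmentService() { // hypothetical integration point
        throw new RuntimeException("service is down");
    }
}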

ALSO READ: Part 6: Microservices (Fault Tolerance, Resilience, Circuit Breaker Pattern)

December 05, 2019

Part 15: Microservices (Saga Pattern)

Data Management
Every application relies on data and the success or failure of any business relies on efficient data management.

Data management in a monolithic system can get pretty complex. However, it is a completely different story if you are using a microservices architecture.

Here are a couple of data management patterns for microservices:
  • Database per Service - each service has its own private database
  • Shared database - services share a database
  • Saga - use sagas, which are sequences of local transactions, to maintain data consistency across services (see the sketch after this list)
  • API Composition - implement queries by invoking the services that own the data and performing an in-memory join
  • CQRS - implement queries by maintaining one or more materialized views that can be efficiently queried
  • Domain event - publish an event whenever data changes
  • Event sourcing - persist aggregates as a sequence of events
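
As a sketch of the saga pattern from the list above: each step is a local transaction in one service, and a failure triggers compensating transactions for the steps already completed. All service and method names here are hypothetical.

public class OrderSagaOrchestrator {

    private final OrderService orderService;
    private final PaymentService paymentService;

    public OrderSagaOrchestrator(OrderService orderService, PaymentService paymentService) {
        this.orderService = orderService;
        this.paymentService = paymentService;
    }

    public void placeOrder(String customerId, double amount) {
        String orderId = orderService.createOrder(customerId); // local transaction 1
        try {
            paymentService.charge(orderId, amount);            // local transaction 2
        } catch (RuntimeException paymentFailed) {
            orderService.cancelOrder(orderId);                 // compensating transaction
            throw paymentFailed;
        }
    }

    // Hypothetical service interfaces, each backed by its own database.
    interface OrderService {
        String createOrder(String customerId);
        void cancelOrder(String orderId);
    }

    interface PaymentService {
        void charge(String orderId, double amount);
    }
}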

BARC TV Ratings- Week 48, 2019

Top 10 Hindi GEC Channel
  1. Dangal : 1181474
  2. STAR Plus : 682152
  3. SONY SAB : 608366
  4. Zee TV : 600399
  5. Colors : 537344
  6. Sony Entertainment Television : 502121
  7. Big Magic : 443528
  8. STAR Bharat : 297846
  9. STAR Utsav : 183197
  10. Colors Rishtey : 150552

Part 14: Microservices (Observability Patterns)

Observability
  • Log aggregation - aggregate application logs
  • Application metrics - instrument a service’s code to gather statistics about operations
  • Audit logging - record user activity in a database
  • Distributed tracing - instrument services with code that assigns each external request a unique identifier that is passed between services. Record information (e.g. start time, end time) about the work (e.g. service requests) performed when handling the external request in a centralized service
  • Exception tracking - report all exceptions to a centralized exception tracking service that aggregates and tracks exceptions and notifies developers.
  • Health check API - a service API (e.g. an HTTP endpoint) that returns the health of the service and can be pinged, for example, by a monitoring service (see the sketch after this list)
  • Log deployments and changes
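
As an example of the health check API pattern, here is a minimal Spring Boot Actuator sketch that contributes to the /actuator/health endpoint; the database ping is a hypothetical check.

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class DatabaseHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        // A monitoring service can ping /actuator/health to read this status.
        boolean reachable = pingDatabase();
        return reachable
                ? Health.up().build()
                : Health.down().withDetail("database", "unreachable").build();
    }

    private boolean pingDatabase() {
        return true; // placeholder for a real connectivity check
    }
}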

December 04, 2019

Part 13: Microservices (Deployment Patterns)

Deployment Patterns
  • Multiple service instances per host - deploy multiple service instances on a single host
  • Service instance per host - deploy each service instance in its own host
  • Service instance per VM - deploy each service instance in its own VM
  • Service instance per Container - deploy each service instance in its own container
  • Serverless deployment - deploy a service using a serverless deployment platform
  • Service deployment platform - deploy services using a highly automated deployment platform that provides a service abstraction

Part 12: Microservices (Decomposition)

How to decompose an application into services?
1). Decompose by business capability pattern: define services corresponding to business capabilities
2). Decompose by subdomain pattern: define services corresponding to DDD subdomains
3). Self-contained Service pattern: design services to handle synchronous requests without waiting for other services to respond
4). Service per team

Part 11: Microservices(Spring Cloud Config Server)


Cross Cutting Concerns
Cross-cutting concerns can be handled by:
  • Microservice chassis - a framework that handles cross-cutting concerns and simplifies the development of services.
  • Externalized configuration - externalize all configuration, such as database location and credentials.
In the case of the Externalized configuration pattern, we externalize all application configuration, including the database credentials and network location. On startup, a service reads the configuration from an external source, e.g. OS environment variables.

We can configure properties in application.properties. Configuration can be achieved in many ways (a sketch of option 4 follows the list):
1). A single application.properties inside the jar.
2). Multiple application.properties files inside the jar for different profiles.
3). An application.properties outside the jar, kept at the same location as the jar. When the jar is executed, this application.properties overrides the one inside the jar.
4). application.properties files for different profiles at some other location; while starting the jar, we can pass the active profile and the path of the property files.
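
For example, option 4 might be launched like this; the jar name matches the demo project in this series, while the profile name and config path are assumptions:

java -jar depart-employee-details.jar \
    --spring.profiles.active=prod \
    --spring.config.location=file:/opt/config/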

Part 10: Microservices (Configuring Spring Boot Application: @Value, @ConfigurationProperties)

Spring Boot lets you externalize your configuration so that you can work with the same application code in different environments. You can use properties files, YAML files, environment variables, and command-line arguments to externalize configuration.

Using property file config with Spring Boot

Let's say you have configured below property in Spring Boot's application.properties:
our.greeting=Hello

Now you want to use this property in your application. Let's say we have a greetMe() API inside GreetingController; this API returns the message configured in the property file. To read the message from the property file, we can use 'value injection': we can inject property values directly into our beans by using the @Value annotation, e.g.:
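
A minimal sketch (the /greet mapping is an assumption; our.greeting and greetMe() come from the text above):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // Injects the value of 'our.greeting' from application.properties ("Hello").
    @Value("${our.greeting}")
    private String greeting;

    @GetMapping("/greet") // assumed mapping
    public String greetMe() {
        return greeting;
    }
}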

December 03, 2019

Part 9: Microservices (Bulkhead Pattern using Hystrix)

What is Bulkhead Pattern?
Bulkheads in ships separate components or sections of a ship such that if one portion of a ship is breached, flooding can be contained to that section.

Once contained, the ship can continue operations without risk of sinking.

In this fashion, ship bulkheads perform a similar function to physical building firewalls, where the firewall is meant to contain a fire to a specific section of the building.

The microservice bulkhead pattern is analogous to the bulkhead on a ship. The goal of the bulkhead pattern is to prevent faults in one part of a system from taking the entire system down. By separating both functionality and data, failures in one component of a solution do not propagate to other components. This is most commonly employed to help scale what might otherwise be monolithic datastores.
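
With Hystrix, the bulkhead is typically a dedicated thread pool per downstream dependency, so exhausting one pool cannot starve calls to other dependencies. A minimal sketch; the pool name, sizes and the remote call are assumptions:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;
import org.springframework.stereotype.Service;

@Service
public class EmployeeClient {

    // Calls to the employee service run in their own small thread pool;
    // if that pool fills up, other dependencies are unaffected.
    @HystrixCommand(
            fallbackMethod = "employeeFallback",
            threadPoolKey = "employeePool",
            threadPoolProperties = {
                    @HystrixProperty(name = "coreSize", value = "10"),
                    @HystrixProperty(name = "maxQueueSize", value = "5")
            })
    public String getEmployeeDetails() {
        return callEmployeeService(); // hypothetical remote call
    }

    public String employeeFallback() {
        return "employee details unavailable";
    }

    private String callEmployeeService() {
        throw new RuntimeException("service is down");
    }
}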

Part 8: Microservices (Hystrix Dashboard)


What is Hystrix Dashboard and how can we add it in our Spring Boot App?
Hystrix also provides an optional feature to monitor all of your circuit breakers in a visually-friendly fashion.

Steps to add Hystrix Dashboard:
1). We need to add spring-cloud-starter-netflix-hystrix-dashboard and spring-boot-starter-actuator in the pom.xml.
2). To enable it we have to add the @EnableHystrixDashboard annotation to our main class.
3). Also, in our application.properties, let's include the stream: 'management.endpoints.web.exposure.include=hystrix.stream'. Doing so exposes /actuator/hystrix.stream as a management endpoint.

Part 7: Microservices (Hystrix)

What is Hystrix?
  • It's an open-source library originally created by Netflix.
  • It implements the circuit breaker pattern, so we don't have to implement it ourselves. It gives us configuration parameters that control when the circuit opens and closes.
  • The Hystrix framework library helps control the interaction between services by providing fault tolerance and latency tolerance. It improves the overall resilience of the system by isolating failing services and stopping the cascading effect of failures.
  • For example, when you are calling a 3rd-party API which is taking too long to send the response, control goes to the fallback method, which returns a custom response to your application.
  • The best part is that it works well with Spring Boot.
  • The sad part is that Hystrix is no longer under active development; it is now in maintenance mode.
How can we add Hystrix to a Spring Boot App?
  • Add Maven dependency for 'spring-cloud-starter-netflix-hystrix'
  • Add @EnableCircuitBreaker annotation to the application class.
  • Add @HystrixCommand to the methods that need circuit breakers.
  • Configure Hystrix behaviour (adding the parameters).
How does Hystrix work?
For the circuit breaker to work, Hystrix will scan @Component or @Service annotated classes for @HystrixCommand annotated methods.

Any method annotated with @HystrixCommand is managed by Hystrix, and therefore, is wrapped by a proxy that manages all calls to that method through a separate, initially fixed thread pool.

FYI, @HystrixCommand can be given an associated fallback method. This fallback has to have the same signature as the 'original' (a sketch follows).
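
A minimal sketch of what this looks like; the return type is simplified to String for illustration, while DepEmpResource, the /details/getDetails path and the fallback name come from later in this post:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DepEmpResource {

    // Hystrix wraps this method in a proxy; failures route to the fallback.
    @HystrixCommand(fallbackMethod = "fallbackgetDetails")
    @GetMapping("/details/getDetails")
    public String getDetails() {
        return callOtherServices(); // hypothetical calls to the two other APIs
    }

    // The fallback must have the same signature as the original method.
    public String fallbackgetDetails() {
        return "hardcoded fallback response";
    }

    private String callOtherServices() {
        throw new RuntimeException("downstream services are down");
    }
}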

Hystrix Demo
In blog post part 5, we created three projects: depart-employee-details, department-details and employee-details. These projects have the getDetails, getDepartmentDetails and getEmployeeDetails APIs respectively.

getDetails calls getDepartmentDetails and then, for each department, fetches the employee information by calling getEmployeeDetails, after which it returns the consolidated result.

You can download the code up to blog post 5 from the URL below:

GIT URL: microservices

Now we will add the Circuit Breaker pattern to it. Where are we going to add it? Since getDetails calls two other web services, we will add the circuit breaker to the getDetails API.

Step 1). Let's add the Hystrix dependency in 'depart-employee-details'.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
    <version>2.2.3.RELEASE</version>
</dependency>

Step 2). Add @EnableCircuitBreaker annotation to the application class of 'depart-employee-details'.

Step 3 & 4). Add @HystrixCommand to the methods that need circuit breakers and configure the behavior.

The getDetails API of 'depart-employee-details' calls two other APIs, so we will add @HystrixCommand to it.

To test whether it's working fine or not, start discovery-server and then depart-employee-details. Open the browser and hit http://localhost:8081/details/getDetails. This will show you the hardcoded output from the fallback method.

FYI, we have not started employee-details and department-details; that's why, when the getDetails API tries to call getDepartmentDetails and getEmployeeDetails, it gets an error (since the services are down) and returns the result from the fallback method 'fallbackgetDetails()'.

Download the code so far from the GIT URL below:
GIT URL: microservices

Refactoring for granular fallback and Configuring Hystrix parameters

The getDetails API of 'depart-employee-details' calls the getDepartmentDetails API of 'department-details' and then the getEmployeeDetails API of the 'employee-details' project.

So far we have added the fallback in getDetails; now we will make it more granular.

Instead of adding a fallback for the wrapper API (i.e. getDetails), we will add fallbacks for both getDepartmentDetails and getEmployeeDetails. For that we need to create two service classes, DepartmentService and EmployeeService, in 'depart-employee-details'. We also need to modify getDetails [it will now call the APIs via the newly created service classes]. A minimal sketch of one of these service classes follows the file list below.
DepEmpResource.java
EmployeeService.java

DepartmentService.java
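
A minimal sketch of what EmployeeService might look like, assuming a RestTemplate call to the employee-details service; the URL, port and return type are assumptions:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class EmployeeService {

    private final RestTemplate restTemplate = new RestTemplate();

    // Granular circuit breaker: only the employee call falls back here,
    // while department data can still come from its own service class.
    @HystrixCommand(fallbackMethod = "fallbackGetEmployeeDetails")
    public String getEmployeeDetails(int departmentId) {
        return restTemplate.getForObject(
                "http://localhost:8083/employee/getEmployeeDetails/" + departmentId, // assumed URL
                String.class);
    }

    public String fallbackGetEmployeeDetails(int departmentId) {
        return "employee details are not available right now";
    }
}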

Why did we create new service classes?
We could have added the fallback methods for both getDepartmentDetails and getEmployeeDetails in DepEmpResource [where the getDetails API is present]; still, we created separate service classes to get the department and employee details. Why?

Well, Hystrix creates a proxy for the class where we add @HystrixCommand. If we created the fallback methods in DepEmpResource, calls from getDetails to methods in the same class would bypass the proxy, and the separate fallbacks would never be invoked.

To test whether it's working fine or not, start discovery-server and then depart-employee-details, and hit http://localhost:8081/details/getDetails again.

FYI, employee-details and department-details are still down, so when getDetails calls getDepartmentDetails and getEmployeeDetails via the new service classes, the errors are now handled by their granular fallback methods, and the response is built from that fallback data.

Download the code so far from the GIT URL below:
GIT URL: microservices

-K Himaanshu Shuklaa..

December 02, 2019

Part 6: Microservices (Fault Tolerance, Resilience, Circuit Breaker Pattern)

What Is Fault Tolerance and Resilience?
As per Wikipedia, Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some of its components.

Fault tolerance describes how an application behaves when a fault occurs: if there is a fault, what is its impact on the application?

Resilience is the capacity to recover quickly after a failure; it is a measure of how many faults a system can tolerate before it is brought to its knees.

Part 5: Microservices Demo (Service Discovery using Eureka)

Why use Service Discovery?
Service discovery is how applications and (micro)services locate each other on a network. Service discovery implementations include both a central server (or servers) that maintains a global view of addresses, and clients that connect to the central server to update and retrieve addresses.

Let's imagine that we are writing some code that invokes a service that has a REST API. In order to make a request, our code needs to know the network location (IP address, port, etc.) of a service instance.

In a traditional application running on physical hardware, the network locations of service instances are relatively static. For example, our code can read the network locations from a configuration file that is occasionally updated. In a modern, cloud-based microservices application, however, this is a much more difficult problem to solve.

Service instances have dynamically assigned network locations. Moreover, the set of service instances changes dynamically because of autoscaling, failures, and upgrades. Consequently, our client code needs to use a more elaborate service discovery mechanism.
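
As a quick illustration, registering a Spring Boot service with a Eureka server usually takes little more than an annotation and a couple of properties. A minimal sketch; the application name matches a project in this series, while the registry URL is an assumption:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient // register this instance with Eureka and discover others
public class EmployeeDetailsApplication {
    public static void main(String[] args) {
        SpringApplication.run(EmployeeDetailsApplication.class, args);
    }
}

with application.properties pointing at the registry:

spring.application.name=employee-details
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/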

Why did producer Akshy Mishra decide to cast a real transgender actor in #Mandi?

'Sarabjit' fame Trishaan Singh Maini, Gandi Baat actress Pallavi Mukherjee, Supriya Shukla, Vikram Soni and 'Charam Sukh' actress Garima Maurya have been roped in for Prime Flix's upcoming web series #Mandi (earlier titled Brothel).

Part 4: Microservices Demo

We will develop a microservice using Spring Cloud which will return details about all the departments that exist, along with all the employees working in a particular department.



December 01, 2019

Part 3: Microservices Interview Questions And Answers


What is the difference between Mock and Stub?

A Mock is generally a dummy object where certain features are set into it initially. Its behavior mainly depends on these features, which are then tested.

A Stub is an object that helps in running the test. It functions in a fixed manner under certain conditions. This hard-coded behavior helps the stub to run the test.
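
A sketch of the difference using Mockito (the MailService collaborator is hypothetical): a stub just supplies the canned answers a test needs, while a mock is additionally verified for how it was called.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class MockVsStubSketch {

    interface MailService { // hypothetical collaborator
        boolean send(String to, String body);
    }

    public void test() {
        // Used as a stub: fixed, hard-coded behavior that lets the test run.
        MailService stub = mock(MailService.class);
        when(stub.send("a@b.com", "hi")).thenReturn(true);

        // Used as a mock: after exercising the code, we verify the interaction.
        MailService mockMail = mock(MailService.class);
        mockMail.send("a@b.com", "hi"); // the code under test would do this
        verify(mockMail).send("a@b.com", "hi");
    }
}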

Part 2: Microservices Interview Questions And Answers


Name three commonly used tools for Microservices
Wiremock, Docker, and Hystrix are important Microservices tools.

Why do we need Containers for Microservices?
To manage a microservice-based application, containers are the easiest alternative. They play a crucial role in the deployment and management of microservices architectures.

a). Isolation: Containers encapsulate the application and its dependencies, providing a lightweight, isolated environment. Each microservice can run in its own container, ensuring that it has everything it needs to operate without interfering with other services. This isolation helps in avoiding conflicts between dependencies and provides consistency across different environments.

b). Scalability: Containers are designed to be easily scalable. Microservices often require dynamic scaling to handle varying workloads. Containers can be quickly started or stopped, making it easier to scale individual microservices independently based on demand. This elasticity allows for efficient resource utilization and cost management.

c). Portability: Containers are highly portable and can run consistently across various environments, including development, testing, and production. This ensures that a microservice behaves the same way regardless of the underlying infrastructure. This portability simplifies the deployment process and supports a "write once, run anywhere" philosophy.

d). Orchestration: Microservices often involve the coordination and orchestration of multiple services. Container orchestration tools, such as Kubernetes and Docker Swarm, help manage the deployment, scaling, and lifecycle of containers. They automate tasks like load balancing, service discovery, and rolling updates, simplifying the management of complex microservices architectures.

e). Dependency Management: Containers package an application along with its dependencies, libraries, and runtime, ensuring that the microservice runs consistently across different environments. This helps eliminate the common problem of "it works on my machine" by creating a consistent environment from development to production.

f). Fast Deployment: Containers can be started or stopped quickly, allowing for fast deployment and updates. This agility is crucial for microservices, where frequent updates and releases are common. It supports practices like continuous integration and continuous deployment (CI/CD), facilitating a more agile and responsive development process.

What is the use of Docker?
Docker offers a container environment that can be used to host any application. This software application and the dependencies that support it are tightly packaged together.

Kafka Part 10: Implement Exactly Once Processing in Kafka

Let's say we are designing a system using Apache Kafka which will send some kind of messages from one system to another. While designing it, we need to consider the questions below:
  • How do we guarantee all messages are processed?
  • How do we avoid/handle duplicate messages?
A timeout could occur while publishing messages to Kafka. Our consumer process could run out of memory or crash while writing to a downstream database. Or maybe our broker could run out of disk space, or a network partition may form between ZooKeeper instances.
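
Kafka's idempotent, transactional producer is the usual building block for exactly-once delivery into Kafka. A minimal sketch; the broker address, topic and transactional.id are assumptions:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ExactlyOnceProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // no duplicates from retries
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-tx-1"); // assumed id

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("first_topic", "key", "message"));
            producer.commitTransaction(); // all-or-nothing for read_committed consumers
        } catch (Exception e) {
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}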

Part 1: Microservices Interview Questions And Answers (Monolithic vs Microservices)

What is monolithic architecture?
  • Monolith means something which is composed all in one piece.
  • A monolithic application is a single-tiered software application in which different components are combined into a single program on a single platform. Monolithic software is designed to be self-contained; the components of the program are interconnected and interdependent, rather than loosely coupled as is the case with modular software programs.
  • In a tightly-coupled architecture, each component and its associated components must be present in order for code to be executed or compiled.
  • Also, if we need to update any program component, we need to rebuild and redeploy the whole application, whereas in a modular application any separate module, e.g. a microservice, can be changed without affecting other parts of the program. Unlike monolithic architecture, the modules of modular architectures are relatively independent, which reduces the risk that a change made within one element will create unanticipated changes within other elements. Modular programs also lend themselves to iterative processes more readily than monolithic programs.

Kafka Part 9: Compression

Compression In Kafka
Data is sent from the producer to Kafka in text format, commonly JSON. JSON has a demerit: data is stored in string form, and this often means duplicated field names get stored in the Kafka topic, which occupies a lot of disk space. That's why we need compression.

#Neo4j Part 8: Interview Questions & Answers

How are files stored in Neo4j?
Neo4j stores graph data in a number of different store files, and each store file contains the data for a specific part of the graph (relationships, nodes, properties, etc.), e.g. neostore.nodestore.db, neostore.propertystore.db and so on.

Kafka Part 8: Batch Size and linger.ms



What is a Producer Batch and Kafka’s batch size?
  • A producer writes messages to Kafka one by one. Rather than sending each message immediately, it collects the messages being produced into a batch until the batch becomes full, and then sends the batch to Kafka. Such a batch is known as a producer batch.
  • We can say Kafka producers buffer unsent records for each partition. The size of these buffers is specified by the batch.size config. Once the buffer is full, the messages will be sent.
  • The default batch size is 16KB, and the maximum can be anything. The larger the batch size, the better the compression, throughput, and efficiency of producer requests. Larger messages tend to be disproportionately delayed by small batch sizes.
  • The message size should not exceed the batch size; otherwise, the message will not be batched. Also, a batch is allocated per partition, so do not set it to a very high number (a producer config sketch follows this list).
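
A sketch of tuning these settings on a producer; the values are illustrative, not recommendations:

import org.apache.kafka.clients.producer.ProducerConfig;
import java.util.Properties;

public class BatchingConfigSketch {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "32768");        // 32KB batches (default is 16KB)
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");            // wait up to 20ms to fill a batch
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"); // compress whole batches
        return props;
    }
}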

Kafka Part 7: Why ZooKeeper is always configured with odd number of nodes?

Let's understand a few basics:

ZooKeeper is a highly-available, highly-reliable and fault-tolerant coordination and consensus service for distributed applications like Apache Storm or Kafka. High availability and reliability are achieved through replication. An ensemble needs a strict majority (quorum) of nodes to be up, so a 5-node ensemble tolerates 2 failures while a 6-node ensemble still tolerates only 2; the extra even node adds cost without adding fault tolerance, which is why odd numbers are preferred.

Kafka Part 6: Assign and Seek

Assign

When we work with consumer groups, the partitions are assigned automatically to consumers and are rebalanced automatically when consumers are added or removed from the group.

ALSO READ: Kafka Consumer Group, Partition Rebalance, Heartbeat

Sometimes we need a single consumer that always reads data from all the partitions in a topic, or from a specific partition in a topic. In this case, there is no reason for groups or rebalancing: we just assign the consumer specific topics and/or partitions, consume messages, and commit offsets on occasion (a sketch follows).
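
A minimal assign-and-seek sketch against the first_topic created earlier in this series; the broker address and starting offset are assumptions:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class AssignAndSeekDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Note: no group.id here; we manage partitions ourselves, so there is
        // no consumer group and no rebalancing.

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition partition = new TopicPartition("first_topic", 0);
        consumer.assign(Collections.singletonList(partition)); // assign instead of subscribe
        consumer.seek(partition, 10L); // start reading from offset 10 (assumed)

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(record.offset() + ": " + record.value());
        }
        consumer.close();
    }
}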

Kafka Part 5: Consumer Group, Partition Rebalance, Heartbeat

What is a Consumer Group?
A consumer group is a concept exclusive to Kafka. Every Kafka consumer group consists of one or more consumers that jointly consume a set of subscribed topics.

Let's say we have an application which reads messages from a Kafka topic, performs some validations and some calculations, and writes the results to another data store.

In this case our application will create a consumer object, subscribe to the appropriate topic, and start receiving messages, validating them and writing the results.

This may work well for a while, but imagine a scenario where the rate at which producers write messages to the topic exceeds the rate at which your application can validate them. What happens then?

Kafka Part 4: Consumers

We learned how to create a Kafka producer in the previous part of the Kafka series. Now we will create a Kafka consumer.

Reading data from Kafka is a bit different from reading data from other messaging systems. Applications that need to read data from Kafka use a KafkaConsumer to subscribe to Kafka topics and receive messages from those topics.

In this blog post, we will discuss interview questions related to Kafka consumers, and we will also create our own consumer.
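
As a preview, a minimal subscribe-and-poll sketch; the broker address and group id are assumptions, and first_topic comes from the CLI part of this series:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-first-group"); // assumed group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("first_topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}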

Kafka Part 3: Kafka Producer, Callbacks and Keys

What is the role of Kafka producer?
The primary role of a Kafka producer is to take the producer properties and a record as inputs and write the record to an appropriate Kafka broker. Producers serialize, partition, compress and load-balance data across brokers based on partitions.

The workflow of a producer involves five important steps (a send-with-callback sketch follows the list):
  1. Serialize
  2. Partition
  3. Compress
  4. Accumulate records
  5. Group by broker and send
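
A minimal send-with-callback sketch: providing a key means all messages with the same key land on the same partition. The broker address is an assumption; first_topic comes from the CLI part of this series.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerWithCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // The same key always hashes to the same partition, preserving per-key order.
        ProducerRecord<String, String> record =
                new ProducerRecord<>("first_topic", "id_1", "hello kafka");

        // The callback fires once the broker acknowledges (or rejects) the send.
        producer.send(record, (metadata, exception) -> {
            if (exception == null) {
                System.out.println("partition=" + metadata.partition()
                        + ", offset=" + metadata.offset());
            } else {
                exception.printStackTrace();
            }
        });
        producer.flush();
        producer.close();
    }
}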

Kafka Part 2: Kafka Command Line Interface (CLI)

Now we will create topics from CLI.
  • Open another command prompt, execute 'cd D:\Softwares\kafka_2.12-2.3.0\bin\windows'.
  • Make sure ZooKeeper and the Kafka broker are running.
  • Now execute 'kafka-topics --zookeeper 127.0.0.1:2181 --topic first_topic --create --partitions 3 --replication-factor 1'. This will create a Kafka topic named 'first_topic', with 3 partitions and a replication factor of 1 (in our case we cannot specify a replication factor greater than 1 because we have started only one broker).
  • After executing the above command you will get the message 'Created topic first_topic'.
  • How can we check if our topic was actually created? In the same command prompt, execute 'kafka-topics --zookeeper 127.0.0.1:2181 --list'. This will list all the topics that are present.
  • To get more details about the topic that was created, execute 'kafka-topics --zookeeper 127.0.0.1:2181 --topic first_topic --describe'

Kafka Part 1: Basics

What is Apache Kafka?
Apache Kafka is a publish-subscribe messaging system developed by the Apache Software Foundation and written in Scala. It is a distributed, partitioned and replicated log service. It is a horizontally scalable, fault-tolerant and fast messaging system.

Why Kafka?
Let's say we have a source system and a target system, where the target consumes data produced by the source system. In the simplest case, we have one source and one target system, so it would be easy for the source system to connect with the target. But now let's say there are x sources and y targets, and each source needs to connect with all the targets. In this case it becomes really difficult to maintain the whole system.


Lyrics and English Translation Of Baari




Tenu Takeya Hosh Hi Bhul Gayi,
Garm Garm Chaa Hath Te Dul Gayi,

As I saw you, I lost my senses,
which led me to spill the hot tea on my hand.