Apache Kafka – Java Producer & Consumer


Apache Kafka is an open-source distributed event streaming platform built around the publish/subscribe messaging model. That means a producer publishes messages to Kafka and a consumer reads messages from Kafka; in between, Kafka persists the messages much like a filesystem or a database commit log.

In this article we will discuss writing a Kafka producer and consumer in Java with custom serialization and deserialization.

Kafka Producer Application:

A producer is the client that publishes messages to a Kafka topic. A topic is a named category of messages in Kafka, very similar to a folder in a file system. In the example program below, messages are published to the Kafka topic 'kafka-message-count-topic'.

package com.malliktalksjava.kafka.producer;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import com.malliktalksjava.kafka.constants.KafkaConstants;
import com.malliktalksjava.kafka.util.CustomPartitioner;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaSampleProducer {

    static Logger log = LoggerFactory.getLogger(KafkaSampleProducer.class);

    public static void main(String[] args) {
        runProducer();
    }

    static void runProducer() {
        Producer<Long, String> producer = createProducer();

        for (int index = 0; index < KafkaConstants.MESSAGE_COUNT; index++) {
            ProducerRecord<Long, String> record = new ProducerRecord<>(KafkaConstants.TOPIC_NAME,
                    (long) index, "This is record " + index);
            try {
                RecordMetadata metadata = producer.send(record).get();
                log.info("Record sent with key " + index + " to partition " + metadata.partition()
                        + " with offset " + metadata.offset());
            } catch (ExecutionException | InterruptedException e) {
                log.error("Error in sending record", e);
            }
        }
        producer.close();
    }

    public static Producer<Long, String> createProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.KAFKA_BROKERS);
        props.put(ProducerConfig.CLIENT_ID_CONFIG, KafkaConstants.CLIENT_ID);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, CustomPartitioner.class.getName());
        return new KafkaProducer<>(props);
    }
}

Kafka Consumer Program:

A consumer subscribes to a Kafka topic to read the messages. There are different ways to read messages from Kafka; the example below polls the topic every thousand milliseconds to fetch messages.

package com.malliktalksjava.kafka.consumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import com.malliktalksjava.kafka.constants.KafkaConstants;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaSampleConsumer {
    static Logger log = LoggerFactory.getLogger(KafkaSampleConsumer.class);

    public static void main(String[] args) {
        runConsumer();
    }

    static void runConsumer() {
        Consumer<Long, String> consumer = createConsumer();
        int noMessageFound = 0;
        while (true) {
            ConsumerRecords<Long, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
            // Wait up to 1000 ms for records if none are immediately available at the broker.
            if (consumerRecords.count() == 0) {
                noMessageFound++;
                if (noMessageFound > KafkaConstants.MAX_NO_MESSAGE_FOUND_COUNT)
                    // If no message found count is reached to threshold exit loop.
                    break;
                else
                    continue;
            }

            //print each record.
            consumerRecords.forEach(record -> {
                System.out.println("Record Key " + record.key());
                System.out.println("Record value " + record.value());
                System.out.println("Record partition " + record.partition());
                System.out.println("Record offset " + record.offset());
            });

            // commits the offset of record to broker.
            consumer.commitAsync();
        }
        consumer.close();
    }

    public static Consumer<Long, String> createConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.KAFKA_BROKERS);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, KafkaConstants.GROUP_ID_CONFIG);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, KafkaConstants.MAX_POLL_RECORDS);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, KafkaConstants.OFFSET_RESET_EARLIER);

        Consumer<Long, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(KafkaConstants.TOPIC_NAME));
        return consumer;
    }
}

Messages are published to a Kafka topic. A Kafka topic is sub-divided into units called partitions for fault tolerance and scalability.

Every record in Kafka is a key-value pair; the key is optional when publishing. If you don't pass a key, the key is null and the configured partitioner decides which partition the record goes to. In the above example, ProducerRecord<Long, String> is a message published to Kafka with a Long key and a String value.
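
For illustration, here is a minimal sketch of creating a record with and without an explicit key, using the topic constant from this application:

// With an explicit key: records with the same key always land on the same partition
ProducerRecord<Long, String> keyed =
        new ProducerRecord<>(KafkaConstants.TOPIC_NAME, 1L, "order created");

// Without a key: the key is null and the configured partitioner picks the partition
ProducerRecord<Long, String> unkeyed =
        new ProducerRecord<>(KafkaConstants.TOPIC_NAME, "order created");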

Message Model Class: The below model class is used to publish an object to Kafka. Refer to the descriptions below on how this class is used in the application.

package com.malliktalksjava.kafka.model;

import java.io.Serializable;

public class Message implements Serializable{

    private static final long serialVersionUID = 1L;

    private String id;
    private String name;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Constants class: All the constants related to this application are placed in the below class.

package com.malliktalksjava.kafka.constants;

public class KafkaConstants {

    public static final String KAFKA_BROKERS = "localhost:9092";

    public static final Integer MESSAGE_COUNT = 100;

    public static final String CLIENT_ID = "client1";

    public static final String TOPIC_NAME = "kafka-message-count-topic";

    public static final String GROUP_ID_CONFIG = "consumerGroup1";

    public static final String GROUP_ID_CONFIG_2 = "consumerGroup2";

    public static final Integer MAX_NO_MESSAGE_FOUND_COUNT = 100;

    public static final String OFFSET_RESET_LATEST = "latest";

    public static final String OFFSET_RESET_EARLIER = "earliest";

    public static final Integer MAX_POLL_RECORDS = 1;
}

Custom Serializer: A serializer converts Java objects into bytes so they can be written to Kafka. The custom serializer below converts a Message object into a JSON string. The serialized message is placed on the Kafka topic and cannot be read until a consumer deserializes it.

package com.malliktalksjava.kafka.util;

import java.nio.charset.StandardCharsets;
import java.util.Map;
import com.malliktalksjava.kafka.model.Message;
import org.apache.kafka.common.serialization.Serializer;
import com.fasterxml.jackson.databind.ObjectMapper;

public class CustomSerializer implements Serializer<Message> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {

    }

    @Override
    public byte[] serialize(String topic, Message data) {
        byte[] retVal = null;
        ObjectMapper objectMapper = new ObjectMapper();
        try {
            // Convert the Message object into a JSON string, then into UTF-8 bytes
            retVal = objectMapper.writeValueAsString(data).getBytes(StandardCharsets.UTF_8);
        } catch (Exception exception) {
            System.out.println("Error in serializing object " + data);
        }
        return retVal;
    }
    @Override
    public void close() {

    }
}

Custom Deserializer: The custom deserializer below converts the serialized bytes coming from Kafka back into a Java object.

package com.malliktalksjava.kafka.util;

import java.util.Map;

import com.malliktalksjava.kafka.model.Message;
import org.apache.kafka.common.serialization.Deserializer;

import com.fasterxml.jackson.databind.ObjectMapper;

public class CustomObjectDeserializer implements Deserializer<Message> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public Message deserialize(String topic, byte[] data) {
        ObjectMapper mapper = new ObjectMapper();
        Message object = null;
        try {
            object = mapper.readValue(data, Message.class);
        } catch (Exception exception) {
            System.out.println("Error in deserializing bytes "+ exception);
        }
        return object;
    }

    @Override
    public void close() {
    }
}
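
To plug these classes in, point the producer's value serializer and the consumer's value deserializer at them; a minimal sketch (the remaining properties stay exactly as in the examples above):

// Producer side: publish Message objects as JSON
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, CustomSerializer.class.getName());
Producer<Long, Message> producer = new KafkaProducer<>(props);

// Consumer side: read the JSON back into Message objects
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, CustomObjectDeserializer.class.getName());
Consumer<Long, Message> consumer = new KafkaConsumer<>(props);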

Custom Partitioner: If you want to control how records are assigned to partitions, you can plug in your own partitioner. Below is the sample custom partitioner created as part of this application.

package com.malliktalksjava.kafka.util;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import java.util.Map;

public class CustomPartitioner implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) {

    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        // Use the real partition count of the topic so we never return an out-of-range partition
        int partitionCount = cluster.partitionCountForTopic(topic);
        if (key == null) {
            return 0; // records without a key go to the first partition
        }
        return (int) (Long.parseLong(key.toString()) % partitionCount);
    }

    @Override
    public void close() {
    }
}

Here is the GitHub link for the program: https://github.com/mallikarjungunda/kafka-producer-consumer

Hope you liked the details; please share your feedback in the comments.

ReactJs basic application creation


React is one of the most popular JavaScript libraries used to build user interfaces in web applications.

As per reactjs.org: React is a declarative, efficient, and flexible JavaScript library for building user interfaces. It lets you compose complex UIs from small and isolated pieces of code called “components”.

In this tutorial, we will look at how to create a basic React application from the command line. You need Node.js installed on your desktop or laptop to create a React application.

Navigate to your workspace folder (the folder where you want to create the project) in a command/terminal prompt. Type the below command to create a basic React application with the folder name malliktalksjava.

npx create-react-app malliktalksjava

Move into the React application folder using the below command.

cd malliktalksjava/

Start the application using the below command.

npm start

Access the localhost URL, http://localhost:3000/. The basic React application will be displayed as shown below.

React JS_Hello World App

This basic application doesn’t handle back-end logic or databases; it just creates a front-end build pipeline, so you can use it with any back-end you want. However, it installs Babel and webpack behind the scenes; below is the purpose of each:

Babel is a JavaScript compiler and transpiler, used to convert one form of source code into another. You can use new ES6 features in your code, and Babel converts it into plain old ES5 that can run in all browsers.

webpack is a module bundler: it takes dependent modules and compiles them into a single bundle file. You can use this bundle while developing apps from the command line or by configuring it with a webpack.config file.

 

Apache Kafka – Environment Setup


Apache Kafka is an open-source distributed event streaming platform built around the publish/subscribe messaging model. That means a producer publishes messages to Kafka and a consumer reads messages from Kafka; in between, Kafka persists the messages much like a filesystem or a database commit log.

In this post we will set up a local Kafka environment, create a topic, and publish and consume messages using the console clients.

Step 1: Download the latest version of Apache Kafka from the Apache Kafka website: https://kafka.apache.org/downloads.

Extract the archive and navigate to the folder in a Terminal session (on Mac) or command line (on Windows):

$ tar -xzf kafka_2.13-3.1.0.tgz 
$ cd kafka_2.13-3.1.0

Step 2: Run Kafka on your local machine:

Run ZooKeeper using the below command in terminal/command-line window 1:

# Start the ZooKeeper service
$ bin/zookeeper-server-start.sh config/zookeeper.properties

Run Kafka using the below command in another terminal or command line:

# Start the Kafka broker service
$ bin/kafka-server-start.sh config/server.properties

Note: You must have Java 8 or above installed on your machine to run Kafka.

Once the above two services are running successfully, Kafka is up and running on your local machine.

Step 3: Create a topic in Kafka to produce/consume messages, in another terminal or command line. In the below example, the topic name is ‘order-details’ and the Kafka broker is running on localhost port 9092.

$ bin/kafka-topics.sh --create --topic order-details --bootstrap-server localhost:9092

If needed, use the describe command to see more details about the topic created above:

$ bin/kafka-topics.sh --describe --topic order-details --bootstrap-server localhost:9092

Output looks like below:
Topic: order-details	PartitionCount: 1	ReplicationFactor: 1	Configs: segment.bytes=1073741824
	Topic: order-details	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
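
Topics can also be created programmatically. Below is a minimal Java sketch using the Kafka AdminClient (broker address assumed to be localhost:9092, as above):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicCreator {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // topic name, number of partitions, replication factor
            NewTopic topic = new NewTopic("order-details", 1, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}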

Step 4: Write events to the topic

Run the console producer client to write a few events into your topic. By default, each line you enter results in a separate event being written to the topic.

$ bin/kafka-console-producer.sh --topic order-details --bootstrap-server localhost:9092
Order 1 details
Order 2 details

Step 5: Read events from Kafka

Open another terminal session/command line and run the console consumer client to read the events you just created:

$ bin/kafka-console-consumer.sh --topic order-details --from-beginning --bootstrap-server localhost:9092
Order 1 details
Order 2 details

Conclusion:

By completing all the above steps, you have learned how to set up a Kafka environment, create topics, produce messages using the console producer, and consume messages using the console consumer.

Spring Cloud Sleuth & Zipkin – Distributed Logging and Tracing


In standard applications, logs are written to a single file which can be read for debugging purposes. However, an app that follows the microservices architecture style comprises multiple small apps, and multiple log files must be maintained, at least one per microservice. Because of this, identifying and correlating the logs for a specific request chain becomes difficult.

For this, a distributed logging and tracing mechanism can be implemented using tools like Sleuth, Zipkin, ELK, etc.

How to use Sleuth?

Sleuth is part of the Spring Cloud libraries. It generates a trace id and span id and adds this information to service calls in the headers and the mapped diagnostic context (MDC). These ids can be used by tools such as Zipkin and ELK to store, index and process the log files.

To use Sleuth in the app, the following dependency needs to be added:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

How to use Zipkin?

Zipkin contains two components:

  • Zipkin Client
  • Zipkin Server

The Zipkin client contains a Sampler, which collects data from the microservice apps with the help of Sleuth and provides it to the Zipkin server.

To use the Zipkin client, the following dependency needs to be added to the application:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>

To use the Zipkin server, we need to download and set up the server on our system.

Implementation in the microservice apps

To see the distributed logging implementation, we need to create three services with the same configuration; the only difference is the service invocation details, where the endpoint changes.

  • Create the services as Spring Boot applications with the Web, Rest Repository, Zipkin and Sleuth dependencies.
  • Package the services inside a single parent project so that the three services can be built together. I have also added useful Windows scripts in the GitHub repo to start/stop all the services with a single command.
  • Below is one REST controller in service1 which exposes one endpoint and also invokes a downstream service using RestTemplate. We also register a Sampler.ALWAYS_SAMPLE bean, which traces every action.

Service 1

package com.mvtechbytes.service1;
 
import brave.sampler.Sampler;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpMethod;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;
 
@SpringBootApplication
public class Service1Application {
 
    public static void main(String[] args) {
        SpringApplication.run(Service1Application.class, args);
    }
}
 
@RestController
class Service1Controller {

    private static final Logger LOG = Logger.getLogger(Service1Controller.class.getName());
     
    @Autowired
    RestTemplate restTemplate;
 
    @Bean
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }
 
    @Bean
    public Sampler alwaysSampler() {
        return Sampler.ALWAYS_SAMPLE;
    }
     
    @GetMapping(value = "/service1")
    public String service1() {
        LOG.info("Inside Service 1..");
        String response = restTemplate.exchange("http://localhost:8082/service2",
                HttpMethod.GET, null, new ParameterizedTypeReference<String>() {}).getBody();
        return response;
    }
}

Application Configuration

As all the services will run on a single machine, we need to run them on different ports. Also, to identify them in Zipkin, we need to give them proper names, so configure the application name and port information in the application.properties file under the resources folder.

application.properties

server.port = 8081
spring.application.name = zipkin-server1

Similarly, the other two services will use ports 8082 and 8083, and their names will be zipkin-server2 and zipkin-server3.

Also, we have intentionally introduced a delay in the second service so that we can view it in Zipkin.
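
For reference, the delayed endpoint in the second service could look like the sketch below (names are illustrative; the actual code is in the GitHub repo linked next):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class Service2Controller {

    @GetMapping("/service2")
    public String service2() throws InterruptedException {
        Thread.sleep(2000); // intentional delay, clearly visible in the Zipkin trace
        return "Response from Service 2";
    }
}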

The above project is available at the below GitHub location.

Github repo : https://github.com/mvtechbytes/Zipkin-Sleuth

On running the apps using the bat files, the Zipkin UI shows the traces (screenshots: Find Traces, Individual Trace, Trace details).

Different ways of sorting a User Object


There are many ways to sort a Java object, but it is hard to figure out which one is most efficient. Here is an example which exercises different ways of sorting a User object based on age.

Try running this application on your local machine to see which method is more efficient and a good fit for regular programming use.

package com.malliktalksjava.java8;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import static java.util.stream.Collectors.toList;

public class SortingExampleUser {

    public static void main(String[] args) {
        List<User> userList = new ArrayList<>();
        userList.add(new User("Ram", 28));
        userList.add(new User("Raj", 35));
        userList.add(new User("Rakesh", 31));
        userList.add(new User("Peter", 30));
        userList.add(new User("John", 25));
        userList.add(new User("Sri", 55));

        long starttime = System.currentTimeMillis();
        System.out.println("sortListUsingCollections : " + sortListUsingCollections(userList));
        System.out.println("Time Taken in Millis : " + (System.currentTimeMillis() - starttime));

        long starttime2 = System.currentTimeMillis();
        System.out.println("sortListUsingStreams : " + sortListUsingStreams(userList));
        System.out.println("Time Taken in Millis 2 : " + (System.currentTimeMillis() - starttime2));

        long starttime3 = System.currentTimeMillis();
        System.out.println("sortUsingLambda : " + sortUsingLambda(userList));
        System.out.println("Time Taken in Millis 3 : " + (System.currentTimeMillis() - starttime3));


    }


    //using Collections.sort
    private static List<User> sortListUsingCollections(List<User> list){

        Collections.sort(list, Comparator.comparingInt(User::getAge));
        //Collections.reverse(list); // uncomment for descending order

        return list;
    }

    //using streams and comparator
    private static List<User> sortListUsingStreams(List<User> list){

        return list.stream()
                .sorted(Comparator.comparingInt(User::getAge))
                //.sorted(Comparator.comparingInt(User::getAge).reversed()) //-- for reverse order uncomment this line and comment above line
                .collect(toList());
    }

    //using lambda expressions
    private static List<User> sortUsingLambda(List<User> list){

        return list.stream()
                // Integer.compare satisfies the Comparator contract; the original "? 1 : 0" did not
                .sorted((User user1, User user2) -> Integer.compare(user1.getAge(), user2.getAge()))
                //.sorted((User user1, User user2) -> Integer.compare(user2.getAge(), user1.getAge())) // uncomment for reverse order
                .collect(toList());
    }
}

class User{
    private String name;
    private int age;

    public User(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    @Override
    public String toString() {
        return "User{" +
                "name='" + name + '\'' +
                ", age=" + age +
                '}';
    }
}
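
As one more idiomatic variant (a sketch, reusing the same userList and User class from above), since Java 8 a list can also be sorted in place with List.sort and a composed comparator:

// Sort by age, breaking ties by name; append .reversed() for descending order
userList.sort(Comparator.comparingInt(User::getAge)
        .thenComparing(User::getName));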

Selenium Automation – Open URL in multiple browsers


This example shows how to open a URL in multiple browsers for browser-based testing using Selenium and WebDriverManager.

package seleniumProjects;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.edge.EdgeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.safari.SafariDriver;

import io.github.bonigarcia.wdm.WebDriverManager;
import io.github.bonigarcia.wdm.config.DriverManagerType;

public class StartBrowser{
	
	static WebDriver driver = null;
	static String[] appList= {"chrome","firefox","edgedriver", "safari"};
	
	public static void main(String[] args) throws Exception {
		
		for (int i = 0; i < appList.length; i++) {
			browserStart(appList[i], "http://google.com");
			Thread.sleep(5000);
			browserClose();
		}
	}
	

	public static void browserStart(String appName, String appUrl)
			throws InstantiationException, IllegalAccessException {

		if (appName.equals("chrome")) { //Run Chrome browser
			WebDriverManager.chromedriver().setup();
			driver = new ChromeDriver();
		} else if (appName.equals("firefox")) { //Run in Firefox broweser
			WebDriverManager.firefoxdriver().setup();
			driver = new FirefoxDriver();
		} else if (appName.equals("edgedriver")) { // Run in Edge browser
			WebDriverManager.edgedriver().setup();
			driver = new EdgeDriver();
		} else if (appName.equals("safari")) { //Run in Safari browser
            //For Safari browser, you need enable 
			//'Allow Remote Automation' under develop menu
			DriverManagerType safari = DriverManagerType.SAFARI;
			WebDriverManager.getInstance(safari).setup();
			driver = new SafariDriver();
		}

		driver.get(appUrl);
		driver.manage().window().maximize();

	}

	public static void browserClose() {
		driver.quit(); // quit() ends the whole WebDriver session; close() only closes the window
	}
}

Apache FreeMarker for transformation between data formats


In this post, we will learn how to use Apache FreeMarker for data format transformations.

What is Apache FreeMarker?

Apache FreeMarker is a template engine: a Java library to generate text output (HTML web pages, e-mails, configuration files, XML, JSON, source code, etc.) based on templates and changing data. Templates are written in the FreeMarker Template Language (FTL), which is a simple, specialized language (not a full-blown programming language like PHP). Usually, a general-purpose programming language (like Java) is used to prepare the data (issue database queries, do business calculations). Then, Apache FreeMarker displays that prepared data using templates. In the template you are focusing on how to present the data, and outside the template you are focusing on what data to present.

If your project needs to transform between data formats, like XML to JSON or vice versa, such transformations can be accomplished using FreeMarker.
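
At its core the workflow is always the same: load a template, supply a data model, and process the two into text. A minimal standalone sketch (template and data are made up for illustration):

import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;

import freemarker.cache.StringTemplateLoader;
import freemarker.template.Configuration;
import freemarker.template.Template;

public class HelloFreeMarker {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_23);
        StringTemplateLoader loader = new StringTemplateLoader();
        loader.putTemplate("hello", "Hello ${name}!");
        cfg.setTemplateLoader(loader);

        Template template = cfg.getTemplate("hello");
        Map<String, Object> model = new HashMap<>();
        model.put("name", "FreeMarker");

        StringWriter out = new StringWriter();
        template.process(model, out); // merges template + data
        System.out.println(out);      // prints: Hello FreeMarker!
    }
}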

Apache FreeMarker for Data Transformations

XML TO JSON Transformation using FreeMarker

We will use a Spring Boot project created using Spring Initializr: https://start.spring.io/

FreeMarker Transformations – Project Structure

First, add the dependencies to pom.xml:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-freemarker</artifactId>
    </dependency>
    <dependency>
        <groupId>com.googlecode.json-simple</groupId>
        <artifactId>json-simple</artifactId>
        <version>1.1</version>
    </dependency>

    <dependency>
        <groupId>no.api.freemarker</groupId>
        <artifactId>freemarker-java8</artifactId>
        <version>1.3.0</version>
    </dependency>

    <dependency>
        <groupId>org.everit.json</groupId>
        <artifactId>org.everit.json.schema</artifactId>
        <version>1.5.1</version>
    </dependency>

Add the XML to transform in the src/main/resources folder – test.xml:

<?xml version="1.0" encoding="UTF-8"?>
<data>
    <employee>
        <id>30123</id>
        <name>Ben</name>
        <location>Toronto</location>
    </employee>
</data>

Add FTL Template in src/main/resources/templates folder – FTL file: xml2json.ftl

<#assign data = xml['child::node()']>
{
    "employee": {
        "id": ${data.employee.id},
        "name": "${data.employee.name}",
        "location": "${data.employee.location}"
    }
}

Create an FmtManager to load and process templates, as below:

package com.mvtechbytes.fmt;

import java.io.IOException;
import java.io.StringWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

import freemarker.cache.StringTemplateLoader;
import freemarker.ext.beans.BeansWrapperBuilder;
import freemarker.template.Configuration;
import freemarker.template.Template;

public class FmtManager {

    private Configuration freemarkerConfig;
    private static final String TEMPLATE_DIRECTORY = "src/main/resources/templates/";

    public FmtManager() {
        freemarkerConfig = new Configuration(Configuration.VERSION_2_3_23);
        freemarkerConfig.setTagSyntax(Configuration.ANGLE_BRACKET_TAG_SYNTAX);
        freemarkerConfig.setDefaultEncoding("UTF-8");
        freemarkerConfig.setNumberFormat("computer");
        freemarkerConfig.setObjectWrapper(new BeansWrapperBuilder(Configuration.VERSION_2_3_23).build());
        freemarkerConfig.setTemplateLoader(new StringTemplateLoader());
    }

    private Template loadTemplate(String templateName, String templatePath) {
        try {
            String templateContent = new String(Files.readAllBytes(Paths.get(templatePath)));
            ((StringTemplateLoader) freemarkerConfig.getTemplateLoader()).putTemplate(templateName, templateContent);
            return freemarkerConfig.getTemplate(templateName);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public String processTemplate(String templateName, Map<String, Object> data) {
        Template template = loadTemplate(templateName, TEMPLATE_DIRECTORY + templateName + ".ftl");
        try (StringWriter writer = new StringWriter()) {
            template.process(data, writer);
            return writer.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

After adding all the code in the respective folders, the use case can be executed using the below static method, given an FmtManager instance:

public static void xmlToJson(FmtManager templateManager) throws Exception {
    String xmlString = new String(Files.readAllBytes(Paths.get("src/main/resources/test.xml")));
    // freemarker.ext.dom.NodeModel wraps the parsed XML DOM for template access
    NodeModel xmlNodeModel = NodeModel.parse(new InputSource(new StringReader(xmlString)));

    Map<String, Object> data = new HashMap<>();
    data.put("xml", xmlNodeModel);

    String json = templateManager.processTemplate("xml2json", data);

    System.out.println(json);
}

Execution Log Output:

12:48:44.926 [main] DEBUG freemarker.cache - TemplateLoader.findTemplateSource("xml2json"): Found
12:48:44.929 [main] DEBUG freemarker.cache - Loading template for "xml2json"("en_US", UTF-8, parsed) from "xml2json"
{
    "employee": {
        "id": 30123,
        "name": "Ben",
        "location": "Toronto"
    }
}

JSON to XML Transformation using FreeMarker

Add the JSON to transform in the src/main/resources folder – test.json:

{
  "data": {
    "employee": {
      "empid": 2012,
      "empname": "Virat",
      "location": "Hyderabad"
    }
  }
}

Add FTL Template in src/main/resources/templates folder – FTL file: json2xml.ftl

<#-- @ftlvariable name="JsonUtil" type="com.mvtechbytes.fmt.FmtJsonUtil" -->
<#assign body = JsonUtil.jsonToMap(input)>
<?xml version="1.0" encoding="UTF-8"?>
<data>
    <employee>
        <id>${body.data.employee.empid}</id>
        <name>${body.data.employee.empname}</name>
     	<location>${body.data.employee.location}</location>
    </employee>
</data>

Create FmtJsonUtil – this converts a JSON string into a Java map:

package com.mvtechbytes.fmt;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FmtJsonUtil {
    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

    public static Map<String, Object> jsonToMap(String json) throws IOException {
        return OBJECT_MAPPER.readValue(json, new TypeReference<HashMap<String, Object>>(){});
    }
}

After adding all the code in the respective folders, the use case can be executed using the below static method, given an FmtManager instance:

private static void jsonToXml(FmtManager templateManager) throws IOException, TemplateModelException {
    String input = new String(Files.readAllBytes(Paths.get("src/main/resources/test.json")));

    Map<String, Object> data = new HashMap<>();
    data.put("input", input);

    // Expose the static methods of FmtJsonUtil to the template under the name "JsonUtil"
    TemplateHashModel staticModels = new BeansWrapperBuilder(Configuration.VERSION_2_3_23).build().getStaticModels();
    data.put("JsonUtil", staticModels.get(FmtJsonUtil.class.getName()));

    String output = templateManager.processTemplate("json2xml", data);

    System.out.println(output);
}

Execution Log Output:

<?xml version="1.0" encoding="UTF-8"?>
<data>
    <employee>
        <id>2012</id>
        <name>Virat</name>
     	<location>Hyderabad</location>
    </employee>
</data>

The full source code is available at the below GitHub link:

https://github.com/malliktalksjava/FreeMarkerTransformations

REST API Components – Standards and Design aspects


In this post, we will look at different REST API components with respect to standards and design aspects.

Query parameters and QueryString length in HTTP GET

Security Aspect:

Although officially there is no limit specified by RFC 2616, many security protocols and recommendations state that maxQueryString on a server should be set to a maximum of 1024 characters, while the entire URL, including the query string, should be capped at 2048 characters. This helps prevent the Slow HTTP Request DDoS vulnerability on a web server. It typically shows up as a vulnerability on the Qualys Web Application Scanner and other security scanners.

Please see the below example configuration for Windows IIS servers in Web.config:

<system.webServer>
    <security>
        <requestFiltering>
            <requestLimits maxQueryString="1024" maxUrl="2048">
                <headerLimits>
                    <add header="Content-type" sizeLimit="100" />
                </headerLimits>
            </requestLimits>
        </requestFiltering>
    </security>
</system.webServer>

This would also work on a server level using machine.config.

Note: Limiting query string and URL length may not completely prevent a Slow HTTP Request DDoS attack, but it is one step you can take to mitigate it.

414 URI Too Long (RFC 7231):

The URI provided was too long for the server to process. This is often the result of too much data being encoded in the query string of a GET request, in which case the request should be converted to a POST. Previously called “Request-URI Too Long”.

Browser restrictions:

  • Microsoft Internet Explorer (Browser)
    Microsoft states that the maximum length of a URL in Internet Explorer is 2,083 characters, with no more than 2,048 characters in the path portion of the URL. Attempts to use URLs longer than this produced a clear error message in Internet Explorer.
  • Microsoft Edge (Browser)
    The limit appears to be around 81578 characters.
  • Chrome
    It stops displaying the URL after 64k characters, but can serve more than 100k characters. No further testing was done beyond that.
  • Firefox (Browser)
    After 65,536 characters, the location bar no longer displays the URL in Windows Firefox 1.5.x. However, longer URLs will work. No further testing was done after 100,000 characters.
  • Safari (Browser)
    At least 80,000 characters will work. Testing was not tried beyond that.
  • Opera (Browser)
    At least 190,000 characters will work. Stopped testing after 190,000 characters. Opera 9 for Windows continued to display a fully editable, copyable and pasteable URL in the location bar even at 190,000 characters.
  • Apache (Server)
    Early attempts to measure the maximum URL length in web browsers bumped into a server URL length limit of approximately 4,000 characters, after which Apache produces a “413 Entity Too Large” error. The current up to date Apache build found in Red Hat Enterprise Linux 4 was used. The official Apache documentation only mentions an 8,192-byte limit on an individual field in a request.
  • Microsoft Internet Information Server (Server)
    The default limit is 16,384 characters (yes, Microsoft’s web server accepts longer URLs than Microsoft’s web browser). This is configurable.
  • Perl HTTP::Daemon (Server)
    Up to 8,000 bytes will work. Those constructing web application servers with Perl’s HTTP::Daemon module will encounter a 16,384 byte limit on the combined size of all HTTP request headers. This does not include POST-method form data, file uploads, etc., but it does include the URL. In practice this resulted in a 413 error when a URL was significantly longer than 8,000 characters. This limitation can be easily removed. Look for all occurrences of 16×1024 in Daemon.pm and replace them with a larger value. Of course, this does increase your exposure to denial of service attacks.

When to use @QueryParam versus @PathParam

REST is not a strict standard as such, but most APIs tend to have only resource names and resource IDs in the path, such as:

/departments/{dept}/employees/{id}

Some REST APIs also use query strings for filtering, pagination and sorting.

The recommendation is to put any required parameters in the path, and any optional parameters in the query string. Putting optional parameters in the path gets really messy when writing URL handlers that match different combinations.
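
In Spring MVC terms, that guideline looks roughly like the following sketch (names are illustrative):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class EmployeeController {

    // Required identifiers live in the path...
    @GetMapping("/departments/{dept}/employees/{id}")
    public String getEmployee(@PathVariable String dept, @PathVariable String id) {
        return "employee " + id + " in " + dept;
    }

    // ...optional filtering/paging/sorting lives in the query string
    @GetMapping("/departments/{dept}/employees")
    public String listEmployees(@PathVariable String dept,
                                @RequestParam(defaultValue = "0") int page,
                                @RequestParam(required = false) String sort) {
        return "employees of " + dept + ", page " + page + ", sorted by " + sort;
    }
}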

When to use Headers versus URL parameters (PathParam or QueryParam)

GET /orders/view
(custom HTTP header) CLIENT_ID: 23

instead of

GET /orders/view/client_id/23 or
GET /orders/view/?client_id=23

The URL indicates the resource itself. A “client” is a resource that can be acted upon, so it should be part of the base URL: /orders/view/client/23.

Parameters are just that: they parameterize access to the resource. This especially comes into play with POSTs and searches: /orders/find?q=blahblah&sort=foo. There’s a fine line between parameters and sub-resources: /orders/view/client/23/active versus /orders/view/client/23?show=active. The recommendation is the sub-resource style, reserving parameters for searches.

Since each endpoint represents a state transfer (to mangle the mnemonic), custom headers should only be used for things that don’t involve the name of the resource (the URL), the state of the resource (the body), or parameters directly affecting the resource (path or query parameters). That leaves true metadata about the request for custom headers.

HTTP has a very wide selection of headers that cover most things you’ll need. One place custom headers do come up is a system-to-system request operating on behalf of a user: the proxy system validates the user, adds an “X-User: userid” header, and uses its system credentials to hit the endpoint. The receiving system validates that the system credentials are authorized to act on behalf of the user, then validates that the user is authorized to perform the action.
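
A receiving Spring endpoint could pick such a header up as in the sketch below (header name taken from the example above; everything else is illustrative):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
class OrderController {

    @GetMapping("/orders/view")
    public String viewOrders(@RequestHeader("X-User") String userId) {
        // The proxy added X-User after authenticating; authorize and act on behalf of that user
        return "orders visible to user " + userId;
    }
}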

Custom headers have the following advantages:

  • Can be read easily by network tools/scripts (authentication, meta info)
  • Keep URLs free from security details (safer, not in browser/proxy caches)
  • Keep URLs cleaner: allows for better caching of resources

Spring Boot vs LoopBack – Node.js for developing Microservices


In this post, we will see a comparison between Spring Boot and LoopBack (Node.js) for implementing microservices.

Spring Boot

Spring Boot is an open-source Java-based framework used to create microservices. It is developed by the Pivotal team and is used to build stand-alone, production-ready Spring applications.

Microservices architecture using Java Spring Boot

LoopBack – Node.js

Events and event-driven programming

Events are actions generated by the user or the system, like a click, a completed file download, or a hardware or software error.

Event-driven programming is a programming paradigm in which the flow of the program is determined by events. An event-driven program performs actions in response to events. When an event occurs it triggers a callback function.
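The callback idea can be sketched in Java (the main language of this blog) with a simple listener interface; Node.js does the same thing with functions passed to its I/O APIs:

import java.util.ArrayList;
import java.util.List;

public class EventDemo {

    // The callback type: invoked whenever an event fires
    interface EventListener {
        void onEvent(String payload);
    }

    private final List<EventListener> listeners = new ArrayList<>();

    void subscribe(EventListener listener) {
        listeners.add(listener);
    }

    void fire(String payload) {
        listeners.forEach(l -> l.onEvent(payload)); // event occurs -> callbacks run
    }

    public static void main(String[] args) {
        EventDemo demo = new EventDemo();
        demo.subscribe(payload -> System.out.println("Download finished: " + payload));
        demo.fire("file.zip"); // simulate an event
    }
}
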

Node.js is a platform that executes server-side JavaScript programs that can communicate with I/O sources like file systems and networks.

LoopBack

LoopBack is a highly extensible, open-source Node.js and TypeScript framework based on Express that enables you to quickly create APIs and microservices composed from backend systems such as databases and SOAP or REST services.

The diagram below demonstrates how LoopBack serves as a composition bridge between incoming requests and outgoing integrations. It also shows the different personas who are interested in various capabilities provided by LoopBack.

Advantages of LoopBack – Node.js and Spring Boot

LoopBack – Node.js | Spring Boot
Lightweight, fast – loosely typed | Java is statically-typed (type safety)
JavaScript community: growing rapidly | Java community: mature and thriving
Great for I/O tasks, e.g. file writing and reading, network calls, streaming | Long-term support and maintainability for memory-intensive applications
Single-threaded – low memory utilization | Support for multi-threading
npm is constantly growing | Many easily usable dependencies using Maven, Gradle

Disadvantages of LoopBack – Node.js and Spring Boot

LoopBack – Node.js

  • Doesn’t support multi-threading
  • Lack of strict type checking can lead to runtime problems
  • Not great for heavy computing – performance bottlenecks

Spring Boot

  • High memory utilization
  • Java is verbose
  • Contains lots of boilerplate code which makes debugging tough
  • May include unused dependencies – huge deployment binary file size.

Industry Usage of these technologies

Companies using Spring Boot

  • Amazon
  • Intuit
  • JP Morgan Chase & Co.
  • Capital One
  • Google
  • Microsoft

Companies using Node.js

  • FlightOffice
  • Symantec
  • Pen Systems
  • GoDaddy.com
  • Sapient

LoopBack vs Spring Boot on various parameters

Criteria / Parameter | Spring Boot | LoopBack
Performance | Long-term support and maintainability for memory-intensive applications | Great for I/O tasks, e.g. file writing and reading, network calls, streaming
Circuit Breaker | Resilience4j, Hystrix | Opossum, Levee
SOAP Client | Apache CXF, Camel, Spring WebServiceTemplate | loopback-connector-soap
JSON Manipulation/Validation | Jackson, Spring Validator | payload-validator
Orchestration and Routing support | Apache CXF, Camel, Spring WebServiceTemplate, RestTemplate | loopback-connector-soap, loopback-connector-rest
Caching support | Spring Cache, external cache support | Interception – CachingService, external cache support
Open API | Contract first and API first are both supported | Contract first and API first are both supported
Recommended For | Building applications consisting of memory-intensive tasks | Building applications consisting of I/O-intensive tasks

Algorithms in Java Interviews


In this post, we will look at algorithm problems, with their solutions, that are asked during Java interviews.

How to check if a number is Palindrome?

void checkPalindrome(int n){
  int temp, sum = 0;
  int input=n;

  while(n>0) {
     temp = n%10;
     sum = (sum*10) + temp;
     n = n/10;
  }

  if(input == sum){
   System.out.println("Palindrome");
  } else {
   System.out.println("Not Palindrome");
  }
}

How to check if a number is Prime in Java 8?

void checkPrime(int n) {
    // n is prime if no i in [2, n) divides it evenly (the original tested i % n, a bug)
    if (n > 1 && IntStream.range(2, n).noneMatch(i -> n % i == 0)) {
        System.out.println("Prime");
    } else {
        System.out.println("Non-Prime");
    }
}

How to sort objects in reverse order in Java 8?

Student student1 = new Student(372, "Venkat", 1);
Student student2 = new Student(2, "Sachin", 4);
Student student3 = new Student(2345, "Ganguly", 6);
Student student4 = new Student(72, "Karthik", 2);
List<Student> studlist = new CopyOnWriteArrayList<>();
studlist.add(student1);
studlist.add(student2);
studlist.add(student3);
studlist.add(student4);

// Iterate in Java 8
studlist.forEach(s -> System.out.println(s.name));

// Sort by id (Integer.compare avoids the overflow risk of s1.getId() - s2.getId())
studlist.sort((Student s1, Student s2) -> Integer.compare(s1.getId(), s2.getId()));

// Sort by rank in reverse order
studlist.sort((Student s1, Student s2) -> Integer.compare(s2.getRank(), s1.getRank()));

Find the second highest number in an Array?

int arr[] = {45, 89, 29, 1, 9, 100};
int highest = 0, secondHighest = 0;

for (int i = 0; i < arr.length; i++) {
    if (arr[i] > highest) {
        secondHighest = highest; // the old highest becomes the second highest
        highest = arr[i];
    } else if (arr[i] > secondHighest) {
        secondHighest = arr[i];
    }
}

Find Nth highest Salary from a SQL Table?

SELECT MIN(SALARY) FROM EMPLOYEE
       WHERE SALARY IN (SELECT DISTINCT TOP N SALARY
                               FROM EMPLOYEE ORDER BY SALARY DESC);
-- replace N with the rank you need, e.g. TOP 3 for the third highest salary

Print Only Numerics from a String?

String sampleStr = "fdsha3430d3kdjafl0737434833";
String numericsOnlyStr = sampleStr.replaceAll("[^0-9]", "");

Print Duplicates in an Array?

for(int i=0;i<arr.length;i++) {
  for(int j=i+1; j< arr.length; j++) {
     if(arr[i] == arr[j]) {
           System.out.println(arr[j]);
     }
  }
}

Fetch Frequency of Elements repeated in an Array?

int[] arr = {10, 20, 20, 10, 30}; // sample input
Map<Integer, Integer> mp = new HashMap<>();

// Iterating through array elements
for (int i = 0; i < arr.length; i++) {
    mp.put(arr[i], mp.getOrDefault(arr[i], 0) + 1);
}

// Iterating through the map and printing frequencies
for (Map.Entry<Integer, Integer> entry : mp.entrySet()) {
    System.out.println(entry.getKey() + " " + entry.getValue());
}

Find Triplets in an array whose sum is equal to n?

public class Triplets {

    public static List<List<Integer>> findTriplets(int[] numbers, int sum) {
        List<List<Integer>> tripletsCombo = new ArrayList<>();
        HashSet<String> set = new HashSet<>();

        if (numbers.length == 0 || sum <= 0) {
            return tripletsCombo;
        }

        Arrays.sort(numbers);

        for (int i = 0; i < numbers.length - 2; i++) {
            int j = i + 1;
            int k = numbers.length - 1;

            while (j < k) {
                if (numbers[i] + numbers[j] + numbers[k] == sum) {
                    String str = numbers[i] + "," + numbers[j] + "," + numbers[k];
                    // Record only unique triplets
                    if (!set.contains(str)) {
                        List<Integer> triplets = new ArrayList<>();
                        triplets.add(numbers[i]);
                        triplets.add(numbers[j]);
                        triplets.add(numbers[k]);
                        tripletsCombo.add(triplets);
                        set.add(str);
                    }
                    j++;
                    k--;
                } else if (numbers[i] + numbers[j] + numbers[k] < sum) {
                    j++;
                } else { // numbers[i] + numbers[j] + numbers[k] > sum
                    k--;
                }
            }
        }

        return tripletsCombo;
    }

    public static void main(String[] args) {
        int[] numbers = { 2, 3, 1, 5, 4 };
        int sum = 9;
        List<List<Integer>> triplets = findTriplets(numbers, sum);

        if (triplets.isEmpty()) {
            System.out.println("No triplets are found");
        } else {
            System.out.println(triplets);
        }
    }
}

How to check if two strings are Anagrams?

Two strings are called anagrams if they contain the same set of characters but in a different order. Examples: “Astronomer – Moon starer”, “A gentleman – Elegant man”, “Dormitory – Dirty Room”, “keep – peek”.

void isAnagram(String input1, String input2) {
   //Removing all white spaces from s1 and s2
   String s1_nonSpaces = input1.replaceAll("\\s", "");
   String s2_nonSpaces = input2.replaceAll("\\s", "");

   boolean status = true;
   if(s1_nonSpaces.length() != s2_nonSpaces.length()) {
      status = false;
   } else {
      char[] s1Array = s1_nonSpaces.toLowerCase().toCharArray();
      char[] s2Array = s2_nonSpaces.toLowerCase().toCharArray();
      Arrays.sort(s1Array); 
      Arrays.sort(s2Array); 
      status = Arrays.equals(s1Array, s2Array);
   }
   System.out.print(status?"Anagrams":"Non-Anagrams");
}

Swap numbers without using temp/third variable?

void swapWithoutTemp(int a, int b) {
 a = a + b; // a holds the sum (any overflow wraps around, and the swap still works)
 b = a - b; // b becomes the original a
 a = a - b; // a becomes the original b
 // note: Java is pass-by-value, so the swap is visible only inside this method
}

Find the number of combinations where the sum of two elements, one from each of two arrays, is equal to N?

We have two arrays of numbers. Suppose we take one element from the first array and another element from the second array; their sum should be equal to N (a given number).

void sumOfTwoElementsInTwoArrays() {
    int arr1[] = {4, 8, 10, 12, 7};
    int arr2[] = {6, 90, 34, 45};

    int sumValue = 44;
    HashSet<Integer> complements = new HashSet<>();
    int pairCount = 0;

    // Store the complement (sumValue - element) of each element of the first array
    for (int i = 0; i < arr1.length; i++) {
        complements.add(sumValue - arr1[i]);
    }

    // Each element of the second array that matches a complement completes a pair
    for (int j = 0; j < arr2.length; j++) {
        if (complements.contains(arr2[j])) {
            pairCount++;
        }
    }

    System.out.print("Number of pairs is " + pairCount);
}

First non-repeated character in a String?

String str = "BANANA";
char firstNonRepeatedCharacter = 0;
// LinkedHashMap preserves insertion order, so the first key with count 1 is the answer
Map<Character, Integer> hmp = new LinkedHashMap<>();

for (int z = 0; z < str.length(); z++) {
    hmp.put(str.charAt(z), hmp.getOrDefault(str.charAt(z), 0) + 1);
}

for (Character c : hmp.keySet()) {
    if (hmp.get(c) == 1) {
        firstNonRepeatedCharacter = c;
        break;
    }
}

Find the number of occurrences of an element in an array using Java 8?

int b[] = {1, 2, 34, 1};

// Count how many times 1 occurs, using an IntStream filter
long count = Arrays.stream(b).filter(v -> v == 1).count();
System.out.println(count);

100 doors toggle open/close

There are 100 doors in a row, and all doors are initially closed. A person walks through the doors multiple times and toggles them (if open then close, if closed then open) in the following way:

In the first walk, the person toggles every door; in the second walk, every second door, i.e., 2nd, 4th, 6th, 8th, …; in the third walk, every third door, i.e., 3rd, 6th, 9th, …

Find what the status of all the doors will be after the nth walk.

void doorsOpenClosed(int no_of_walks) {
    // doors[1..100]; 0 = closed, 1 = open (Java initializes the array to 0, i.e. all closed)
    int[] doors = new int[101];

    for (int walk_id = 1; walk_id <= no_of_walks; walk_id++) {
        // In walk k, toggle every k-th door: k, 2k, 3k, ...
        for (int door_id = walk_id; door_id <= 100; door_id += walk_id) {
            doors[door_id] = (doors[door_id] == 0) ? 1 : 0;
        }
    }

    for (int j = 1; j <= 100; j++) {
        if (doors[j] == 1) {
            System.out.println("Open Door number::::" + j);
        }
    }
}