Usage: UseStringDeduplication: Pros and Cons


Let me start this article with some interesting statistics (based on research conducted by the JDK development team):
  • 25 percent of a typical Java application's memory is occupied by strings.
  • 13.5 percent of memory is consumed by duplicate strings.
  • The average string length is 45 characters.
Yes, you read that right: on average, 13.5 percent of a Java application's memory is wasted on duplicate strings. To figure out how much memory your application is wasting because of duplicate strings, you can use tools like HeapHero, which report how much memory is wasted on duplicate strings and other inefficient programming practices.

What Are Duplicate Strings?

First, let’s understand what a duplicate string is. Look at the code snippet below:
String string1 = new String("Hello World");
String string2 = new String("Hello World");

In the above code, there are two string objects, string1 and string2. They have the same contents, i.e. “Hello World,” but the contents are stored in two different objects. When you call string1.equals(string2), it returns true (same characters), but string1 == string2 returns false (different objects). This is what we call duplicate strings.
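As a minimal, self-contained sketch (the class name below is mine, not from the article), you can verify this behavior yourself:

public class DuplicateStringsDemo {
    public static void main(String[] args) {
        // Two distinct String objects with identical contents
        String string1 = new String("Hello World");
        String string2 = new String("Hello World");

        System.out.println(string1.equals(string2)); // true: same characters
        System.out.println(string1 == string2);      // false: different objects
    }
}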

Why Are There So Many Duplicate Strings?

There are several reasons why an application ends up having a lot of duplicate strings. In this section, let's review the two most common patterns:
# 1. Developers create new string objects for every request instead of referencing/reusing a ‘public static final’ string constant. The earlier snippet can be written optimally by reusing a constant:
public static final String HELLO_WORLD = "Hello World";
String string1 = HELLO_WORLD;
String string2 = HELLO_WORLD;

# 2. Suppose you are building a banking or e-commerce application in which you store the currency (‘USD’, ‘EUR’, ‘INR’, ….) of every transaction record in the database. Now say a customer logs in to your application and views their transaction history page. Your application ends up reading all transactions pertaining to this customer from the database. Suppose this customer lives in the US; then most, if not all, of their transactions would be in USD. Since every transaction record has a currency, your application ends up creating a ‘USD’ string object for every transaction record read from the database. If this customer has thousands of transactions, you will end up creating thousands of duplicate ‘USD’ string objects in memory for this one single customer.
Similarly, your application could be reading multiple columns (customer name, address, state, country, account number, IDs, …..) from the database multiple times, and there could be duplicates among them. Your application also reads and writes XML/JSON when exchanging data with external applications, which involves manipulating a lot of strings. All of these operations can, and often will, create duplicate strings, as the sketch below illustrates.
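As a hedged illustration (the table, column names, and JDBC code below are hypothetical, not from the article), every row read from the database materializes its own currency string:

import java.sql.*;
import java.util.*;

public class TransactionHistory {
    // Each call to getString() typically returns a new String object,
    // so 1,000 USD transactions yield 1,000 duplicate "USD" strings.
    static List<String> loadCurrencies(Connection conn, long customerId) throws SQLException {
        List<String> currencies = new ArrayList<>();
        String sql = "SELECT currency FROM transactions WHERE customer_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    currencies.add(rs.getString("currency")); // new duplicate per row
                }
            }
        }
        return currencies;
    }
}

One manual workaround is to call .intern() on such values, at the cost of populating the JVM's string table.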
This problem has been recognized by the JDK team since Java's origin (back in the mid-1990s), and they have come up with multiple solutions over the years, such as the string constant pool and String.intern(). The latest addition to this list is ‘-XX:+UseStringDeduplication.’

-XX:+UseStringDeduplication

The least-effort attempt at eliminating duplicate strings is to pass the -XX:+UseStringDeduplication JVM argument. When you pass this argument at application startup, the JVM tries to eliminate duplicate strings as part of the garbage collection process. Since garbage collection already inspects the objects in memory, the JVM uses that opportunity to identify duplicate strings and eliminate them.
Does that mean that if you just pass the ‘-XX:+UseStringDeduplication’ JVM argument, you will immediately save 13.5% of memory? Sounds pretty easy, right? We wish it were that easy, but there are some catches to this solution. Let’s discuss them.

(1). Works Only With the G1 GC Algorithm

There are several garbage collection algorithms (Serial, Parallel, CMS, G1, …). -XX:+UseStringDeduplication works only with the G1 GC algorithm. So, if you are using any other GC algorithm, you need to switch to G1 in order to use -XX:+UseStringDeduplication.
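For example (assuming your application ships as a hypothetical myapp.jar), both flags are passed together at startup:

java -XX:+UseG1GC -XX:+UseStringDeduplication -jar myapp.jar

Note that on Java 8, G1 is not the default collector, so -XX:+UseG1GC has to be passed explicitly.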

(2). Works Only on Long-Lived Objects

-XX:+UseStringDeduplication eliminates duplicates only among strings that live for a longer period of time; it does not eliminate duplicates among short-lived string objects. The reasoning: if objects are short-lived, they are going to die soon anyway, so what is the point of spending resources deduplicating them? Here is a real-life case study, conducted on a major Java web application, that didn't show any memory relief when -XX:+UseStringDeduplication was used. However, -XX:+UseStringDeduplication can be of value if your application has a lot of caches, since cached objects typically tend to be long-lived.

(3). -XX:StringDeduplicationAgeThreshold

By default, a string becomes eligible for deduplication only after it has survived three garbage collections. This threshold can be changed by passing the -XX:StringDeduplicationAgeThreshold argument.
Example:
-XX:StringDeduplicationAgeThreshold=6
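Putting the flags together (myapp.jar again being a hypothetical placeholder), strings would become deduplication candidates only after surviving six GCs:

java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:StringDeduplicationAgeThreshold=6 -jar myapp.jar

A higher threshold avoids spending effort on strings that may die soon anyway; a lower threshold deduplicates earlier at the cost of more work per GC.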

(4). Impact on GC Pause Times

Since string deduplication is performed during garbage collection, it has the potential to increase GC pause times. However, the assumption is that a high enough deduplication success rate will balance out most or all of this impact, because deduplication reduces the work needed in other phases of a GC pause (for example, fewer objects to evacuate) and can also reduce GC frequency (due to lower pressure on the heap). To analyze the impact on GC pause times, you may consider using tools like GCeasy.

(5). Only the Underlying char[] Is Replaced

The java.lang.String class (in Java 8) has two instance fields:
private final char[] value;
private int hash;

-XX:+UseStringDeduplication doesn’t eliminate the duplicate string objects themselves. It only replaces the underlying char[]. Deduplicating a string object is conceptually just a reassignment of the value field, i.e. String.value = anotherString.value. (From Java 9 onward, the field is a byte[] because of compact strings, but the principle is the same.)
Each string object takes at least 24 bytes (the exact size of a string object depends on the JVM configuration, but 24 bytes is a typical minimum). Thus, this feature saves less memory when there are a lot of short duplicate strings: deduplicating a three-character string like ‘USD’ reclaims only the small character array, while each String object's own header remains in memory.
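This is only conceptual: java.lang.String.value is private and final, so the real re-pointing happens inside the JVM. Still, the idea can be modeled with a hypothetical class and a canonical table (all names below are mine, not the JVM's actual implementation):

import java.util.HashMap;
import java.util.Map;

public class DedupSketch {
    // Simplified stand-in for java.lang.String
    static class FakeString {
        char[] value;
        FakeString(String s) { this.value = s.toCharArray(); }
    }

    // Canonical table mapping string contents to a single shared char[]
    static final Map<String, char[]> CANONICAL = new HashMap<>();

    // "Deduplication" just re-points the value field at the shared array;
    // the FakeString object itself (its ~24-byte header) stays in memory.
    static void deduplicate(FakeString s) {
        s.value = CANONICAL.computeIfAbsent(new String(s.value), k -> s.value);
    }

    public static void main(String[] args) {
        FakeString a = new FakeString("USD");
        FakeString b = new FakeString("USD");
        deduplicate(a);
        deduplicate(b);
        System.out.println(a.value == b.value); // true: one shared char[]
        System.out.println(a == b);             // false: two objects remain
    }
}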

(6). Java 8 Update 20

The -XX:+UseStringDeduplication feature is supported only from Java 8 update 20 onward. If you are running an older version of Java, you will not be able to use this feature.

(7). -XX:+PrintStringDeduplicationStatistics

If you would like to see string deduplication statistics, such as how long each deduplication run took, how many duplicate strings were eliminated, and how much memory was saved, you can pass the -XX:+PrintStringDeduplicationStatistics JVM argument. The statistics are printed to the console along with the rest of the GC output.
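For example (myapp.jar again being a hypothetical placeholder):

java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics -jar myapp.jar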

Conclusion

If your application uses the G1 GC and runs on Java 8 update 20 or later, you may consider enabling -XX:+UseStringDeduplication. It can yield fruitful results, especially if there are a lot of duplicate strings among long-lived objects. However, do thorough testing before enabling this argument in a production environment.
