
Introduction to Customer Segmentation in Python

In this tutorial, you're going to learn how to implement customer segmentation using RFM (Recency, Frequency, Monetary) analysis from scratch in Python.
In the retail sector, hypermarket chains generate an exceptionally large amount of data across their stores on a daily basis. This extensive database of customer transactions needs to be analyzed to design profitable strategies.
All customers have different needs, and as the customer base and the number of transactions grow, it is not easy to understand the requirements of each customer. Identifying potential customers can improve marketing campaigns, which ultimately increases sales. Segmentation helps by grouping those customers into various segments.
In this tutorial, you will cover the following topics:
  • What is Customer Segmentation?
  • Need for Customer Segmentation
  • Types of Segmentation
  • Customer Segmentation using RFM Analysis
  • Identify Potential Customer Segments using RFM in Python
  • Conclusion

What is Customer Segmentation?

Customer segmentation is a method of dividing customers into groups or clusters on the basis of common characteristics. In the B2C model, a market researcher can segment customers using demographic characteristics such as occupation, gender, age, location, and marital status; psychographic characteristics such as social class, lifestyle, and personality; and behavioral characteristics such as spending, consumption habits, product/service usage, and previously purchased products. In the B2B model, segmentation uses company characteristics such as company size, type of industry, and location.

Need for Customer Segmentation

  • It helps in identifying the most promising customers.
  • It helps managers communicate easily with a targeted group of the audience.
  • It also helps in selecting the best medium for communicating with the targeted segment.
  • It improves the quality of service, loyalty, and retention.
  • It improves customer relationships through a better understanding of each segment's needs.
  • It provides opportunities for upselling and cross-selling.
  • It helps managers design special offers for targeted customers, encouraging them to buy more products.
  • It helps companies stay a step ahead of competitors.
  • It also helps in identifying new products that customers could be interested in.

Types of Segmentation

Customers can be segmented in several ways, for example by demographic, psychographic, behavioral, and geographic characteristics. This tutorial focuses on behavioral segmentation using RFM analysis.

Customer Segmentation using RFM Analysis

RFM (Recency, Frequency, Monetary) analysis is a behavior-based approach to grouping customers into segments. It groups customers on the basis of their previous purchase transactions: how recently, how often, and how much a customer bought. RFM filters customers into various groups for the purpose of better service, and it helps managers identify potential customers with whom to do more profitable business. For example, a segment may contain big spenders, but what if they purchased only once, or long ago, and never returned? It also helps managers run effective promotional campaigns with personalized service.
  • Recency (R): Who purchased recently? The number of days since the last purchase (lower is better).
  • Frequency (F): Who purchases frequently? The total number of purchases (higher is better).
  • Monetary value (M): Who spends the most? The total money a customer has spent (higher is better).
Here, each of the three variables (Recency, Frequency, and Monetary) is divided into four equal groups, which creates 64 (4 x 4 x 4) different customer segments.
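As a quick illustration (not part of the original tutorial), you can enumerate every possible segment code; the variable name segments below is chosen only for illustration:
from itertools import product

# Each of R, F, and M takes one of the quartile labels '1'-'4',
# so there are 4 x 4 x 4 = 64 possible segment codes, from '111' to '444'.
segments = [''.join(code) for code in product('1234', repeat=3)]
print(len(segments))   # 64
print(segments[:5])    # ['111', '112', '113', '114', '121']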
Steps of RFM (Recency, Frequency, Monetary) analysis:
  1. Calculate the Recency, Frequency, and Monetary values for each customer.
  2. Add segment bin values to the RFM table using quartiles.
  3. Concatenate the three scores into a single column (RFM_Score) and sort customers by it.


Identify Potential Customer Segments using RFM in Python

Importing Required Libraries

#import modules
import pandas as pd # for dataframes
import matplotlib.pyplot as plt # for plotting graphs
import seaborn as sns # for plotting graphs
import datetime as dt

Loading Dataset

Let's first load the Online Retail dataset using the pandas read_excel() function. You can download the data from this link
(Online Retail.xlsx file)

data = pd.read_excel("Online_Retail.xlsx")
data.head()
data.tail()
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 541909 entries, 0 to 541908
Data columns (total 8 columns):
InvoiceNo      541909 non-null object
StockCode      541909 non-null object
Description    540455 non-null object
Quantity       541909 non-null int64
InvoiceDate    541909 non-null datetime64[ns]
UnitPrice      541909 non-null float64
CustomerID     406829 non-null float64
Country        541909 non-null object
dtypes: datetime64[ns](1), float64(2), int64(1), object(4)
memory usage: 33.1+ MB
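The info() output above shows that CustomerID has only 406,829 non-null values out of 541,909 rows, so roughly 135,000 transactions carry no customer identifier and cannot be linked to a customer. If you want to confirm the missing counts explicitly before dropping those rows, here is a minimal sketch (not part of the original tutorial):
# Count missing values per column (CustomerID should show about 135,080 missing)
print(data.isnull().sum())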
# Keep only the rows that have a CustomerID
data = data[pd.notnull(data['CustomerID'])]

Removing Duplicates

Sometimes you get a messy dataset. You may have to deal with duplicates, which will skew your analysis. In Python, pandas offers the drop_duplicates() function, which drops repeated or duplicate records.
filtered_data=data[['Country','CustomerID']].drop_duplicates()
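If you want to see drop_duplicates() in isolation, here is a tiny toy example (the values are made up purely for illustration):
# A repeated (Country, CustomerID) pair is reduced to a single row
toy = pd.DataFrame({'Country': ['United Kingdom', 'United Kingdom', 'France'],
                    'CustomerID': [17850, 17850, 12583]})
print(toy.drop_duplicates())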

Let's Jump into Data Insights

# Top ten countries by number of customers
filtered_data.Country.value_counts()[:10].plot(kind='bar')
In the given dataset, you can observe that most of the customers are from the United Kingdom, so you can filter the data for United Kingdom customers.
uk_data=data[data.Country=='United Kingdom']
uk_data.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 361878 entries, 0 to 541893
Data columns (total 8 columns):
InvoiceNo      361878 non-null object
StockCode      361878 non-null object
Description    361878 non-null object
Quantity       361878 non-null int64
InvoiceDate    361878 non-null datetime64[ns]
UnitPrice      361878 non-null float64
CustomerID     361878 non-null float64
Country        361878 non-null object
dtypes: datetime64[ns](1), float64(2), int64(1), object(4)
memory usage: 24.8+ MB
The describe() function in pandas is convenient for getting various summary statistics. It returns the count, mean, standard deviation, minimum and maximum values, and the quantiles of the data.
uk_data.describe()
            Quantity      UnitPrice     CustomerID
count  361878.000000  361878.000000  361878.000000
mean       11.077029       3.256007   15547.871368
std       263.129266      70.654731    1594.402590
min    -80995.000000       0.000000   12346.000000
25%         2.000000       1.250000   14194.000000
50%         4.000000       1.950000   15514.000000
75%        12.000000       3.750000   16931.000000
max     80995.000000   38970.000000   18287.000000
Here, you can observe that some records have a negative quantity; these correspond to cancelled or returned orders rather than real purchases. So, you need to keep only the rows where Quantity is greater than zero.
uk_data = uk_data[(uk_data['Quantity']>0)]
uk_data.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 354345 entries, 0 to 541893
Data columns (total 8 columns):
InvoiceNo      354345 non-null object
StockCode      354345 non-null object
Description    354345 non-null object
Quantity       354345 non-null int64
InvoiceDate    354345 non-null datetime64[ns]
UnitPrice      354345 non-null float64
CustomerID     354345 non-null float64
Country        354345 non-null object
dtypes: datetime64[ns](1), float64(2), int64(1), object(4)
memory usage: 24.3+ MB

Filter Required Columns

Here, you can filter the columns required for RFM analysis. You only need five columns: CustomerID, InvoiceDate, InvoiceNo, Quantity, and UnitPrice. CustomerID uniquely identifies each customer, InvoiceDate helps you calculate the recency of purchase, and InvoiceNo helps you count the number of transactions performed (frequency). The Quantity purchased in each transaction and the UnitPrice of each unit will help you calculate the total purchase amount (monetary value).
uk_data=uk_data[['CustomerID','InvoiceDate','InvoiceNo','Quantity','UnitPrice']]
uk_data['TotalPrice'] = uk_data['Quantity'] * uk_data['UnitPrice']
uk_data['InvoiceDate'].min(),uk_data['InvoiceDate'].max()
(Timestamp('2010-12-01 08:26:00'), Timestamp('2011-12-09 12:49:00'))
PRESENT = dt.datetime(2011,12,10)
uk_data['InvoiceDate'] = pd.to_datetime(uk_data['InvoiceDate'])
uk_data.head()
   CustomerID         InvoiceDate InvoiceNo  Quantity  UnitPrice  TotalPrice
0     17850.0 2010-12-01 08:26:00    536365         6       2.55       15.30
1     17850.0 2010-12-01 08:26:00    536365         6       3.39       20.34
2     17850.0 2010-12-01 08:26:00    536365         8       2.75       22.00
3     17850.0 2010-12-01 08:26:00    536365         6       3.39       20.34
4     17850.0 2010-12-01 08:26:00    536365         6       3.39       20.34
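Hard-coding the snapshot date as 10 December 2011 works here because the data ends on 9 December 2011. As a hedged alternative, you could derive the snapshot date from the data itself, for example one day after the latest invoice; the variable name snapshot_date is introduced only for illustration:
# Illustrative alternative: snapshot date derived from the data
snapshot_date = uk_data['InvoiceDate'].max() + dt.timedelta(days=1)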

RFM Analysis

Here, you are going to perform the following operations:
  • For Recency, calculate the number of days between the present date and the date of each customer's last purchase.
  • For Frequency, calculate the number of orders for each customer.
  • For Monetary, calculate the sum of the purchase amounts for each customer.
rfm= uk_data.groupby('CustomerID').agg({'InvoiceDate': lambda date: (PRESENT - date.max()).days,
                                        'InvoiceNo': lambda num: len(num),
                                        'TotalPrice': lambda price: price.sum()})
rfm.columns
Index(['InvoiceDate', 'TotalPrice', 'InvoiceNo'], dtype='object')
# Change the name of columns
rfm.columns=['monetary','frequency','recency']
rfm['recency'] = rfm['recency'].astype(int)
rfm.head()
            monetary  frequency  recency
CustomerID
12346.0          325   77183.60        1
12747.0            2    4196.01      103
12748.0            0   33719.73     4596
12749.0            3    4090.88      199
12820.0            3     942.34       59
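Note that the renaming above is positional, so it relies on the column order that agg happens to return (shown by rfm.columns). As a hedged alternative that does not depend on column order, pandas named aggregation (available from pandas 0.25 onward) assigns the output names directly; rfm_alt is an illustrative variable name, not part of the original tutorial:
# Alternative sketch: name the aggregated columns explicitly
rfm_alt = uk_data.groupby('CustomerID').agg(
    recency=('InvoiceDate', lambda date: (PRESENT - date.max()).days),
    frequency=('InvoiceNo', 'count'),   # line-item count as in the tutorial; 'nunique' would count distinct invoices
    monetary=('TotalPrice', 'sum'))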

Computing Quantiles of RFM Values

Customers with the lowest recency and the highest frequency and monetary amounts are considered top customers.
qcut() is a quantile-based discretization function: it bins the data based on sample quantiles. For example, 1,000 values split into 4 quantiles produce a categorical object indicating quantile membership for each customer.
rfm['r_quartile'] = pd.qcut(rfm['recency'], 4, ['1','2','3','4'])
rfm['f_quartile'] = pd.qcut(rfm['frequency'], 4, ['4','3','2','1'])
rfm['m_quartile'] = pd.qcut(rfm['monetary'], 4, ['4','3','2','1'])
rfm.head()
            monetary  frequency  recency r_quartile f_quartile m_quartile
CustomerID
12346.0          325   77183.60        1          1          1          1
12747.0            2    4196.01      103          4          1          4
12748.0            0   33719.73     4596          4          1          4
12749.0            3    4090.88      199          4          1          4
12820.0            3     942.34       59          3          2          4
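To see how qcut() assigns quartile labels on its own, here is a small standalone sketch with made-up values (illustration only):
# Eight made-up values split into four quartile bins;
# label '1' marks the lowest quartile, '4' the highest.
example = pd.Series([1, 3, 5, 7, 9, 11, 13, 15])
print(pd.qcut(example, 4, labels=['1', '2', '3', '4']))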

RFM Result Interpretation

Combine all three quartiles (r_quartile, f_quartile, m_quartile) into a single column; this combined score will help you segment customers into well-defined groups.
rfm['RFM_Score'] = rfm.r_quartile.astype(str)+ rfm.f_quartile.astype(str) + rfm.m_quartile.astype(str)
rfm.head()
            monetary  frequency  recency r_quartile f_quartile m_quartile RFM_Score
CustomerID
12346.0          325   77183.60        1          1          1          1       111
12747.0            2    4196.01      103          4          1          4       414
12748.0            0   33719.73     4596          4          1          4       414
12749.0            3    4090.88      199          4          1          4       414
12820.0            3     942.34       59          3          2          4       324
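With RFM_Score in place you can, for example, check how many customers fall into each segment. This step is not in the original tutorial and is shown only as a sketch:
# Number of customers per RFM segment, largest segments first
print(rfm['RFM_Score'].value_counts().head(10))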
# Filter out top/best customers
rfm[rfm['RFM_Score']=='111'].sort_values('monetary', ascending=False).head()
            monetary  frequency  recency r_quartile f_quartile m_quartile RFM_Score
CustomerID
16754.0          372     2002.4        2          1          1          1       111
12346.0          325    77183.6        1          1          1          1       111
15749.0          235    44534.3       10          1          1          1       111
16698.0          226     1998.0        5          1          1          1       111
13135.0          196     3096.0        1          1          1          1       111
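The same filtering pattern works for any other segment of interest. For example (illustrative only, the segment names below are not from the original tutorial):
# Customers in the best recency quartile but the worst frequency quartile
recent_but_infrequent = rfm[(rfm['r_quartile'] == '1') & (rfm['f_quartile'] == '4')]
# Customers whose last purchase falls in the worst recency quartile
lapsed = rfm[rfm['r_quartile'] == '4']
print(len(recent_but_infrequent), len(lapsed))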

Conclusion

Congratulations, you have made it to the end of this tutorial!
In this tutorial, you covered a lot of details about customer segmentation. You have learned what customer segmentation is, the need for customer segmentation, the types of segmentation, RFM analysis, and how to implement RFM from scratch in Python. You also covered some basic pandas concepts such as handling duplicates, groupby, and qcut() for binning based on sample quantiles.
Hopefully, you can now utilize customer segmentation to analyze your own datasets. Thanks for reading this tutorial!
