value_counts(): Pandas DataFrame

value_counts() returns the unique values of a column and their frequencies as a Series.

Parameters

normalize : Default is False. If set to True, relative frequencies (proportions) of unique values are returned instead of counts.
sort : Default is True; sort by frequency of unique values.
ascending : Default is False; set to True to sort in ascending order.
bins : Integer (number of equal-width bins), list of bin edges for non-uniform bins, or IntervalIndex; groups values into bins instead of counting unique values.
dropna : Default is True; do not include counts of NaN.
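For reference, a minimal sketch showing all of these parameters together with their default values (it assumes the my_data DataFrame created in the examples below):
# all parameters of value_counts() with their default values
my_data['CLASS1'].value_counts(normalize=False, sort=True,
                               ascending=False, bins=None, dropna=True)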

Examples with Parameters

value_counts() will return all unique values with their number of occurrences. We have different classes in our data column CLASS1.
import pandas as pd 
my_dict={'NAME':['Ravi','Raju','Alex','Ron','King','Jack'],
         'ID':[1,2,3,4,5,6],
         'MATH':[80,40,70,70,60,30],
         'CLASS1':['Four','Three','Three','Four','Five','Three']}
my_data = pd.DataFrame(data=my_dict)
print(my_data['CLASS1'].value_counts())
Output
Three    3
Four     2
Five     1
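The result is a pandas Series indexed by the unique values, so a single count can be looked up by its label (a small sketch using the same my_data):
counts = my_data['CLASS1'].value_counts()
print(counts['Three'])  # number of rows with CLASS1 == 'Three', here 3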

normalize

By default it is False. With normalize=True, relative frequencies (proportions) of unique values are returned instead of the number of occurrences.
print(my_data['CLASS1'].value_counts(normalize=True))
Output
Three     0.500000
Four      0.333333
Five      0.166667
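To read the result as percentages instead of proportions, the normalized counts can be multiplied by 100 (a small additional sketch, not part of the original example):
pct = my_data['CLASS1'].value_counts(normalize=True) * 100
print(pct.round(2))  # Three 50.0, Four 33.33, Five 16.67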

sort

By default it is True ( sort=True ), sorting by frequency of unique values. With sort=False the counts are not ordered by frequency.
my_data = pd.DataFrame(data=my_dict)
print(my_data['CLASS1'].value_counts(sort=False))
Output
Five     1
Three    3
Four     2
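To order the result by the class labels themselves rather than by their frequency, the counts Series can be sorted on its index (a small sketch):
counts = my_data['CLASS1'].value_counts(sort=False)
print(counts.sort_index())  # Five, Four, Three in alphabetical order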

ascending

By default ascending=False; we will set ascending=True to sort the counts in ascending order.
print(my_data['CLASS1'].value_counts(ascending=True))
Output
Five     1
Four     2
Three    3
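When only the most or the least frequent class is needed, the counts Series can be queried directly (a small sketch using the same data):
counts = my_data['CLASS1'].value_counts()
print(counts.idxmax())  # most frequent class, here 'Three'
print(counts.idxmin())  # least frequent class, here 'Five'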

bins

We can create segments using bins. Passing an integer creates that many equal-width bins.
print(my_data['MATH'].value_counts(bins=3))
Output
(63.333, 80.0]                  3
(29.948999999999998, 46.667]    2
(46.667, 63.333]                1
Non-uniform width bins can be created by passing a list of bin edges.
print(my_data['MATH'].value_counts(bins=[1,50,70,90]))
Output

(50.0, 70.0]     3
(0.999, 50.0]    2
(70.0, 90.0]     1
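Passing bins to value_counts() is roughly equivalent to cutting the column with pd.cut() first; doing the cut explicitly also lets us attach custom labels to the bins (a small sketch; the label names Low, Medium and High are only illustrative):
grades = pd.cut(my_data['MATH'], bins=[1, 50, 70, 90],
                labels=['Low', 'Medium', 'High'])
print(grades.value_counts())  # count of rows falling in each labelled bin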

dropna

The default value is True: counts of NaN are not included. We will set it to False ( dropna=False ) to include NaN values.
import pandas as pd 
import numpy as np
my_dict={'NAME':['Ravi','Raju','Alex','Ron','King','Jack'],
         'ID':[1,2,3,4,5,6],
         'MATH':[80,40,70,70,np.nan,30],
         'CLASS1':['Four','Three','Three','Four','Five','Three']}
my_data = pd.DataFrame(data=my_dict)
print(my_data['MATH'].value_counts(dropna=False))
Output
70.0    2
30.0    1
NaN     1
40.0    1
80.0    1
We can set it back to True ( dropna=True ), the default, to exclude NaN from the counts.
print(my_data['MATH'].value_counts(dropna=True))
Output
70.0    2
30.0    1
40.0    1
80.0    1
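The number of missing values in a column can also be checked on its own, without value_counts() (a small sketch using isna()):
print(my_data['MATH'].isna().sum())  # number of NaN values in MATH, here 1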