PySpark SQL Functions

PySpark SQL provides several built-in standard functions in the pyspark.sql.functions module to work with DataFrame and SQL queries. All of these functions return the pyspark.sql.Column type.

PySpark SQL Function Introduction

In order to use these SQL standard functions, you need to import the below package into your application.


# sql functions import
from pyspark.sql.functions import col, current_date, concat_ws  # list the functions you need, comma-separated

PySpark SQL also includes additional built-in functions that are less common and are not listed here. You can still access them (and all the functions defined here) using the functions.expr() API and calling them through a SQL expression string. You can find the entire list of functions in the SQL API documentation. regr_count is an example of a function that is built-in but not listed here because it is less commonly used. To invoke it, use expr("regr_count(yCol, xCol)").
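For example, here is a minimal sketch of invoking regr_count through expr(). The sample DataFrame and column names y and x are invented for illustration, and regr_count requires a Spark version that ships it:

# invoke a built-in SQL function that has no dedicated Python wrapper
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.appName("expr-example").getOrCreate()
df = spark.createDataFrame([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)], ["y", "x"])

# the function is called by name inside a SQL expression string
df.select(expr("regr_count(y, x)").alias("n")).show()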

When possible, try to leverage the PySpark SQL standard library functions: they are safer at query-compile time, handle nulls correctly, and perform better than UDFs. If your application is performance-critical, avoid custom UDFs wherever possible, as their performance is not guaranteed.
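As an illustration, here is a small sketch comparing the built-in upper() with an equivalent Python UDF (sample data invented; reuses the spark session from the snippet above). The built-in version is optimized by Catalyst and handles nulls for you, while the UDF must handle them itself:

# built-in function vs. an equivalent UDF (illustrative)
from pyspark.sql.functions import udf, upper
from pyspark.sql.types import StringType

df = spark.createDataFrame([("alice",), (None,)], ["name"])

# built-in: optimized by Catalyst and null-safe out of the box
df.select(upper("name")).show()

# UDF: opaque to the optimizer and must handle nulls itself
to_upper = udf(lambda s: s.upper() if s is not None else None, StringType())
df.select(to_upper("name")).show()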

PySpark groups all of these SQL functions into the categories below. Click on a category for the list of functions, syntax, descriptions, and examples.

Note: This page is a work in progress; please visit again if you are looking for more functions.

PySpark SQL Functions

In the sections below, I will explain a few functions from each category. For the complete list, refer to the official Spark documentation.

PySpark SQL String Functions

String functions are grouped as "string_funcs" in Spark SQL. Below is a list of the most commonly used functions defined under this group, and a short usage example follows the list. Click on each link to learn more with a PySpark example.

concat_ws(sep, *cols) – Concatenates multiple string columns into a single string with the specified separator.
format_number(col, d) – Formats the number to a format like '#,###,###.##', rounded to d decimal places, and returns the result as a string.
format_string(format, *cols) – Formats the input columns in printf-style and returns the result as a string.
length(col) – Returns the length of the input string column.
lower(col) – Converts all characters in a string to lowercase.
upper(col) – Converts all characters in a string to uppercase.
lpad(col, len, pad) – Pads the string column on the left side with the specified pad character to a width of len.
rpad(col, len, pad) – Pads the string column on the right side with the specified pad character to a width of len.
ltrim(col) – Removes the spaces on the left side.
rtrim(col) – Removes the spaces on the right side.
repeat(col, n) – Returns a new string after repeating the column n times.
split(str, pattern[, limit]) – Splits the string by the specified pattern.
substring(str, pos, len) – Returns the substring from the string column.
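A quick illustrative example of a few of these (sample data and column names invented; reuses the spark session from above):

# string functions example
from pyspark.sql.functions import concat_ws, lower, lpad, substring

df = spark.createDataFrame([("James", "Smith")], ["fname", "lname"])
df.select(
    concat_ws(" ", "fname", "lname").alias("full_name"),  # James Smith
    lower("fname").alias("lower_fname"),                  # james
    lpad("fname", 8, "*").alias("padded"),                # ***James
    substring("lname", 1, 3).alias("first3")              # Smi
).show()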

Datetime Functions

add_months(start, months) – Returns the date that is months months after start.
current_date() – Returns the current date at the start of query evaluation as a DateType column.
current_timestamp() – Returns the current timestamp at the start of query evaluation as a TimestampType column.
date_add(start, days) – Returns the date that is days days after start.
date_format(date, format) – Converts a date/timestamp/string to a string in the format specified by the second argument.
date_sub(start, days) – Returns the date that is days days before start.
date_trunc(format, timestamp) – Returns the timestamp truncated to the unit specified by the format.
datediff(end, start) – Returns the number of days from start to end.
dayofmonth(col) – Extracts the day of the month of a given date as an integer.
dayofweek(col) – Extracts the day of the week of a given date as an integer.
dayofyear(col) – Extracts the day of the year of a given date as an integer.
second(col) – Extracts the seconds of a given date as an integer.
weekofyear(col) – Extracts the week number of a given date as an integer.
year(col) – Extracts the year of a given date as an integer.
quarter(col) – Extracts the quarter of a given date as an integer.
month(col) – Extracts the month of a given date as an integer.
last_day(date) – Returns the last day of the month to which the given date belongs.
minute(col) – Extracts the minutes of a given date as an integer.
months_between(date1, date2[, roundOff]) – Returns the number of months between date1 and date2.
next_day(date, dayOfWeek) – Returns the first date later than the date column that falls on the specified day of the week.
hour(col) – Extracts the hours of a given date as an integer.
make_date(year, month, day) – Returns a date column built from the year, month, and day columns.
from_unixtime(timestamp[, format]) – Converts the number of seconds from the Unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone, in the given format.
unix_timestamp([timestamp, format]) – Converts a time string with the given pattern ('yyyy-MM-dd HH:mm:ss' by default) to a Unix timestamp in seconds, using the default timezone and locale; returns null on failure.
to_timestamp(col[, format]) – Converts a column into pyspark.sql.types.TimestampType using the optionally specified format.
to_date(col[, format]) – Converts a column into pyspark.sql.types.DateType using the optionally specified format.
trunc(date, format) – Returns the date truncated to the unit specified by the format.
from_utc_timestamp(timestamp, tz) – Interprets a timestamp as UTC and converts it to the given time zone; a common function for databases supporting TIMESTAMP WITHOUT TIMEZONE.
to_utc_timestamp(timestamp, tz) – Interprets a timestamp as being in the given time zone and converts it to UTC; a common function for databases supporting TIMESTAMP WITHOUT TIMEZONE.
window(timeColumn, windowDuration[, …]) – Bucketizes rows into one or more time windows given a timestamp column.
session_window(timeColumn, gapDuration) – Generates a session window given a timestamp column.
timestamp_seconds(col) – Converts the number of seconds from the Unix epoch to a timestamp (new in version 3.1.0).
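A short illustrative example of a few datetime functions (sample data invented):

# datetime functions example
from pyspark.sql.functions import current_date, date_add, datediff, to_date, year

df = spark.createDataFrame([("2024-01-15",)], ["d"])
df.select(
    to_date("d").alias("as_date"),                     # 2024-01-15 as DateType
    year(to_date("d")).alias("yr"),                    # 2024
    date_add(to_date("d"), 10).alias("plus_10_days"),  # 2024-01-25
    datediff(current_date(), to_date("d")).alias("days_ago")
).show()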

Collection Functions

array_contains(col, value) – Collection function: returns null if the array is null, true if the array contains the given value, and false otherwise.
arrays_overlap(a1, a2) – Collection function: returns true if the arrays contain any common non-null element; if not, returns null if both arrays are non-empty and either contains a null element; returns false otherwise.
slice(x, start, length) – Collection function: returns an array containing all the elements in x from index start (array indices start at 1, or from the end if start is negative) with the specified length.
array_join(col, delimiter[, null_replacement]) – Concatenates the elements of the column using the delimiter.
concat(*cols) – Concatenates multiple input columns together into a single column.
array_position(col, value) – Collection function: locates the position of the first occurrence of the given value in the given array.
element_at(col, extraction) – Collection function: returns the element of the array at the given index in extraction if col is an array.
array_sort(col) – Collection function: sorts the input array in ascending order.
array_remove(col, element) – Collection function: removes all elements equal to element from the given array.
array_distinct(col) – Collection function: removes duplicate values from the array.
array_intersect(col1, col2) – Collection function: returns an array of the elements in the intersection of col1 and col2, without duplicates.
array_union(col1, col2) – Collection function: returns an array of the elements in the union of col1 and col2, without duplicates.
array_except(col1, col2) – Collection function: returns an array of the elements in col1 but not in col2, without duplicates.
transform(col, f) – Returns an array of elements after applying a transformation to each element in the input array.
exists(col, f) – Returns whether a predicate holds for one or more elements in the array.
forall(col, f) – Returns whether a predicate holds for every element in the array.
filter(col, f) – Returns an array of elements for which a predicate holds in a given array.
aggregate(col, initialValue, merge[, finish]) – Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
zip_with(left, right, f) – Merges two given arrays, element-wise, into a single array using a function.
transform_keys(col, f) – Applies a function to every key-value pair in a map and returns a map with the results as the new keys.
transform_values(col, f) – Applies a function to every key-value pair in a map and returns a map with the results as the new values.
map_filter(col, f) – Returns a map whose key-value pairs satisfy a predicate.
map_zip_with(col1, col2, f) – Merges two given maps, key-wise, into a single map using a function.
explode(col) – Returns a new row for each element in the given array or map.
explode_outer(col) – Returns a new row for each element in the given array or map; unlike explode, it produces a row with null if the array or map is null or empty.
posexplode(col) – Returns a new row for each element with its position in the given array or map.
posexplode_outer(col) – Returns a new row for each element with its position in the given array or map; unlike posexplode, it produces a row with nulls if the array or map is null or empty.
get_json_object(col, path) – Extracts a JSON object from a JSON string based on the specified JSON path, and returns the JSON string of the extracted object.
json_tuple(col, *fields) – Creates a new row for a JSON column according to the given field names.
from_json(col, schema[, options]) – Parses a column containing a JSON string into a MapType with StringType keys, a StructType, or an ArrayType with the specified schema.
schema_of_json(json[, options]) – Parses a JSON string and infers its schema in DDL format.
to_json(col[, options]) – Converts a column containing a StructType, an ArrayType, or a MapType into a JSON string.
size(col) – Collection function: returns the length of the array or map stored in the column.
sort_array(col[, asc]) – Collection function: sorts the input array in ascending or descending order according to the natural ordering of the array elements.
array_max(col) – Collection function: returns the maximum value of the array.
array_min(col) – Collection function: returns the minimum value of the array.
shuffle(col) – Collection function: generates a random permutation of the given array.
reverse(col) – Collection function: returns a reversed string or an array with the elements in reverse order.
flatten(col) – Collection function: creates a single array from an array of arrays.
sequence(start, stop[, step]) – Generates a sequence of integers from start to stop, incrementing by step.
array_repeat(col, count) – Collection function: creates an array containing a column repeated count times.
map_keys(col) – Collection function: returns an unordered array containing the keys of the map.
map_values(col) – Collection function: returns an unordered array containing the values of the map.
map_entries(col) – Collection function: returns an unordered array of all entries in the given map.
map_from_entries(col) – Collection function: returns a map created from the given array of entries.
arrays_zip(*cols) – Collection function: returns a merged array of structs in which the N-th struct contains all N-th values of the input arrays.
map_concat(*cols) – Returns the union of all the given maps.
from_csv(col, schema[, options]) – Parses a column containing a CSV string into a row with the specified schema.
schema_of_csv(csv[, options]) – Parses a CSV string and infers its schema in DDL format.
to_csv(col[, options]) – Converts a column containing a StructType into a CSV string.
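A short illustrative example of a few collection functions (sample data invented):

# collection functions example
from pyspark.sql.functions import array_contains, explode, size, sort_array

df = spark.createDataFrame([([3, 1, 2],)], ["nums"])
df.select(
    size("nums").alias("len"),                # 3
    sort_array("nums").alias("sorted"),       # [1, 2, 3]
    array_contains("nums", 2).alias("has_2")  # true
).show()

# explode() flattens the array into one row per element
df.select(explode("nums").alias("num")).show()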

Math Functions

sqrt(col) – Computes the square root of the specified float value.
abs(col) – Computes the absolute value.
acos(col) – Computes the inverse cosine of the input column.
acosh(col) – Computes the inverse hyperbolic cosine of the input column.
asin(col) – Computes the inverse sine of the input column.
asinh(col) – Computes the inverse hyperbolic sine of the input column.
atan(col) – Computes the inverse tangent of the input column.
atanh(col) – Computes the inverse hyperbolic tangent of the input column.
atan2(col1, col2) – Computes the angle in radians between the positive x-axis and the point (col2, col1) (new in version 1.4.0).
bin(col) – Returns the string representation of the binary value of the given column.
cbrt(col) – Computes the cube root of the given value.
ceil(col) – Computes the ceiling of the given value.
conv(col, fromBase, toBase) – Converts a number in a string column from one base to another.
cos(col) – Computes the cosine of the input column.
cosh(col) – Computes the hyperbolic cosine of the input column.
cot(col) – Computes the cotangent of the input column.
csc(col) – Computes the cosecant of the input column.
exp(col) – Computes the exponential of the given value.
expm1(col) – Computes the exponential of the given value minus one.
factorial(col) – Computes the factorial of the given value.
floor(col) – Computes the floor of the given value.
hex(col) – Computes the hex value of the given column, which could be pyspark.sql.types.StringType, pyspark.sql.types.BinaryType, pyspark.sql.types.IntegerType, or pyspark.sql.types.LongType.
unhex(col) – Inverse of hex.
hypot(col1, col2) – Computes sqrt(a^2 + b^2) without intermediate overflow or underflow.
log(arg1[, arg2]) – Returns the logarithm of the second argument in the base given by the first argument (natural logarithm if only one argument is given).
log10(col) – Computes the base-10 logarithm of the given value.
log1p(col) – Computes the natural logarithm of the given value plus one.
log2(col) – Returns the base-2 logarithm of the argument.
pow(col1, col2) – Returns the value of the first argument raised to the power of the second argument.
rint(col) – Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
round(col[, scale]) – Rounds the given value to scale decimal places using HALF_UP rounding mode if scale >= 0, or at the integral part when scale < 0.
bround(col[, scale]) – Rounds the given value to scale decimal places using HALF_EVEN rounding mode if scale >= 0, or at the integral part when scale < 0.
sec(col) – Computes the secant of the input column.
shiftleft(col, numBits) – Shifts the given value numBits left.
shiftright(col, numBits) – (Signed) shifts the given value numBits right.
shiftrightunsigned(col, numBits) – Unsigned shifts the given value numBits right.
signum(col) – Computes the signum of the given value.
sin(col) – Computes the sine of the input column.
sinh(col) – Computes the hyperbolic sine of the input column.
tan(col) – Computes the tangent of the input column.
tanh(col) – Computes the hyperbolic tangent of the input column.
toDegrees(col) – Deprecated since version 2.1.0; use degrees() instead.
degrees(col) – Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
toRadians(col) – Deprecated since version 2.1.0; use radians() instead.
radians(col) – Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
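A short illustrative example of a few math functions (sample data invented; note that importing round and pow shadows the Python built-ins in this scope):

# math functions example
from pyspark.sql.functions import ceil, pow, round, sqrt

df = spark.createDataFrame([(4.0, 2.345)], ["a", "b"])
df.select(
    sqrt("a").alias("sqrt_a"),       # 2.0
    ceil("b").alias("ceil_b"),       # 3
    round("b", 1).alias("round_b"),  # 2.3
    pow("a", 2).alias("a_squared")   # 16.0
).show()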

Aggregate Functions
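This category covers functions such as count(), sum(), avg(), min(), and max(), typically used with groupBy(). A minimal illustrative sketch (sample data invented; importing sum and max shadows the Python built-ins in this scope):

# aggregate functions example
from pyspark.sql.functions import avg, count, max, sum

df = spark.createDataFrame(
    [("Sales", 3000), ("Sales", 4600), ("IT", 3900)], ["dept", "salary"]
)
df.groupBy("dept").agg(
    count("*").alias("n"),
    sum("salary").alias("total"),
    avg("salary").alias("average"),
    max("salary").alias("highest")
).show()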

Window Functions
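Window functions such as row_number(), rank(), dense_rank(), lag(), and lead() operate over a window defined with pyspark.sql.window.Window. A minimal illustrative sketch, reusing the dept/salary DataFrame from the aggregate sketch above:

# window functions example
from pyspark.sql.window import Window
from pyspark.sql.functions import lag, rank, row_number

w = Window.partitionBy("dept").orderBy("salary")
df.select(
    "dept", "salary",
    row_number().over(w).alias("row_num"),
    rank().over(w).alias("rnk"),
    lag("salary", 1).over(w).alias("prev_salary")
).show()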

Sort Functions
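Sort functions such as asc(), desc(), asc_nulls_first(), and desc_nulls_last() control ordering in sort() and orderBy(). A minimal illustrative sketch (sample data invented):

# sort functions example
from pyspark.sql.functions import asc, desc, desc_nulls_first

df = spark.createDataFrame([("a", 2), ("b", None), ("c", 1)], ["k", "v"])
df.sort(asc("k")).show()               # ascending by k
df.sort(desc("v")).show()              # descending by v; nulls come last by default
df.sort(desc_nulls_first("v")).show()  # descending by v, with nulls first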

UDF Functions

pandas_udf([f, returnType, functionType]) – Creates a pandas user-defined function (a.k.a. vectorized UDF).
udf([f, returnType]) – Creates a user-defined function (UDF).
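A short sketch showing both flavors (toy logic and invented data; pandas UDFs require PyArrow to be installed):

# udf vs. pandas_udf example
import pandas as pd
from pyspark.sql.functions import pandas_udf, udf
from pyspark.sql.types import StringType

# plain Python UDF: invoked once per row
@udf(returnType=StringType())
def title_case(s):
    return s.title() if s is not None else None

# pandas UDF: invoked once per batch of rows (vectorized, usually faster)
@pandas_udf(StringType())
def title_case_vec(s: pd.Series) -> pd.Series:
    return s.str.title()

df = spark.createDataFrame([("john doe",)], ["name"])
df.select(title_case("name"), title_case_vec("name")).show()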

Conclusion

In this post, I have listed several commonly used built-in PySpark SQL functions, along with their usage, syntax, and examples.

Happy Learning !!
