Hive DROP TABLE & DATABASE Explained

In Hive, the DROP TABLE statement is used to drop a table and the DROP DATABASE statement is used to drop a database. In this article, I will explain different examples of dropping tables and databases.

Hive Drop Table

Use DROP TABLE to drop a table. As in any other RDBMS, dropping a table in Hive removes the table definition from the Hive Metastore and, for internal tables, its data from the Hive warehouse location.

Hive has Internal and External tables.

When you drop an Internal table, Hive removes the table metadata from the Metastore and deletes its data files from the warehouse HDFS location.

Dropping an External table removes just the table definition from the Metastore; the actual data in HDFS is not removed.
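As a minimal sketch of the difference (the table names and HDFS location here are illustrative, not from any particular setup):

-- Internal (managed) table: DROP TABLE removes both the metadata and the data files
CREATE TABLE emp_internal (id INT, name STRING);

-- External table: DROP TABLE removes only the metadata; the files under
-- the LOCATION path stay in HDFS (the path is illustrative)
CREATE EXTERNAL TABLE emp_external (id INT, name STRING)
LOCATION '/user/hive/data/emp_external';

DROP TABLE emp_internal;  -- metadata and warehouse data are gone
DROP TABLE emp_external;  -- metadata is gone, the HDFS files remain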

DROP TABLE Syntax


DROP TABLE [IF EXISTS] table_name [PURGE];

DATABASE and SCHEMA can be used interchangeably in Hive as both refer to the same thing. In this article, I am using DATABASE, but you can use SCHEMA instead.
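For example, the following two statements are equivalent (using the emp database from the examples below):

DROP DATABASE IF EXISTS emp;
DROP SCHEMA IF EXISTS emp;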

DROP TABLE Examples

The example below drops the employee table from the Hive metastore.


DROP TABLE employee;

If the table exists in a database, specify the database name. In the below example, emp is the database and employee is the table.


DROP TABLE emp.employee;

Using IF EXISTS

Dropping a table that doesn’t exist returns an error. You can change this behavior with the optional IF EXISTS clause, which executes the drop only when the table is present in the Hive database.


DROP TABLE IF EXISTS employee;

Using PURGE

If Trash is enabled and the hive.warehouse.data.skipTrash property is set to false, dropping an Internal table moves the actual data to the user’s .Trash directory; you can recover this data after the drop.

You can use the PURGE option to skip the .Trash directory; the data is permanently removed and cannot be recovered.


DROP TABLE IF EXISTS employee PURGE;

Note: Configuring Trash only retains the actual data in the .Trash directory; the metadata in the metastore is removed and cannot be restored.

Hive Drop Database

Use the DROP DATABASE statement to drop a database in Hive. By default, you can’t drop a database that has tables, but you can override this using optional clauses.

DROP DATABASE Syntax


DROP DATABASE [IF EXISTS] database_name [RESTRICT|CASCADE];

Hive DROP DATABASE supports several optional clauses that change the behavior of the statement.

  • IF EXISTS – Use IF EXISTS to check that the database exists before running the drop database statement.
  • RESTRICT – The default behavior; DROP DATABASE will fail if the database is not empty.
  • CASCADE – Use CASCADE if you want to drop all tables before dropping the database (see the combined example after this list).
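The clauses can be combined. As a quick sketch (using the emp database from the examples below), the following drops emp and all of its tables only if the database exists:

DROP DATABASE IF EXISTS emp CASCADE;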

DROP DATABASE Examples

Below are some of the most used Hive DROP DATABASE commands. The example below drops the emp database.


DROP DATABASE emp;

Using IF EXISTS

If the specified database is not present, Hive returns an error. To check that the database exists before running the DROP command, use the optional IF EXISTS clause.


DROP DATABASE IF EXISTS emp;

Using CASCADE

In order to drop a database, it should be empty, meaning it should not contain tables, views, or any other objects. If you try to drop a database that has tables, you get the error Database <dbname> is not empty. One or more tables exist.


DROP DATABASE emp;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. InvalidOperationException(message:Database emp is not empty. One or more tables exist.)

You can change this behavior by using the optional CASCADE clause.


DROP DATABASE emp CASCADE;

Using RESTRICT

The default behavior is RESTRICT, where DROP DATABASE will fail if the database is not empty.
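Since RESTRICT is the default, the statement below behaves the same as DROP DATABASE emp; and fails if emp still contains tables:

DROP DATABASE emp RESTRICT;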

Hope you like this article !!

Happy Learning !!

Naveen Nelamali

Naveen Nelamali (NNK) is a Data Engineer with 20+ years of experience in transforming data into actionable insights. Over the years, he has honed his expertise in designing, implementing, and maintaining data pipelines with frameworks like Apache Spark, PySpark, Pandas, R, Hive, and Machine Learning. Naveen’s journey in the field of data engineering has been one of continuous learning, innovation, and a strong commitment to data integrity. In this blog, he shares his experiences with data as he comes across them. Follow Naveen @ LinkedIn and Medium
