3
votes

I am fairly new to Hadoop (HDFS and HBase) and the Hadoop ecosystem (Hive, Pig, Impala, etc.). I have a good understanding of Hadoop components such as the NameNode, DataNodes, Job Tracker, and Task Trackers, and how they work in tandem to store data in an efficient manner.

While trying to understand the fundamentals of a data access layer such as Hive, I need to understand where exactly a table's data (created in Hive) gets stored. We can create external and internal tables in Hive. As external tables can point to data in HDFS or another file system, Hive doesn't store the data for such tables in its warehouse. What about internal tables? Such a table will be created as a directory in HDFS on the Hadoop cluster. Once we load data into these tables from the local or HDFS file system, are further files created to store the data of tables created in Hive?

Say for example:

  1. A sample file named test_emp_feedback.csv was brought from local file system to HDFS.
  2. A table (emp_feedback) was created in Hive with a structure similar to the CSV file's structure. This leads to the creation of a directory in the Hadoop cluster, say /users/big_data/hive/emp_feedback
  3. Now, once I have created the table, I load data into the emp_feedback table from test_emp_feedback.csv

Is Hive going to create a copy of the file in the emp_feedback directory? Won't it cause data redundancy?
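For concreteness, a minimal sketch of steps 2–3 in Hive-QL (the column names and the CSV's HDFS path are made up for illustration):

```sql
-- Step 2: create a managed (internal) table matching the CSV layout
CREATE TABLE emp_feedback (
  emp_id INT,        -- hypothetical column
  feedback STRING    -- hypothetical column
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',';

-- Step 3: load the file that was already copied to HDFS (path is hypothetical)
LOAD DATA INPATH '/users/big_data/test_emp_feedback.csv'
INTO TABLE emp_feedback;
```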


5 Answers

1
votes

Creating a managed table will create a directory with the same name as the table name in the Hive warehouse directory (usually at /user/hive/warehouse/dbname/tablename). The table structure (Hive metadata) is also created in the metastore (RDBMS/HCatalog).

Before you load data into the table, this directory (with the same name as the table name under the Hive warehouse) is empty.

There are two possible scenarios:

  1. If the table is external, the data is not copied to the warehouse directory at all.

  2. If the table is managed (not external), when you load your data into the table it is moved (not copied) from its current HDFS location to the Hive warehouse directory (/user/hive/warehouse/dbname/tablename). So this will not duplicate the data.

Caution: It is always advisable to create an external table unless the data is used only by Hive. Dropping a managed table deletes the data from HDFS (Hive's warehouse).
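If in doubt, you can ask Hive where a table's data lives and whether it is managed or external (a sketch, using the table name from the question):

```sql
-- Shows the table's Location and Table Type (MANAGED_TABLE vs EXTERNAL_TABLE)
DESCRIBE FORMATTED emp_feedback;
```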

HadoopGig

1
votes

To answer your question:

For External Tables:

Hive does not move the data into its warehouse directory. If the external table is dropped, then the table metadata is deleted but not the data.

For Internal Tables:

Hive moves data into its warehouse directory. If the table is dropped, then the table metadata and the data will be deleted.
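A minimal sketch contrasting the two behaviors (table names and paths are illustrative):

```sql
-- External: DROP deletes only the metadata; files under LOCATION survive
CREATE EXTERNAL TABLE ext_demo (id INT)
LOCATION '/data/ext_demo';
DROP TABLE ext_demo;      -- /data/ext_demo remains in HDFS

-- Managed: DROP deletes the metadata AND the warehouse files
CREATE TABLE managed_demo (id INT);
DROP TABLE managed_demo;  -- its warehouse directory is deleted
```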

For your reference


Difference between Internal & External tables:

For External Tables

An external table stores its files on HDFS, but the table is only loosely coupled to the source files.

If you delete an external table the file still remains on the HDFS server.

As an example, if you create an external table called “table_test” in Hive using Hive-QL and link the table to file “file”, then deleting “table_test” from Hive will not delete “file” from HDFS.
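In Hive-QL, that example would look roughly like this (the column and the HDFS path are placeholders; note that LOCATION points at a directory containing the file):

```sql
CREATE EXTERNAL TABLE table_test (col1 STRING)
LOCATION '/path/containing/file';  -- hypothetical directory holding "file"

DROP TABLE table_test;  -- "file" is NOT deleted from HDFS
```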

External table files are accessible to anyone who has access to HDFS file structure and therefore security needs to be managed at the HDFS file/folder level.

Metadata is maintained on the master node, and deleting an external table from Hive only deletes the metadata, not the data/file.

For Internal Tables

Internal tables are stored in a directory based on the hive.metastore.warehouse.dir setting; by default this is /user/hive/warehouse. You can change the location by updating that property in the configuration file.

Deleting the table deletes the metadata from the master node and the data from HDFS. Internal table file security is controlled solely via Hive, so security needs to be managed within Hive, probably at the schema level (depending on the organization).
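You can inspect the warehouse location from the Hive shell, and override it per table if needed (a sketch; the custom path is hypothetical):

```sql
-- Print the current value of the warehouse directory property
SET hive.metastore.warehouse.dir;

-- Per-table override: place even a managed table outside the default warehouse
CREATE TABLE t_custom (id INT)
LOCATION '/custom/path/t_custom';  -- hypothetical path
```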

Hive may have internal or external tables; this is a choice that affects how data is loaded, controlled, and managed.

Use EXTERNAL tables when:

  1. The data is also used outside of Hive. For example, the data files are read and processed by an existing program that doesn't lock the files.
  2. Data needs to remain in the underlying location even after a DROP TABLE. This can apply if you are pointing multiple schemas (tables or views) at a single data set or if you are iterating through various possible schemas.
  3. Hive should not own the data and control settings, directories, etc.; you may have another program or process that will do those things.
  4. You are not creating a table based on an existing table (AS SELECT).

Use INTERNAL tables when:

  1. The data is temporary.
  2. You want Hive to completely manage the life-cycle of the table and data.

Source:

HDInsight: Hive Internal and External Tables Intro

Internal & external tables in Hadoop- HIVE

0
votes

It would not cause data redundancy. For managed (not external) tables, Hive moves the data into its warehouse directory. In your example, the data will be moved from its original HDFS location to '/users/big_data/hive/emp_feedback'. Be careful with removal of a managed table; it will also remove the data from HDFS.

0
votes

You can load the data in two ways:

A) Use LOAD DATA INPATH 'file_location_of_csv' INTO TABLE emp_feedback; Note that this command will move the file out of the source directory into the warehouse of an internal (managed) table.

or

B) Use the copyFromLocal or put command to copy the local file into HDFS, then create an external table over it. In this case the data won't be moved from the source. You can drop the external table, and the source data will still be available.

e.g.

create external table emp_feedback (
  emp_id int,
  emp_name string
)
row format delimited
fields terminated by ','     -- needed so Hive can parse the CSV columns
location '/location_in_hdfs_for_csv file';

When you drop an external table, it only drops the metadata of the Hive table. The data still exists at the HDFS file location.

0
votes

Got it. This is what I was able to understand so far.

It all depends upon which type of table is being created (internal or external) and where the file is picked up from (the local file system or HDFS), as covered in the answers above.