1 vote

This error message shows up:

Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: java.net.ConnectException Call From undefined.hostname.localhost/192.168.xx.xxx to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused);

I don't understand why it is connecting to localhost:9000; the host in my core-site.xml is hdfs://192.168.xx.xx:9000, so why does it go to localhost:9000?
Is that the default host?
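
For reference, a minimal core-site.xml with the setting described above would look roughly like this (the IP is the placeholder from the question, not a confirmed value):

<configuration>
<property>
<name>fs.defaultFS</name>
<!-- default filesystem URI used by HDFS clients -->
<value>hdfs://192.168.xx.xx:9000</value>
</property>
</configuration>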

5 · Please show your /etc/hosts file content – OneCricketeer

127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
192.168.60.194 hadoop00
192.168.60.168 hadoop01
192.168.60.172 hadoop02
– Horizon Zy

5 Answers

1 vote

Please make sure that hive-site.xml is present in your Spark configuration directory /etc/spark/conf/ and that the Hive settings below are configured.

## core-site.xml 
fs.defaultFS

## Hive config 
hive.metastore.uris
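
As a rough sketch (the host below is a placeholder, not a value from the question), the hive-site.xml placed under /etc/spark/conf/ mainly needs to tell Spark where the metastore Thrift service runs:

<configuration>
<property>
<name>hive.metastore.uris</name>
<!-- placeholder host; 9083 is the usual metastore Thrift port -->
<value>thrift://metastore-host:9083</value>
</property>
</configuration>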
0 votes

In hive-site.xml, you can configure the properties as follows. Please configure your Hive metastore details.

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ip-xx.xx.xx.xx:8020</value>
</property>

<property>
<name>hive.metastore.uris</name>
<value>thrift://ip-xx.xx.xx:9083</value>
</property>

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
<description>metadata is stored in a MySQL server</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>user name for connecting to mysql server </description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password for connecting to mysql server </description>
</property>
</configuration>
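
After editing hive-site.xml, the new settings have to be picked up again; assuming the metastore runs as a standalone service, one common way is:

# restart the standalone Hive metastore service so it reads the new hive-site.xml
hive --service metastore &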

0 votes

Your error is related to HDFS, not Hive or SparkSQL.

You need to ensure that HADOOP_HOME or HADOOP_CONF_DIR is correctly set up in spark-env.sh if you want to connect to the correct Hadoop environment rather than use the defaults.
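
As a minimal sketch, assuming Hadoop is installed under /opt/hadoop (adjust the paths to your installation), spark-env.sh would contain:

# spark-env.sh: point Spark at the Hadoop client configuration
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop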

0 votes

I reset the metastore in MySQL. When I initialized the metastore, my core-site.xml still used localhost, so I reset the metastore and the problem was solved.
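
As an alternative to resetting the metastore, Hive's metatool can rewrite the HDFS URIs stored in it; a sketch, assuming your Hive version provides the -listFSRoot and -updateLocation options and that the stale URI is hdfs://localhost:9000 as in the error:

# show the filesystem root currently recorded in the metastore
hive --service metatool -listFSRoot
# replace the stale localhost URI with the real NameNode address
hive --service metatool -updateLocation hdfs://192.168.xx.xx:9000 hdfs://localhost:9000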

0 votes

First, go to the MySQL command line and drop the database (metastore) which you set in your hive-site.xml.

Then change directory to $HIVE_HOME/bin and execute schematool -initSchema -dbType mysql, and the problem is solved. The error was due to the metastore in MySQL being stale: I had initialized the metastore in a standalone environment and later moved to the cluster environment, but the old metastore was still in place, so I could create tables in Hive but not in SparkSQL.
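
Put together, the steps look roughly like this (the database name, user, and password are the placeholders used in the answers above):

# drop the stale metastore database in MySQL
mysql -u hive -p -e "DROP DATABASE metastore;"

# re-initialize the metastore schema against MySQL
cd $HIVE_HOME/bin
./schematool -initSchema -dbType mysql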

Thanks to everyone who helped me. @Ravikumar, @cricket_007