1 vote

I am completely new to Phoenix and have probably missed something forehead-slappingly simple.

  • HBase is up

    21:44:23/sprue $ps -ef | grep HMaster

    501 55936 55922 0 9:50PM ttys014 0:18.12 /Library/Java/JavaVirtualMachines/jdk1.8.0_71.jdk/Contents/Home/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -Djava.net.preferIPv4Stack=true - .. -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start

  • and we can connect to it via hbase shell and query stuff:

    hbase(main):010:0> scan 't1'
    ROW                     COLUMN+CELL
     r1                     column=f1:c1, timestamp=1469077174795, value=val1
    1 row(s) in 0.0370 seconds

I then copied the Phoenix 4.7.0 jar to the $HBASE_HOME/lib directory, restarted HBase, and tried to connect via sqlline.py:

$sqlline.py mellyrn.local:2181

Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:mellyrn.local:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:mellyrn.local:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/shared/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/07/20 22:03:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1603)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1535)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1452)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:429)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:52195)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.except

..

Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks

Any hints on what is needed to bring Phoenix up would be helpful.


2 Answers

2 votes

The above exception is thrown when the HBase master cannot load the Phoenix server jar. Even though the Phoenix installation instructions say to just restart the region servers, that is not enough: copy the Phoenix server jar to the HBase master and any backup masters, the same as on the region servers, and restart all of them.
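A minimal sketch of that rollout, assuming the standard start/stop scripts, passwordless SSH, and an identical $HBASE_HOME on every node (the hostnames below are placeholders):

    # Server jar from the Phoenix binary distribution (path taken from your log).
    SERVER_JAR=/shared/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-server.jar

    # Placeholder hostnames -- substitute your master, backup masters,
    # and region servers.
    for host in master1 backup-master1 regionserver1 regionserver2; do
        scp "$SERVER_JAR" "$host:$HBASE_HOME/lib/"
    done

    # Restart every daemon so each one reloads its classpath.
    $HBASE_HOME/bin/stop-hbase.sh
    $HBASE_HOME/bin/start-hbase.sh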

1 vote

Check $HBASE_HOME/lib and $HBASE_HOME/conf/hbase-site.xml on HMaster.
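For example, to confirm the server jar actually made it into the master's lib directory (the jar name is an assumption based on the client jar in your log):

    $ ls $HBASE_HOME/lib | grep phoenix
    phoenix-4.7.0-HBase-1.1-server.jar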

When a Phoenix client first connects, it creates four system tables:

SYSTEM.CATALOG
SYSTEM.FUNCTION
SYSTEM.SEQUENCE
SYSTEM.STATS
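Once the connection succeeds, you can confirm they exist from sqlline (!tables is a built-in sqlline command); the four SYSTEM tables above should appear in the listing:

    0: jdbc:phoenix:mellyrn.local:2181> !tables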

The SYSTEM.CATALOG and SYSTEM.FUNCTION tables declare the coprocessor org.apache.phoenix.coprocessor.MetaDataEndpointImpl, but it seems your HMaster cannot load it.
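To see whether the coprocessor is actually registered on the table descriptor, you can check from the hbase shell (the exact attribute formatting varies by HBase version):

    hbase(main):001:0> describe 'SYSTEM.CATALOG'

The table attributes in the output should list org.apache.phoenix.coprocessor.MetaDataEndpointImpl; if the class is declared there but missing from the master's classpath, the master throws exactly the DoNotRetryIOException you are seeing.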