Support Questions
Find answers, ask questions, and share your expertise

Using Sandbox: at the step where phoenix_create.sh is to be run, I get an error.

New Contributor

Here is how I installed Phoenix: I downloaded apache-phoenix-4.12.0-HBase-1.1-bin.tar.gz from the Apache Phoenix site, then changed the path of psql.py in phoenix_create.sh. After enabling the Phoenix slider in the Ambari UI and changing the query timer to 3 seconds, HBase does not start all of the affected components, including the region server. I could not find anything relevant in the HBase logs or ambari-server.log; instead I found the following in:

hbase-ams-master-sandbox-hdf.hortonworks.com.log in /var/log/ambari-metrics-collector, which reads as follows:

ERROR [main] persistence.Util: Last transaction was partial.
2017-11-07 07:09:27,580 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:576)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:595)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:561)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:643)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:272)
at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:399)
at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122)
at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:253)
at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:188)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:207)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2770)
Tue Nov 7 07:15:27 UTC 2017 Starting master on sandbox-hdf.hortonworks.com
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257635
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2017-11-07 07:15:28,245 INFO [main] util.VersionInfo: HBase 1.1.2.2.6.1.0-118
2017-11-07 07:15:28,246 INFO [main] util.VersionInfo: Source code repository git://c66-9277b38c-2/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hbase revision=718c773662346de98a8ce6fd3b5f64e279cb87d4
2017-11-07 07:15:28,246 INFO [main] util.VersionInfo: Compiled by jenkins on Fri May 26 19:29:36 UTC 2017
2017-11-07 07:15:28,246 INFO [main] util.VersionInfo: From source with checksum 5325f6ee9be058d73a605fd20a4351bb
2017-11-07 07:15:28,702 WARN [main] util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
2017-11-07 07:15:28,745 INFO [main] master.HMasterCommandLine: Starting a zookeeper cluster
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:zookeeper.version=3.4.6-118--1, built on 05/26/2017 18:16 GMT
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:host.name=sandbox-hdf.hortonworks.com
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.version=1.8.0_131
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.vendor=Oracle Corporation
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-0.b11.el6_9.x86_64/jre
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.class.path=.1.0-118.jar:/usr/lib/ams-hbase//lib/hbase-common-1.1.2.2.6.1.0-118.jar:ar:..........so on..
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.library.path=/usr/lib/ams-hbase/lib/hadoop-native/
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/var/lib/ambari-metrics-collector/hbase-tmp
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:os.name=Linux
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:os.version=4.11.4-1.el7.elrepo.x86_64
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:user.name=ams
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:user.home=/home/ams
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/home/ams
2017-11-07 07:15:28,795 INFO [main] server.ZooKeeperServer: Created server with tickTime 6000 minSessionTimeout 12000 maxSessionTimeout 120000 datadir /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/zookeeper_0/version-2 snapdir /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/zookeeper_0/version-2
2017-11-07 07:15:28,825 INFO [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:61181
2017-11-07 07:15:30,437 ERROR [main] persistence.Util: Last transaction was partial.
2017-11-07 07:15:30,438 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:576)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:595)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:561)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:643)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:272)
at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:399)
at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122)
at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:253)
at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:188)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:207)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2770)
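For context, the "Last transaction was partial" message followed by java.io.EOFException usually indicates that the embedded ZooKeeper used by the Ambari Metrics Collector has a truncated transaction log, so its AMS HBase master cannot start. A possible recovery sketch, assuming the datadir shown in the "Created server" log line above (this only affects AMS's embedded HBase and discards collected metrics history, not the cluster's main HBase):

```shell
# After stopping the Ambari Metrics Collector from the Ambari UI:
# move aside the embedded ZooKeeper data directory so it is rebuilt cleanly.
# The path below is taken from the "datadir" in the log above; adjust if yours differs.
mv /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper \
   /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper.bak

# Then start the Metrics Collector again from the Ambari UI;
# it will recreate the ZooKeeper data from scratch.
```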

And the error that I receive when I run phoenix_create.sh is:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.1.0-129/apache-phoenix-4.12.0-HBase-1.1-bin/phoenix-4.12.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.1.0-129/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
17/11/07 07:09:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/07 07:09:23 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Nov 07 07:10:11 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68776: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdf.hortonworks.com,16020,1509623739162, seqNum=0

at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2454)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:261)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Nov 07 07:10:11 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68776: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdf.hortonworks.com,16020,1509623739162, seqNum=0

at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:403)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2388)
... 9 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68776: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdf.hortonworks.com,16020,1509623739162, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:411)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:717)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:897)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:866)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1208)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:328)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:379)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:201)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:63)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
... 4 more

Does anybody have any workaround suggestions? I just need Phoenix functioning; maybe my installation is wrong. I assumed that the sandbox would be pretty straightforward.

2 Replies

Re: Using Sandbox: at the step where phoenix_create.sh is to be run, I get an error.

Super Collaborator

To get Apache Phoenix 4.12 running on top of HDP, you need to rebuild it against the HDP HBase artifacts, and some modifications will be required due to API differences. I would recommend using the bundled version of Phoenix instead.
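For example, HDP ships a Phoenix client under /usr/hdp/current, so phoenix_create.sh could point at that psql.py instead of a hand-downloaded tarball. A minimal sketch, assuming the standard HDP layout and the default unsecured znode (the ZooKeeper host, port, znode, and the create_table.sql file are illustrative; verify the paths on your sandbox):

```shell
# Use the HDP-bundled Phoenix client rather than a separately downloaded build.
# Syntax: psql.py <zk_host>:<zk_port>:<znode> <sql_file>
/usr/hdp/current/phoenix-client/bin/psql.py \
    sandbox-hdf.hortonworks.com:2181:/hbase-unsecure \
    create_table.sql
```

This keeps the client and server jars at matching versions, which avoids the API mismatches mentioned above.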

Re: Using Sandbox: at the step where phoenix_create.sh is to be run, I get an error.

New Contributor

We tried to install the Phoenix package mentioned in the GUI URL, i.e. yum install -d 0 -e 0 phoenix_2_6_*, but it would not connect to the mirror. So we manually downloaded the package phoenix_2_6_1_0_129-4.7.0.2.6.1.0-129.noarch.rpm, copied it to the sandbox, and installed it. After installation we edited phoenix_create.sh to reflect the ZooKeeper host's IP address, made the file executable, and ran it. Voila!