Member since
09-29-2015
286
Posts
601
Kudos Received
60
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 11508 | 03-21-2017 07:34 PM
| 2912 | 11-16-2016 04:18 AM
| 1625 | 10-18-2016 03:57 PM
| 4297 | 09-12-2016 03:36 PM
| 6280 | 08-25-2016 09:01 PM
01-30-2016
11:55 AM
@Adrian Savory See the following: https://community.hortonworks.com/questions/7739/permission-denied-and-no-such-file-or-directory.html

sudo su - hdfs
hdfs dfs -mkdir /user/admin
hdfs dfs -chown root:hdfs /user/admin
01-29-2016
02:51 AM
1 Kudo
@John Smith You should add this as a separate question so other folks can find it.
The sandbox was not shut down gracefully, which is why you are getting this error. Delete the file at /var/run/ambari-server/ambari-server.pid.
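To illustrate why the stale PID file causes the error, here is a minimal, safe-to-run sketch of the check-and-remove pattern. The temp directory and the short-lived `sleep` process are stand-ins for /var/run/ambari-server and the dead Ambari server process, so nothing here touches a real installation.

```shell
# Stand-in for /var/run/ambari-server/ambari-server.pid (temp dir, safe to run).
PID_FILE="$(mktemp -d)/ambari-server.pid"

# A short-lived process records its PID and exits, leaving the file stale --
# the same state an ungraceful shutdown leaves the real PID file in.
sleep 0 &
echo $! > "$PID_FILE"
wait

# If the recorded PID no longer belongs to a live process, the file is stale
# and can be deleted safely:
REMOVED=no
if ! kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
  rm -f "$PID_FILE"
  REMOVED=yes
fi
echo "removed=$REMOVED"
```

On the sandbox the fix is the one-liner above in spirit: remove the stale file, then start ambari-server again.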
01-27-2016
10:39 PM
1 Kudo
Is the mthal user the new user?
Did you follow all the steps for Ranger LDAP SSL configuration?
See https://community.hortonworks.com/questions/1018/how-to-configure-ranger-usync-for-ldap-ssl.html and https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_Ranger_Install_Guide/content/configuring_ranger_for_ldap_ssl.html
01-27-2016
09:12 PM
1 Kudo
@Cassandra Is this the Ranger UI admin password?
Is the Ranger UI LDAP-authenticated, or still Unix-authenticated?
If it is not LDAP-authenticated, change the password in the database; see https://community.hortonworks.com/questions/4408/is-there-any-way-to-reset-ranger-admin-ui-password.html. If it is LDAP-authenticated, then admin (or a similar user) should already exist in IPA; change the password there.
01-27-2016
07:03 PM
2 Kudos
I haven't looked at your log files yet, but see the following steps, which solve the common issues:
- Increase the heap size in ams-env: metrics_collector_heapsize = 1024
- Set timeline.metrics.service.default.result.limit = 15840
- Restart the Collector

See:
https://cwiki.apache.org/confluence/display/AMBARI/Configurations+-+Tuning
https://community.hortonworks.com/articles/11805/how-to-solve-ambari-metrics-corrupted-data.html
https://community.hortonworks.com/questions/8928/ambari-metrics-1.html
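For reference, the two settings above live in different configuration groups in Ambari; a sketch of where each typically sits (the values are the ones from the steps above, and your cluster may need different numbers):

```
# Ambari Metrics -> Configs -> Advanced ams-env
metrics_collector_heapsize = 1024

# Ambari Metrics -> Configs -> Advanced ams-site
timeline.metrics.service.default.result.limit = 15840
```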
01-27-2016
01:16 AM
3 Kudos
@Raja Sekhar Chintalapati There is no Spark authentication against LDAP in a non-kerberized environment. If a Spark job reads from HDFS and the user running the job does not have sufficient HDFS permissions, Spark will fail to read the data.
Spark HiveContext does not connect to HiveServer2. It connects to the Hive metastore once you provide the Hive configuration (hive-site.xml) to Spark; otherwise it creates its own metastore in its working directory.
I don't know a way to suppress the INFO messages in spark-sql.
The Spark UI typically runs on the node with the Driver, on port 4040. You can define ports for the Driver, File Server, Executor, UI, etc. See the Spark configuration docs: https://spark.apache.org/docs/1.1.0/configuration.html and, for YARN mode, see http://spark.apache.org/docs/latest/security.html
Example:

SPARK_MASTER_OPTS="-Dspark.driver.port=7001 -Dspark.fileserver.port=7002
-Dspark.broadcast.port=7003 -Dspark.replClassServer.port=7004
-Dspark.blockManager.port=7005 -Dspark.executor.port=7006
-Dspark.ui.port=4040 -Dspark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory"
SPARK_WORKER_OPTS="-Dspark.driver.port=7001 -Dspark.fileserver.port=7002
-Dspark.broadcast.port=7003 -Dspark.replClassServer.port=7004
-Dspark.blockManager.port=7005 -Dspark.executor.port=7006
-Dspark.ui.port=4040 -Dspark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory"

Programmatic example:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
val conf = new SparkConf()
.setMaster(master)
.setAppName("namexxx")
.set("spark.driver.port", "7001")
.set("spark.fileserver.port", "7002")
.set("spark.broadcast.port", "7003")
.set("spark.replClassServer.port", "7004")
.set("spark.blockManager.port", "7005")
.set("spark.executor.port", "7006")
val sc = new SparkContext(conf)
01-26-2016
05:54 PM
13 Kudos
Sometimes Ambari Metrics stops displaying data in the dashboards. Or you may be getting timeout issues that were not solved (see https://community.hortonworks.com/questions/4726/ambari-metric-collector-error-sending-metric-to-se.html). At other times, especially on a new install, you may see this error in the Collector log:

2016-01-25 15:33:31,195 WARN org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource: Unable to connect to HBase store using Phoenix.
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG

This is usually due to the AMS data being corrupt. To recover:
- Shut down the Ambari Metrics Monitors and Collector via Ambari.
- Clear out the /var/lib/ambari-metrics-collector directory for a fresh restart.
- From Ambari -> Ambari Metrics -> Config -> Advanced ams-hbase-site, get the hbase.rootdir and hbase-tmp directories. Delete or move the hbase-tmp and hbase.rootdir directories to an archive folder.
- Start AMS. All services will come online, and the graphs will start to display after a few minutes.
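The cleanup steps above can be sketched as a shell sequence. The directories here are stand-ins created with mktemp so the sketch is safe to run anywhere; on a real cluster, substitute /var/lib/ambari-metrics-collector and the actual hbase.rootdir / hbase-tmp values from ams-hbase-site, and stop and start AMS through Ambari before and after.

```shell
# Stand-ins for the real AMS directories (safe to run; on a cluster these
# would be /var/lib/ambari-metrics-collector, hbase.rootdir, and hbase-tmp).
COLLECTOR_DIR=$(mktemp -d); touch "$COLLECTOR_DIR/checkpoint"
HBASE_ROOTDIR=$(mktemp -d); touch "$HBASE_ROOTDIR/data"
HBASE_TMPDIR=$(mktemp -d);  touch "$HBASE_TMPDIR/wal"

# Archive folder to move the possibly-corrupt state into:
ARCHIVE=$(mktemp -d)

# Clear the collector dir for a fresh restart, and move the HBase dirs aside:
rm -rf "${COLLECTOR_DIR:?}"/*
mv "$HBASE_ROOTDIR" "$ARCHIVE/hbase-rootdir.bak"
mv "$HBASE_TMPDIR"  "$ARCHIVE/hbase-tmp.bak"
```

Moving rather than deleting keeps an archive copy you can inspect or restore if the fresh start does not fix the dashboards.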
01-26-2016
05:47 PM
1 Kudo
I just had this issue, and this is how it was solved: I added hbase.zookeeper.property.tickTime = 6000 to ams-hbase-site and then restarted AMS.
01-26-2016
12:47 PM
4 Kudos
We installed an HDP 2.3.4 cluster with Ambari 2.2. The HBase Master and RegionServers start, but after some time the HBase Master shuts down. The log file says: 2016-01-25 14:46:47,340 WARN [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/master
2016-01-25 14:46:47,340 ERROR [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2016-01-25 14:46:47,340 WARN [master/node03.test.com/x.x.x.x:16000] zookeeper.ZKUtil: master:16000-0x3527a1898200012, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, baseZNode=/hbase-unsecure Unable to get data of znode /hbase-unsecure/master
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:745)
at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:148)
at org.apache.hadoop.hbase.master.ActiveMasterManager.stop(ActiveMasterManager.java:267)
at org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1164)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1071)
at java.lang.Thread.run(Thread.java:745)
2016-01-25 14:46:47,340 ERROR [master/node03.test.com/x.x.x.x:16000] zookeeper.ZooKeeperWatcher: master:16000-0x3527a1898200012, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, baseZNode=/hbase-unsecure Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:745)
at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:148)
at org.apache.hadoop.hbase.master.ActiveMasterManager.stop(ActiveMasterManager.java:267)
at org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1164)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1071)
at java.lang.Thread.run(Thread.java:745)
2016-01-25 14:46:47,340 ERROR [master/node03.test.com/x.x.x.x:16000] master.ActiveMasterManager: master:16000-0x3527a1898200012, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, baseZNode=/hbase-unsecure Error deleting our own master address node
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:745)
at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:148)
at org.apache.hadoop.hbase.master.ActiveMasterManager.stop(ActiveMasterManager.java:267)
at org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1164)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1071)
at java.lang.Thread.run(Thread.java:745)
2016-01-25 14:46:47,341 INFO [master/node03.test.com/x.x.x.x:16000] hbase.ChoreService: Chore service for: node03.test.com,16000,1453750627948_splitLogManager_ had [] on shutdown
2016-01-25 14:46:47,341 INFO [master/node03.test.com/x.x.x.x:16000] flush.MasterFlushTableProcedureManager: stop: server shutting down.
2016-01-25 14:46:47,342 INFO [master/node03.test.com/x.x.x.x:16000] ipc.RpcServer: Stopping server on 16000
2016-01-25 14:46:47,342 INFO [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: stopping
2016-01-25 14:46:47,343 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2016-01-25 14:46:47,343 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2016-01-25 14:46:47,345 WARN [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/rs/node03.test.com,16000,1453750627948
2016-01-25 14:46:48,345 WARN [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/rs/node03.test.com,16000,1453750627948
2016-01-25 14:46:50,345 WARN [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/rs/node03.test.com,16000,1453750627948
2016-01-25 14:46:54,346 WARN [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/rs/node03.test.com,16000,1453750627948
2016-01-25 14:47:02,346 WARN [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node03.test.com:2181,node02.test.com:2181,node01.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/rs/node03.test.com,16000,1453750627948
2016-01-25 14:47:02,346 ERROR [master/node03.test.com/x.x.x.x:16000] zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 4 attempts
2016-01-25 14:47:02,347 WARN [master/node03.test.com/x.x.x.x:16000] regionserver.HRegionServer: Failed deleting my ephemeral node
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/rs/node03.test.com,16000,1453750627948
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:178)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1345)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1334)
at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1403)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1079)
at java.lang.Thread.run(Thread.java:745)
2016-01-25 14:47:02,350 INFO [master/node03.test.com/x.x.x.x:16000] regionserver.HRegionServer: stopping server node03.test.com,16000,1453750627948; zookeeper connection closed.
2016-01-25 14:47:02,351 INFO [master/node03.test.com/x.x.x.x:16000] regionserver.HRegionServer: master/node03.test.com/x.x.x.x:16000 exiting
What steps do I take to solve this?
Labels:
- Apache HBase
01-26-2016
04:06 AM
2 Kudos
Do the following:

yum clean all
yum clean dbcache
yum clean metadata
yum makecache
rpm --rebuilddb
yum history new

@David Yee Are you following any instructions to install on AWS? It seems that this is the help you really need. See the answer to https://community.hortonworks.com/questions/10728/ambari-fails-to-register.html or check out these resources:
- Deploying Hadoop Cluster Amazon ec2 Hortonworks
- Looking for Steps to Install HDP on AWS