Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2568 | 11-01-2016 05:43 PM |
| | 8501 | 11-01-2016 05:36 PM |
| | 4860 | 07-01-2016 03:20 PM |
| | 8181 | 05-25-2016 11:36 AM |
| | 4335 | 05-24-2016 05:27 PM |
11-09-2015
12:57 AM
@awatson@hortonworks.com Did it get resolved?
11-09-2015
12:47 AM
2 Kudos
@vnair@hortonworks.com If an HS2 instance fails while a client is connected, the session is lost. Since this situation needs to be handled on the client side, there is no automatic failover; the client needs to reconnect through ZooKeeper. http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hadoop-ha/content/ha-hs2-requests.html
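A minimal sketch of that reconnect, assuming a three-node ZooKeeper quorum (zk1/zk2/zk3 are placeholder hostnames) and the default hiveserver2 namespace:

```
# Connect through ZooKeeper-based service discovery rather than a fixed HS2 host.
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
# ZooKeeper resolves the URL to a live HS2 instance, so re-running the same
# connect after an HS2 failure picks up one of the surviving servers.
```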
11-08-2015
01:30 PM
1 Kudo
@Simon Elliston Ball This came up during a discussion with one of the DS groups. SPARK-1406: PMML model evaluation support via MLlib
11-08-2015
01:20 PM
@Mats Johansson Did it help?
11-08-2015
01:06 PM
1 Kudo
@Josh Elser @terry@hortonworks.com @hfaouaz@hortonworks.com Please see this.
[root@nsfed01 ~]# /usr/hdp/2.3.2.0-2950/phoenix/bin/sqlline.py n1:2181:/hbase-unsecure:neeraj
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:n1:2181:/hbase-unsecure:neeraj none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:n1:2181:/hbase-unsecure:neeraj
15/11/08 05:04:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/08 05:04:51 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
Connected to: Phoenix (version 4.4)
Driver: PhoenixEmbeddedDriver (version 4.4)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)... 93/93 (100%) Done
Done
sqlline version 1.1.8
0: jdbc:phoenix:n1:2181:/hbase-unsecure:neera>
0: jdbc:phoenix:n1:2181:/hbase-unsecure:neera> !list
1 active connection:
#0 open jdbc:phoenix:n1:2181:/hbase-unsecure:neeraj
0: jdbc:phoenix:n1:2181:/hbase-unsecure:neera>
11-08-2015
12:58 PM
2 Kudos
@skonduru@hortonworks.com Please add more information from the logs. Check ranger.audit.source.type in the Ranger configuration and make sure it is set to the audit source you want to use.
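A rough way to verify that setting; the config directory below is an assumption for an Ambari-managed HDP install, so adjust the path for your environment:

```
# Hypothetical location of the Ranger Admin configs -- adjust as needed.
grep -r "ranger.audit.source.type" /etc/ranger/admin/conf/
# Typical values are "solr" (audits read from Solr) or "db" (audits read
# from the audit database); it should match the audit store you enabled.
```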
11-08-2015
12:57 PM
@vsomani@hortonworks.com NameNode disk failure. There are a couple of "ifs":

Scenario 1 - HA + RAID 10: If HA is in place, the NameNode fails over to the standby (assuming the active NN's disk failed). If RAID 10 is also configured for the NN, you are safe and have enough time to replace the failed disk. "When a single disk in a RAID 10 disk array fails, the disk array status changes to Degraded. The disk array remains functional because the data on the failed disk is also stored on the other member of its mirrored pair. Whenever a disk fails, replace it as soon as possible. If a hot spare disk is available, the controller can rebuild the data on the disk automatically. If a hot spare disk is not available, you will need to replace the failed disk and then initiate a rebuild."

Scenario 2 - No HA, no RAID, but an NN backup is in place and "dfs.namenode.name.dir" writes to multiple disks: You are safe because the NN metadata is written to multiple disks, so you can remove the failed disk's location from Ambari and let the operator handle the disk replacement.

Scenario 3 - Bad design: no HA, no RAID, dfs.namenode.name.dir writes to a single disk: The cluster is down. Back up everything you can from the NN, have the operator replace the disk, restore the backup, and then start the troubleshooting process.

Good discussion here: 1
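As a quick way to check which scenario applies, the configured metadata directories can be listed; the example values below are illustrative, not from the original thread:

```
# Print the directories the NameNode writes its metadata to.
hdfs getconf -confKey dfs.namenode.name.dir
# e.g. /hadoop/hdfs/namenode,/mnt/disk2/hdfs/namenode  -> metadata on two disks (scenario 2)
# e.g. /hadoop/hdfs/namenode                           -> single disk, single point of failure (scenario 3)
```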
11-07-2015
09:44 PM
@hrongali@hortonworks.com Please update the thread if you find anything new.
11-07-2015
11:12 AM
@bdurai@hortonworks.com Is there a workaround to disable the HADOOP_USER_NAME feature? Also, I noticed that HADOOP_USER_NAME is not honored all the time. In one of my setups, I have LDAP auth in place for HDFS and HS2, and the HADOOP_USER_NAME feature does not work, "thankfully".
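For context, a sketch of the behavior in question under simple (non-Kerberos) authentication; the target user and path are illustrative:

```
# With simple authentication, any shell user can impersonate another HDFS user
# just by setting an environment variable:
HADOOP_USER_NAME=hdfs hdfs dfs -ls /user
# With Kerberos (or when HS2 authenticates against LDAP), the authenticated
# identity is used instead and the variable is effectively ignored.
```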
11-07-2015
02:55 AM
@Pardeep thanks for sharing!