Member since: 03-22-2016
Posts: 24
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1461 | 05-10-2017 02:58 PM |
11-30-2017
09:22 PM
screen1.png
11-30-2017
09:21 PM
Hi, I have set up Dr.Elephant on my HDP 2.6 cluster and changed /opt/dr-elephant/compile.conf with the following: hadoop_version=2.7.3 and spark_version=1.6.3. The application started successfully; I have verified the logs. However, when I run any MapReduce job, it is not reflected in the Dr.Elephant UI. I have attached a screenshot of the UI. Thanks
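For reference, here is roughly how I verified the setup; a minimal sketch, assuming the default HDP JobHistory Server port 19888 and a log directory under /opt/dr-elephant (both may differ on other installs):

```bash
# Confirm the versions set in compile.conf (values taken from this post)
grep -E 'hadoop_version|spark_version' /opt/dr-elephant/compile.conf

# As far as I understand, Dr.Elephant pulls finished MapReduce jobs from the
# JobHistory Server REST API, so check it is reachable from the Dr.Elephant host.
# 19888 is the default mapreduce.jobhistory.webapp.address port - adjust if changed.
curl -s "http://$(hostname -f):19888/ws/v1/history/mapreduce/jobs?limit=5"

# Look for fetcher errors in the Dr.Elephant logs (log path is an assumption)
tail -n 100 /opt/dr-elephant/logs/*.log
```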
Tags:
- dr-elephant
05-10-2017
02:58 PM
It turned out to be a problem with file permissions. The umask was not set to 022, so ambari-infra could not access its logs and configuration. The error message was misleading, as it pointed to a Kerberos error.
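For anyone who hits the same thing, a rough sketch of the checks and fix on my node; the ambari-infra paths and the infra-solr user/group below are the defaults on my cluster and are assumptions for yours:

```bash
# The umask should be 022 so files created during startup stay readable
umask

# Set it for the session and persist it (e.g. in /etc/profile), then retry the start
umask 022

# Repair anything created while the umask was too strict
# (paths and ownership are assumptions - verify them on your cluster first)
chown -R infra-solr:hadoop /var/log/ambari-infra-solr /etc/ambari-infra-solr/conf
chmod -R o+rX /etc/ambari-infra-solr/conf
```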
05-04-2017
02:04 PM
@Wynner I am using HDF 2.1.1.0
05-04-2017
01:44 PM
I tried both ways, but I still get the same error. Even zkCli.sh fails with Auth_Failed.
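For reference, this is how I was invoking zkCli.sh while debugging; a sketch, and the JAAS config and keytab paths are the ones on my cluster, so treat them as assumptions:

```bash
# Point the ZooKeeper client at the Kerberos JAAS config and enable krb5 debug output
export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/etc/zookeeper/conf/zookeeper_client_jaas.conf -Dsun.security.krb5.debug=true"

# kinit first, then connect; the Auth_Failed shows up on the first ZooKeeper operation
kinit -kt /etc/security/keytabs/zk.service.keytab zookeeper/$(hostname -f)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server $(hostname -f):2181
```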
05-03-2017
10:01 PM
Thank you @Wynner. I have the hosts files in the format you mention, with the FQDN followed by the short name. However, my hostname is set to the short name (node1) without the domain. Would this be an issue?
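For context, these are the checks I ran on the node; only node1.example.com is a made-up example domain, the rest is standard:

```bash
# Kerberos builds service principals from the fully qualified name, so
# 'hostname -f' should return the FQDN even when 'hostname' returns the short name
hostname        # currently returns the short name: node1
hostname -f     # should return the FQDN, e.g. node1.example.com (example domain)

# /etc/hosts should list the FQDN first, then the short name, for each node
getent hosts node1
```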
05-03-2017
09:04 PM
I have enabled Kerberos on the HDF cluster. When starting ambari-infra, it errors out due to a ZooKeeper failure. I have confirmed that the JAAS files are updated correctly, and I am able to kinit using both zk.service.keytab and ambari-infra-solr.service.keytab. When solrCloudCli.sh is invoked by Ambari, the following error is reported: "Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7)). org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /clusterprops.json". I have attached the Solr client logs: solor-error-log.txt Thanks,
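A few additional checks I ran before posting; a sketch, and the keytab paths and the infra-solr principal name are the ones on my cluster, so treat them as assumptions:

```bash
# Confirm which principals the keytabs actually contain
klist -kt /etc/security/keytabs/zk.service.keytab
klist -kt /etc/security/keytabs/ambari-infra-solr.service.keytab

# "Server not found in Kerberos database" usually means the client asked the KDC
# for zookeeper/<resolved-hostname>, so check that exact principal exists
kinit -kt /etc/security/keytabs/ambari-infra-solr.service.keytab infra-solr/$(hostname -f)
kvno zookeeper/$(hostname -f)

# Forward and reverse lookup should both agree with the FQDN used in the principal
getent hosts $(hostname -f)
```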
Labels:
- Apache Ambari
12-07-2016
11:33 PM
Thank you @Josh Elser 🙂
12-07-2016
08:16 PM
Thanks @Josh Elser. I analyzed the issue further and found that the problem is with ZooKeeper SASL. After enabling Kerberos, ZooKeeper expects ports 2888-3888 to be open between all three ZooKeeper servers. However, I hadn't opened that port range, so a SASL error was thrown even for a simple ./zkCli.sh command. I have asked the customer to open the port range. Please let me know if this is not correct. Regards,
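For completeness, this is roughly how we verified and opened the ports; a sketch assuming firewalld on RHEL/CentOS 7, with made-up hostnames zk1/zk2/zk3.example.com:

```bash
# From each ZooKeeper node, check the peer (2888) and leader-election (3888)
# ports on the other ZooKeeper nodes
for host in zk1.example.com zk2.example.com zk3.example.com; do
  nc -zv "$host" 2888
  nc -zv "$host" 3888
done

# If they are blocked, open the range on every ZooKeeper node and reload
firewall-cmd --permanent --add-port=2888-3888/tcp
firewall-cmd --reload
```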
12-07-2016
10:05 AM
HBase is throwing an exception after enabling Kerberos:

2016-12-07 10:33:07,963 ERROR [main-SendThread(y.server.com:2181)] client.ZooKeeperSaslClient: SASL authentication failed using login context 'Client'.
2016-12-07 10:33:08,068 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
 at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2290)
 at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
 at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
 at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2304)
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
 at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:576)
 at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:555)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1313)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1291)

I connected to ZooKeeper with the following command and could not find an "hbase-secure" znode; only the "hbase" znode exists:

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server x.server.com,y.server.com,z.server.com get /
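For reference, the exact commands I used to inspect ZooKeeper and the HBase config; the hbase-site.xml path is the HDP default on my cluster and is an assumption:

```bash
# 'ls /' lists the root's children, which is where /hbase-secure should appear
# once the master can authenticate ('get /' only prints the root znode's data)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh \
  -server x.server.com,y.server.com,z.server.com ls /

# Confirm which parent znode HBase was reconfigured to use when Kerberos was enabled
grep -A1 'zookeeper.znode.parent' /usr/hdp/current/hbase-master/conf/hbase-site.xml
```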
Labels:
- Apache HBase
12-06-2016
06:51 PM
Perfect. This is exactly what I needed. Thank you @Sunile Manjee
12-06-2016
03:03 PM
@mqureshi - Thank you. I figured out that the memory settings were wrong for Tez. I fixed them, and it now works perfectly fine. The following link was helpful as well - http://www.hadoopadmin.co.in/hive/tez-job-fails-with-vertex-failure-error/
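For anyone who lands on this thread, these are the kinds of settings I had to correct; a sketch only, with placeholder memory values and a placeholder table name, not recommendations:

```bash
# Re-run the failing query with explicit Tez AM/container sizes to confirm the fix
# before making it permanent in hive-site.xml / tez-site.xml through Ambari
hive --hiveconf hive.execution.engine=tez \
     --hiveconf hive.tez.container.size=4096 \
     --hiveconf tez.am.resource.memory.mb=4096 \
     --hiveconf tez.task.resource.memory.mb=4096 \
     -e "SELECT COUNT(*) FROM some_table;"
```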
12-06-2016
02:46 PM
I am planning to do a rolling upgrade of the cluster from HDP 2.3.0 to 2.5.3. Is this option available? I need to do this with the least downtime possible, so I cannot use an Express Upgrade. If it is available, can someone please provide a link to the documentation?
Labels:
- Hortonworks Data Platform (HDP)
12-06-2016
02:02 PM
Thank you @mqureshi. I have set the flag to false and used the 'hive' user instead of 'hdpuser001', so it now shows "run as user is hive". However, the Hive query still fails when the execution engine is set to Tez. It works perfectly fine with MR. There are not many logs available either.
12-05-2016
10:08 AM
1 Kudo
Hi, I am trying to run a simple Hive query, and it keeps failing with the following error. To simplify things, I have disabled authentication/authorization for Hive, but I still get the same error. It runs as user 'nobody'.

Container exited with a non-zero exit code 1 ]], TaskAttempt 3 failed, info=[Container container_e12_1480595328764_0023_01_000005 finished with diagnostics set to [Container failed, exitCode=1. Exception from container-launch.
Container id: container_e12_1480595328764_0023_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
 at org.apache.hadoop.util.Shell.run(Shell.java:456)
 at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
 at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:367)
 at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
 at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
Shell output:
main : command provided 1
main : run as user is nobody
main : requested yarn user is hdpuser001

Thanks,
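To pull whatever the failed container did manage to log, I used the YARN log CLI; the application id below is derived from the container id in the error above, and log aggregation must be enabled for this to work:

```bash
# container_e12_1480595328764_0023_01_000005 belongs to application_1480595328764_0023,
# so fetch the aggregated logs for that application once it has finished
yarn logs -applicationId application_1480595328764_0023 > app_0023.log

# Then jump to the stderr of the failing container
grep -A 40 'container_e12_1480595328764_0023_01_000005' app_0023.log | less
```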
Tags:
- Data Processing
- Hive
Labels:
- Apache Hive
11-21-2016
10:30 AM
In the Cloudbreak shell, there is an option 'ldapconfig'. Can this be used to configure Ranger UserSync? Can someone please provide the usage of this command? Thanks
Labels:
- Hortonworks Cloudbreak
11-14-2016
12:16 PM
Thank you @rkovacs
11-14-2016
12:15 PM
Thank you @rdoktorics
11-14-2016
11:56 AM
I need to launch an HDP 2.5 cluster with the Cloudbreak shell. As per the documentation, the option to enable Kerberos is available in the Cloudbreak UI (https://community.hortonworks.com/questions/27669/files-view-configuration-with-kerberos-cloudbreak.html). However, I couldn't find anything in the source code or the CLI documentation. Is it supported yet?
Labels:
- Hortonworks Cloudbreak