Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3173 | 12-25-2018 10:42 PM |
| | 14192 | 10-09-2018 03:52 AM |
| | 4763 | 02-23-2018 11:46 PM |
| | 2481 | 09-02-2017 01:49 AM |
| | 2912 | 06-21-2017 12:06 AM |
05-03-2016 05:03 PM
1 Kudo
Hi @Chokroma Tusira, try removing all jars from your Oozie share/lib Spark directory in HDFS except those listed here. In the case of HDP-2.4 you can start with just these two:

oozie-sharelib-spark-4.2.0.2.4.0.0-169.jar
spark-assembly-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar

In <spark-opts> I set only the number of executors and their memory, and it worked without hdp.version. Then restart Oozie and retry your Spark action.
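A minimal sketch of that cleanup, assuming the default sharelib location under /user/oozie/share/lib and the HDP-2.4.0.0-169 jar names above (adjust lib_<timestamp> to match your sharelib directory):

```
# Back up the two required jars locally before clearing the directory.
hdfs dfs -get /user/oozie/share/lib/lib_<timestamp>/spark/oozie-sharelib-spark-4.2.0.2.4.0.0-169.jar .
hdfs dfs -get /user/oozie/share/lib/lib_<timestamp>/spark/spark-assembly-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar .
# Remove everything from the Spark sharelib dir, then restore the two jars.
hdfs dfs -rm /user/oozie/share/lib/lib_<timestamp>/spark/*.jar
hdfs dfs -put oozie-sharelib-spark-*.jar spark-assembly-*.jar /user/oozie/share/lib/lib_<timestamp>/spark/
# Tell Oozie to pick up the updated sharelib, then restart Oozie.
oozie admin -oozie http://localhost:11000/oozie -sharelibupdate
```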
05-03-2016 02:02 PM
(+1) Hi Felix, also check that /usr/hdp/current/zookeeper-client points to /usr/hdp/2.3.2.0-2950/zookeeper, and that the directory /etc/zookeeper/2.3.2.0-2950/0 exists; if not, create it. ZK config files like zoo.cfg and zookeeper-env.sh will be stored there.
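A quick way to verify, assuming the HDP 2.3.2.0-2950 paths from above:

```
# The symlink should point at the versioned ZooKeeper directory.
ls -l /usr/hdp/current/zookeeper-client
# Create the config directory if it is missing; zoo.cfg and
# zookeeper-env.sh will be stored there.
mkdir -p /etc/zookeeper/2.3.2.0-2950/0
```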
05-03-2016 01:46 PM
In your /etc/hosts, move the line "172.17.0.2 node1" from the top to line 2:

127.0.0.1 localhost
172.17.0.2 node1

Then run "hostname"; it should print "node1". If it doesn't, run "hostname node1". Also check your hostname in the /etc/sysconfig/network file. Finally, as Ian suggested, check whether the ResourceManager is up and running and listening on ports 8030 and 8050 (and a few others).
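A short sketch of those checks, using the node1 hostname and the default ResourceManager ports mentioned above:

```
# The hostname should be "node1"; set it if it is not.
hostname
hostname node1
grep HOSTNAME /etc/sysconfig/network
# Confirm the ResourceManager is listening on its scheduler (8030)
# and client (8050) ports.
netstat -tlnp | grep -E ':(8030|8050)'
```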
05-03-2016 06:29 AM
Hi Rohit, can you inspect the /etc/apt/ directory on those 4 nodes? It looks like they contain *.list files pointing at the HDP-2.2.0 repo. If so, move them out of /etc/apt and retry.
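For example, something along these lines (the backup directory is just an illustration):

```
# Find any apt list files still referencing the HDP-2.2.0 repo...
grep -rl 'HDP-2.2.0' --include='*.list' /etc/apt
# ...move them out of the way, then refresh the package index.
mkdir -p /root/apt-list-backup
grep -rl 'HDP-2.2.0' --include='*.list' /etc/apt | xargs -I{} mv {} /root/apt-list-backup/
apt-get update
```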
05-02-2016 02:27 PM
Hi Juan, could you consider accepting my answer if you found it useful? Tnx!
05-02-2016 01:17 PM
2 Kudos
Each version of Ambari has a default HDP version for each stack version; in your case, for stack 2.3, it appears to be 2.3.0.0. You can control that by setting Ambari's repository URLs. First inspect your current settings, for example in the case of RedHat/CentOS 6.x, from the Ambari server:

curl -u admin:admin -H "X-Requested-By: ambari" http://localhost:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-2.3
curl -u admin:admin -H "X-Requested-By: ambari" http://localhost:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20

To change the settings, issue the same two curl commands against the same URLs using the HTTP PUT method, uploading the respective JSON bodies, one for the HDP repo and one for HDP-UTILS:

{
  "Repositories" : {
    "base_url" : "<HDP-2.3.4.7_REPO_BASE_URL>",
    "verify_base_url" : true
  }
}

You can find the repo URLs in the Ambari install document; however, my recommendation is to download and create local repos to speed up the install phase. After setting the repos, re-deploy the cluster.
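A sketch of the PUT call, assuming the JSON body above is saved in a file named repo.json (the file name is just an illustration; repeat with the HDP-UTILS URL and its own body):

```
# Point the HDP-2.3 repo at the new base URL.
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X PUT -d @repo.json \
  http://localhost:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-2.3
```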
05-01-2016 10:36 AM
3 Kudos
Hi @Juan Rodriguez Hortala, in the latest Sandbox the default user is called maria_dev, and it's a read-only user. To enable the admin user, ssh into your instance and run "ambari-admin-password-reset". You can also find these and other related instructions (how to access Ranger, Atlas, etc.) on the Sandbox startup page at http://localhost:8888
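For example, assuming the Sandbox's usual SSH port forwarding of 2222 (check your VM settings if it differs):

```
# Log into the Sandbox and set a password for the Ambari admin user.
ssh root@localhost -p 2222
ambari-admin-password-reset
```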
05-01-2016 12:46 AM
1 Kudo
Only the YARN distributedshell application supports specifying node labels on the command line. For MR jobs like pi and wordcount, create a queue, set its default node label, and submit your MR job to that queue.
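A sketch of both routes; the label name "gpu", the queue name "labeled", and the jar paths are hypothetical placeholders for your cluster:

```
# distributedshell accepts a node label directly on the command line.
yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
  -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
  -shell_command hostname -node_label_expression gpu
# MR jobs instead inherit the default label of the queue they run in.
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
  pi -Dmapreduce.job.queuename=labeled 10 100
```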
04-30-2016 09:45 AM
+1 Another solution is to comment out hive.tez.java.opts in that SQL file and manage the GC settings from Ambari.
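For illustration, one way to comment the override out of the script (the file name is hypothetical):

```
# Turn the hive.tez.java.opts override into a SQL comment, so GC settings
# are managed centrally from Ambari instead.
sed -i 's/^set hive.tez.java.opts/-- set hive.tez.java.opts/' your_script.sql
```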
04-29-2016 07:46 AM
Can you check your dfs.datanode.data.dir setting and confirm that the directories listed there correspond to your disk mount points? The setting applies to all DataNodes in the cluster, so all of them must have the same disk mount configuration.
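A quick comparison, runnable on each DataNode:

```
# Print the configured DataNode data directories...
hdfs getconf -confKey dfs.datanode.data.dir
# ...and compare them against the actual mount points.
df -h
```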