Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2729 | 04-27-2020 03:48 AM |
| | 5287 | 04-26-2020 06:18 PM |
| | 4458 | 04-26-2020 06:05 PM |
| | 3584 | 04-13-2020 08:53 PM |
| | 5383 | 03-31-2020 02:10 AM |
09-14-2017
11:32 AM
@Juan Vares As we can see the following message:

WARN [ambari-action-scheduler] ExecutionCommandWrapper:185 - Unable to lookup the cluster by ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1

The above seems to be causing the issue later on, with the NullPointerException thrown at [1]:

[1] https://github.com/apache/ambari/blob/release-2.5.1/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/Stage.java#L630

So at this point I guess we have two options to proceed further.

QUICK OPTION (Simple One)
As this is a fresh cluster that we are setting up, it is better to run "ambari-server reset" to clean the Ambari DB and then recreate the cluster from scratch:

# ambari-server stop
# ambari-server reset
# ambari-server start

OTHER OPTION (Complicated One)
If we want to debug what is causing the NPE, then we will have to look at a few DB tables. It looks like the cluster ID got into a bad state due to a few attempts at cluster creation. Can you please share the output of the following SQL queries on the Ambari DB?

# psql -U ambari ambari
Password for user ambari: bigdata
ambari=> SELECT repo_version_id, stack_id, version, display_name FROM repo_version;
ambari=> SELECT * FROM clusters;
ambari=> SELECT * FROM cluster_version;
ambari=> SELECT * FROM host_version;
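Since "ambari-server reset" wipes all cluster state from the Ambari database, taking a quick backup first is a cheap safety net. A minimal sketch, assuming the default embedded PostgreSQL setup with the "ambari" database and user referenced in the psql example above (adjust the connection details for an external DB):

```bash
# Back up the Ambari DB before resetting it (assumes embedded PostgreSQL
# and the default "ambari" database/user shown above).
ambari-server stop
pg_dump -U ambari ambari > /tmp/ambari_db_backup_$(date +%F).sql
ambari-server reset
ambari-server start
```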
09-14-2017
10:13 AM
@Juan Vares As we see the following error in your logs:

14 sep 2017 10:54:16,454 INFO [Thread-1112] BSRunner:372 - Error executing bootstrap Cannot create /var/run/ambari-server/bootstrap
14 sep 2017 10:54:16,455 ERROR [Thread-1112] BSRunner:441 - java.io.FileNotFoundException: /var/run/ambari-server/bootstrap/7/srvifsidsp01.xxxxxxxxxxx.done (No existe el fichero o el directorio)

(The last part is the Spanish locale message for "No such file or directory".)

So can you please check what permissions are set on the following directories, and whether the user who is running Ambari has the right to write to them?

# ls -ltr /var/run
lrwxrwxrwx. 1 root root 6 Dec 1 2014 /var/run -> ../run
# ls -l /var/run/ambari-server/bootstrap
drwxr-xr-x. 2 root root 240 Jun 9 11:06 7

The user who is running Ambari should have proper permissions on these directories, as shown above.

Better try the following approach and then try again: set the permissions on the /var/run/ambari-server directory to 777, and then run the wizard again.

# chmod -R 777 /var/run/ambari-server
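Before falling back to a blanket chmod 777, it may be worth confirming which user the ambari-server process actually runs as and whether that user can already write to the bootstrap directory. A minimal sketch (the pgrep/sudo combination here is an assumption about your environment, not an Ambari-documented procedure):

```bash
# Find the user running ambari-server and test write access to the bootstrap dir
AMBARI_USER=$(ps -o user= -p "$(pgrep -f ambari-server | head -n1)")
echo "ambari-server runs as: ${AMBARI_USER}"
sudo -u "${AMBARI_USER}" test -w /var/run/ambari-server/bootstrap \
  && echo "bootstrap dir is writable" \
  || echo "bootstrap dir is NOT writable"
```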
09-14-2017
06:37 AM
@Stefan Warmerdam Not sure about the client that you mentioned as part of the GitHub link. But in the case of a Java client, it determines the NameNode using the nameservice name, something like the following:

import org.apache.hadoop.conf.Configuration;

// Build the client-side HA configuration for the logical nameservice
Configuration conf = new Configuration(false);
conf.set("fs.defaultFS", "hdfs://nameservice1");
conf.set("fs.default.name", conf.get("fs.defaultFS"));
conf.set("dfs.nameservices", "nameservice1");
conf.set("dfs.ha.namenodes.nameservice1", "namenode1,namenode2");
conf.set("dfs.namenode.rpc-address.nameservice1.namenode1", "hadoopnamenode01:8020");
conf.set("dfs.namenode.rpc-address.nameservice1.namenode2", "hadoopnamenode02:8020");
// The failover proxy provider lets the client pick whichever NameNode is active
conf.set("dfs.client.failover.proxy.provider.nameservice1", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
09-14-2017
06:30 AM
@Stefan Warmerdam Usually, whatever values you see for the properties "dfs.nameservices" (in hdfs-site.xml) and "fs.defaultFS" (in core-site.xml) are used when HA is enabled. You can also use the Ambari Files View to upload files: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-views/content/ch_using_files_view.html
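As an alternative to the Files View, the same upload can be done from the command line against the logical nameservice. A small sketch, reusing the example nameservice name "nameservice1" used in this thread and a hypothetical target directory:

```bash
# Upload through the logical HA nameservice rather than a specific NameNode host
# ("nameservice1" and the target path are example values, not cluster defaults).
hdfs dfs -put /tmp/localfile.txt hdfs://nameservice1/user/myuser/
hdfs dfs -ls hdfs://nameservice1/user/myuser/
```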
09-12-2017
11:22 AM
@Prashant Chaudhuri The path "/tmp/ambari.repo" was an example path. The correct path should be:

# cat /etc/yum.repos.d/ambari.repo
09-12-2017
11:06 AM
@Prashant Chaudhuri Additionally, in order to isolate any proxy issue, please check if you are able to access the "repomd.xml" file from the Ambari repo:

# wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.2.0/repodata/repomd.xml

Also check "/etc/yum.conf" to see whether any proxy is defined there, or whether you need one:

# grep 'proxy' /etc/yum.conf
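If your hosts do have to go through a proxy to reach the public repo, yum needs to be told about it explicitly. A sketch of the relevant /etc/yum.conf entries (the proxy host, port, and credentials are placeholders; drop the username/password lines if your proxy does not require authentication):

```bash
# Append example proxy settings to /etc/yum.conf (placeholder values)
cat >> /etc/yum.conf <<'EOF'
proxy=http://proxy.example.com:3128
proxy_username=proxyuser
proxy_password=proxypass
EOF
```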
09-12-2017
10:58 AM
@Prashant Chaudhuri Can you please check if your "/etc/yum.repos.d/ambari.repo" file has the "enabled=1" value, as in the following:

wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
# cat /tmp/ambari.repo
#VERSION_NUMBER=2.5.2.0-298
[ambari-2.5.2.0]
name=ambari Version - ambari-2.5.2.0
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.2.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.2.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
Also, please do a "yum clean all" and then try again, and please check the repo access from your host:

# yum clean all
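After clearing the cache, it can also help to confirm that the Ambari repo is actually picked up by yum and that its packages are resolvable; a quick check along these lines:

```bash
# Confirm the Ambari repo is enabled and its packages are visible to yum
yum clean all
yum repolist enabled | grep -i ambari
yum info ambari-server
```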
09-11-2017
06:51 PM
@uri ben-ari You will get that error only when you have not specified the "Cluster Name" after the Ambari server hostname in the configs.sh command:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin -port 8080 get amb25101.example.com plain_ambari cluster-env /tmp/cluster-env.txt
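For reference, the argument order configs.sh expects, written out as a sketch derived from the examples in this thread (not from official documentation):

```bash
# General shape of a configs.sh invocation, per the examples in this thread:
#   configs.sh [-u USER] [-p PASSWORD] [-port PORT] <get|set> \
#       <AMBARI_HOST> <CLUSTER_NAME> <CONFIG_TYPE> [FILE]
# The cluster name comes immediately after the Ambari server hostname;
# omitting it shifts every later argument and triggers the error above.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin -port 8080 \
    get amb25101.example.com plain_ambari cluster-env /tmp/cluster-env.txt
```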
09-11-2017
06:11 PM
@rmr1989 Heap alerts are available for the NameNode & DataNodes by default (not for the ResourceManager). You can find them in the following locations:

NameNode: they are "type": "SCRIPT" alerts, and the following scripts are used:
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/package/alerts/alert_metrics_deviation.py

DataNode: it is a "type": "METRIC" kind of alert, which uses JMX. It checks the DataNode JMXServlet for the MemHeapUsedM and MemHeapMaxM properties:
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/alerts.json#L1715-L1756

To know more about SCRIPT or METRIC type alerts, please refer to:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-operations/content/alert_types.html
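If you want to see exactly which alert definitions (heap-related or otherwise) exist on your cluster, the Ambari REST API exposes them under the alert_definitions endpoint. A hedged sketch, with placeholder credentials, host, and cluster name, and a fields filter that I believe the Ambari API accepts:

```bash
# List alert definitions and filter for heap-related ones
# (admin:admin, ambari.example.com, and MyCluster are placeholders).
curl -s -u admin:admin \
  'http://ambari.example.com:8080/api/v1/clusters/MyCluster/alert_definitions?fields=AlertDefinition/name,AlertDefinition/label' \
  | grep -i heap
```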
09-11-2017
01:01 PM
@uri ben-ari Please try this.

Step 1). On the Ambari server host, run the following command:

# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin -port 8080 get amb25101.example.com plain_ambari cluster-env /tmp/cluster-env.txt

You should see the file "/tmp/cluster-env.txt" with content something like the following. NOTICE that "recovery_enabled" is set to "false" (this is the value you need to change to "true"):

"properties" : {
"agent_mounts_ignore_list" : "",
"alerts_repeat_tolerance" : "1",
"enable_external_ranger" : "false",
"fetch_nonlocal_groups" : "true",
"hide_yarn_memory_widget" : "false",
"ignore_bad_mounts" : "false",
"ignore_groupsusers_create" : "false",
"kerberos_domain" : "EXAMPLE.COM",
"manage_dirs_on_root" : "true",
"managed_hdfs_resource_property_names" : "",
"one_dir_per_partition" : "false",
"override_uid" : "true",
"recovery_enabled" : "false",
...

Step 2). Now edit the file "/tmp/cluster-env.txt" and set "recovery_enabled" : "true".

Step 3). Now run the same configs.sh command, but this time use "set" instead of "get" so that "recovery_enabled" gets set to "true":

# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin -port 8080 set amb25101.example.com plain_ambari cluster-env /tmp/cluster-env.txt

Step 4). Refresh the Ambari UI and check the "Auto Recovery" page again.

Please change the following values in the configs.sh command: replace the Ambari admin credentials with your own, replace "amb25101.example.com" with your Ambari server hostname, and replace "plain_ambari" with your Ambari cluster name.
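If you prefer not to round-trip through a temporary file, older versions of configs.sh also accept the property key and value directly on the "set" action. A sketch, hedged because the exact syntax depends on the configs.sh shipped with your Ambari version (same placeholder host and cluster name as above):

```bash
# Set the single property directly, without editing /tmp/cluster-env.txt first
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin -port 8080 \
    set amb25101.example.com plain_ambari cluster-env "recovery_enabled" "true"
```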