Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 664 | 06-04-2025 11:36 PM |
| | 1240 | 03-23-2025 05:23 AM |
| | 613 | 03-17-2025 10:18 AM |
| | 2260 | 03-05-2025 01:34 PM |
| | 1461 | 03-03-2025 01:09 PM |
03-23-2023
05:26 AM
1 Kudo
@AbuSaiyeda According to the excerpt you've shared, this looks memory-related. Can you share your cluster configuration? The memory settings should match the recommendations in the Ambari Server heap size guide. Most probably your AMBARI_JVM_ARGS variable should be set to -Xmx4g -Xmn2g, which is appropriate for 100 to 800 hosts. Please also share your ambari-server logs; they can give insight into the probable cause. Geoffrey
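As a minimal sketch (assuming a standard install where Ambari Server sources /var/lib/ambari-server/ambari-env.sh at startup), the heap flags could be set like this:

```shell
# Hedged sketch: append JVM heap flags to AMBARI_JVM_ARGS, the variable
# Ambari Server reads at startup. The 4g/2g values follow the heap-size
# guidance for roughly 100-800 hosts; tune them for your cluster.
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Xmx4g -Xmn2g"
```

After editing ambari-env.sh, restart with `ambari-server restart` so the new heap settings take effect.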
01-17-2023
01:11 PM
@admin007 How are you trying to connect? Can you share the error? When you are connecting to an impalad running on the same machine, the prompt will reflect the current hostname: $ impala-shell If you are connecting to an impalad running on a remote machine, or one listening on a non-default port (the default is 21000): $ impala-shell -i some.other.hostname:port_number Hope that starts the conversation.
01-11-2023
11:00 AM
@mike_bronson7 Hadoop uses the dfs.hosts.exclude attribute in hdfs-site.xml as a pointer to a file listing the nodes to be excluded. There is no default value for this attribute, so in the absence of dfs.hosts.exclude (and the file it points to) the cluster will not exclude any nodes. If dfs.hosts.exclude is not set in your cluster, take the following steps: shut down the NameNode; edit hdfs-site.xml and add a dfs.hosts.exclude entry pointing to a plain-text file; add the hostname you intend to remove to that file; then start the NameNode. If dfs.hosts.exclude is already configured, simply add the hostname you intend to remove to the file it points to. After adding the hostname to the exclusion file, run the following command to stop the node functioning as a DataNode: $ hdfs dfsadmin -refreshNodes And the following command to stop it functioning as a NodeManager: $ yarn rmadmin -refreshNodes After the above actions you should see the DataNode marked as decommissioned in Ambari; no new data blocks will be sent to it, and YARN will mark it as unusable. Hope that answers your question.
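The flow above can be sketched as follows; the hostname and file path are placeholders, and the refresh commands (which need a running cluster) are shown commented out:

```shell
# EXCLUDE_FILE should be the path that dfs.hosts.exclude points to in
# hdfs-site.xml; "dfs.exclude" here is a placeholder path.
EXCLUDE_FILE="${EXCLUDE_FILE:-dfs.exclude}"

# One hostname per line for each node being decommissioned
# (worker-03.example.com is an illustrative hostname).
echo "worker-03.example.com" >> "$EXCLUDE_FILE"

# Re-read the include/exclude lists (requires a running cluster):
# hdfs dfsadmin -refreshNodes   # stop the node acting as a DataNode
# yarn rmadmin -refreshNodes    # stop the node acting as a NodeManager
```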
11-12-2022
01:09 PM
@hassan-ki5 This looks like a typical CM database connection issue. Can you check and compare the entries in /etc/cloudera-scm-server/db.properties: com.cloudera.cmf.db.type=[oracle/mysql/postgresql] com.cloudera.cmf.db.host=localhost com.cloudera.cmf.db.name=scm com.cloudera.cmf.db.user=scm com.cloudera.cmf.db.setupType=EXTERNAL com.cloudera.cmf.db.password=scm Ensure the db.password, db.name, and db.user values are correct. Since you seem to be running MySQL, can you also check this page: CM using Mysql
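As a quick sanity check, a sketch like the following (assuming the stock file location; DB_PROPS is parameterized so you can point it at a copy) verifies that every required key is present:

```shell
# This only checks that the keys exist in db.properties, not that the
# values are valid; DB_PROPS defaults to the standard CM location.
DB_PROPS="${DB_PROPS:-/etc/cloudera-scm-server/db.properties}"
for key in type host name user setupType password; do
  if ! grep -q "^com.cloudera.cmf.db.${key}=" "$DB_PROPS" 2>/dev/null; then
    echo "MISSING: com.cloudera.cmf.db.${key}"
  fi
done
```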
11-12-2022
12:56 PM
@yomz Can you adjust the parameters below, restart Postgres, and retest? In pg_hba.conf: # TYPE DATABASE USER ADDRESS METHOD host all all 0.0.0.0/0 md5 Then locate postgresql.conf and add the entry: password_encryption = md5 Please let me know.
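A sketch of the edits, assuming a stock RPM-style install where the data directory is /var/lib/pgsql/data (adjust PGDATA to your layout); this is a config-change fragment, not a definitive procedure:

```shell
# PGDATA is the Postgres data directory; /var/lib/pgsql/data is an
# assumption for a stock install -- adjust to your layout.
PGDATA="${PGDATA:-/var/lib/pgsql/data}"

# pg_hba.conf: accept md5-authenticated TCP connections from any host.
echo "host    all    all    0.0.0.0/0    md5" >> "$PGDATA/pg_hba.conf"

# postgresql.conf: hash newly set passwords with md5.
echo "password_encryption = md5" >> "$PGDATA/postgresql.conf"

# Restart Postgres to apply (service name varies by platform):
# systemctl restart postgresql
```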
11-04-2022
03:36 PM
@lysConsulting Are you using the embedded DB? If not, can you log in to the HUE database from the CLI?
11-03-2022
03:50 PM
@lysConsulting This looks like a driver issue. Can you run the command below? It updates your Ambari server's JDBC configuration: ambari-server setup --jdbc-db=postgres --jdbc-driver=/path/to/postgres/postgresql.jar Then retry the earlier steps, or just create the HUE database manually and connect with those credentials through the HUE service setup.
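If you go the manual route, a hypothetical sketch of creating the Hue database in Postgres; the role name and password here are placeholders, not values from this thread, and the commands need a running Postgres instance:

```shell
# Run as the postgres superuser; "hue"/"secret" are placeholder
# credentials -- use the ones you will enter in the Hue service setup.
sudo -u postgres psql <<'EOF'
CREATE ROLE hue LOGIN PASSWORD 'secret';
CREATE DATABASE hue OWNER hue;
EOF
```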
10-31-2022
06:00 AM
@Ninja I see a difference in the networking; can you change that and retry?
10-30-2022
01:19 PM
@Ninja I downloaded the 2.6.5 image and there is no difference in the steps above. Please let me know if you still need help.
10-30-2022
06:10 AM
@Ninja Your problem is resolved; here are the steps. I downloaded the image in the HDP 3.0.1 link and extracted it successfully. In the VirtualBox setup, ensure you enable only the bridged adapter and deselect NAT on Adapter 1. I gave my sandbox 10 GB of RAM plus 4 CPUs. Extraction of the HDP image really takes a while. Post-extraction you are presented with a screen showing the SSH IP linked to the bridged-adapter IP. I opened the web host CLI using the IP 192.168.0.103 in my case (yours could be different). You will be prompted for the default root password (hadoop) and immediately forced to change it to something stronger. After successfully changing it you are in business; at the root prompt type: # ambari-admin-password-reset This resets the Ambari admin password; I went the simple way with admin/admin. When you see that Ambari is listening on port 8080, you are done. Using Chrome, I opened the Ambari web UI and logged in successfully with the password set above (admin/admin in my case). There you go.