Member since: 01-19-2017
Posts: 3627
Kudos Received: 608
Solutions: 361
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 230 | 08-02-2024 08:15 AM |
| | 3424 | 04-06-2023 12:49 PM |
| | 769 | 10-26-2022 12:35 PM |
| | 1510 | 09-27-2022 12:49 PM |
| | 1777 | 05-27-2022 12:02 AM |
01-11-2023
11:00 AM
@mike_bronson7 Hadoop uses the dfs.hosts.exclude property in hdfs-site.xml as a pointer to a file where node exclusions are listed. This property has no default value, so the cluster will not exclude any nodes if the property is unset or the file it points to does not exist.

If dfs.hosts.exclude is not set in your cluster, take the steps below:
1. Shut down the NameNode.
2. Edit hdfs-site.xml and add a dfs.hosts.exclude entry pointing to the exclude file. This can be a plain text file.
3. Add the hostname of the node you intend to remove to that file.
4. Start the NameNode.

If dfs.hosts.exclude is already configured, simply add the hostname you intend to remove to the file it points to.

After adding the hostname to the exclusion file, run the command below so the node stops functioning as a DataNode:
$ hdfs dfsadmin -refreshNodes
The command below will exclude the node from functioning as a NodeManager:
$ yarn rmadmin -refreshNodes

After the above actions, you should see the DataNode marked as decommissioned in Ambari. No new data blocks will be sent to this DataNode, and YARN will no longer schedule work on it since it has been marked as unusable. Hope that answers your question
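For step 2, a minimal hdfs-site.xml fragment is sketched below; the file path is an example, so use whatever location fits your layout:

```xml
<!-- hdfs-site.xml: points the NameNode at the exclude file (example path) -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```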
11-12-2022
01:09 PM
@hassan-ki5 This looks like a typical CM database connection issue. Can you check and compare the entries in /etc/cloudera-scm-server/db.properties:

com.cloudera.cmf.db.type=[oracle/mysql/postgresql]
com.cloudera.cmf.db.host=localhost
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.setupType=EXTERNAL
com.cloudera.cmf.db.password=scm

Ensure the db.password, db.name, and db.user values are correct. Since you seem to be running MySQL, can you check this page: CM using MySQL
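As a quick sanity check you can pull the relevant keys out of db.properties before restarting cloudera-scm-server. The snippet below is a sketch that works on a sample file written to /tmp; point it at /etc/cloudera-scm-server/db.properties on a real host (the sample values mirror the defaults above, your file will differ):

```shell
# Write a sample db.properties to demonstrate the extraction (use the real path in practice)
cat > /tmp/db.properties <<'EOF'
com.cloudera.cmf.db.type=mysql
com.cloudera.cmf.db.host=localhost
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=scm
EOF

# Extract the values CM will actually use to connect
db_type=$(sed -n 's/^com.cloudera.cmf.db.type=//p' /tmp/db.properties)
db_user=$(sed -n 's/^com.cloudera.cmf.db.user=//p' /tmp/db.properties)
echo "type=$db_type user=$db_user"
```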
11-12-2022
12:56 PM
@yomz Can you adjust the parameters below, restart Postgres, and retest?

In pg_hba.conf:
# TYPE  DATABASE  USER  ADDRESS    METHOD
host    all       all   0.0.0.0/0  md5

Locate postgresql.conf and add the entry below:
password_encryption = md5

Please let me know
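Put together, the two edits look like this (pg_hba.conf columns are free-form whitespace; 0.0.0.0/0 opens access from any address, so narrow the CIDR if you can):

```
# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS    METHOD
host    all       all   0.0.0.0/0  md5

# postgresql.conf
password_encryption = md5
```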
11-04-2022
03:36 PM
@lysConsulting Are you using the embedded DB? If not, can you log in to the HUE database from the CLI?
11-03-2022
03:50 PM
@lysConsulting This looks like a driver issue. Can you run the command below? Note that it will affect your Ambari server:

ambari-server setup --jdbc-db=postgres --jdbc-driver=/path/to/postgres/postgresql.jar

Then retry the earlier steps, or just create the HUE database manually and connect with those credentials through the HUE service setup
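For the manual route, creating the HUE database in Postgres can be sketched as below; the role name, password, and database name are assumptions, so match them to whatever you enter in the HUE service setup:

```sql
-- Run as the postgres superuser, e.g. via: sudo -u postgres psql
CREATE ROLE hue WITH LOGIN PASSWORD 'hue';  -- example credentials, change them
CREATE DATABASE hue OWNER hue;
```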
10-31-2022
06:00 AM
@Ninja I see a difference in the networking. Can you change that and retry?
10-30-2022
01:19 PM
@Ninja I downloaded 2.6.5 and there is no difference in the steps above. Please let me know if you still need help
10-30-2022
06:10 AM
@Ninja Your problem is resolved, and here are the steps:
1. Downloaded the HDP 3.0.1 image from the link and extracted it successfully.
2. In the VirtualBox setup, enable only the bridged adapter and deselect NAT on Adapter 1.
3. Gave my sandbox 10 GB of RAM plus 4 CPUs.
4. Extraction of the HDP image really takes a while.
5. Post extraction you are presented with a screen showing the SSH IP linked to the bridged adapter.
6. Opened the web host CLI using that IP, 192.168.0.103 in my case; yours could be different. You will be prompted for the default root password (hadoop) and immediately forced to change it to something stronger.
7. After successfully changing it you are in business. At the root prompt, type:
# ambari-admin-password-reset
This resets the Ambari password; I went the simple way with admin/admin. When you see that Ambari is listening on port 8080, you are done.
8. Using Chrome, opened the Ambari web UI and logged in successfully with the password set above (admin/admin in my case).
There you go
10-27-2022
04:48 AM
@Ninja Cool, send me the link. I will download it and give you the steps.
10-26-2022
12:35 PM
@drewski7 Ranger plugins, which Ranger uses as its authorization module, cache policies locally and use that cache for authorization decisions. The plugins also cache tags and periodically poll the tag store for changes; when a change is detected, they update the cache. In addition, the plugins store the tag details in a local cache file, just as policies are stored in a local cache file. When the component restarts, the plugins will use the tag data from the local cache file if the tag store is not reachable.

At periodic intervals, a plugin polls the Ranger Admin to retrieve the updated version of its policies. The policies are cached locally by the plugin and used for access control. Policy evaluation and enforcement happen within the service process; the heart of this processing is the "Policy Engine", which uses a memory-resident cached set of policies.

Ranger takes 30 seconds to refresh policies (check the "Plugins" option in the Ranger UI), but you can change the refresh time: in Ambari UI -> HDFS -> Configs -> "Advanced ranger-hdfs-security" you can change the poll interval (the refresh time).

Geoffrey
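For reference, the poll interval in "Advanced ranger-hdfs-security" corresponds to a property of the form below. The property name follows the usual ranger.plugin.<service> pattern; verify it against your Ranger version before changing it:

```xml
<!-- ranger-hdfs-security.xml: how often the HDFS plugin polls Ranger Admin
     for policy updates, in milliseconds (30000 = the 30-second default) -->
<property>
  <name>ranger.plugin.hdfs.policy.pollIntervalMs</name>
  <value>30000</value>
</property>
```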