Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 866 | 06-04-2025 11:36 PM |
| | 1438 | 03-23-2025 05:23 AM |
| | 720 | 03-17-2025 10:18 AM |
| | 2588 | 03-05-2025 01:34 PM |
| | 1715 | 03-03-2025 01:09 PM |
04-04-2018
08:23 PM
@Juan Gonzalez Did you run the below command at any stage?
# ambari-admin-password-reset
If not, please do it and retry; the usual default is admin/admin.
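A minimal sketch of that reset, assuming it is run as root on the Ambari server host:

```bash
# Resets the Ambari admin password; the command prompts interactively for the new value
ambari-admin-password-reset
```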
04-04-2018
07:41 PM
@Swaapnika Guntaka That will run independently; it won't interfere with MySQL or any other database, as Falcon doesn't use the classic databases to run. Before you proceed, make sure Falcon is not running, and kill any rogue Falcon process.
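A quick, hedged sketch of that pre-check (the process-name match and <pid> are placeholders; adjust to your install):

```bash
# Look for any running Falcon process before proceeding
ps -ef | grep -i '[f]alcon'     # the bracket keeps grep from matching its own process
kill <pid>                      # stop any rogue process; use kill -9 only as a last resort
```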
04-04-2018
06:57 PM
1 Kudo
@Anurag Mishra This is the ultimate reference for Knox; I am sure you will get the above questions answered, with examples: knox_ldap
04-04-2018
06:45 PM
@Swaapnika Guntaka As of HDP 2.5.3, you need to install the Berkeley DB JAR prior to upgrading or installing Falcon. That might be the problem you are encountering. You might try the following: [Updated content below]
1. Download the required Berkeley DB implementation file: wget -O je-5.0.73.jar http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar
2. Log in to the Ambari server with administrator privileges: su - root
3. Copy the file to the Ambari server share folder: cp je-5.0.73.jar /usr/share/
4. Set permissions on the file to owner=read/write, group=read, other=read: chmod 644 /usr/share/je-5.0.73.jar
5. Configure the Ambari server to use the Berkeley DB driver: ambari-server setup --jdbc-db=bdb --jdbc-driver=/usr/share/je-5.0.73.jar
6. Restart the Ambari server: ambari-server restart
7. Restart the Falcon service from the Ambari UI (you need administrator privileges in Ambari to restart a service):
   a. In the Ambari web UI, click the Services tab and select the Falcon service in the left Services pane.
   b. From the Falcon Summary page, click Service Actions > Restart All.
   c. Click Confirm Restart All. When the service is available, the Falcon status displays as Started on the Summary page.
Further information and manual install instructions are available in an article at https://community.hortonworks.com/articles/78274/prerequisite-to-installing-or-upgrading-falcon.html.
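Pulling the command-line part of those steps together, a minimal sketch assuming you are root on the Ambari server host (the Falcon restart in step 7 is still done from the Ambari UI):

```bash
# Steps 1-6: fetch the Berkeley DB JE jar and register it with Ambari
wget -O je-5.0.73.jar "http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar"
cp je-5.0.73.jar /usr/share/
chmod 644 /usr/share/je-5.0.73.jar
ambari-server setup --jdbc-db=bdb --jdbc-driver=/usr/share/je-5.0.73.jar
ambari-server restart
```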
04-04-2018
05:32 PM
@Alexandre GRIFFAUT If possible, can you paste the blueprint here and scramble only the sensitive info? That could help community members analyze it.
04-03-2018
08:09 AM
@Praveen Atmakuri Here is some info I landed on in a forum:
1) "fs.trash.interval" is not respected on blob storage.
2) "hadoop fs -expunge" creates a checkpoint instead of emptying the trash.
The Apache documentation on 'fs -expunge' is a little confusing or inaccurate in simply stating it will "Empty the Trash". The command actually does two things: a) delete all the old checkpoints that are older than the 'fs.trash.interval' config value, and b) create a new checkpoint of the current Trash directory. As for Azure blob storage not respecting 'fs.trash.interval', there is a bug being tracked for this issue, and there are some technical difficulties in solving it. In HDFS, the namenode enforces the interval config and cleans up the Trash accordingly; in Azure blob storage, there is no HDFS-namenode-equivalent component that can enforce the rule. Could you try setting it to 5 minutes and test, just out of curiosity?
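A hedged way to observe that behaviour on HDFS (the test file name is made up; the trash path assumes the default per-user location):

```bash
# Delete a file so it lands in the trash, then expunge and inspect the checkpoints
hadoop fs -touchz /tmp/trash-test          # hypothetical test file
hadoop fs -rm /tmp/trash-test              # goes to .Trash/Current when trash is enabled
hadoop fs -expunge                         # checkpoints Current and removes expired checkpoints
hadoop fs -ls /user/$(whoami)/.Trash       # checkpoints persist until older than fs.trash.interval
```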
04-03-2018
07:46 AM
@Michael Bronson Yes, I think the steps are correct, but for better understanding I would add a step between 2 and 3 :-) Mount the new FS and update the fstab before copying the data across from the old mount point. Cheers 🙂
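For example, a minimal sketch of that intermediate step (the device name, filesystem type, and mount options are assumptions; adjust them to the actual disk):

```bash
# Mount the new filesystem and persist it in /etc/fstab before copying any data
mkdir -p /grid/sdd
echo '/dev/sdd1  /grid/sdd  ext4  defaults,noatime  0 0' >> /etc/fstab   # hypothetical device
mount /grid/sdd
df -h /grid/sdd    # confirm the new filesystem is mounted
```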
04-02-2018
10:19 PM
@Michael Bronson Sorry, I was misled by your sentence "on of the workers have only 3 disks insted 4 disks". I think there was a typo: instead of "one" you wrote "on", and that completely changes the meaning of the sentence. Yes, true, if it's only one data node that should impact the whole cluster. The other method would be to decommission the worker node (datanode), mount the new FS, and then recommission 🙂 It's cool if all worked fine for you.
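A hedged sketch of the decommission part, assuming a manually managed exclude file (with Ambari, the Decommission host action does the same thing); the hostname and file path are examples only:

```bash
# Add the worker to the HDFS exclude file and tell the namenode to re-read it
echo "worker03.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
hdfs dfsadmin -report | grep -A 3 worker03   # wait until the node shows as Decommissioned
# ...mount the new FS on the worker, then remove the entry and refresh again to recommission
```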
04-02-2018
03:45 PM
@Michael Bronson Your steps look okay, but I still think the below updated process portrays the process better. If it's a production cluster, then you MUST take the necessary precautions, like backing up the data.
1. Create the new disk /grid/sdd, update /etc/fstab, and mount /grid/sdd (OK). Make sure the old mount points are accessible, because you will copy data from them to the new mounts.
2. Stop the cluster instead of only the datanodes as documented; there could be a reason not to, e.g. some processes/jobs writing to those disks.
3. Go to the Ambari HDFS configuration and edit the datanode directory configuration: remove /hadoop/hdfs/data and /hadoop/hdfs/data1, add /grid/sda,/grid/sdb,/grid/sdc,/grid/sdd, and save.
4. Log in to each datanode VM and copy the contents of /data_old and /data1 into /grid/sda,/grid/sdb,/grid/sdc,/grid/sdd.
5. Change the ownership of /grid/sda,/grid/sdb,/grid/sdc,/grid/sdd and everything under them to "hdfs".
6. Start the cluster.
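A hedged sketch of steps 4 and 5 on one datanode; the old-to-new directory mapping and the hdfs group name are assumptions, so adjust them to your layout:

```bash
# Copy the old datanode directories onto the new mounts, preserving ownership and timestamps
cp -a /hadoop/hdfs/data/.  /grid/sda/
cp -a /hadoop/hdfs/data1/. /grid/sdb/
# Everything under the new directories must belong to the hdfs user
chown -R hdfs:hadoop /grid/sda /grid/sdb /grid/sdc /grid/sdd
```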
04-02-2018
02:52 PM
@Michael Bronson Here is HCC-validated documentation to execute your solution successfully: How to Move or Change HDFS DataNode Directories. Hope that helps 🙂