Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 12011 | 03-08-2019 06:33 PM |
| | 5083 | 02-15-2019 08:47 PM |
| | 4291 | 09-26-2018 06:02 PM |
| | 10873 | 09-07-2018 10:33 PM |
| | 5870 | 04-25-2018 01:55 AM |
12-08-2016
12:19 PM
My assumption was correct: the datanodes (probably every node) had the same UUID, which caused this issue. I removed the installed software, directories, and files, then re-registered the nodes, which worked fine afterwards.
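For anyone hitting the same thing, a quick way to confirm duplicate datanode UUIDs (a minimal sketch; the data directory path is an assumption and may differ in your cluster, e.g. the HDP default shown below):

# Run on each datanode; the path assumes dfs.datanode.data.dir=/hadoop/hdfs/data
grep datanodeUuid /hadoop/hdfs/data/current/VERSION
# If two nodes report the same datanodeUuid (typical after cloning a VM image),
# the NameNode treats them as the same datanode and registration conflicts occur.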
05-06-2016
06:01 AM
Thank you @Artem Ervits
05-16-2016
06:51 PM
2 Kudos
https://issues.apache.org/jira/browse/AMBARI-14941 - Looks like custom widgets for Kafka are supported in Ambari 2.2.2. Thanks to @jaimin
04-28-2017
04:10 PM
1 Kudo
This is a good article by our intern James Medel on protecting against accidental deletion:

USING HDFS SNAPSHOTS TO PROTECT IMPORTANT ENTERPRISE DATASETS

Some time back, we introduced the ability to create snapshots to protect important enterprise data sets from user or application errors. HDFS snapshots are read-only, point-in-time copies of the file system. Snapshots can be taken on a subtree of the file system or on the entire file system, and they are:

Performant and reliable: snapshot creation is atomic and instantaneous, no matter the size or depth of the directory subtree.
Scalable: snapshots do not create extra copies of blocks on the file system. They are highly optimized in memory and stored along with the NameNode's file system namespace.

In this blog post we'll walk through how to administer and use HDFS snapshots.

ENABLE SNAPSHOTS

In an example scenario, web server logs are loaded into HDFS a few times a day for processing and long-term storage. The dataset is organized into directories that each hold one day's log files:

/data/weblogs
/data/weblogs/20130901
/data/weblogs/20130902
/data/weblogs/20130903

Since the web server logs are stored only in HDFS, it is imperative that they are protected from deletion. To provide data protection and recovery for the web server log data, snapshots are enabled on the parent directory:

hdfs dfsadmin -allowSnapshot /data/weblogs

Snapshots need to be explicitly enabled per directory. This gives system administrators the level of granular control they need to manage data in HDP.

TAKE POINT-IN-TIME SNAPSHOTS

The following command creates a point-in-time snapshot of the /data/weblogs directory and its subtree:

hdfs dfs -createSnapshot /data/weblogs

This creates a snapshot with a default name that matches the timestamp at which the snapshot was created; users can provide an optional snapshot name instead. With the default name, the created snapshot path is /data/weblogs/.snapshot/s20130903-000941.091.

Users can schedule a cron job to create snapshots at regular intervals. For example, the cron entry 30 18 * * * rm /home/someuser/tmp/* deletes the contents of the tmp folder at 18:30 every day. Similarly, to create a snapshot of the web logs each day at 18:30:

30 18 * * * hdfs dfs -createSnapshot /data/weblogs

To view the state of the directory at the recently created snapshot:

hdfs dfs -ls /data/weblogs/.snapshot/s20130903-000941.091
Found 3 items
drwxr-xr-x - web hadoop 0 2013-09-01 23:59 /data/weblogs/.snapshot/s20130903-000941.091/20130901
drwxr-xr-x - web hadoop 0 2013-09-02 00:55 /data/weblogs/.snapshot/s20130903-000941.091/20130902
drwxr-xr-x - web hadoop 0 2013-09-03 23:57 /data/weblogs/.snapshot/s20130903-000941.091/20130903

RECOVER LOST DATA

As new data is loaded into the web logs dataset, a file or directory could be deleted in error. For example, an application could delete the set of logs for September 2nd, 2013 stored in the /data/weblogs/20130902 directory. Since /data/weblogs has a snapshot, the snapshot protects the file blocks from being removed from the file system; the deletion only modifies the metadata to remove /data/weblogs/20130902 from the working directory. To recover from this deletion, the data is restored by copying it back from the snapshot path:

hdfs dfs -cp /data/weblogs/.snapshot/s20130903-000941.091/20130902 /data/weblogs/

This restores the lost set of files to the working data set:

hdfs dfs -ls /data/weblogs
Found 3 items
drwxr-xr-x - web hadoop 0 2013-09-01 23:59 /data/weblogs/20130901
drwxr-xr-x - web hadoop 0 2013-09-04 12:10 /data/weblogs/20130902
drwxr-xr-x - web hadoop 0 2013-09-03 23:57 /data/weblogs/20130903

Since snapshots are read-only, HDFS also protects against user or application deletion of the snapshot data itself. The following operation will fail:

hdfs dfs -rmdir /data/weblogs/.snapshot/s20130903-000941.091/20130902

NEXT STEPS

With HDP 2.1, you can use snapshots to protect your enterprise data from accidental deletion, corruption, and errors. Download HDP to get started.
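To round this out, a few related housekeeping commands (a minimal sketch; the snapshot name below is the example one from above and will differ in practice):

# List all snapshottable directories visible to the current user
hdfs lsSnapshottableDir

# Remove a snapshot you no longer need (frees the blocks it was pinning)
hdfs dfs -deleteSnapshot /data/weblogs s20130903-000941.091

# Disallow snapshots on a directory (only possible once all its snapshots are deleted)
hdfs dfsadmin -disallowSnapshot /data/weblogs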
04-25-2016
08:11 PM
@swagle Thank you. I will check with start all/stop all again.
02-23-2018
11:50 AM
@Kuldeep Kulkarni Please add the "deploy JCE policies" steps as prerequisites. I tried without JCE and it failed for me. Let me know if I am missing anything.
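For anyone hitting the same wall, a quick way to check whether unlimited-strength JCE is already in place (a minimal sketch; the policy zip name and JAVA_HOME path are assumptions and depend on your JDK install):

# Prints the maximum allowed AES key length: 128 means the default (limited)
# policy files are active, 2147483647 means unlimited-strength JCE is installed.
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'

# Typical fix on each host (assumed paths): unzip the JCE policy jars into the
# JRE security directory, then restart the affected services.
unzip -o -j jce_policy-8.zip -d $JAVA_HOME/jre/lib/security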
11-14-2017
06:08 AM
A few days ago, I happened to run into this issue myself. The root cause is that, when installing Ranger, the administrator entered "http://hostname.example.com:6080/" in the "External URL" property instead of the expected "http://hostname.example.com:6080" (WITHOUT the trailing slash character). Even though the Ranger installation goes through, Ranger's Usersync logs errors in /var/log/ranger/usersync/usersync.log because of this extraneous character. Also, any attempt to enable any of Ranger's plugins fails with the error message "Ambari admin username and password are blank", because Ranger is indeed missing many users, including the important amb_ranger_admin user. To fix this, just edit the property, remove any characters after port 6080, and everything will start working.
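A quick way to verify the effective value on the Ranger Admin host (a minimal sketch; the property name and config path are assumptions and may differ between Ranger versions):

# Look for the external URL actually picked up by Ranger Admin; the value
# should end in ":6080" with no trailing "/".
grep -A1 -i "externalurl" /etc/ranger/admin/conf/ranger-admin-site.xml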
04-24-2016
08:29 AM
Okay, I see 😞 Why is Hue unsupported on Ubuntu, then? What is the problem? It is a pure Python application, so it should be OS-independent. Hadoop's own YARN web interface is so poor that it would be very nice to have Hue supported.
04-22-2016
05:17 PM
Please change the IP in any URL that does not have the desired IP address; then it should work. Or, as mentioned by @Kuldeep Kulkarni, you can get rid of this error by adding the line below to the /etc/hosts file (on Mac/Linux; on Windows it is C:\Windows\System32\drivers\etc\hosts): 192.168.183.132 sandbox.hortonworks.com
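On Mac/Linux this can be done in one step (a minimal sketch; the IP shown is the one from this thread, so check the address your sandbox VM actually reports before using it):

# Append the sandbox hostname mapping to /etc/hosts (requires sudo)
echo "192.168.183.132 sandbox.hortonworks.com" | sudo tee -a /etc/hosts

# Verify the mapping resolves as expected
ping -c 1 sandbox.hortonworks.com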
04-05-2017
09:32 AM
Hi @Kuldeep Kulkarni, I have lost the id_rsa private key file, and now I need to add two more nodes. Will it be possible to add the new datanodes? What is the solution for this? Can I generate a new key pair and apply the new private key in Ambari? Thanks in advance. Regards, Ram
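For context, a minimal sketch of what generating and distributing a replacement key might look like (node hostnames are placeholders; this assumes Ambari registers hosts as root over passwordless SSH, which is only one of the supported methods):

# Generate a new key pair on the Ambari server host
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# Copy the new public key to each node you want to register
ssh-copy-id root@new-node-1.example.com
ssh-copy-id root@new-node-2.example.com

# Paste the contents of ~/.ssh/id_rsa into the Ambari "Add Host" wizard,
# or register the hosts manually with ambari-agent instead.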