Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 531 | 06-04-2025 11:36 PM |
| | 1070 | 03-23-2025 05:23 AM |
| | 552 | 03-17-2025 10:18 AM |
| | 2060 | 03-05-2025 01:34 PM |
| | 1289 | 03-03-2025 01:09 PM |
12-24-2019
01:06 AM
@saivenkatg55 I have tried to analyze your situation, but without access to the Linux box it is rather difficult. I think there is a workaround: the `chattr` Linux command makes important files immutable (unchangeable). The immutable bit [ +i ] can only be set by the superuser (root) or a user with sudo privileges. It prevents the file from being forcefully deleted, renamed, or having its permissions changed; any such attempt fails with "Operation not permitted".

```
# ls -al /var/run/hadoop-yarn/yarn/
total 8
drwxr-xr-x 2 root root 4096 Dec 24 09:34 .
drwxr-xr-x 3 root root 4096 Dec 24 09:34 ..
-rw-r--r-- 1 yarn hadoop 0 Dec 24 09:34 hadoop-yarn-nodemanager.pid
```

Set the immutable bit:

```
# chattr +i hadoop-yarn-nodemanager.pid
```

Verify the attribute with the lsattr command:

```
# lsattr
----i--------e-- ./hadoop-yarn-nodemanager.pid
```

The normal ls command shows no difference:

```
# ls -al /var/run/hadoop-yarn/yarn/
total 8
drwxr-xr-x 2 root root 4096 Dec 24 09:34 .
drwxr-xr-x 3 root root 4096 Dec 24 09:34 ..
-rw-r--r-- 1 yarn hadoop 0 Dec 24 09:34 hadoop-yarn-nodemanager.pid
```

Deletion is blocked:

```
# rm -rf /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid
rm: cannot remove '/var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid': Operation not permitted
```

Permission changes are blocked:

```
# chmod 755 /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid
chmod: changing permissions of '/var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid': Operation not permitted
```

To unset the attribute on the file:

```
# chattr -i /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid
```

After resetting it, verify the immutable status with lsattr again:

```
# lsattr
---------------- ./hadoop-yarn-nodemanager.pid
```

Please do that and revert.
12-22-2019
10:13 AM
2 Kudos
@Prakashcit That's by design: a NOVALIDATE constraint is a constraint that can be enabled, but for which Hive will not check the existing data to determine whether any rows currently violate it. This is useful if we know there is data that violates the constraint but we want to quickly put the constraint in place to prevent further violations, with the intention of cleaning up the existing violations at some future point. It is also useful if we know the data is clean and want to avoid the potentially significant overhead of Hive checking all the data to confirm there are indeed no violations.
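As a sketch, such a constraint can be declared in Hive DDL like this (the table and column names here are hypothetical, for illustration only):

```sql
-- Enable the constraint for new writes without validating existing rows
ALTER TABLE orders
  ADD CONSTRAINT orders_pk PRIMARY KEY (order_id) DISABLE NOVALIDATE;
```

Hive also accepts a RELY/NORELY hint on such constraints, which tells the optimizer whether it may trust the constraint for query rewrites even though the data was never validated.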
12-21-2019
06:47 AM
@Uppal Best way to duplicate a partitioned table in Hive:

1. Create the new target table with the schema of the old table (DESCRIBE FORMATTED on the source can help with the DDL).
2. Use `hadoop fs -cp` to copy all the partitions from the source to the target table's location.
3. Run `MSCK REPAIR TABLE table_name;` on the target table.

HTH
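The steps above can be sketched as follows. This is a cluster-dependent sketch, not a runnable script: the table names (`logs`, `logs_copy`) and the warehouse paths are hypothetical and must be replaced with your own.

```shell
# 1. Create the target with the same schema and partitioning as the source
hive -e "CREATE TABLE logs_copy LIKE logs;"

# 2. Copy the partition directories into the target table's location
#    (paths are illustrative; check the actual locations with DESCRIBE FORMATTED)
hadoop fs -cp '/warehouse/tablespace/managed/hive/logs/*' \
              /warehouse/tablespace/managed/hive/logs_copy/

# 3. Register the copied partitions in the metastore
hive -e "MSCK REPAIR TABLE logs_copy;"
```

`CREATE TABLE ... LIKE` copies the schema and partition columns without data, which is why the filesystem copy plus MSCK REPAIR is needed afterwards.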
12-20-2019
01:02 PM
@GrahamB No, you don't need to wait 24 hours to destroy a Kerberos ticket. Run the following as the user on the Kerberos client host.

Check for a valid ticket; to list all of the entries in the default credentials cache:

```
$ klist
```

You should see some output here. To delete the default credentials cache for the user:

```
$ kdestroy
```

Then, to obtain a ticket-granting ticket with a lifetime of 10 hours that is renewable for five days, type:

```
$ kinit -l 10h -r 5d your_principal
```

HTH
12-20-2019
03:13 AM
@saivenkatg55 The file permission should be 644, not 444:

```
# chmod 644 /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid
```

Do that and revert, please.
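To see why the mode matters, here is a small illustration on a throwaway file (not the real pid file): 444 is read-only even for the owner, while 644 restores owner write access so the daemon can update its pid file.

```shell
# Demonstrate the difference between mode 444 and 644 on a temp file
f=$(mktemp)
chmod 444 "$f"
stat -c '%a' "$f"    # prints 444 (read-only for everyone, including owner)
chmod 644 "$f"
stat -c '%a' "$f"    # prints 644 (owner read/write, others read-only)
rm -f "$f"
```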
12-19-2019
02:12 PM
@saivenkatg55 This "Exiting with status 1: java.io.IOException: Problem starting http server" error should be linked to your other question, which I have just responded to: https://community.cloudera.com/t5/Support-Questions/Unable-to-start-the-node-manager/td-p/286013 If that issue is resolved, the java.io.IOException shouldn't occur. HTH
12-19-2019
01:55 PM
@saivenkatg55 I think there is a permission issue with the pid file. Can you check the permissions? If for any reason they are not as shown in the screenshot, please run chown as root to rectify that:

```
# chown yarn:hadoop /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid
```

Do that for all files in the directory whose permissions are not correct. HTH
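To cover every file in that directory at once, a glob should work; this is a sketch to run as root on the NodeManager host, and it assumes the default pid directory (adjust the path if yours differs):

```shell
# Reset ownership of all files in the YARN pid directory to yarn:hadoop
chown yarn:hadoop /var/run/hadoop-yarn/yarn/*
```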
12-19-2019
09:17 AM
1 Kudo
@pnkalyan Did you ever use it before? root/hadoop is the initial password; on the first login with the root/hadoop combination you are forced to change the password, so my suspicion is that you initially did change it. For reference, see: Learning the Ropes of the HDP Sandbox. Hope that helps.
12-17-2019
05:48 AM
1 Kudo
@Bindal Do the following steps:

```
sandbox-hdp login: root
root@sandbox-hdp.hortonworks.com's password: .....
[root@sandbox-hdp ~]# mkdir -p /tmp/data
[root@sandbox-hdp ~]# cd /tmp/data
```

Now you should be in /tmp/data; to validate that, run:

```
[root@sandbox-hdp ~]# pwd
```

Copy your riskfactor1.csv to this directory using a tool like WinSCP or MobaXterm (see my screenshot using WinSCP). My question is: where is the riskfactor1.csv file located? If that's not clear, you can upload it using the Ambari Files view instead: first navigate to /bindal/data and then select Upload (please see the attached screenshot) to upload the file from your laptop. After the successful upload, you can run your Zeppelin job. Keep me posted.
12-16-2019
02:32 PM
@RobertCare You will need to run the Ranger AD user sync; there is a good document here: https://community.cloudera.com/t5/Community-Articles/Configuring-Ranger-Usersync-with-AD-LDAP-for-a-common/ta-p/245959

To test-run loading user and group data into Ranger before committing to the changes:

1. Set ranger.usersync.policymanager.mockrun=true. This parameter can be found in Ambari > Ranger > Configs > Advanced > Advanced ranger-ugsync-site.
2. View the users and groups that will be loaded into Ranger: `tail -f /var/log/ranger/usersync/usersync.log`.
3. After confirming that the users and groups are retrieved as intended, set ranger.usersync.policymanager.mockrun=false and restart Ranger Usersync. This will sync the users shown in the usersync log to the Ranger database.

HTH