Member since: 12-11-2015
Posts: 244
Kudos Received: 31
Solutions: 32
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 338 | 07-22-2025 07:58 AM |
| | 947 | 01-02-2025 06:28 AM |
| | 1581 | 08-14-2024 06:24 AM |
| | 3116 | 10-02-2023 06:26 AM |
| | 2385 | 07-28-2023 06:28 AM |
03-05-2020
08:06 PM
Please share the full exception from beeline. If the full exception is not available, try adding --verbose to the beeline command.
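For example (a sketch; the JDBC URL and user are placeholders for your HiveServer2 endpoint):

# --verbose=true makes beeline print the full stack trace on failure
beeline --verbose=true -u "jdbc:hive2://<hiveserver2-host>:10000/default" -n <user>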
03-05-2020
07:55 PM
Hi @san_t_o I wanted to validate once whether the mounts and permissions are the same. They look exactly the same except for the additional "sunit=512,swidth=512" on the /var mount, but that can't be the issue. At this point it's unclear what exactly is being denied permission.

What is the SELinux status? Is it disabled on both the working and non-working node? Please run the command below on both nodes:

getenforce

If it's the same on both nodes, can you clear the entries under /var/log/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/* and also under /var/lib/ambari-agent/tmp/?

If that doesn't work, can you try pointing JAVA_LIBRARY_PATH in yarn-env.sh to a different directory?

How exactly are you starting the NodeManager? Is it by running commands manually? If yes, can you try running the command with strace -f -s 2000 <command>? [strace captures all syscalls, so we can get more debug info.]
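For illustration, one way to run that capture (a sketch; /tmp/nm-strace.out is an arbitrary output path, and <nodemanager start command> stands for whatever command you actually run):

# -f follows child processes, -s 2000 avoids truncating long strings,
# -o writes everything to a file we can review afterwards
strace -f -s 2000 -o /tmp/nm-strace.out <nodemanager start command>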
03-05-2020
08:44 AM
Could you please share the results of the commands below from one problematic node and one working node?
1. namei -l /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
2. mount
03-05-2020
02:11 AM
The commands I am using:
kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG
kinit -kt /home/mcaf/hdfs.keytab HTTP/hostname@Domain.ORG
kinit -kt /home/mcaf/hbase.keytab hbase/hostname@Domain.ORG
kinit -kt /home/mcaf/yarn.keytab HTTP/hostname@Domain.ORG
kinit -kt /home/mcaf/yarn.keytab yarn/hostname@Domain.ORG
kinit -kt /home/mcaf/zookeeper.keytab zookeeper/hostname@Domain.org

You have to kinit as the user with which you want to access the data. In the commands above, you are running kinit as hdfs, HTTP, hbase, yarn, and zookeeper sequentially. When you run

kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG

it writes a TGT to the location set by KRB5CCNAME (the default is /tmp/krb5cc_[uid]). When you run the next kinit, for hbase, the TGT acquired by the previous command gets overwritten. In your case you are running multiple kinits, and the last one was for the zookeeper user, so only zookeeper's TGT remains; the TGTs of all users before it are overwritten. So use a single kinit command with the user intended for that application.
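For example, to act only as the hdfs service user, and to confirm which principal currently owns the cache (klist is the standard check; the keytab path comes from your commands above):

kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG
klist   # "Default principal" should now show hdfs/hostname@Domain.ORG

If you genuinely need several principals at once, one option is a separate credential cache per principal via KRB5CCNAME:

KRB5CCNAME=/tmp/krb5cc_hdfs kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG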
03-05-2020
01:36 AM
1 Kudo
This article covers your requirement: https://community.cloudera.com/t5/Community-Articles/Hive-Changing-Database-Location/ta-p/246699
03-04-2020
10:57 PM
1 Kudo
This usually happens if the directory configured for "-Djava.io.tmpdir" is mounted with the noexec option. Removing noexec from the mount options should fix the issue.
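A quick way to check and, if needed, fix it (a sketch; <mount-point> stands for whatever filesystem holds your java.io.tmpdir, and the remount is temporary until you persist it in /etc/fstab):

# Look for "noexec" in the mount options of the relevant filesystem
mount | grep <mount-point>
# Remount with exec enabled (update /etc/fstab so it survives reboots)
mount -o remount,exec <mount-point>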
03-04-2020
08:58 PM
Sorry, it seems I misread your request. The steps I quoted above are for changing the whole default Hive warehouse location to a new path. If you want to change the location of a single existing database inside the warehouse, and not move the whole warehouse to a new location, the above steps aren't right.
03-04-2020
08:36 PM
Is this a cluster with Sentry and Sentry HDFS synchronisation enabled? If it is just Hive, without Sentry or Sentry HDFS synchronisation, and you want to switch from the default /user/hive/warehouse to a new path, then the following steps would do.
Step 1: Stop Hive
Step 2: Take a backup of your metastore database
Step 3: Change the CM > Hive > Configuration > hive.metastore.warehouse.dir setting to '<new_path>'
- Deploy client configuration
Step 4: Move the current directory
i.e. hdfs dfs -mv /user/hive/warehouse <new_path>
Step 5: Update the HMS DB tables.
Log in to the backend HMS database and update the following tables in the metastore database.
update SDS set LOCATION = replace(LOCATION, '/user/hive/warehouse', '<new_path>') where LOCATION like '%/user/hive/warehouse%';
update DBS set DB_LOCATION_URI = replace(DB_LOCATION_URI, '/user/hive/warehouse', '<new_path>') where DB_LOCATION_URI like '%/user/hive/warehouse%';
update SKEWED_COL_VALUE_LOC_MAP set LOCATION = replace(LOCATION, '/user/hive/warehouse', '<new_path>') where LOCATION like '%/user/hive/warehouse%';
update SERDE_PARAMS set PARAM_VALUE = replace(PARAM_VALUE, '/user/hive/warehouse', '<new_path>') where PARAM_VALUE like '%/user/hive/warehouse%';
update TABLE_PARAMS set PARAM_VALUE = replace(PARAM_VALUE, '/user/hive/warehouse', '<new_path>') where PARAM_VALUE like '%/user/hive/warehouse%';
Step 6: Start Hive.
Make sure to test these steps on a test cluster first before proceeding to the production cluster. A quick sanity check after the updates is sketched below.
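Before restarting, it may help to confirm that no old-path references remain (a sketch, assuming a MySQL-style backend and the same tables as above; both counts should be 0 if every reference was rewritten):

select count(*) from SDS where LOCATION like '%/user/hive/warehouse%';
select count(*) from DBS where DB_LOCATION_URI like '%/user/hive/warehouse%';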
03-02-2020
10:22 PM
3 Kudos
You can update this property in CM > Nifi > Instances > choose the NiFi node on which you want to set cluster.is.node to false > Configuration > NiFi Node Advanced Configuration Snippet (Safety Valve) for staging/nifi.properties.xml > add nifi.cluster.is.node in the name field and false in the value field > Save and Restart.

The NiFi properties are rendered as server-side properties, usually stored under /var/run/cloudera-scm-agent/process/<num>-nifi-NIFI_NODE/ (replace <num> with the latest number you see under the /var/run/cloudera-scm-agent/process/ directory).
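To confirm the property actually landed on disk, something like the following may help (a sketch; the nifi.properties filename inside the process directory is an assumption based on the layout described above):

# Pick the most recent NiFi process directory created by the CM agent
ls -dt /var/run/cloudera-scm-agent/process/*-nifi-NIFI_NODE | head -1
# Then check the rendered property in that directory
grep "nifi.cluster.is.node" /var/run/cloudera-scm-agent/process/<num>-nifi-NIFI_NODE/nifi.properties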
03-02-2020
06:30 AM
I assume you were checking hdfs-site.xml under /etc/hadoop/conf to validate the configuration change. When you apply this change through CM (CM > HDFS > Configuration > Superuser Group > enter your desired supergroup name > Save and Restart), the change is reflected on the server side (NameNodes, DataNodes) and is not expected to appear in the /etc/hadoop/conf directory, because this is a server-side property and hence not propagated to the *-site.xml used by clients.

Just in case you want to validate it on the server side, you can search for this property in the process directories where these services are running:

grep "dfs.permissions.superusergroup" /var/run/cloudera-scm-agent/process/ -Rani

Additionally, you can verify the group membership of the users by running:

hdfs groups <supergroup-user>
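To narrow that down to just the NameNode's rendered config, a hedged variant (the directory name pattern and file name are assumptions based on typical CM agent process layouts):

grep -l "dfs.permissions.superusergroup" /var/run/cloudera-scm-agent/process/*NAMENODE*/hdfs-site.xml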