Member since 01-19-2017 · 3676 Posts · 632 Kudos Received · 372 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 599 | 06-04-2025 11:36 PM |
| | 1148 | 03-23-2025 05:23 AM |
| | 573 | 03-17-2025 10:18 AM |
| | 2167 | 03-05-2025 01:34 PM |
| | 1362 | 03-03-2025 01:09 PM |
07-26-2021
10:51 PM
@sipocootap2 Unfortunately, you cannot disallow snapshots on a snapshottable directory that still has snapshots! Yes, you will have to list and delete the snapshots; even if a snapshot contains subdirectories, you only pass the snapshot name to the hdfs dfs -deleteSnapshot command. If you had:
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 2 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo/work/john
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2/work/peter
you would simply delete the snapshots like:
$ hdfs dfs -deleteSnapshot /app/tomtest sipo
$ hdfs dfs -deleteSnapshot /app/tomtest tap2
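If there are many snapshots, the deletions can be generated rather than typed. This is a minimal sketch, not an official tool: the hypothetical helper below parses `hdfs dfs -ls <dir>/.snapshot` output (fed in here as the sample listing from above) and prints one deleteSnapshot command per snapshot name.

```shell
# Hypothetical helper: turn `hdfs dfs -ls <dir>/.snapshot` output into the
# matching deleteSnapshot commands (it only prints them; pipe to `sh` to run).
list_snapshot_deletes() {
  snap_root="$1"
  awk -v root="$snap_root" '
    $8 ~ /\/\.snapshot\// {
      # Field 8 is the full path; the snapshot name is the path
      # component immediately after ".snapshot".
      split($8, parts, "/\\.snapshot/")
      split(parts[2], seg, "/")
      print "hdfs dfs -deleteSnapshot " root " " seg[1]
    }' | sort -u
}

printf '%s\n' \
  'drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo/work/john' \
  'drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2/work/peter' \
  | list_snapshot_deletes /app/tomtest
# prints:
# hdfs dfs -deleteSnapshot /app/tomtest sipo
# hdfs dfs -deleteSnapshot /app/tomtest tap2
```

Once all snapshots are gone, `hdfs dfsadmin -disallowSnapshot /app/tomtest` should succeed.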
07-25-2021
01:34 PM
@USMAN_HAIDER There is this step below; did you perform it? Kerberos must be specified as the security mechanism for the Hadoop infrastructure, starting with the HDFS service. Enable Cloudera Manager Server security for the cluster on an HDFS service. After you do so, the Cloudera Manager Server automatically enables Hadoop security on the MapReduce and YARN services associated with that HDFS service. In the Cloudera Manager Admin Console:
1. Select Clusters > HDFS-n.
2. Click the Configuration tab.
3. Select HDFS-n for the Scope filter.
4. Select Security for the Category filter.
5. Scroll (or search) to find the Hadoop Secure Authentication property.
6. Click the Kerberos button to select Kerberos.
Please revert.
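For reference, those console steps ultimately land in the generated core-site.xml. This is a sketch of the relevant properties, not your exact client config:

```xml
<!-- core-site.xml, as generated after enabling Kerberos in Cloudera Manager -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value> <!-- the default is "simple" -->
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

Checking whether a client host sees `kerberos` here is a quick way to confirm the change actually propagated.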
07-24-2021
12:49 PM
@Shelton Thank you so much.
07-18-2021
02:10 PM
@mike_bronson7 Are you using the default Capacity Scheduler settings? No queues/leaf queues created? Is what you shared the current setting?
07-12-2021
01:59 AM
@tarekabouzeid91 wrote: I assume you are using the Capacity Scheduler, not the Fair Scheduler; that's why queues won't take available resources from other queues. You can read more about that here: Comparison of Fair Scheduler with Capacity Scheduler | CDP Public Cloud (cloudera.com).
Yes, I am using the Capacity Scheduler:
yarn.resourcemanager.scheduler.class = org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
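Worth noting: the Capacity Scheduler can still borrow idle resources from other queues, but only up to each queue's maximum-capacity. A sketch of the relevant capacity-scheduler.xml properties ("a" and "b" are hypothetical queue names, not from this cluster):

```xml
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>a,b</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.capacity</name>
  <value>40</value> <!-- guaranteed share, in percent -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
  <value>100</value> <!-- elastic ceiling: may borrow idle capacity up to 100% -->
</property>
```

If maximum-capacity is left equal to capacity, a queue cannot grow beyond its guaranteed share even when the cluster is idle.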
07-11-2021
06:52 PM
Hi @srinivasp I believe the user is missing the SELECT and LIST privileges on the table in Ranger. Can you try to list the tables from beeline as the hive user and check whether you see the same problem?
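A quick way to run that check from a cluster node (the connection string and Kerberos principal below are placeholders for your environment):

```shell
# Connect to HiveServer2 as the hive service user and list tables.
# Replace host, port, and principal with your cluster's values.
beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM" \
        -e "show tables;"
```

If the hive user sees the tables but your end user does not, that points squarely at the Ranger policy rather than the metastore.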
07-10-2021
05:38 AM
Hi @dgiri_india1989 Could you please share more details about how you were able to fix this issue? We are facing a similar problem with the Ranger KMS service. The RangerKMS principal is created in the AD KDC, and keytab creation succeeds according to the Ambari server log, but the keytab is not distributed to the node hosting the Ranger KMS service. Because of this, the service does not start up. Thank you.
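A quick check for this kind of failure is to confirm on the Ranger KMS host whether the keytab was actually delivered. The path below is a typical Ambari keytab location, used here as an assumption; adjust for your cluster:

```shell
# On the Ranger KMS host: does the keytab exist, and which principals does it hold?
ls -l /etc/security/keytabs/rangerkms.service.keytab
klist -kt /etc/security/keytabs/rangerkms.service.keytab
```

If the file is missing while the Ambari log claims success, the distribution step (not the KDC) is the place to dig.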
07-09-2021
08:25 AM
Hi @Shelton Thanks for the documentation, but the problem was a timezone issue: the first HMS was in UTC and the new one was in CEST. Changing mnode2 to UTC fixed the issue.
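The mismatch is easy to demonstrate with plain `date` (assumes GNU date; Europe/Paris stands in here for the CEST node):

```shell
# Render the same epoch instant in the two zones the HMS nodes were using.
epoch=1626000000
TZ=UTC          date -d "@$epoch" '+%Y-%m-%d %H:%M %Z'   # 2021-07-11 10:40 UTC
TZ=Europe/Paris date -d "@$epoch" '+%Y-%m-%d %H:%M %Z'   # 2021-07-11 12:40 CEST
```

Two metastore nodes writing local-time timestamps two hours apart will disagree about ordering, which is exactly the kind of inconsistency fixed by putting both nodes on UTC.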
07-08-2021
02:33 PM
@SparkNewbie Bingo! You are using the Derby DB, which is only recommended for testing. There are three modes for Hive Metastore deployment:
1. Embedded Metastore
2. Local Metastore
3. Remote Metastore
By default, the metastore service runs in the same JVM as the Hive service and uses an embedded Derby database stored on the local file system. The limitation of this mode is that only one embedded Derby database can access the database files on disk at any one time, so only one Hive session can be open at a time.
21/07/07 23:07:56 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
Derby is an embedded relational database for Java programs, used for online transaction processing, with a 3.5 MB disk-space footprint. Whether your distribution is HDP or CDH, ensure the Hive metastore is backed by an external MySQL database. Check your current Hive backend metadata database! After installing MySQL, toggle the Hive config to point to the external MySQL database. Once done, your commands and the refresh should succeed. Please let me know if you need help. Happy hadooping!
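As an illustration, pointing the metastore at an external MySQL boils down to these hive-site.xml properties. The host, database name, and credentials below are placeholders, and on CDH/HDP the management console normally sets them for you:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>***</value>
</property>
```

With these set, the startup log line above should report MySQL rather than Derby as the underlying DB, and concurrent Hive sessions become possible.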
07-08-2021
06:59 AM
Thank you for your participation in Cloudera Community. I'm happy to see you resolved your issue. Please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.