Member since
12-11-2015
199
Posts
29
Kudos Received
30
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 309 | 08-14-2024 06:24 AM |
| | 1234 | 10-02-2023 06:26 AM |
| | 1144 | 07-28-2023 06:28 AM |
| | 7247 | 06-02-2023 06:06 AM |
| | 594 | 01-09-2023 12:20 PM |
03-14-2023
09:14 AM
Hi. This statement in the doc, "In cases where new files are added to an existing partition, issue a REFRESH statement for the table, followed by a DROP INCREMENTAL STATS and COMPUTE INCREMENTAL STATS sequence for the changed partition.", applies specifically to a partition for which stats already exist but to which you have added more data.

If you are unsure whether stats exist for a partition, you can run show table stats <table_name>; and check the "Incremental stats" column:

Query: show table stats test_part
+-------+-------+--------+------+--------------+-------------------+--------+-------------------+--------------------------------------------------------------------------+
| b | #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental stats | Location |
+-------+-------+--------+------+--------------+-------------------+--------+-------------------+--------------------------------------------------------------------------+
| 1 | 0 | 1 | 0B | NOT CACHED | NOT CACHED | TEXT | false | hdfs://xxxx:8020/user/hive/warehouse/test_part/b=1 |
| Total | -1 | 1 | 0B | 0B | | | | |
+-------+-------+--------+------+--------------+-------------------+--------+-------------------+--------------------------------------------------------------------------+
Fetched 2 row(s) in 5.60s

If "Incremental stats" is false, you can run COMPUTE INCREMENTAL STATS with the PARTITION clause. If it is true and you have added more data to that partition, you have to drop the stats first and then run COMPUTE INCREMENTAL STATS with the PARTITION clause.
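For the test_part table above, the per-partition sequence might look like this (the partition key b=1 is taken from the SHOW TABLE STATS output; adjust the table and partition to your case):

```sql
-- Pick up files newly added to the partition's directory
REFRESH test_part;

-- Discard the stale incremental stats for the changed partition
DROP INCREMENTAL STATS test_part PARTITION (b=1);

-- Recompute stats for just that partition
COMPUTE INCREMENTAL STATS test_part PARTITION (b=1);
```

These statements need to run against the Impala daemon (e.g. via impala-shell), so they are shown here only as a sketch.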
02-22-2023
12:12 PM
Hi. Yes, this is expected when multiple users share a common path for the ticket cache. Can you make the location unique for each user? I haven't tested it, but I see an option described in this link: https://gpdb.docs.pivotal.io/6-3/admin_guide/kerberos-win-client.html

"Set up the Kerberos credential cache file. On the Windows system, set the environment variable KRB5CCNAME to specify the file system location of the cache file. The file must be named krb5cache. This location identifies a file, not a directory, and should be unique to each login on the server. When you set KRB5CCNAME, you can specify the value in either a local user environment or within a session. For example, the following command sets KRB5CCNAME in the session: set KRB5CCNAME=%USERPROFILE%\krb5cache"
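A minimal sketch of making the cache path unique per login, assuming the Windows cmd shell. Because %USERPROFILE% expands differently for each user, every login ends up with its own krb5cache file:

```shell
:: Per-session: point KRB5CCNAME at a file under the current user's profile
set KRB5CCNAME=%USERPROFILE%\krb5cache

:: Or persist it for the current user across future sessions
setx KRB5CCNAME "%USERPROFILE%\krb5cache"
```

Note that setx does not affect the current session, only new ones.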
01-09-2023
12:20 PM
2 Kudos
You can set a quota on /tmp. Once the quota is reached, further writes to the directory will fail. https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/scaling-namespaces/topics/hdfs-set-quotas-cm.html has the steps to enable quotas.
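The same thing can be done from the command line with hdfs dfsadmin; this is a sketch and the 10g limit is just an illustrative value:

```shell
# Cap the raw disk space /tmp may consume (replication counts against it)
hdfs dfsadmin -setSpaceQuota 10g /tmp

# Optionally also cap the number of files and directories under /tmp
hdfs dfsadmin -setQuota 100000 /tmp

# Verify the quotas now in effect
hdfs dfs -count -q -h /tmp

# Remove the space quota again if needed
hdfs dfsadmin -clrSpaceQuota /tmp
```

These commands must be run as an HDFS superuser against a live cluster.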
12-12-2022
08:10 AM
The jira HADOOP-9640 added a feature that gives well-behaved users (those not hogging the NN RPC queue) fair response times. An explanation of how this works is available in this video: https://www.youtube.com/watch?v=7Axz3bO18l8&ab_channel=DataWorksSummit
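A hedged sketch of enabling the feature (FairCallQueue) in core-site.xml, assuming the NameNode RPC port is 8020; the property names embed the port number, so adjust them to your cluster:

```xml
<!-- Swap the default FIFO call queue for FairCallQueue on port 8020 -->
<property>
  <name>ipc.8020.callqueue.impl</name>
  <value>org.apache.hadoop.ipc.FairCallQueue</value>
</property>
<!-- Classify callers by user, so heavy users are deprioritized -->
<property>
  <name>ipc.8020.identity-provider.impl</name>
  <value>org.apache.hadoop.ipc.UserIdentityProvider</value>
</property>
```

A NameNode restart is needed for the new call queue to take effect.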
04-26-2020
11:17 PM
Can you share the exact steps/list of configurations you changed to set up Kerberos in Kafka? At the time of this failure in the broker, what is the exact error you notice on the ZooKeeper side?

4:29:51.371 AM ERROR ZooKeeperClient [ZooKeeperClient] Auth failed.

Did you tweak any configuration on ZooKeeper too?
04-01-2020
11:19 PM
Hi @Amn_468 Please configure it as follows:
1. CM > HDFS > Configuration > Java Heap Size of NameNode in Bytes
2. Enter a value per your requirement
3. Save and restart
03-31-2020
09:50 AM
Are there any errors in the JHS logs, especially around this timeframe: 2020-03-31 13:14:* ?
03-27-2020
09:37 AM
The call to this region server 1.1.1.1:60020 is getting closed instantly:

Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hostname003.enterprisenet.org/1.1.1.1:60020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hostname003.enterprisenet.org/1.1.1.1:60020 is closing. Call id=4045, waitTime=2

1. Is there an hbase-site.xml bundled with your application jar?
2. If yes, can you rebuild the jar with the latest hbase-site.xml from /etc/hbase/conf/?
3. I am not sure whether the server is printing any ERROR, but it is worth checking what exactly is happening in the RS logs on node hostname003.enterprisenet.org at 2020 Mar 27 01:18:16 (i.e. when the connection from the client is closed).
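A quick way to check point 1, assuming a standard JDK is on the path and the application jar is named app.jar (a placeholder name):

```shell
# List any hbase-site.xml bundled inside the application jar
jar tf app.jar | grep hbase-site.xml

# If one is found, compare it with the cluster's current client config
unzip -p app.jar hbase-site.xml | diff - /etc/hbase/conf/hbase-site.xml
```

A non-empty diff would explain why the client is connecting with stale settings.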
03-27-2020
02:15 AM
Can you attach the full exception or the error log? It is unclear what the actual error is from the snippet you posted in your last response.
03-26-2020
09:00 PM
These are 2 separate issues.

ERROR 1: Did you delete down to /disk{1,2,3,4,5}/yarn/nm/usercache/mcaf, or did you delete down to /disk{1,2,3,4,5}/yarn/nm/usercache/? If you deleted down to /disk{1,2,3,4,5}/yarn/nm/usercache/, then please restart all the NodeManagers. If not:
- How many NodeManagers do you have in this cluster?
- Can you run namei -l /disk{1,2,3,4,5}/yarn/nm/usercache/ across all those machines? Please paste the result with the "Insert or code sample" option in the portal so that it has better readability.

ERROR 2: Mar26 11:36:00,863 main com.class.engineering.portfolio.dmxsloader.main.DMXSLoaderMain: org.apache.hadoop.hbase.client.RetriesExhaustedException thrown: Can't get the location
a. The machine from which you are submitting this job: does it have the HBase gateway installed? If not, can you run the job from a machine that has the HBase gateway?
b. Also, since you said this job worked as the hbase user but not as mcaf: have you attempted to grant mcaf permission on the table you are trying to access? https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_sg_hbase_authorization.html#topic_8_3_2 has the steps.
c. What error do you see in the HMaster logs at the exact timestamp you notice this error in the job?
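For point b under ERROR 2, the grant itself might look like this in the hbase shell; the table name my_table is a placeholder, and 'RWX' grants read, write, and execute:

```shell
# Run as a user with admin rights on the table (e.g. the hbase user)
hbase shell <<'EOF'
grant 'mcaf', 'RWX', 'my_table'
user_permission 'my_table'
EOF
```

The user_permission command is just a sanity check that the grant took effect.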