Member since: 12-11-2015
Posts: 206
Kudos Received: 30
Solutions: 30

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 479 | 08-14-2024 06:24 AM
 | 1463 | 10-02-2023 06:26 AM
 | 1304 | 07-28-2023 06:28 AM
 | 8517 | 06-02-2023 06:06 AM
 | 653 | 01-09-2023 12:20 PM
04-01-2020
11:19 PM
Hi @Amn_468 Please configure it in CM > HDFS > Configuration > Java Heap Size of NameNode in Bytes.
Enter a value per your requirement.
Save and restart.
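For reference, outside of Cloudera Manager the same heap setting is carried by HADOOP_NAMENODE_OPTS in hadoop-env.sh; a minimal sketch (the 4g value is purely illustrative, size it to your file/block count):

```sh
# hadoop-env.sh sketch: only the -Xms/-Xmx values matter here; 4g is an illustrative placeholder
export HADOOP_NAMENODE_OPTS="-Xms4g -Xmx4g ${HADOOP_NAMENODE_OPTS}"
```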
03-31-2020
09:50 AM
Are there any errors in the JHS logs, especially around this timeframe: 2020-03-31 13:14:* ?
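If it helps, something like this will pull anything logged in that window (the log path is an assumption; adjust it to your JobHistory Server host's actual log directory):

```sh
# Hypothetical JHS log location; adjust the path to your environment
grep "2020-03-31 13:14" /var/log/hadoop-mapreduce/*historyserver*.log* | grep -iE "error|exception"
```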
03-27-2020
09:37 AM
The call to this region server 1.1.1.1:60020 is getting closed instantly:

Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hostname003.enterprisenet.org/1.1.1.1:60020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hostname003.enterprisenet.org/1.1.1.1:60020 is closing. Call id=4045, waitTime=2

1. Is there any hbase-site.xml bundled with your application jar? (A quick way to check is sketched below this list.)
2. If yes, can you rebuild the jar with the latest hbase-site.xml from /etc/hbase/conf/?
3. I am not sure whether the server is printing any ERROR, but it is worth checking what exactly is happening in the RegionServer logs on node hostname003.enterprisenet.org at 2020 Mar 27 01:18:16 (i.e. when the connection from the client is closed).
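A minimal sketch for points 1 and 2 (the jar name is a placeholder, not from this thread):

```sh
# Is a (possibly stale) hbase-site.xml packaged inside the application jar?
jar tf your-app.jar | grep hbase-site.xml

# If so, one option is to update the jar with the cluster's current client config
jar uf your-app.jar -C /etc/hbase/conf hbase-site.xml
```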
03-27-2020
02:15 AM
Can you attach the full exception or the error log? It is unclear what the actual error is from the snippet you shared in your last response.
03-26-2020
09:00 PM
These are 2 separate issues.

ERROR 1:
Did you delete only down to /disk{1,2,3,4,5}/yarn/nm/usercache/mcaf, or did you delete down to /disk{1,2,3,4,5}/yarn/nm/usercache/? If you deleted down to /disk{1,2,3,4,5}/yarn/nm/usercache/, then please restart all the NodeManagers. If not, can you please let me know how many NodeManagers you have in this cluster, and can you run namei -l /disk{1,2,3,4,5}/yarn/nm/usercache/ across all of those machines? Please paste your result with the "Insert or code sample" option in the portal so that it has better readability.

ERROR 2:
Mar26 11:36:00,863 main com.class.engineering.portfolio.dmxsloader.main.DMXSLoaderMain: org.apache.hadoop.hbase.client.RetriesExhaustedException thrown: Can't get the location
a. The machine from which you are submitting this job - does it have an HBase gateway installed? If not, can you run it from a machine that has an HBase gateway?
b. Also, since you said this job worked as the hbase user and not as mcaf - have you attempted to grant mcaf permission on the table you are trying to access? https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_sg_hbase_authorization.html#topic_8_3_2 has the steps (a sketch of the grant is shown after this list).
c. What is the error you see in the HMaster logs at the exact timestamp you notice this error in the job?
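For point b, a minimal sketch of the grant from the HBase shell, assuming a placeholder table name and permission set (the linked doc has the authoritative steps):

```sh
# Run as a user with HBase admin rights; 'your_table' and RWX are illustrative placeholders
echo "grant 'mcaf', 'RWX', 'your_table'" | hbase shell
```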
03-25-2020
11:06 PM
These app cache directories get auto-generated upon job submission. Can you remove /disk{1,2,3,4,5}/yarn/nm/usercache/mcaf from the NodeManagers (so that it gets created fresh with the required ACLs) and then re-submit the job?
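A sketch of the cleanup, to be run on each NodeManager host (same five disk mounts as above):

```sh
# Remove only the mcaf user-cache directories; they are recreated with the proper ACLs on the next job submission
rm -rf /disk{1,2,3,4,5}/yarn/nm/usercache/mcaf
```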
03-23-2020
03:59 AM
"although same property (dfs.datanode.balance.max.concurrent.moves) already exists in Cloudera Manager." --> Okay, I assume you are referring to the one highlighted in screenshot below Yes its unnecessary to add dfs.datanode.balance.max.concurrent.moves in Balancer Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml if you had used the "Maximum Concurrent Moves" section. Also note that this "Maximum Concurrent Moves" is scoped only to balancer and not to datanodes. So for datanodes you have to explicitly set it using " DataNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml" Regarding reason for why to add this property both for balancer and datanode is mentioned in my previous comment. Hope that clarifies and let me know if there are further questions I will raise an internal jira for correcting the document to avoid duplicate entry on balancer safety-valve.
03-22-2020
11:32 PM
Yes, you can install CM offline after downloading the packages. It is documented in this link: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ig_create_local_package_repo.html#internal_package_repo
Once the repo is ready, you can install the binaries using the steps in this link: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/install_cloudera_packages.html#id_z2h_pnm_25
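Very roughly, the first link boils down to something like this; the hostnames, paths and repo URL below are placeholders, and the exact package list comes from the doc:

```sh
# On a host with internet access: stage the CM packages and build a local yum repo
sudo yum install -y createrepo httpd
sudo mkdir -p /var/www/html/cloudera-repos/cm6
# ...download the Cloudera Manager RPMs into that directory (see the linked doc for the package list)...
sudo createrepo /var/www/html/cloudera-repos/cm6
sudo systemctl start httpd

# On each cluster host: point yum at the local repo (placeholder URL)
cat <<'EOF' | sudo tee /etc/yum.repos.d/cloudera-manager.repo
[cloudera-manager]
name=Cloudera Manager (local repo)
baseurl=http://repo-host.example.com/cloudera-repos/cm6/
gpgcheck=0
enabled=1
EOF
```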
03-22-2020
10:09 PM
This error usually happens if the client's QOP does not match the QOP on the server. Can you share the connection string used in your code snippet? Is your HiveServer2 kerberized? Can you please share the value set for hive.server2.thrift.sasl.qop in your HiveServer2's hive-site.xml? An example connection string is in this link: https://github.com/dropbox/PyHive/pull/135/files/ec5270c4b6556bcd20f0f81afbced4a69ca9eff0
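As a quick server-side sanity check, independent of PyHive, you can confirm which QOP HiveServer2 expects and test a matching connection with beeline; the host, realm and auth-conf value below are placeholders:

```sh
# Which QOP does HiveServer2 expect? (auth, auth-int or auth-conf)
grep -A1 "hive.server2.thrift.sasl.qop" /etc/hive/conf/hive-site.xml

# Beeline connection string for a kerberized HiveServer2 with a matching sasl.qop (placeholder values)
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM;sasl.qop=auth-conf"
```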
03-22-2020
08:48 PM
You would need to tune your heap in accordance with the number of files. The tuning guideline is in this document: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_command-line-installation/content/configuring-namenode-heap-size.html
If you would like to get a count of files, you may run hdfs dfs -count /
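For reference, the command prints DIR_COUNT, FILE_COUNT, CONTENT_SIZE and PATHNAME, so the second column is the number you map to the sizing table in the linked doc. A sketch with made-up numbers:

```sh
$ hdfs dfs -count /
#  DIR_COUNT    FILE_COUNT    CONTENT_SIZE    PATHNAME
#      12345       9876543   1099511627776    /
```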