Member since: 06-26-2018
Posts: 26
Kudos Received: 2
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3198 | 10-22-2019 09:24 AM
 | 2226 | 10-29-2018 02:28 PM
 | 10526 | 10-08-2018 08:36 AM
09-28-2020
11:49 AM
1 Kudo
ZooKeeper does not allow listing or editing a znode if its current ACL does not grant the required permissions to the user or group. This znode security behaviour is inherited from Apache ZooKeeper by all Cloudera distributions. A few references for the workaround exist; this article compiles them together for Cloudera Manager (CM) managed clusters.
For the following error:
Authentication is not valid
There are two ways to address it:
Disable all ACL validation in ZooKeeper (not recommended):
Add the following config in CM > ZooKeeper configuration > search for 'Java Configuration Options for Zookeeper Server': -Dzookeeper.skipACL=yes
Then restart ZooKeeper and refresh the stale configs.
Add a ZooKeeper super auth:
Skip the part between the <SKIP> tags if you want to use 'password' as the auth key.
<SKIP>
cd /opt/cloudera/parcels/CDH/lib/zookeeper/
java -cp "./zookeeper.jar:lib/*" org.apache.zookeeper.server.auth.DigestAuthenticationProvider super:password
Use the last line of the output from running the above command:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
super:password->super:DyNYQEQvajljsxlhf5uS4PJ9R28=
</SKIP>
Add the following config in CM > Zookeeper config > Search 'Java Configuration Options for Zookeeper Server': -Dzookeeper.DigestAuthenticationProvider.superDigest=super:DyNYQEQvajljsxlhf5uS4PJ9R28=
Restart and refresh the stale configs.
Once connected via zookeeper-client, run the following before executing any further command: addauth digest super:password
After this command you will be able to run any operation on any znode.
NOTE:
The slf4j-api version may differ in later builds.
Replace 'password' with any string you desire.
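For reference, the digest portion printed by DigestAuthenticationProvider is just the Base64 encoding of the SHA-1 of "user:password", so it can also be reproduced without zookeeper.jar. A minimal sketch, assuming openssl is available on the host:

```shell
# Compute Base64(SHA-1("super:password")) - the same digest value that
# DigestAuthenticationProvider prints after "super:password->super:".
printf '%s' 'super:password' | openssl dgst -binary -sha1 | openssl base64
```

Prepend super: to the printed value to form the string for the superDigest property.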
10-23-2019
05:33 AM
Thanks for the reply @rohitmalhotra. The user limit for YARN is set to 65536. Is there a recommended maximum value, or shall I just make it unlimited? (Can that have consequences?) Edit: I tried setting it to unlimited and I am still seeing the same error.
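For anyone checking the same thing, the limits in question can be read from a shell on the affected node (on Linux, threads count against the per-user process limit, which is what "unable to create new native thread" usually hits):

```shell
# Per-user process/thread ceiling for the current shell's user.
ulimit -u
# Open-file limit is worth checking at the same time, as it can cause similar failures.
ulimit -n
```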
10-22-2019
08:11 PM
I would suggest going through the below docs and verifying the outbound rules on port 7180. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
10-22-2019
12:05 PM
Good news. If that resolves your issue, please take a moment to accept the solution. Thanks.
10-22-2019
10:44 AM
Seeing the below exception when running Hive TPC-DS data generation (https://github.com/hortonworks/hive-testbench) for a scale of ~500 GB.
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread
Attached log for complete stacktrace.
Cluster Configuration :
16 Nodes / 12 Nodemanagers / 12 Datanodes
Per Node Config :
Cores : 40
Memory : 392GB
Ambari configs changed from the initial defaults to improve performance:
Decided to set 10 GB as the container size to utilise the maximum cores per node (320 GB / 10 GB = 32 containers per node, each using 1 core; hence ~32 cores per node utilised)
YARN
yarn.nodemanager.resource.memory-mb = 329216 MB
yarn.scheduler.minimum-allocation-mb = 10240 MB
yarn.scheduler.maximum-allocation-mb = 329216 MB
MapReduce (All Heap Sizes : -Xmx8192m : 80% of container)
mapreduce.map.memory.mb = 10240 MB
mapreduce.reduce.memory.mb = 10240 MB
mapreduce.task.io.sort.mb = 1792 MB
yarn.app.mapreduce.am.resource.mb = 10240 MB
Hive
hive.tez.container.size = 10240MB
hive.auto.convert.join.noconditionaltask.size = 2027316838 B
hive.exec.reducers.bytes.per.reducer = 1073217536 B
Tez
tez.am.resource.memory.mb = 10240 MB
tez.am.resource.java.opts = -server -Xmx8192m
tez.task.resource.memory.mb = 10240 MB
tez.runtime.io.sort.mb = 2047 MB (~20% of container)
tez.runtime.unordered.output.buffer.size-mb = 768 MB (~10% of container)
tez.grouping.max-size = 2073741824 B
tez.grouping.min-size = 167772160 B
Any help would be greatly appreciated. I referred to https://community.cloudera.com/t5/Community-Articles/Demystify-Apache-Tez-Memory-Tuning-Step-by-Step/ta-p/245279 for some of the tuning values.
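As a sanity check, the container arithmetic above can be reproduced with shell arithmetic (numbers taken from the configs listed; this is just the back-of-envelope calculation, not a tuning recommendation):

```shell
# 329216 MB usable per NodeManager divided into 10240 MB containers.
node_mem_mb=329216
container_mb=10240
containers_per_node=$(( node_mem_mb / container_mb ))
echo "$containers_per_node"   # prints 32 - i.e. ~32 cores/node at 1 core per container
```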
Labels:
- Apache Ambari
- Apache Hive
- Apache Tez
10-22-2019
09:38 AM
A recurring exception is observed in the CM server logs:
2019-10-20 17:32:34,687 ERROR ParcelUpdateService:com.cloudera.parcel.components.ParcelDownloaderImpl: (11 skipped) Unable to retrieve remote parcel repository manifest
java.util.concurrent.ExecutionException: java.net.ConnectException: connection timed out: archive.cloudera.com/151.101.188.167:443
This can happen if you need an http_proxy to access the public web, or if you are on a private network. CM is trying to reach the archive URL to download parcels (the method used while installing CM) and failing to do so. Try running the below command on the CM node and let us know the output:
wget https://archive.cloudera.com/cdh6/6.3.1/parcels/manifest.json
If you want to set a proxy, it can be done under Administration > search for 'Proxy'.
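If a proxy is in play, here is a minimal sketch of the environment variables for an ad-hoc test from a shell on the CM host. The proxy host and port below are placeholders, and note that CM's own parcel downloads use the proxy configured under Administration, not these shell variables:

```shell
# Placeholder proxy endpoint - substitute your real proxy host and port, then
# re-try: wget https://archive.cloudera.com/cdh6/6.3.1/parcels/manifest.json
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
```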
10-22-2019
09:24 AM
Can you verify the proper ownership of the cloudera-scm-server-db folder by running the below commands:
chown -R cloudera-scm:cloudera-scm /var/lib/cloudera-scm-server-db/
chmod 700 /var/lib/cloudera-scm-server-db/
chmod 700 /var/lib/cloudera-scm-server-db/data
service cloudera-scm-server-db start
Also verify the SELinux status by running: sestatus
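To confirm the mode took effect afterwards, a small illustrative check (shown here on a scratch directory; on the real host, point stat at /var/lib/cloudera-scm-server-db instead):

```shell
# Create a scratch dir, apply the expected mode, and read it back (GNU stat).
d=$(mktemp -d)
chmod 700 "$d"
stat -c '%a %U' "$d"   # first field prints 700
```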
02-28-2019
11:51 AM
3 Kudos
Hi, if you don't want SMARTSENSE in your cluster but it still comes pre-selected as a default component in the install wizard, the steps below will save you some trouble. Tried on HDP versions 3.0 and 3.1.
1. Go to the below path on the ambari-server node: /var/lib/ambari-server/resources/stacks/HDP/3.0/services/SMARTSENSE/metainfo.xml
2. Open the above file in an editor (e.g. vi).
3. Comment out or delete the below line (line 23; the line number may vary in different releases): <selection>MANDATORY</selection>
4. After making the above change, restart ambari-server and proceed with the cluster install wizard.
Now SMARTSENSE won't be a mandatory component. Thanks for reading.
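The metainfo.xml edit above can also be scripted; an illustrative sed run on a scratch copy (on the real host, substitute the metainfo.xml path for "$f" and back the file up first):

```shell
# Work on a scratch copy containing the line in question.
f=$(mktemp)
printf '  <selection>MANDATORY</selection>\n' > "$f"
# Delete the <selection> line so SMARTSENSE is no longer mandatory.
sed -i '/<selection>MANDATORY<\/selection>/d' "$f"
```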
10-29-2018
04:21 PM
If it worked for you, please take a moment to login and "Accept" the answer.
10-29-2018
02:28 PM
I believe there are other files along with your 9 GB file, and by coincidence those other files constitute 18 GB of data. They consist of component libraries, Ambari data, user data, and tmp data. Run the below command to find which files are taking up the space:
hadoop fs -du -s -h /*
Drill down further by putting a path in place of * until you find the other files that add up to 18 GB.