Member since: 03-19-2018
Posts: 47
Kudos Received: 2
Solutions: 0
05-13-2020
06:20 AM
Encountering the following error during client configuration deployment:
Can't open /run/cloudera-scm-agent/process/ccdeploy_hadoop-conf_etchadoopconf.cloudera.yarn_-3292500189129258295/yarn-conf/hive-env.sh: No such file or directory.
++ dirname /etc/hadoop/conf.cloudera.yarn
+ ROOT_DIR_NAME=/etc/hadoop
+ '[' '!' -e /etc/hadoop ']'
+ for SPECIAL_FILE in '$DEST_PATH/{taskcontroller.cfg,container-executor.cfg}'
+ '[' -e /etc/hadoop/conf.cloudera.yarn/taskcontroller.cfg ']'
+ for SPECIAL_FILE in '$DEST_PATH/{taskcontroller.cfg,container-executor.cfg}'
+ '[' -e /etc/hadoop/conf.cloudera.yarn/container-executor.cfg ']'
+ cp -a /etc/hadoop/conf.cloudera.yarn/container-executor.cfg /run/cloudera-scm-agent/process/ccdeploy_hadoop-conf_etchadoopconf.cloudera.yarn_-3292500189129258295/yarn-conf
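As a first check, it may help to confirm whether hive-env.sh is actually present in the source directory the deploy script copies from; a minimal sketch using the paths from the trace above (the alternatives query assumes a RHEL-style CDH setup):

# List the source client-config directory named in the trace; the deploy
# script copies files from here, so hive-env.sh should show up if it exists:
ls -l /etc/hadoop/conf.cloudera.yarn/
# On RHEL-style systems, confirm which hadoop-conf alternative is active (assumption):
alternatives --display hadoop-conf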
11-08-2019
11:34 PM
@Shelton yes, it is still open.
10-13-2019
11:32 PM
I have shared most of the details on the Apache ZooKeeper JIRA:
https://issues.apache.org/jira/browse/ZOOKEEPER-3576
Could somebody please shed some light on it?
Tags: apache-jira, Zookeeper
Labels: Apache Zookeeper
06-28-2019
07:55 AM
Hi @BiggieSmalls, yes, I did get the issue resolved. It was quite a while ago; let me get back to you on Monday.
04-03-2019
01:22 AM
I have an existing instance, ClouderaManager-1, managing a 30-node CDH 5.15.0 cluster, which has been enabled with TLS level 3 encryption and Kerberized,
and I have another instance, ClouderaManager-2, managing a 5-node CDH 5.14.0 cluster (non-TLS and non-Kerberized).
I would like to move the 5-node CDH 5.14.0 cluster under the ClouderaManager-1 instance.
Can I do so, and could somebody please point me to the correct doc link for achieving it?
Labels: Cloudera Manager, Kerberos
03-05-2019
09:55 AM
We are planning to move our 30-node cluster from one DC to another. What other approach could be taken, considering that we are not setting up any new cluster in the target DC for data replication, as described in the link below?
https://medium.com/hulu-tech-blog/migrating-hulus-hadoop-clusters-to-a-new-data-center-part-one-extending-our-hadoop-instance-b88c4bda61bc
Labels: Cloudera Manager, HDFS
02-26-2019
01:06 AM
I have been using Cloudera community edition 5.15.0 for the past 90 days and had external authentication configured on it. After a few days of use, I noticed that External Authentication is no longer available under the Administration -> Security options. Does external authentication come only with a limited period of time? Please advise.
Labels: Cloudera Manager, Security
11-21-2018
10:52 PM
@Tomas79, thank you for the inputs.
11-20-2018
03:47 AM
I'm not sure what's going wrong here; ideally it should not happen, but when I execute the below query while the HDFS service is down, I notice the partition being dropped despite the below error.
Query: alter table fenet5_dev.dw_malicious_events drop partition (occurred_month = 201808) purge
ERROR: ImpalaRuntimeException: Error making 'dropPartition' RPC to Hive Metastore:
CAUSED BY: MetaException: Got exception: java.net.ConnectException Call From hpc143 to hpc123:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
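For anyone verifying the same behaviour, it may help to compare the metastore's view of the partition with the data directory once HDFS is back up; a minimal sketch using the table from the query above (the warehouse path is an assumed default, not confirmed here):

# Check whether the partition is still registered in the metastore:
impala-shell -q "show partitions fenet5_dev.dw_malicious_events"
# Once HDFS is back, check whether the partition's data directory survived
# (assumes the default warehouse layout; adjust if the table uses a custom LOCATION):
hdfs dfs -ls /user/hive/warehouse/fenet5_dev.db/dw_malicious_events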
Tags: Hive
Labels: Apache Hive
10-09-2018
11:45 PM
@EricL, I was under the impression that MEM_LIMIT only imposes a soft limit on the memory resource and helps in attaining concurrency, but the reality is that it is a hard limit and a query will fail once it exceeds the MEM_LIMIT value. After setting the memory limit to a greater value, the query ran fine with a peak utilization of 27 GB.
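For reference, the limit can also be raised for a single session instead of pool-wide; a minimal sketch inside impala-shell (32g is just an illustrative value above the 27 GB peak reported here):

-- Override the pool's default MEM_LIMIT for this session only, then re-run the query:
set mem_limit=32g;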
09-30-2018
10:24 PM
@Tim Armstrong, I have a follow-up question on this: can a given query exceed the memory consumption specified as the default minimum memory (MEM_LIMIT)? Suppose the default memory limit is set to 20g; if a given query requires more than 20g, can the impalad process allocate the additional memory without cancelling the query? So far we have noticed that the default memory limit sets a hard limit on memory utilization, thus cancelling the query, and each time we had to set MEM_LIMIT to a higher value and re-run, although the OS had a sufficient amount of memory to allocate. Version: CDH 5.8.2
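One way to see how much memory a query actually needs is its runtime profile, which includes per-node peak memory usage; a minimal sketch inside impala-shell:

-- After the query finishes (or fails on the limit), dump the runtime
-- profile of the most recent query to inspect its per-node peak memory:
profile;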
09-27-2018
10:58 PM
I'm receiving the below error message about the memory limit, although a sufficient default memory limit is set on the resource pool, and the explain plan shows only 288 MB per host across the 18-node cluster, which amounts to 5184 MB of total memory consumption.
+-----------------------------------------------------------+
| Explain String                                            |
+-----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=288.00MB VCores=1 |
|                                                           |
| 01:EXCHANGE [UNPARTITIONED]                               |
| |  limit: 1                                               |
| |                                                         |
| 00:SCAN HDFS [fenet5.hmig_os_changes_details_malicious]   |
|    partitions=1/25 files=3118 size=110.01GB               |
|    predicates: job_id = 55451                             |
|    limit: 1                                               |
+-----------------------------------------------------------+
WARNINGS: Memory limit exceeded
HdfsParquetScanner::ReadDataPage() failed to allocate 269074889 bytes for dictionary.
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 257.23 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB HDFS_SCAN_NODE (id=0): Consumption=20.00 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.63 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB HDFS_SCAN_NODE (id=0): Consumption=20.00 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.27 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB HDFS_SCAN_NODE (id=0): Consumption=20.00 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.39 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB HDFS_SCAN_NODE (id=0): Consumption=20.00 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 16.09 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.74 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.74 GB HDFS_SCAN_NODE (id=0): Consumption=19.74 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 15.20 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.64 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB HDFS_SCAN_NODE (id=0): Consumption=19.64 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 14.61 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.64 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB HDFS_SCAN_NODE (id=0): Consumption=19.64 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 257.11 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.47 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.47 GB HDFS_SCAN_NODE (id=0): Consumption=19.47 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.51 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.24 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.24 GB HDFS_SCAN_NODE (id=0): Consumption=19.24 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.32 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.24 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.24 GB HDFS_SCAN_NODE (id=0): Consumption=19.24 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.73 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.49 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.49 GB HDFS_SCAN_NODE (id=0): Consumption=19.49 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.29 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.49 GB Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.49 GB HDFS_SCAN_NODE (id=0): Consumption=19.49 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 256.61 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: memory limit exceeded. Limit=20.00 GB Consumption=20.00 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=20.00 GB HDFS_SCAN_NODE (id=0): Consumption=20.00 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 256.05 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.69 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=19.69 GB HDFS_SCAN_NODE (id=0): Consumption=19.69 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.35 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=17.97 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=17.97 GB HDFS_SCAN_NODE (id=0): Consumption=17.72 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 1.02 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=17.63 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=17.63 GB HDFS_SCAN_NODE (id=0): Consumption=17.63 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 1.01 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.94 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.94 GB HDFS_SCAN_NODE (id=0): Consumption=16.68 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 88.00 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.61 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.61 GB HDFS_SCAN_NODE (id=0): Consumption=16.36 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded HDFS_SCAN_NODE (id=0) could not allocate 255.23 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.30 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.30 GB HDFS_SCAN_NODE (id=0): Consumption=16.30 GB DataStreamSender: Consumption=1.45 KB Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=8.02 GB Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=8.02 GB HDFS_SCAN_NODE (id=0): Consumption=8.02 GB Block Manager: Limit=16.00 GB Consumption=0
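A hedged side note on the estimate-vs-actual gap: the 288 MB per-host figure is only a planner estimate, and with 3118 files in a single 110 GB partition, missing or stale statistics can make that estimate wildly optimistic. Refreshing stats is a general suggestion rather than a confirmed fix for this case; a minimal sketch inside impala-shell:

-- Recompute table and column stats so the planner's memory estimate better
-- reflects the 110.01 GB, 3118-file partition shown in the plan above:
compute stats fenet5.hmig_os_changes_details_malicious;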
Labels: Apache Impala, HDFS
09-25-2018
12:36 AM
@mmm286, in case your issue is resolved, could you please kindly update the details and mark the thread as solved?
09-21-2018
10:52 PM
@HEWITT, it looks like the agent is not able to resolve the hostname dev1. Could you please let us know if you are able to ssh to dev1 from the agent machine and vice versa, and telnet to dev1 on port 7182?
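To spell those checks out, a minimal sketch to run from the agent host (the hostname dev1 and port 7182 come from this thread):

# Name resolution and reachability from the agent host:
getent hosts dev1
ping -c 3 dev1
ssh dev1 'hostname -f'
# The agent-to-Cloudera-Manager heartbeat port mentioned above:
telnet dev1 7182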
09-20-2018
02:43 AM
Could you verify the troubleshooting steps mentioned in the link below and see if you need to fix anything at your end?
https://community.cloudera.com/t5/Cloudera-Manager-Installation/Getting-quot-heartbeat-quot-errors-when-trying-to-install/ta-p/36843
09-20-2018
02:24 AM
Have you verified that port 7182 is open, using telnet?
09-13-2018
07:06 AM
I agree with @Tomas79 on restarting the services for the new certs to come into effect.
09-13-2018
07:03 AM
I have applied a new TLS certificate and hence need to restart Cloudera Manager and the agents. The question is: will this impact jobs running on CDH services (Hadoop, Impala, etc.)?
Labels: Cloudera Manager
09-11-2018
11:42 AM
@Tomas79 the point I'm trying to understand is: no matter in what format the service validates its certificate, it should break in case of an issue after the 15-second interval.
09-11-2018
11:21 AM
I can only see the new certificate applied on the Hue UI; hence I'm also pretty sure about the path on the other server.
09-11-2018
11:18 AM
@Tomas79 it should have fallen apart after 15 seconds; that's the interval at which the agents send heartbeats. I have encountered issues with TLS in the past, and when something went wrong, the service would immediately fail and throw errors in the log. This is quite weird, though.
09-11-2018
11:11 AM
Does it require a restart of the Cloudera Manager service?
09-11-2018
11:03 AM
@Tomas79, openssl s_client -connect is reading the old certificate, whereas I have replaced the certificates with new ones under the /opt/cloudera/security/x509 and /opt/cloudera/security/jks paths. I have not noticed any heartbeat issue; the agents' heartbeats are working fine, so I don't see any problem there.
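For comparing the served certificate against the one on disk, a minimal sketch; the host, port, and PEM filename are placeholders rather than values confirmed in this thread:

# Expiry date of the certificate the server actually presents
# (cm-host is a placeholder; 7183 is Cloudera Manager's default TLS web port):
openssl s_client -connect cm-host:7183 </dev/null 2>/dev/null | openssl x509 -noout -enddate
# Expiry date of the certificate file on disk (placeholder filename):
openssl x509 -noout -enddate -in /opt/cloudera/security/x509/server.pem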
09-11-2018
10:39 AM
@Tomas79 I meant that I have requested a new certificate and applied it on the server.
09-11-2018
09:01 AM
I have renewed the TLS certificates and applied them on the Cloudera Manager server, but judging by the expiry date, the browser is still showing the older one. I tried clearing the browser cache, but it still shows the older certificate. I'd appreciate any help.
Labels: Cloudera Manager
08-16-2018
11:41 PM
Currently we are running Cloudera Manager without HA capability. What precautionary measures can we take now that would help us rebuild the environment in case of a hardware failure?
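One common precaution is a regular backup of the Cloudera Manager database and server configuration; a minimal sketch, assuming an external MySQL database named scm (the database type, name, and backup paths are assumptions to adapt):

# Dump the Cloudera Manager database (assumes MySQL with a database called "scm"):
mysqldump -u scm -p --single-transaction scm > /backup/cm_scm_$(date +%F).sql
# Archive the CM server configuration and the TLS material used elsewhere in this history:
tar czf /backup/cm_etc_$(date +%F).tar.gz /etc/cloudera-scm-server /opt/cloudera/security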
Labels: Cloudera Manager
04-25-2018
09:24 AM
Adding to the above, below is the advice I get from Cloudera Manager. Please advise as to the best approach:

Advice: This is a Hive Metastore health test that checks that a client can connect and perform basic operations. The operations include: (1) creating a database, (2) creating a table within that database with several types of columns and two partition keys, (3) creating a number of partitions, and (4) dropping both the table and the database. The database is created under /user/hue/.cloudera_manager_hive_metastore_canary/<Hive Metastore role name>/ and is named "cloudera_manager_metastore_canary_test_db". The test returns "Bad" health if any of these operations fail. The test returns "Concerning" health if an unknown failure happens.

The canary publishes a metric 'canary_duration' for the time it took for the canary to complete. Here is an example of a trigger, defined for the Hive Metastore role configuration group, that changes the health to "Bad" when the duration of the canary is longer than 5 sec: "IF (SELECT canary_duration WHERE entityName=$ROLENAME AND category = ROLE and last(canary_duration) > 5s) DO health:bad"

A failure of this health test may indicate that the Hive Metastore is failing basic operations. Check the logs of the Hive Metastore and the Cloudera Manager Service Monitor for more details. This test can be enabled or disabled using the Hive Metastore Canary Health Test Hive Metastore monitoring setting.
04-25-2018
08:55 AM
Could you please suggest why this happens and how I should resolve the below error?
[pool-4-thread-3]: AlreadyExistsException(message:Database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_eab3ee7a2ef37229bc56436ae1121ac2 already exists)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:941)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
    at com.sun.proxy.$Proxy8.create_database(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8863)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8847)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:735)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:730)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:730)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
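If the canary aborted mid-run and left its test database behind, one hedged way to clear this error is to drop the leftover database manually, using the name from the log above (run in beeline or the Hive CLI with an account allowed to drop it):

-- Drop the stale canary database left by an interrupted health check;
-- CASCADE also removes any canary tables or partitions inside it.
DROP DATABASE IF EXISTS cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_eab3ee7a2ef37229bc56436ae1121ac2 CASCADE;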
Labels: Apache Hive, Cloudera Manager
04-23-2018
11:13 AM
2 Kudos
Could anyone please let me know how I can resolve the below error?
[23/Apr/2018 11:10:05 -0700] conf ERROR failed to get oozie status
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/hue/desktop/libs/liboozie/src/liboozie/conf.py", line 61, in get_oozie_status
    status = str(get_oozie(user).get_oozie_status())
  File "/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/hue/desktop/libs/liboozie/src/liboozie/oozie_api.py", line 325, in get_oozie_status
    resp = self._root.get('admin/status', params)
  File "/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", line 98, in get
    return self.invoke("GET", relpath, params, headers=headers, allow_redirects=True)
  File "/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", line 79, in invoke
    urlencode=self._urlencode)
  File "/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/hue/desktop/core/src/desktop/lib/rest/http_client.py", line 170, in execute
    raise self._exc_class(ex)
RestException: bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)
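Since the handshake fails at certificate verification, a useful first check is whether the Oozie server's chain validates against the CA bundle Hue trusts; a minimal sketch (oozie-host, port 11443, and the CA path are placeholders to adapt):

# Validate the chain Oozie presents against the CA bundle configured for Hue
# (11443 is Oozie's default HTTPS port; host and CA path are placeholders):
openssl s_client -connect oozie-host:11443 -CAfile /opt/cloudera/security/x509/ca_chain.pem </dev/null
# A "Verify return code: 0 (ok)" at the end means the chain itself validates.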
Labels: Cloudera Manager, Security
04-18-2018
11:42 PM
Hi, while I'm attempting to enable SSL, I'm receiving the below error; could you please let me know where I have gone wrong? I tried verifying the CA file with openssl verify -CAfile root.pem inter.pem, and the output was OK.
I0418 23:34:40.401127 25865 status.cc:111] Couldn't open transport for impala:24000 (SSL_get_verify_result(), unable to get local issuer certificate)
    @ 0x82bee9 (unknown)
    @ 0xd4c1df (unknown)
    @ 0xd4c3d2 (unknown)
    @ 0xa11cb5 (unknown)
    @ 0xa123d0 (unknown)
    @ 0xb11d08 (unknown)
    @ 0xb13c30 (unknown)
    @ 0x9d4b8c (unknown)
    @ 0xaf59da (unknown)
    @ 0x7c4df3 (unknown)
    @ 0x7f890a895af5 __libc_start_main
    @ 0x7f79ad (unknown)
I0418 23:34:40.401199 25865 thrift-client.cc:55] Unable to connect to impala:24000
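Since "unable to get local issuer certificate" usually means the client is not seeing the full chain, a hedged check is to connect with a combined CA bundle and inspect what the server sends (impala:24000 comes from the log above; the bundle filename is a placeholder):

# Build one bundle from the intermediate and root CAs named in the post:
cat inter.pem root.pem > ca_bundle.pem
# Connect to the Impala backend port from the log and validate the presented chain:
openssl s_client -connect impala:24000 -CAfile ca_bundle.pem -showcerts </dev/null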
Labels: Cloudera Manager, Security