Member since 11-13-2019
12 Posts
2 Kudos Received
0 Solutions
03-25-2022
12:59 AM
Hi @mszurap , Thank you for the solution. I guess this would help to audit HUE access, though I was looking for CM Audits. For HUE I assume I need to add the parameter you mentioned under the following HUE configuration item: Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini.

Regarding preserving the source client IP when logging into the CM console: I did log a ticket with Cloudera Support, and apparently JIRA OPSAPS-41615 was raised for this enhancement so that the CM server can read the XFF header. I don't have access to the JIRA; I am not sure if you do.

Thanks again for the solution for HUE. I will try to test it in my environment and I will update you. Kind regards, Rama.
03-22-2022
10:34 AM
1 Kudo
@pandu2022 I have a feeling that your Kerberos principal doesn't exist on the KDC server. On the statestore server, can you try running "kinit impala_test/<host_fqdn>@<REALM>"? If you get prompted for a password, that indicates your principal exists on the KDC server. If you get an error (not found in Kerberos database) when you kinit, that indicates your principal doesn't exist on the KDC server. If the kinit works from the catalog server, then most likely the statestore is using a different KDC server. In that case you should check your /etc/krb5.conf on both hosts to make sure they match. rgds, Ram.
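A minimal sketch of that check (the principal name, <host_fqdn>, and <REALM> are placeholders you need to replace):

# On the statestore host:
kinit impala_test/<host_fqdn>@<REALM>
# Password prompt                              -> the principal exists in the KDC database
# "... not found in Kerberos database"         -> the principal is missing on that KDC
# If it works on the catalog host but not here, compare the KDC settings of the two hosts:
grep -A5 '\[realms\]' /etc/krb5.conf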
03-22-2022
10:03 AM
1 Kudo
Hi @AzfarB I think it is quite common for a modern Linux kernel to move some inactive pages into swap to bring down the memory footprint. As long as your application or Solr is not having any performance issues, I would not be too concerned about the swapping, and I would just increase the threshold. rgds, Ram.
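If you want to confirm which processes actually have pages sitting in swap, a quick sketch using standard /proc fields (nothing CM-specific):

# Show how much of each process' memory is currently in swap (VmSwap, in kB), largest first
grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -t: -k3 -rn | head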
03-22-2022
09:48 AM
Hello All, We are connecting to the CM console via an F5 Load Balancer (reverse proxy). We are trying to enable X-Forwarded-For (XFF) in the HTTP header so that the CM Audits log shows the actual source client IP address. How can I make the CM server read the XFF HTTP header for the source client IP address instead of taking the source IP address from layer 3? Thank you in advance.
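For context, a quick way to see what a proxy-injected XFF header looks like at the backend (the hostname, port, and IP below are placeholders, not my real setup):

# Send a request with an XFF header by hand and inspect the verbose output
curl -vk -H "X-Forwarded-For: 203.0.113.10" https://cm-host.example.com:7183/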
Labels: Cloudera Manager
06-24-2020
07:50 PM
If you execute "rmr /solr", you may encounter the following authentication error:

rmr /solr
Authentication is not valid : /solr/security

To work around this you can use the skipACL option, which is described on the following site: https://www.programmersought.com/article/60861634876/

Note: these are the workaround steps in Cloudera Manager.
1. Zookeeper -> Configuration -> Java Configuration Options for Zookeeper Server: add -Dzookeeper.skipACL=yes
2. Restart the Zookeeper service.
Remember to revert the change and restart Zookeeper after you manage to remove /solr. rgds, Rama.
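A minimal end-to-end sketch of the same workaround (assuming zookeeper-client is on the path and ZooKeeper listens on localhost:2181):

# 1. In CM: Zookeeper -> Configuration -> Java Configuration Options for Zookeeper Server, add -Dzookeeper.skipACL=yes, then restart ZooKeeper.
# 2. Remove the znode now that ACL checks are skipped:
zookeeper-client -server localhost:2181
rmr /solr
# 3. Remove -Dzookeeper.skipACL=yes again in CM and restart ZooKeeper.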
06-23-2020
02:41 AM
Hi @EricL , My cluster is running on VMs and it is a sandbox, so we are not expecting very high I/O throughput. I created a softlink for parcel-cache pointing into /opt/cloudera/parcels/parcel-cache, as /opt/cloudera/parcels is on a different mountpoint from /opt/cloudera/parcel-cache. With that in place the mv statement completes instantaneously and the parcel distribution phase completed. This is just my workaround (sketched below). I hope Cloudera can fix this issue so that the thread waits until the unpack and copy complete successfully before reading the parcel, or has a longer timeout, or makes the timeout a configurable parameter. Thank you very much for your help. rgds, Rama.
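A minimal sketch of that symlink workaround (paths are the defaults; stopping the agent first so nothing is mid-download is my own assumption, not something from the original post):

# Stop the CM agent, relocate the cache onto the same filesystem as /opt/cloudera/parcels,
# and leave a symlink behind at the old location.
sudo systemctl stop cloudera-scm-agent
sudo mv /opt/cloudera/parcel-cache /opt/cloudera/parcels/parcel-cache
sudo ln -s /opt/cloudera/parcels/parcel-cache /opt/cloudera/parcel-cache
sudo systemctl start cloudera-scm-agent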
06-22-2020
12:08 AM
Hi Eric @EricL , I manually untarred and moved the directory from /parcel-cache/ to /parcels and it took 2 minutes 56 seconds, a transfer rate of around 16.8 MB/s. I don't know why this counts as slow for Cloudera Manager. Does CM have any setting that says the copy is expected to complete within a given time frame? As this is a known issue in CM, is there any workaround?

[root@rbalvhadoo01 ~]# time mv /opt/cloudera/parcel-cache/CDH-6.1.1-1.cdh6.1.1.p0.875250 /opt/cloudera/parcels/
real 2m56.169s
user 0m0.565s
sys 0m7.708s
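One quick check worth doing (my assumption about why the mv is slow): if the two directories are on different filesystems, mv has to copy the data rather than just rename it.

# Different filesystems/mountpoints here means mv is doing a full copy instead of a rename
df -h /opt/cloudera/parcel-cache /opt/cloudera/parcels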
06-19-2020
06:24 AM
I am running CM version 6.1.1 and I encountered the same issue. This looks like a bug in Cloudera parcel distribution: you get the error below before the unpack completes. As you can see from the log, the error shows up about 2 minutes before the unpack completes successfully. I checked using the "watch" command; the file does not exist yet when the error occurs in the log. It should be simple logic to make sure the unpack completes before reading the parcel.

IOError: [Errno 2] No such file or directory: u'/opt/cloudera/parcels/CDH-6.1.1-1.cdh6.1.1.p0.875250/meta/parcel.json'

[19/Jun/2020 15:02:34 +0000] 13865 Thread-13 downloader INFO Failed adding torrent: file:///opt/cloudera/parcel-cache/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel.torrent Already present torrent: CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel
[19/Jun/2020 15:02:34 +0000] 13865 Thread-13 downloader INFO Current state: CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel [totalDownloaded=2043680175 totalSize=2043680175 upload=0 state=seeding seed=[] location=/opt/cloudera/parcels/.flood/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel progress=1000000]
[19/Jun/2020 15:02:34 +0000] 13865 Thread-13 downloader INFO Completed download of https://xxxxx.xxxx.com:7183/cmf/parcel/download/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel code=200 state=downloaded
[19/Jun/2020 15:02:34 +0000] 13865 Thread-13 parcel_cache WARNING No checksum in header, skipping verification
[19/Jun/2020 15:02:34 +0000] 13865 Thread-13 parcel_cache INFO Unpacking /opt/cloudera/parcels/.flood/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel into /opt/cloudera/parcels
[19/Jun/2020 15:02:49 +0000] 13865 MainThread throttling_logger INFO There is already an active download for https://xxxxx.xxx.xxxxx.com:7183/cmf/parcel/download/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel
[19/Jun/2020 15:05:04 +0000] 13865 MainThread parcel INFO Loading parcel manifest for: CDH-5.8.4-1.cdh5.8.4.p0.5
[19/Jun/2020 15:05:04 +0000] 13865 MainThread parcel INFO Loading parcel manifest for: CDH-5.12.1-1.cdh5.12.1.p0.3
[19/Jun/2020 15:05:04 +0000] 13865 MainThread parcel INFO Loading parcel manifest for: CDH-6.1.1-1.cdh6.1.1.p0.875250
[19/Jun/2020 15:05:04 +0000] 13865 MainThread parcel ERROR Exception while reading parcel: CDH-6.1.1-1.cdh6.1.1.p0.875250
Traceback (most recent call last):
File "/opt/cloudera/cm-agent/lib/python2.6/site-packages/cmf/parcel.py", line 114, in refresh
fd = open(manifest)
IOError: [Errno 2] No such file or directory: u'/opt/cloudera/parcels/CDH-6.1.1-1.cdh6.1.1.p0.875250/meta/parcel.json'
[19/Jun/2020 15:07:17 +0000] 13865 Thread-13 parcel_cache INFO Unpack of parcel /opt/cloudera/parcels/.flood/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel/CDH-6.1.1-1.cdh6.1.1.p0.875250-el6.parcel successful
01-21-2020
11:50 PM
Try this command: "zookeeper-client"

[hdfs@servername ~]$ zookeeper-client
Connecting to localhost:2181
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]
01-09-2020
08:22 PM
You could also resize the mountpoint if you think it is oversized: sudo mount -o size=10G -o remount cm_processes. After I resized it from 71GB to 10GB, I don't see any difference in "free -h", so I feel tmpfs doesn't really reserve the physical memory up front. Taken from the following reference: http://man7.org/linux/man-pages/man5/tmpfs.5.html rgds, Rama.
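A short sketch of the resize with a before/after check (the device name cm_processes is taken from the command above; the exact mountpoint path can vary per install):

# Check the current size and usage of the cm_processes tmpfs
df -h | grep cm_processes

# Resize it in place; existing files are kept as long as they still fit
free -h
sudo mount -o size=10G -o remount cm_processes
free -h   # tmpfs only consumes RAM for pages actually written, so this should barely change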