Member since: 09-25-2018
Posts: 7
Kudos Received: 2
Solutions: 0
09-23-2024
08:40 AM
Hi @ShankerSharma, what if I have a mix of servers in my cluster, RHEL 7 + RHEL 8? Which time service should I run on these servers (I assume both ntpd and chronyd are available on RHEL 7, but only chronyd on RHEL 8), and with what configuration, to keep Kudu running? Does it mean running chronyd on all the servers (RHEL 7 + RHEL 8) of the cluster with the rtcsync option enabled in chrony.conf? Or can a cluster have a mix, i.e. ntpd on the RHEL 7 servers and chronyd on the RHEL 8 servers? We have a setup where we are going to add RHEL 8 nodes to a cluster that is already running on RHEL 7 servers. Regards, Akshay
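For reference, the chrony.conf setting I am asking about would look roughly like this (an illustrative sketch only; the server name is a placeholder, not our actual NTP source):

    # /etc/chrony.conf (illustrative sketch)
    server 0.pool.ntp.org iburst       # placeholder upstream NTP source
    driftfile /var/lib/chrony/drift
    makestep 1.0 3                     # step the clock on large offsets at startup
    rtcsync                            # keep the hardware clock in sync with the system clock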
04-25-2024
10:00 PM
2 Kudos
@akshay0103 Could you please check with your account team about obtaining new Docker credentials or getting the existing ones reactivated, or else open an administrative case?
04-05-2021
05:16 AM
@akshay0103 Please check the hue.ini content under the [useradmin] section: are any non-default permissions being used? Are you adding the user with the "Create home directory" option enabled?
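For illustration, the section in question sits in hue.ini roughly like this (the value shown is only a placeholder, not a recommendation):

    [useradmin]
    # Default group suggested for new users (placeholder value)
    default_user_group=default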
10-28-2020
04:48 AM
Hi, what I have seen is that the Share option only gives you Read or Read+Modify permission; there is nothing like Execute. If I give Read+Modify, other users should be able to run the Oozie workflow, but I have seen that this does not happen, as the permissions on the underlying HDFS folder for the workflow are only for my user and do not get modified:

    drwxrwx--- - kuaksha hue 0 2020-10-28 10:42 /user/hue/oozie/workspaces/hue-oozie-1520605312.96

Please elaborate and help. Regards, Akshay
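For what it's worth, this is roughly how I have been checking the workspace permissions (the path is my own workspace from above; the -setfacl line is only an illustration of a possible workaround, not something I have confirmed Hue does itself):

    hdfs dfs -ls /user/hue/oozie/workspaces/hue-oozie-1520605312.96
    hdfs dfs -getfacl /user/hue/oozie/workspaces/hue-oozie-1520605312.96
    # illustrative only: grant another (hypothetical) user read/execute on the workspace
    hdfs dfs -setfacl -R -m user:otheruser:r-x /user/hue/oozie/workspaces/hue-oozie-1520605312.96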
09-06-2019
07:06 AM
Yes, we had the same dilemma when creating a fall-back queue, as it doesn't respect our model either! We observed the "compress files in HDFS" Oozie job to be allocated 1 container with 2 GiB of memory and 1 VCore in YARN. We use a 5 VCore and 10 GiB resource queue, and the largest amount of data we've compressed is 100 GiB. The YARN resource allocation doesn't seem to change based on the amount of data being compressed, and therefore I think the YARN queue will not be limiting.

As discussed earlier in the thread, the architecture of the "compress files in HDFS" feature doesn't appear to be very scalable:

1. All the data being compressed is first localized (copied) to a YARN NodeManager's local cache (one directory is chosen from yarn.nodemanager.local-dirs). This requires enough local disk space on the partition where the directory resides.
2. The zip shell command is run locally on the same YARN node and uses 1x CPU core; the default zip compression is quite slow.
3. Enough space is required in local /tmp to hold a copy of the completed zip file before it is copied up to HDFS.

Without any documentation on the "compress files in HDFS" feature, this is just my opinion based on observations in our environment and reverse engineering.

Kind regards, Julian
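Based purely on those observations, the feature appears to behave roughly like the following sequence on the chosen NodeManager (a reverse-engineered sketch with placeholder paths, not the actual implementation):

    # 1. Localize the HDFS data into the NodeManager's local cache (placeholder paths)
    hdfs dfs -copyToLocal /data/to/compress /yarn/nm/local/<appcache-dir>/
    # 2. Compress locally with the default zip command (single CPU core)
    zip -r /tmp/archive.zip /yarn/nm/local/<appcache-dir>/compress
    # 3. Copy the finished archive back up to HDFS
    hdfs dfs -put /tmp/archive.zip /data/to/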