Member since: 10-27-2015
Posts: 39
Kudos Received: 15
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2187 | 04-16-2018 07:46 AM
04-26-2024 07:12 AM
2 Kudos
If you want to force CDP onto Rocky 8:

echo "ID_LIKE=\"Red Hat Enterprise Linux release 8.7 (Ootpa)\"" >> /etc/rocky-release

This fixes the install-agents hang.

echo "ID=\"rhel\"" >> /usr/lib/os-release

This fixes the install-parcels hang. Note that removing this ID variable from os-release after deployment will cause Hadoop to fail to restart.
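The two appends above can be collected into a single helper. This is a sketch only: the function name and the ROCKY_RELEASE/OS_RELEASE override variables are illustrative additions (the defaults are the real paths from the post), and on a live host it must run as root.

```shell
# Sketch of the Rocky 8 workaround, wrapped in a function so the target
# file paths can be overridden for a safe dry run. Defaults are the real paths.
apply_rocky8_workaround() {
    rocky_release="${ROCKY_RELEASE:-/etc/rocky-release}"
    os_release="${OS_RELEASE:-/usr/lib/os-release}"

    # Fixes the install-agents hang: advertise an RHEL-like release string
    echo 'ID_LIKE="Red Hat Enterprise Linux release 8.7 (Ootpa)"' >> "$rocky_release"

    # Fixes the install-parcels hang: report the distro ID as rhel
    echo 'ID="rhel"' >> "$os_release"
}
```

Pointing the override variables at scratch files lets you inspect exactly what would be appended before touching the real system files.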
12-06-2019 12:09 PM
The following always worked for me:

kinit -kt hdfs.keytab hdfs
hadoop fs -mkdir /benchmarks
hadoop fs -chmod 0777 /benchmarks

You can always lock down the directory permissions to allow only a certain group to write to this directory.
06-19-2018 09:30 AM
Has this situation improved over the past year? Is there any public information on how to secure the back-end database connections?
04-16-2018 08:25 PM
Assuming that you are referencing Cloudera Navigator Encrypt: as part of the process of encrypting a disk, you can move existing data onto that newly encrypted disk. See the navencrypt-move command.

If you are referring to HDFS Transparent Encryption, then you must create a new encryption zone in HDFS (effectively a new directory) and then copy your HDFS data into it. A lot of people ask, "How can I encrypt an existing directory?" You would have to perform a few extra steps and have plenty of available disk space:

1. Rename the existing directory in HDFS: hdfs dfs -mv /data /data.bak
2. Set up the encryption zone for /data: hadoop key create <keyname>; hdfs dfs -mkdir /data; hdfs crypto -createZone -keyName <keyname> -path /data
3. Copy the data from /data.bak to /data: hdfs dfs -cp /data.bak/* /data/
4. Remove /data.bak: hdfs dfs -rm -R /data.bak
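The steps above can be sketched as one function. The function name and its parameters are illustrative additions; it requires a live HDFS cluster with a KMS configured, so it is defined here but not invoked.

```shell
# Sketch of "encrypt an existing HDFS directory" as one function.
# Requires a running HDFS cluster with a KMS; shown for illustration only.
encrypt_existing_dir() {
    keyname="$1"   # encryption key name, e.g. mykey
    dir="$2"       # directory to convert, e.g. /data

    hdfs dfs -mv "$dir" "$dir.bak"                            # 1. rename aside
    hadoop key create "$keyname"                              # 2a. create the key
    hdfs dfs -mkdir "$dir"                                    # 2b. recreate the dir
    hdfs crypto -createZone -keyName "$keyname" -path "$dir"  # 2c. make it a zone
    hdfs dfs -cp "$dir.bak/*" "$dir/"                         # 3. copy data in
    hdfs dfs -rm -R "$dir.bak"                                # 4. remove the backup
}
```

Note the glob is quoted so HDFS, not the local shell, expands it, matching the escaped `\*` in the original commands.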
04-16-2018 07:46 AM
In Hadoop and Kafka, one normally would not use RAID or LVM for data disks. Instead, each disk has a single partition that consumes the entire disk, and a filesystem is written to that partition. In the case of NavEnc, after partitioning, each disk is first encrypted and then has the filesystem written on top of the encrypted volume. Tying multiple disks together into one large filesystem is the opposite of what Kafka or Hadoop expect you to do, and you lose the advantages of parallelism.
10-10-2017 05:15 AM
@sridharm Hue is not written in Java, thus the Oracle connector jar will not work. You want the Oracle Instant Client for Hue Parcel.
07-24-2017 08:23 AM
Can you give me the exact AMI IDs that you are trying out?