Member since
09-29-2015
286
Posts
601
Kudos Received
60
Solutions
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 11457 | 03-21-2017 07:34 PM
 | 2882 | 11-16-2016 04:18 AM
 | 1608 | 10-18-2016 03:57 PM
 | 4265 | 09-12-2016 03:36 PM
 | 6213 | 08-25-2016 09:01 PM
11-16-2015
08:32 PM
1 Kudo
I had a couple of questions about file compression. We plan on using the ORC format for a data zone that will be heavily accessed by end users via Hive/JDBC. What is the recommendation when it comes to compressing ORC files? Do you think Snappy is a better option (over ZLIB) given Snappy's better read performance? (Snappy is more performant in a read-often scenario, which is usually the case for Hive data.) When would you choose ZLIB?
As a side note: compression is a double-edged sword. Going from larger files spread across multiple nodes to many smaller files can create performance issues of its own, because of how the smaller files interact with the HDFS block size. You can blunt this by choosing the compression strategy carefully.
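For reference, the codec is chosen per table through the orc.compress table property (ZLIB is the default). A minimal sketch, using a hypothetical table name, that could be used to compare the two:

# hypothetical table; swap SNAPPY for ZLIB to compare on your own data
hive -e "CREATE TABLE sales_orc (id INT, amount DOUBLE)
         STORED AS ORC
         TBLPROPERTIES ('orc.compress'='SNAPPY');"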
Labels:
- Apache Hive
11-13-2015
03:30 PM
So I understand there are two Hadoop environment variables for impersonation:
HADOOP_USER_NAME for non-Kerberos-secured clusters, and HADOOP_PROXY_USER for clusters secured with Kerberos. Will the same issue arise with HADOOP_PROXY_USER?
@Ali Bajwa @Neeraj
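For reference, a minimal sketch of how I understand the two variables are used from a shell (the user names and realm are placeholders, and HADOOP_PROXY_USER additionally requires the hadoop.proxyuser.* settings on the cluster):

# non-Kerberos cluster: act as another user directly
export HADOOP_USER_NAME=hdfs
hdfs dfs -ls /user

# Kerberized cluster: authenticate as yourself, then proxy another user
kinit myuser@EXAMPLE.COM
export HADOOP_PROXY_USER=hdfs
hdfs dfs -ls /user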
11-12-2015
08:16 PM
1 Kudo
@Neeraj I thought I got it to work. It does not. Did the Kylin Dev Team provide an answer?
11-06-2015
07:06 PM
1 Kudo
No, you don't have to use Kerberos. You can technically go with AD/LDAP authentication, including LDAP over SSL. However, why wouldn't you use Kerberos? Ranger will work without Kerberos, but your Storm plugin will not; you need Kerberos for the Storm plugin.
11-06-2015
06:10 PM
Yes, you need a Kerberos-secured cluster for the Storm plugin to be fully functional; Ranger cannot set policies for Storm unless the cluster is secured with Kerberos, so Kerberos needs to be configured. Coincidentally, @Ali Bajwa has an example for Storm here: Setup Storm Plugin for Ranger HDP 2.3.
10-28-2015
06:24 PM
10 Kudos
Requirement: Currently we have /hadoop/hdfs/data and /hadoop/hdfs/data1 as DataNode directories. I have a new mount point (/hadoop/hdfs/datanew) on a faster disk and I want to keep only this mount point as the DataNode directory.
Steps:
1. Stop the cluster.
2. Go to the Ambari HDFS configuration and edit the DataNode directory configuration: remove /hadoop/hdfs/data and /hadoop/hdfs/data1, add /hadoop/hdfs/datanew, and save.
3. Log in to each DataNode VM and copy the contents of /data and /data1 into /datanew (see the sketch below).
4. Change the ownership of /datanew and everything under it to "hdfs".
5. Start the cluster.
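A minimal sketch of the copy and ownership steps on one DataNode, assuming the mount points named above (rsync is shown for illustration; cp -a works as well):

# run on each DataNode while HDFS is stopped
rsync -a /hadoop/hdfs/data/  /hadoop/hdfs/datanew/
rsync -a /hadoop/hdfs/data1/ /hadoop/hdfs/datanew/
chown -R hdfs:hdfs /hadoop/hdfs/datanew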
10-27-2015
05:41 PM
@carter@hortonworks.com
Yes, the only way it worked was when I used the -D settings. However, I have since been told that in order for Hadoop to use the cert, we should import it into $JAVA_HOME/jre/lib/security/cacerts instead of /etc/pki/java/cacerts, which we thought was the default. So apparently, if you are using any trustStore besides $JAVA_HOME/jre/lib/security/cacerts, you need the -D settings. I haven't had a chance to test this, as the folks I am working with got it to work with the -D settings using /etc/java/cacerts and do not want to make any further changes.
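For context, the -D settings in question are the standard JVM trustStore properties. A minimal sketch, assuming the /etc/pki/java/cacerts path from this thread and the default changeit password:

# appended to the options of the affected Hadoop client/service JVM
export HADOOP_OPTS="$HADOOP_OPTS \
  -Djavax.net.ssl.trustStore=/etc/pki/java/cacerts \
  -Djavax.net.ssl.trustStorePassword=changeit"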
10-27-2015
04:18 AM
The purpose of Configuration Groups is to allow an admin to override certain properties and parameters, which can then be applied to specific nodes.
You can tell which parameters can be overridden by the green plus sign next to them. Then you can decide which nodes the overridden parameters apply to.
10-23-2015
12:58 AM
1 Kudo
You have generated a certificate file:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout test.key -out test.pem

After deploying the key, you try to ssh into the instance but get prompted for a password:

ssh -vvv -i test.pem <user>@<host>

This is an issue with an updated OpenSSL version:

> openssl version
OpenSSL 1.0.1k 8 Jan 2015

This newer version does not create the key with the RSA markers at the beginning and end of the file, so you have to use a separate command to convert the key file to the older format that ssh expects:

openssl rsa -in test.key -out test_new.key

Once that is done, use the new file for ssh:

ssh -vv -i test_new.key <user>@<host>
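A quick way to see the difference the conversion makes (file names follow the example above):

# newer openssl writes a PKCS#8 key: "-----BEGIN PRIVATE KEY-----"
head -1 test.key
# after 'openssl rsa', the traditional header: "-----BEGIN RSA PRIVATE KEY-----"
head -1 test_new.key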