Member since: 05-29-2017
Posts: 43
Kudos Received: 7
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4576 | 10-02-2017 08:40 AM |
| | 2642 | 10-02-2017 08:13 AM |
| | 4877 | 09-20-2017 09:24 AM |
| | 11575 | 09-18-2017 06:35 AM |
| | 22303 | 09-06-2017 04:52 AM |
10-02-2017
08:40 AM
Hello Awisawe, In order for Hue to work over HTTPS, it needs a TLS certificate and a corresponding private key. These are configured via the ssl_certificate, ssl_private_key, and ssl_password properties. We have documentation about setting up Hue as a TLS/SSL server for Cloudera Enterprise, which you may be able to adapt to your distribution. Best regards, Zsolt
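A minimal sketch of the relevant hue.ini section, assuming the certificate and key live under /etc/hue/conf (the file paths and password here are placeholders, not values from the original thread):

```ini
[desktop]
  # TLS certificate and private key served by Hue over HTTPS
  ssl_certificate=/etc/hue/conf/hue.crt
  ssl_private_key=/etc/hue/conf/hue.key
  # Only needed if the private key itself is password-protected
  ssl_password=changeme
```

After changing these properties, Hue must be restarted for the TLS settings to take effect.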
10-02-2017
08:13 AM
1 Kudo
Hello kangkang0705, You can achieve that by setting up rules for the "Audit Event Filter", as described in Configuring Service Auditing Properties. Please note that audit events discarded this way never reach the Cloudera Navigator database, so you will have no way to review them later. Let me know if that helps. Best regards, Zsolt
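As a rough sketch of what such a filter can look like (the exact schema is defined in Cloudera's Configuring Service Auditing Properties documentation; the field name and match value below are illustrative assumptions, not taken from the original question):

```json
{
  "comment": ["Discard noisy HDFS metadata operations before they reach Navigator"],
  "defaultAction": "accept",
  "rules": [
    {
      "action": "discard",
      "fields": [
        {"name": "operation", "match": "getfileinfo"}
      ]
    }
  ]
}
```

Anything matched by a "discard" rule is dropped at the source, which is exactly why it can never be reviewed later.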
09-21-2017
03:14 AM
Hi stumilius, We have a blog post about storing Avro data inside HBase tables. In short, HBase knows nothing about the Avro content and treats it as a binary blob. The client reading or writing the data has to handle the Avro schemas itself, after HBase delivers the raw bytes. Zsolt
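To illustrate the division of labor, here is a small stdlib-only sketch: a dict stands in for an HBase table, and JSON stands in for a real Avro binary encoding (in practice you would use an Avro library such as fastavro and an HBase client such as happybase; everything named here is a placeholder):

```python
import json

def encode_record(record: dict) -> bytes:
    """Serialize a record to bytes before writing it to HBase.
    A real client would use an Avro writer with a schema here."""
    return json.dumps(record, sort_keys=True).encode("utf-8")

def decode_record(blob: bytes) -> dict:
    """Deserialize the raw bytes HBase hands back.
    A real client would use an Avro reader with the same schema."""
    return json.loads(blob.decode("utf-8"))

# Simulated HBase table: row key -> column -> opaque bytes.
table = {}
table[b"row1"] = {b"cf:payload": encode_record({"id": 1, "name": "alice"})}

# HBase returns the bytes untouched; decoding is entirely the client's job.
raw = table[b"row1"][b"cf:payload"]
assert decode_record(raw) == {"id": 1, "name": "alice"}
```

The point is that HBase stores and returns the cell value byte-for-byte; the schema lives only on the client side.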
09-20-2017
09:24 AM
1 Kudo
Hi, I think you're referring to Cloudera Navigator Encrypt, which was formerly known as Gazzang zNcrypt. Our install guide is available in the documentation as well. Zsolt
09-19-2017
08:46 AM
Hi Shushrut, Which type of encryption are you interested in? We have an overview of Cloudera Navigator Data Encryption and of Encryption in CDH in our documentation. Do you have any specific questions about these? Zsolt
09-18-2017
06:35 AM
1 Kudo
Hello Creaping, Since HDFS is not a standard Unix filesystem, you cannot read it with native Python I/O libraries. As HDFS is open source, there are plenty of connectors out there, and you can also access HDFS through the HttpFS REST interface. If you need to process a large amount of data, though, none of that will be suitable, because the script itself still runs on a single computer. To solve that, you can rewrite your script with PySpark and use the Spark-provided utilities to manipulate the data. That solves both HDFS access and the distribution of the workload for you. Zsolt
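For the REST route, here is a small sketch of how a WebHDFS/HttpFS read URL is constructed (the host name, port, path, and user below are placeholder assumptions; the actual fetch requires network access to your cluster, so it is left commented out):

```python
import urllib.parse

def webhdfs_url(host: str, port: int, path: str, op: str, **params) -> str:
    """Build a WebHDFS v1 URL for the given file operation."""
    query = urllib.parse.urlencode({"op": op, **params})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# Example: read a file as the user "alice" (placeholder values).
url = webhdfs_url("namenode.example.com", 9870, "/user/alice/data.txt",
                  "OPEN", **{"user.name": "alice"})
print(url)

# To actually fetch the file contents from a reachable cluster:
# import urllib.request
# data = urllib.request.urlopen(url).read()
```

HttpFS exposes the same REST API as WebHDFS, so the same URL shape works against an HttpFS gateway (typically on a different port).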
09-06-2017
04:52 AM
Hi, Can you check whether the port is reachable from the Tableau computer? You might have a firewall or another network restriction in between. For the Beeline migration, you can check our blog post: https://blog.cloudera.com/blog/2014/02/migrating-from-hive-cli-to-beeline-a-primer/ You may use Beeline on the computer where Tableau is installed to verify that Hive is reachable.
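A quick way to run both checks from the Tableau machine, assuming HiveServer2 listens on its default port 10000 (the host name is a placeholder):

```shell
# 1. Check that the HiveServer2 port is reachable at the TCP level
nc -vz hiveserver2.example.com 10000

# 2. Confirm Hive itself answers, using Beeline with a JDBC URL
beeline -u "jdbc:hive2://hiveserver2.example.com:10000"
```

If step 1 fails, the problem is network or firewall; if step 1 succeeds but step 2 fails, look at HiveServer2 or its authentication configuration instead.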
09-05-2017
08:26 AM
Hello HN, Do you have the Hive ODBC driver installed for Tableau? Please also check that the configured IP address points to a HiveServer2 instance, and that the required ports on that IP are reachable from the Tableau node. Zsolt
08-28-2017
07:37 AM
Hello, Receiving a "Results Expired" error in Hue for Impala queries is usually a symptom of a misconfigured load balancer in front of Impala. Impala assumes that query results will be downloaded from the same impalad where the query was issued. A load balancer can break that assumption if it sends sequential connections from the same client to different impalad daemons. If Hue connects to two different impalads, one to run the query and another to download the results, the "Results Expired" error appears. To avoid this, the proxy must route subsequent connections from the same client to the same impalad; this proxy behavior is usually called sticky sessions. Our documentation has detailed information about Impala load-balancer setup, including the above: https://www.cloudera.com/documentation/enterprise/latest/topics/impala_proxy.html#proxy_overview
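For illustration, here is a minimal HAProxy fragment for the sticky-session behavior described above, assuming HAProxy is the load balancer in use and impalad serves Hue on its default HiveServer2-protocol port 21050 (host names are placeholders):

```
listen impala-hue
    bind 0.0.0.0:21050
    mode tcp
    # "balance source" pins each client IP to one backend impalad,
    # so the query and its result download hit the same daemon.
    balance source
    server impalad1 impalad1.example.com:21050 check
    server impalad2 impalad2.example.com:21050 check
```

`balance source` is the simplest form of stickiness; the linked Impala proxy documentation covers the recommended settings per port in detail.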
06-07-2017
07:34 AM
2 Kudos
I would double-check the network and Cloudera Manager configuration before considering a database issue. It seems that Cloudera Manager is ignoring your proxy configuration. I tried to reproduce your error and found that specifying http:// before the proxy server's domain name can cause this. Could you confirm that you have it set to abcd.net and not http://abcd.net? If that doesn't help, please specify your CM, OS, and database versions, so I can try that configuration.