Member since: 03-23-2015
Posts: 1288
Kudos Received: 114
Solutions: 98
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3294 | 06-11-2020 02:45 PM |
| | 5011 | 05-01-2020 12:23 AM |
| | 2815 | 04-21-2020 03:38 PM |
| | 2617 | 04-14-2020 12:26 AM |
| | 2313 | 02-27-2020 05:51 PM |
12-11-2019
08:43 PM
@kvinod, There won't be issues after the license expires; it's just that the Enterprise features won't be available, and the cluster should still function as before. Please see the doc below: https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_ag_licenses.html#cmug_topic_13_7__section_ed1_nz1_wr

When a Cloudera Enterprise license expires, the following occurs:
- Cloudera Enterprise Trial: Enterprise features are disabled.
- Cloudera Enterprise: Most enterprise features such as Navigator, reports, Auto-TLS, and fine-grained permissions are disabled. Key Trustee KMS and the Key Trustee Server will continue to function on the cluster, but you cannot change any configurations or add any services. Navigator will no longer collect audits, but existing audits will remain on disk in the Cloudera Manager audit table. On license expiration, the license expiration banner displays which enterprise features have been disabled.

In terms of downgrading to Express edition, can you try restarting CM and the management services and see if that does it? Cheers Eric
12-09-2019
03:29 PM
Sorry, I forgot to answer your questions:
Doesn't a managed cluster start the proxy server? >> No.
Do I have to configure the proxy? >> Yes.
Does "managed cluster" mean Enterprise Edition? >> It means managed by CM; it does not have to be Enterprise.
If so, can I use an L4 load balancer instead of HAProxy? >> The proxy server is unmanaged by CM, so you can use any 3rd-party software that can act as a proxy server. Just make sure it is reachable at the Load Balancer address you set in CM for Hive. Cheers Eric
12-09-2019
03:07 PM
@avengers CM does not manage the HAProxy server; you need to manage it yourself, as the proxy server can be any 3rd-party product that provides the service, like HAProxy or F5. At the bottom of the page you referenced, there is an example of how to set up HAProxy: https://docs.cloudera.com/documentation/enterprise/6/6.1/topics/admin_ha_hiveserver2.html#example_ha_proxy_config It is your responsibility to make sure that 3rd-party HA service is configured properly and up and running. Cheers Eric
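For reference, a minimal HAProxy front end for two HiveServer2 instances might look like the sketch below (host names, ports, and the listen port are placeholders, not values from this thread):

```
# Hypothetical HAProxy config fragment: TCP load balancing
# across two HiveServer2 instances on their default port 10000.
listen hiveserver2 :10001
    mode tcp
    option tcplog
    balance source
    server hs2_1 hs2-host-1.example.com:10000 check
    server hs2_2 hs2-host-2.example.com:10000 check
```

The `listen` address here (port 10001) is what you would then enter as the HiveServer2 Load Balancer value in CM.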
12-09-2019
03:04 PM
@arunkumarc Is there any particular reason you need to keep the schema between Hive and the ORC files out of sync? Can you ALTER the Hive table schema to match the ORC data? Spark might be more restrictive with this check. Cheers Eric
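As an illustration, if the ORC files actually hold a bigint column while the metastore declares it as int, the alignment could be done with an ALTER like the sketch below (table and column names are hypothetical):

```sql
-- Hypothetical names; adjust to your schema.
-- Redeclare the column so the metastore type matches the ORC data.
ALTER TABLE my_orc_table CHANGE COLUMN amount amount BIGINT;
```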
11-28-2019
05:19 PM
1 Kudo
Hi @parthk, No problem. Impala + Ranger integration is under construction for the CDP release. From what I can see, phase one is done and there are a few more phases to go through. So it is still at an early stage and I do not have an ETA. You probably just have to wait and ask the question again a few months down the track. Cheers Eric
11-27-2019
04:05 AM
@Amn_468, Can you please explain a bit more about "run through impala shell via jdbc connection"? Which JDBC driver did you use, and how did you set it up? The article is generic; the setting is applied at the Impala server level, so sessions will time out from the Impala daemon, and when that happens all clients' connections will be closed, together with the queries in those sessions. Cheers Eric
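For context, the server-level timeouts referred to above are impalad startup flags (in CM they map to the corresponding Impala daemon configuration fields). A sketch with example values, not values from this thread:

```
# Hypothetical impalad startup flags (values are examples):
# close sessions idle for more than 30 minutes, and
# cancel queries idle for more than 10 minutes.
--idle_session_timeout=1800
--idle_query_timeout=600
```

Because these are daemon-side settings, they apply to every client session regardless of which driver opened it.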
11-27-2019
04:00 AM
@sow, Sorry, I did not notice your question here. I am wondering if you have tried the "--map-column-hive" option of Sqoop, which is mentioned in the doc here: https://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html#_controlling_type_mapping So in your case, the command will look like below:

sqoop import \
  --driver 'com.microsoft.sqlserver.jdbc.SQLServerDriver' \
  --connect 'jdbc:sqlserver://IP:PORT;database=DB;' \
  --connection-manager 'org.apache.sqoop.manager.SQLServerManager' \
  --username <> --password <> \
  --as-parquetfile \
  --delete-target-dir \
  --target-dir '/user/test/' \
  --query "select GUID,Name,Settings FROM my_table_name where \$CONDITIONS" \
  --m 1 \
  --map-column-hive Settings=binary

Note that you no longer need to CAST on the MSSQL side. See if it helps. Cheers Eric
11-24-2019
02:45 PM
@vincent2, Apache Phoenix is only available for CDH from 5.16.x onwards, as mentioned here: https://blog.cloudera.com/apache-phoenix-for-cdh/ It will not work on CDH 5.7.0. Please upgrade your CDH first if you want to use it. Cheers Eric
11-20-2019
09:00 PM
2 Kudos
@wert_1311 So based on a 1GB fsimage file, you need at least 4GB+ of heap for the NameNode to function properly. Please try increasing it and observe whether there is improvement. Cheers Eric
11-20-2019
06:28 PM
1 Kudo
@ChineduLB Have you tried creating another DataFrame and casting the values to integer first, before the JOIN? Cheers Eric
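The suggestion above can be sketched in Spark Scala as follows (the DataFrame names and the "id" join key are hypothetical, not taken from this thread):

```scala
// Hypothetical DataFrames df1 and df2; "id" is the join key.
// df2's "id" arrives as a string, so cast it to int first
// so both sides of the join agree on type.
import org.apache.spark.sql.functions.col

val df2Casted = df2.withColumn("id", col("id").cast("int"))
val joined = df1.join(df2Casted, Seq("id"))
```

Casting up front avoids relying on Spark's implicit type coercion during the join, which can silently produce empty matches when the key types differ.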