Member since: 08-13-2014
Posts: 47
Kudos Received: 4
Solutions: 6
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2124 | 03-02-2020 08:35 AM |
 | 1016 | 09-13-2018 09:10 AM |
 | 1724 | 07-24-2018 06:09 AM |
 | 1416 | 04-18-2018 08:17 AM |
 | 1299 | 01-07-2015 12:01 AM |
03-19-2020
01:28 PM
Sorry for the late response. After stepping away for a few days, I realized my mistake: in the .odbc.ini file that holds my Hive and Impala DSNs, the Hive host was pointing to dev and the Impala host to prod. Silly me. Thank you for the quick response, community.
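For reference, a minimal sketch of the kind of .odbc.ini entries involved; the DSN names, driver paths, and hostnames here are hypothetical, but this is the sort of thing that got crossed in my case:

```ini
[HiveDSN]
# Driver path is hypothetical; use your installed Cloudera ODBC driver
Driver=/opt/cloudera/hiveodbc/lib/64/libclouderahiveodbc64.so
# This is the line that was wrong in my case: it pointed at the dev host
HOST=hive.prod.example.com
PORT=10000

[ImpalaDSN]
# Driver path is hypothetical
Driver=/opt/cloudera/impalaodbc/lib/64/libclouderaimpalaodbc64.so
HOST=impala.prod.example.com
PORT=21050
```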
07-31-2019
01:54 PM
@Dominic_kim, It might be more work, but it would be better to have a cluster where trust can be established. Clients expect that the server they connect to (whether by FQDN, short name, or IP) will be included in the certificate's Subject Alternative Name extension or in the subject CN. Note that recent releases of CM and CDH do support wildcard certificates, so I'm not sure what the problem is in your case; we would need some more specific information. That said, you can turn off validation in some places, like Hue, but it is not so easily done in others; it depends on the client. For Hue, I think you can turn off all validation by setting ssl_validate=False in the [desktop] section (see the sketch below). If you don't have ssl_cert_ca_verify or other configuration in other sections, then those sections will fall back to the global [desktop] setting. Restart Hue after making the change.
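A minimal sketch of that Hue configuration, typically applied through the Hue safety valve in CM; the setting is from the post, the comments are mine:

```ini
[desktop]
# Disable TLS certificate validation globally in Hue
ssl_validate=False
# Per-section options such as ssl_cert_ca_verify, if unset,
# fall back to this [desktop] setting
```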
01-11-2019
07:04 AM
Thanks, it does work now if I store files into HDFS. However, if I run a MapReduce job, Hadoop does not seem to follow the native zlib path shown in the output of "hadoop checknative", i.e., it does NOT use my library but still uses the system's library. Of course, if I force Hadoop to use my library with "ln -s -f /home/smjohn/lib/my_libz.so.1.1 libz.so.1", then it does end up using it. But that is not what I want: I need Hadoop to use my zlib without affecting other applications. I know I can always recompile Hadoop from source, but that is not a solution, as my cluster has quite a lot of nodes and they have different environments. So, any suggestions on how to make Hadoop/Hive use my zlib library for all MapReduce tasks? Thanks in advance for any help.
11-16-2018
11:30 AM
You can edit the baseurl in the /etc/yum.repos.d/cloudera-manager.repo file to point to 5.15.1 (the latest for C5 as of today). For example:

FROM: baseurl=https://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5/
TO: baseurl=https://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15.1/

Then run "yum clean all" and the installation will work (see the sketch below).
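One way to script that edit; this is a sketch assuming the exact baseurl shown above, so adjust the path for your OS and target version:

```sh
# Pin the Cloudera Manager repo to 5.15.1
sudo sed -i 's#/cm/5/#/cm/5.15.1/#' /etc/yum.repos.d/cloudera-manager.repo
# Clear the yum cache so the new baseurl takes effect
sudo yum clean all
```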
10-16-2018
02:22 AM
Can you test an LDAP query from the Hue server using a client such as ldapsearch? It would be good to see how long it takes to get a response from the LDAP server.

Check the value you set for base_dn. If it is overly broad, it may take a long time to find users and groups in LDAP; narrow it down as much as possible to reduce the size of the LDAP search.

I'd also recommend reading the documentation, as it explains how to configure and test LDAP authentication in Hue: https://www.cloudera.com/documentation/enterprise/latest/topics/hue_sec_ldap_auth.html

Regards, Jim
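A sketch of the kind of ldapsearch test I mean; the server URL, bind DN, base DN, and filter are placeholders for your environment:

```sh
# Time a user lookup against the directory from the Hue host
time ldapsearch -H ldap://ldap.example.com \
  -D "cn=binduser,dc=example,dc=com" -W \
  -b "ou=users,dc=example,dc=com" \
  "(sAMAccountName=someuser)"
```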
10-10-2018
04:47 AM
Hi. I am deploying my cluster and I encounter the same problem. I checked /var/log/cloudera-scm-server/cloudera-scm-server.log, which shows the following:

2018-10-10 19:32:26,313 INFO CommandPusher:com.cloudera.cmf.model.DbCommand: Command 84(AmonTestDatabaseConnection) has completed. finalstate:FINISHED, success:true, msg:Successful
2018-10-10 19:32:27,062 INFO scm-web-333:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/dbTestConn/checkConnectionResult
2018-10-10 19:32:27,066 INFO scm-web-333:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/dbTestConn/checkConnectionResult, Status:200
2018-10-10 19:32:28,394 INFO CommandPusher:com.cloudera.cmf.service.AbstractOneOffHostCommand: Unsuccessful 'HueTestDatabaseConnection'
2018-10-10 19:32:28,394 INFO CommandPusher:com.cloudera.cmf.service.AbstractDbConnectionTestCommand: Command exited with code: 1
2018-10-10 19:32:28,394 INFO CommandPusher:com.cloudera.cmf.service.AbstractDbConnectionTestCommand:
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/core/checks/registry.py", line 81, in run_checks
    new_errors = check(app_configs=app_configs)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/core/checks/urls.py", line 16, in check_url_config
    return check_resolver(resolver)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/core/checks/urls.py", line 26, in check_resolver
    return check_method()
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/urls/resolvers.py", line 254, in check
    for pattern in self.url_patterns:
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/utils/functional.py", line 35, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/urls/resolvers.py", line 405, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/utils/functional.py", line 35, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/urls/resolvers.py", line 398, in urlconf_module
    return import_module(self.urlconf_name)
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/usr/lib/hue/desktop/core/src/desktop/urls.py", line 41, in <module>
    from desktop.auth import views as desktop_auth_views
  File "/usr/lib/hue/desktop/core/src/desktop/auth/views.py", line 38, in <module>
    from desktop.auth import forms as auth_forms
  File "/usr/lib/hue/desktop/core/src/desktop/auth/forms.py", line 30, in <module>
    from useradmin.hue_password_policy import hue_get_password_validators
ImportError: No module named useradmin.hue_password_policy
2018-10-10 19:32:28,394 ERROR CommandPusher:com.cloudera.cmf.model.DbCommand: Command 86(HueTestDatabaseConnection) has completed. finalstate:FINISHED, success:false, msg:Unexpected error. Unable to verify database connection.

So how can I fix this "ImportError: No module named useradmin.hue_password_policy"?
10-09-2018
06:34 AM
@kundansonuj Have you tried clearing all of your browser's cookies, history, and site data? After that, close the browser and open a new session. Also try a different browser.
09-27-2018
01:37 AM
Edit the file /etc/hosts; it maps the hostname to the IP address. In most cases the QuickStart VM gets this right, and if you run into an issue, restarting the VM usually fixes it. Regards, Jim
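For reference, a sketch of the kind of /etc/hosts entry the QuickStart VM expects; the IP address here is a placeholder for whatever your VM is actually using:

```
127.0.0.1       localhost
# Map the VM's hostname to its address (the IP below is hypothetical)
192.168.56.101  quickstart.cloudera  quickstart
```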
09-24-2018
02:45 AM
Issue fixed, thanks for your continuous support @bgooley. Appreciate it!
09-13-2018
09:10 AM
No, restarting the Cloudera Manager server and agents to pick up the new certificates should not affect any of the Hadoop cluster services. Regards, Jim
08-01-2018
02:06 AM
Hello Jim, That seems to have been the problem. Although the krb5.conf files were effectively identical, the two Cloudera Managers had been configured differently: one specified the KDC by hostname and the other by IP address. We now have BDR working between two different Cloudera Managers, but not between two clusters under the same Cloudera Manager.
06-21-2018
03:14 AM
* Do you have the e3base_kfapp realm in /etc/krb5.conf?
* You are using a short hostname for the ZooKeeper node, not an FQDN. This means it is likely to use the default Kerberos realm.
* Is there a domain mapping for the ZooKeeper address to the correct realm in /etc/krb5.conf? (A sketch of such a mapping follows.)
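A sketch of the kind of [domain_realm] mapping the last point refers to; the domain is a placeholder, and I've written the realm in the conventional uppercase form:

```
[domain_realm]
  .example.com = E3BASE_KFAPP
  example.com  = E3BASE_KFAPP
# A short hostname with no mapping falls back to default_realm in [libdefaults]
```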
06-21-2018
02:06 AM
If you want to be able to run any SQL as hive, you would need to create a role, grant privileges to that role, and grant the role to hive. Hive is a superuser only insofar as it can grant roles; by default it has no Sentry roles assigned to it. Regards, Jim
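A sketch of those Sentry statements, run for example from beeline as a user who can administer Sentry; the role name is arbitrary and "server1" is the common default Sentry server name, so adjust for your setup:

```sql
-- Create a role, give it full privileges, and assign it to the hive group
CREATE ROLE hive_admin_role;
GRANT ALL ON SERVER server1 TO ROLE hive_admin_role;
GRANT ROLE hive_admin_role TO GROUP hive;
```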
06-21-2018
01:25 AM
A CREATE TABLE statement will not start a YARN job, so allocating more memory to YARN is unlikely to fix this. What happens when you run the statement? Is there anything in the logs that gives a clue about what's happening? Regards, Jim
05-03-2018
05:45 AM
Hi, Cloudera Director uses the IP address of the Cloudera Manager server to communicate with it. This means the server's IP address needs to be in the TLS certificate for this to work. You can find more information in the Cloudera documentation: https://www.cloudera.com/documentation/director/latest/topics/director_tls_enable.html#concept_dcl_2dt_kbb If you can add the private IP address of the Cloudera Manager as a Subject Alternative Name (SAN) in the certificate, that should work around the issue. Regards, Jim
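A quick way to check whether the certificate already carries the IP as a SAN; the certificate file name here is a placeholder:

```sh
# Print the SAN extension of the CM server certificate
openssl x509 -in cm-server.pem -noout -text | grep -A1 "Subject Alternative Name"
```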
04-19-2018
09:13 AM
Run "yum clean all" to clear your yum cache, then try the installer again; this often fixes this kind of issue. Jim
04-19-2018
09:11 AM
Glad to hear it!
04-18-2018
08:25 AM
Hi, it sounds like your BI tool is a third-party product. I would recommend contacting the vendor of the BI tool for support. Regards, Jim
08-02-2017
02:53 AM
Hi, Amazon Linux is not currently supported and I am not aware of any plans to include it in the supported platforms. You'll find the details of the supported operating systems in the Cloudera documentation link below. https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html Regards, Jim
03-29-2017
09:17 AM
Now Cloudera works fine, but in the Host Monitor log file I get an error: "Could not fetch descriptor after 5 tries, exiting." I can't restart this service, and when I try to restart the Cloudera Management Service I get: "Cannot restart service when Host Monitor (master) is in STOPPING state."
01-12-2017
08:35 AM
Hi, The speed of the compression codec is only part of the story; you should also consider the codec's support across the Hadoop stack. Gaining slightly faster compression at the expense of compatibility is probably not a good trade-off. Snappy, for example, is supported by pretty much all of the stack, whereas LZ4 is not currently supported by Impala.

If in doubt, I would stick with Snappy, since it is a reasonably fast and splittable codec. If performance is an issue, you're likely to find greater benefit focusing on other parts of the stack than on data compression. Regards, Jim
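If you do want to try Snappy for job output from Hive, a sketch of the relevant session settings; the property names are the standard Hadoop/Hive ones, so verify them against your CDH version:

```sql
-- Compress the final job output with Snappy
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
```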
01-07-2015
04:26 AM
Hi Jim, Thank you so much, I will follow this to do my job. 🙂 Yibin
01-07-2015
12:45 AM
It works. Thank you so much.