Member since: 08-13-2014
Posts: 47
Kudos Received: 4
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2121 | 03-02-2020 08:35 AM |
|  | 1013 | 09-13-2018 09:10 AM |
|  | 1715 | 07-24-2018 06:09 AM |
|  | 1415 | 04-18-2018 08:17 AM |
|  | 1297 | 01-07-2015 12:01 AM |
03-02-2020
02:25 PM
Hi Luis,

If you run SHOW TABLES in the Impala ODBC session, do you see the list of tables? Are only the external tables missing? Can you try running SHOW CURRENT ROLES to see if the list of Sentry roles matches what you expect?

I would also recommend checking the Impala logs; these will shed some more light on whether the table is not visible or whether you are being denied access.

Kind regards, Jim
03-02-2020
08:35 AM
Hi Luis,

There are a couple of suggestions I can make to help narrow down the cause:

* Are you able to run queries against these tables in Hue using both Impala and Hive? Beyond just seeing the tables in the navigation on the left, are you actually able to SELECT from them using both engines?
* Are you using the same user in both Hue and the ODBC connection? I ask because if a user is not allowed to access a table by Sentry, then that table will not appear when you run SHOW TABLES.
* Have you run USE <db_name> in your ODBC SQL session prior to running SHOW TABLES or running a SQL query?
* Have you tried a SELECT statement on the table even if the metadata is not showing when you run SHOW TABLES? Try prefixing the database name to the table, e.g. db_name.table_name.
* As always, check the logs to see if there are any clues as to why it is not working.
* Check that the version of the ODBC driver is compatible with the version of Impala you are using.

Kind regards, Jim
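To make these checks concrete, here is a rough sketch of what you might run in the ODBC session (db_name and table_name are placeholders, not names from your environment):

```
USE db_name;
SHOW TABLES;
-- Try the table directly, fully qualified, even if SHOW TABLES does not list it
SELECT * FROM db_name.table_name LIMIT 5;
```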
07-18-2019
02:36 AM
Hi,

In both of these cases it looks like certificate validation has failed. This typically happens when the Certificate Authority (CA) certificate is missing, has incorrect permissions set, or does not have the correct password set. Check that you have added any CA certificates required to your trust store; in this case you need the CA certificate for GlobalSign.

You will find details on how to configure TLS for these services in the Cloudera documentation:

https://www.cloudera.com/documentation/enterprise/5-16-x/topics/impala_ssl.html
https://www.cloudera.com/documentation/enterprise/5/latest/topics/cm_sg_ssl_hue.html

Regards, Jim
10-16-2018
02:22 AM
Can you test an LDAP query from the Hue server using a client such as ldapsearch? It would be good to see how long it takes to get a response from the LDAP server.

Check the value you set for base_dn. If this is overly broad it may take a long time to find users and groups in LDAP. Look to narrow this down as much as possible to reduce the size of the LDAP search.

I'd also recommend reading the documentation, as this provides information on how you can configure and test LDAP authentication in Hue.

https://www.cloudera.com/documentation/enterprise/latest/topics/hue_sec_ldap_auth.html

Regards, Jim
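As a rough illustration (the host and DN values below are placeholders, not taken from any real deployment), a narrowly scoped base_dn in hue.ini might look like this:

```
[desktop]
  [[ldap]]
    ldap_url=ldap://ldap.example.com
    # Scope the search to the OU that actually holds your users,
    # rather than searching the whole directory tree
    base_dn="ou=People,dc=example,dc=com"
```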
09-27-2018
08:57 AM
Is that a message you see in the browser or in the logs? Check the log file /var/log/cloudera-scm-server/cloudera-scm-server.log as well; it might give you a better idea of what is going on.
09-27-2018
08:41 AM
What do the Cloudera Manager logs say about the failed logins? They might give a clue as to why authentication is failing.
09-27-2018
07:30 AM
Hi,

You can use the command `hadoop checknative` to see which native libraries are being loaded. In the example below I check the native libraries, then change LD_LIBRARY_PATH and run the command again. In the second run you can see that libz is loaded from a different location.

```
[cloudera@quickstart ~]$ hadoop checknative
18/09/27 14:06:47 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
18/09/27 14:06:47 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:10301
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
[cloudera@quickstart ~]$ cp /lib64/libz.so.1 lib/
[cloudera@quickstart ~]$ export LD_LIBRARY_PATH=$PWD/lib
[cloudera@quickstart ~]$ hadoop checknative
18/09/27 14:07:29 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
18/09/27 14:07:29 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /home/cloudera/lib/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:10301
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
```

Hope this helps, Jim
09-27-2018
01:37 AM
Edit the file /etc/hosts. It maps the hostname to the IP address. In most cases the quickstart VM gets this right; if you experience an issue, restarting usually fixes it. Regards, Jim
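For example, a typical /etc/hosts entry on the quickstart VM looks roughly like this (the IP address below is illustrative; use the address your VM actually has):

```
# /etc/hosts — map the VM's hostname to its IP address
10.0.2.15   quickstart.cloudera   quickstart
```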
09-18-2018
05:22 AM
It looks like a problem with your SSL/TLS configuration. Check that the trust store contains the correct certificates. When you upgraded the JDK, did you remember to add your certificates to the cacerts truststore? Regards, Jim
09-13-2018
09:10 AM
No, restarting Cloudera Manager server and agents to pick up the new certificates should not affect any of the Hadoop cluster services. Regards, Jim
07-24-2018
06:09 AM
Hi, You're probably seeing this because the KDC host configured in Cloudera Manager is different on the two clusters. To check this, log into Cloudera Manager, go to Administration -> Settings, and search for KDC Server Host. Regards, Jim
06-21-2018
03:14 AM
* Do you have the e3base_kfapp entry in /etc/krb5.conf?
* You are using a short hostname for the ZooKeeper node, not an FQDN. This means it is likely to use the default Kerberos realm.
* Is there a domain mapping for the ZooKeeper address to the correct realm in /etc/krb5.conf?
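For the last point, a hypothetical [domain_realm] mapping in /etc/krb5.conf might look like this (the hostname, domain, and realm names are placeholders for whatever your environment actually uses):

```
[domain_realm]
    # Map the ZooKeeper host (and, optionally, its whole domain)
    # to the realm that issued its service principal
    zk01.example.com = EXAMPLE.COM
    .example.com = EXAMPLE.COM
```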
06-21-2018
02:06 AM
If you want to be able to run any SQL as hive, you need to create a role, grant privileges to that role, and grant the role to hive. Hive is a superuser insofar as it can grant roles, but by default it has no Sentry roles assigned to it. Regards, Jim
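As a rough sketch (the role name is illustrative, and server1 is the common default Sentry server name, though yours may differ), the grants might look like:

```
CREATE ROLE admin_role;
GRANT ALL ON SERVER server1 TO ROLE admin_role;
-- Sentry grants roles to groups, so grant it to the group the hive user belongs to
GRANT ROLE admin_role TO GROUP hive;
```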
06-21-2018
01:25 AM
A CREATE TABLE statement will not start a YARN job, so allocating more memory for YARN is unlikely to fix this. What happens when you run the statement? Is there anything in the logs that gives a clue as to what's happening? Regards, Jim
06-21-2018
01:20 AM
1 Kudo
Do you have the address quickstart.cloudera in /etc/hosts? If this is set incorrectly then trying to access the cluster using the hostname will always attempt to use the old address. Update the address in the hosts file and try again, see if that helps. Regards, Jim
05-03-2018
05:45 AM
Hi,

Cloudera Director uses the IP address of the Cloudera Manager server to communicate with it. This means you need the IP address of the server in the TLS certificate for this to work. You can find more information on this in the Cloudera documentation:

https://www.cloudera.com/documentation/director/latest/topics/director_tls_enable.html#concept_dcl_2dt_kbb

If you can add the private IP address for the Cloudera Manager as a Subject Alternative Name (SAN) in the certificate, then this should work around the issue.

Regards, Jim
04-19-2018
09:13 AM
Try running "yum clean all" and try running the installer again. This will clear your yum cache and will often fix this kind of issue. Jim
04-19-2018
09:11 AM
Glad to hear it!
04-18-2018
08:25 AM
Hi, it sounds like your BI tool is a third-party product. I would recommend contacting the vendor of the BI tool for support. Regards, Jim
04-18-2018
08:17 AM
Try running "yum clean all" and try running the installer again. Let me know if that helps. Jim
08-02-2017
02:53 AM
Hi, Amazon Linux is not currently supported and I am not aware of any plans to include it in the supported platforms. You'll find the details of the supported operating systems in the Cloudera documentation link below. https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html Regards, Jim
03-29-2017
08:47 AM
Hi, What do your Navigator logs say? The log files will tell you why the server crashed. What did you set the heap size to? Regards, Jim
03-29-2017
05:19 AM
Hi, Cloudera Navigator provides auditing and data management. Removing it will not stop you from being able to run jobs on your cluster but you will not have fine grained auditing, metadata tagging etc. Regards, Jim
03-29-2017
04:39 AM
2 Kudos
Hi, The .hprof files are memory dumps created when a Java process fails due to lack of memory. It could be that either the server itself has insufficient memory or that the Navigator configuration does not allocate enough memory to the JVM. How much RAM does the VM running the master have? Regards, Jim
03-29-2017
04:34 AM
Hi, Can you check the log file /var/log/cloudera-scm-server/cloudera-scm-server.log? It should provide more details about the error you are seeing. Regards, Jim
01-12-2017
08:35 AM
Hi,

The speed of the compression codec is only part of the story; you should also consider the support for the codec in different parts of the Hadoop stack. Gaining slightly faster compression at the expense of compatibility is probably not a good trade-off. Snappy is supported by pretty much all of the stack, for example, whereas LZ4 is not currently supported by Impala.

If in doubt I would stick with Snappy, since it is a reasonably fast and splittable codec. If performance is an issue you're likely to find greater benefit focusing on other parts of the stack rather than on data compression.

Regards, Jim
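For example, in Impala you can choose the codec used when writing Parquet data with the COMPRESSION_CODEC query option (the table names below are placeholders):

```
SET COMPRESSION_CODEC=snappy;
CREATE TABLE my_table STORED AS PARQUET
AS SELECT * FROM source_table;
```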
01-07-2015
12:01 AM
If you only wish to avoid running both the NameNode and DataNode services on one host, it might be easier to migrate the DataNode role: add the new server, assign the DataNode role to it, and then decommission the DataNode on the first server to avoid any data loss. If you do wish to move the NameNode, there are instructions on how to do so in the Cloudera documentation. http://www.cloudera.com/content/cloudera/en/documentation/cloudera-manager/v5-0-0/Cloudera-Manager-Managing-Clusters/cm5mc_move_nn.html
01-06-2015
11:55 PM
In Chrome, when you add languages to the list they are used in the order they appear. If you have added English to the bottom of the list and Chinese appears first, then Chinese will be the language used. Try reordering the languages in Chrome so that English appears first and see if that helps.