Member since: 01-25-2019
Posts: 75
Kudos Received: 10
Solutions: 13
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2434 | 02-25-2021 02:10 AM
 | 1173 | 02-23-2021 11:31 PM
 | 2238 | 02-18-2021 10:18 PM
 | 3180 | 02-11-2021 10:08 PM
 | 15878 | 02-01-2021 01:47 PM
05-03-2021
10:50 PM
1 Kudo
In Beeline, the command-line options such as sslTrustStore, trustStorePassword, and showDbInPrompt are case sensitive.
For example, below is a working connection string from a test bed:
beeline -u "jdbc:hive2://host-A-fqdn:21051/default;principal=impala/host-A-fqdn@COE.CLOUDERA.COM;ssl=true;sslTrustStore=/opt/cloudera/security/truststore.jks"
In the above example, the common mistakes are writing principal as Principal and sslTrustStore as ssltruststore.
If the correct case is not followed, Beeline silently ignores those options and drops them:
//Sample string with incorrect casing
beeline -u "jdbc:hive2://host-A-fqdn:21051/default;Principal=impala/host-A-fqdn@COE.CLOUDERA.COM;ssl=true;ssltruststore=/opt/cloudera/security/truststore.jks"
If you use the above connection string, you will first encounter a Kerberos error, because the Principal property is dropped and Kerberos authentication never takes place. Once you fix that, you will encounter an SSL-related error, because ssltruststore needs to be written as sslTrustStore.
You can find the other command-line options under Beeline Command Options.
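The showDbInPrompt option mentioned above must keep exactly this casing as well. A minimal sketch reusing the working test-bed connection string:
beeline -u "jdbc:hive2://host-A-fqdn:21051/default;principal=impala/host-A-fqdn@COE.CLOUDERA.COM;ssl=true;sslTrustStore=/opt/cloudera/security/truststore.jks" --showDbInPrompt=true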
04-23-2021
12:04 PM
Hello Team, First, are you able to connect to HS2 from any of the edge nodes? If that connects successfully, could you share that connection string so we can make sure the right one is formed here? Also, could you attach the trace logs and the HS2 logs from the same time window? A quick connectivity check is sketched below.
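A minimal connectivity check from an edge node, assuming a non-Kerberized HS2 on the default port 10000 and placeholder host and credentials:
beeline -u "jdbc:hive2://<hs2-host>:10000/default" -n <user> -p <password> -e "select 1"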
02-25-2021
09:42 AM
1 Kudo
Hello @marccasajus
Yes, this has been documented internally as a bug (OPSAPS-53043) and has not been fixed yet.
Also, it looks like you have already applied the changes that would address this.
02-25-2021
02:10 AM
Hello @SajawalSultan It seems you are running the job as user cloudera_user, which needs access to the /user/<username> directory to create scratch directories. It cannot create them because cloudera_user does not have permissions on:
hdfs:supergroup:drwxr-xr-x /user
Run hdfs dfs -chmod 777 /user as the hdfs user to ensure proper access to the /user directory (a sketch follows below). Let me know if this solves your Sqoop import.
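A minimal sketch of the commands, assuming sudo access to the hdfs superuser on the cluster:
# Open /user to all users so cloudera_user can create its scratch directories
sudo -u hdfs hdfs dfs -chmod 777 /user
# Verify the new permissions on /user
hdfs dfs -ls /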
02-25-2021
02:05 AM
Hello @Sample Without the Hadoop ecosystem, Hive and Impala would not exist in the first place. Say you have Hive on one side (essentially the Hadoop ecosystem) and MySQL on the other. If you want to import data into Hive from MySQL, you have to use Sqoop to do so, and vice versa (a sample import is sketched below). Let me know if the above answers all your questions.
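A minimal sketch of such an import, with hypothetical host, database, user, and table names:
sqoop import \
  --connect jdbc:mysql://mysql-host/shop \
  --username sqoop_user -P \
  --table orders \
  --hive-import --hive-table orders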
02-25-2021
12:27 AM
Hello @saamurai Thanks for the confirmation. Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
02-24-2021
11:44 PM
Hello @saamurai We have separate drivers for Impala and Hive, and I am not sure why you intend to use the Hive driver for Impala. We do connect to Impala from edge nodes via Beeline, which is JDBC, but the sole purpose is to test whether connectivity works. We do not recommend using Beeline for Impala, as impala-shell is designed for that (see the sketch below). Cloudera recommends using the specific driver for each component, with attention to version compatibility.
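A minimal impala-shell connectivity check, assuming a Kerberized, TLS-enabled cluster and a hypothetical daemon host:
impala-shell -i impala-host:21000 -k --ssl -q "select 1"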
02-23-2021
11:31 PM
1 Kudo
Hello @Benj1029 You need to go to the path below on the host that runs the HiveServer2 process:
cd /var/log/hive/
vi hiveserver2.log
Just before the shutdown, look at the stack trace; that should give you some pointers (see the sketch below).
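A quick way to surface the relevant entries, assuming the default log location:
cd /var/log/hive
# Show the last error/exception lines written before the shutdown
grep -iE "error|exception" hiveserver2.log | tail -n 50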
02-18-2021
10:18 PM
1 Kudo
Well @ryu, my understanding is that when you are storing Hive data on HDFS, it is best to use managed tables, keeping in mind that CDP now ships compaction features that automatically address the small-files issue. Compaction will not happen on external tables. One would prefer external tables when the data is stored outside HDFS, for example on S3 (see the sketch below). This is my understanding, but it can vary from customer to customer based on their use cases.
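A minimal illustration of the two table types, with hypothetical table and bucket names and $HS2_URL standing in for your JDBC URL:
# Managed table: Hive owns the data and compaction can run on it
beeline -u "$HS2_URL" -e "CREATE TABLE sales_managed (id INT, amount DOUBLE)"
# External table: data lives outside HDFS (here S3); compaction does not run
beeline -u "$HS2_URL" -e "CREATE EXTERNAL TABLE sales_ext (id INT, amount DOUBLE) LOCATION 's3a://my-bucket/sales/'"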
02-18-2021
10:38 AM
Hello @ryu There is no single best path, but it should obviously not be under /tmp. You can create a path under /user/external_tables and create the tables there (see the sketch below). Again, it depends entirely on how you are designing things and on your use case.
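A minimal sketch, with a hypothetical table name and $HS2_URL standing in for your JDBC URL:
# Create the directory, then point the external table at it
hdfs dfs -mkdir -p /user/external_tables/sales
beeline -u "$HS2_URL" -e "CREATE EXTERNAL TABLE sales (id INT, amount DOUBLE) LOCATION '/user/external_tables/sales'"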