Member since: 03-23-2015
Posts: 1288
Kudos Received: 114
Solutions: 98
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3350 | 06-11-2020 02:45 PM |
| | 5063 | 05-01-2020 12:23 AM |
| | 2860 | 04-21-2020 03:38 PM |
| | 3562 | 04-14-2020 12:26 AM |
| | 2360 | 02-27-2020 05:51 PM |
09-18-2019
12:01 AM
You can use ALTER like I mentioned before: ALTER TABLE test CHANGE col1 col1 int COMMENT 'test comment'; However, I do not think you can remove the comment entirely, only replace it with an empty one. Cheers, Eric
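As a minimal sketch (using the table and column names from the example above), emptying the comment rather than removing it could look like:

```
# Overwrite the existing column comment with an empty string;
# Hive has no statement to drop the comment outright.
hive -e "ALTER TABLE test CHANGE col1 col1 int COMMENT '';"
```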
09-16-2019
03:14 PM
@ChineduLB No, you can't. You can only save data into temporary tables, or simply use a sub-query instead. Cheers, Eric
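As a minimal sketch of both options (the table and column names below are hypothetical):

```
# Option 1: stage intermediate results in a temporary table
# (it lives only for the duration of the Hive session)
hive -e "
CREATE TEMPORARY TABLE tmp_sales AS
SELECT * FROM sales WHERE sale_year = 2019;
SELECT region, SUM(amount) AS total FROM tmp_sales GROUP BY region;
"

# Option 2: inline the same logic as a sub-query
hive -e "
SELECT region, SUM(amount) AS total
FROM (SELECT * FROM sales WHERE sale_year = 2019) t
GROUP BY region;
"
```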
09-13-2019
05:45 AM
@kvinod It seems to me that you set the memory parameters first, and the environment settings are then lost when you switch users. Could you try the steps below?
$ sudo -u hbase -s
$ export _JAVA_OPTIONS="-Xmx2048m -Xms2048m"
$ hbase hbck
Please adjust the Java options according to the memory available on your cluster.
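If the setting still gets lost when switching users, one way to keep everything in a single shell (the heap sizes here are only examples) is:

```
# Run hbck as the hbase user with the JVM options set in the same shell,
# so the exported variable is not dropped by the user switch.
sudo -u hbase bash -c 'export _JAVA_OPTIONS="-Xmx2048m -Xms2048m"; hbase hbck'
```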
09-12-2019
04:02 PM
Try updating the password using kadmin.local after you log into the KDC server; reference here: http://web.mit.edu/KERBEROS/krb5-1.4/krb5-1.4.1/doc/krb5-admin/Changing-Passwords.html Cheers
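For example, on the KDC host it could look like this (the principal name is a placeholder):

```
# change_password (alias: cpw) prompts twice for the new password
sudo kadmin.local -q "change_password someuser@EXAMPLE.COM"
```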
09-11-2019
02:08 AM
@EricL Hi Eric, it was pointing to the wrong Spark conf, so I replaced it with the new one. But now it's giving me another error; I will open a new thread for that one.
09-10-2019
02:55 AM
Use the command below to resolve this issue; it will set the system symlink for that service:
alternatives --config spark
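To check which target the symlink currently points at before and after the change, something like this should work on RHEL/CentOS-style systems:

```
# List the registered candidates and the one currently selected
alternatives --display spark
# Interactively pick which candidate the symlink should point to
alternatives --config spark
```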
09-10-2019
12:06 AM
@DataMike, I am afraid there is no such option that I am aware of; you would have to stop and start them one by one manually. Cheers, Eric
09-08-2019
10:53 PM
Hi @EricL, Our IT Architecture team has closed off all access to our data storage other than the Hue web interface, so we can submit Hive and Impala queries. But my job often requires pulling data to local storage, usually more than 100,000 rows. When looking at the results of a Hive query, on the left side of the table there is an option to export the data. You can save the table on the cluster (not helpful in this case) or download the first 100,000 rows as a CSV file or an Excel file. My question refers to the possibility of downloading the table in another format, different from CSV or Excel. But I suppose the more pressing question is: how can we download more than 100,000 rows? I know this is a horrible way to handle data transfer, but it is the only option left open to us by our architecture team. Cheers
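If your Hue administrators are willing to adjust the service configuration, the 100,000-row cap is usually a configurable download limit in hue.ini (often set through the Hue safety valve in Cloudera Manager). Treat the section and property name below as assumptions to verify against your Hue version; this is only a sketch:

```
# hue.ini -- property name and default may differ between Hue releases
[beeswax]
  # Maximum number of rows Hue will stream into a CSV/Excel download
  download_row_limit=1000000
```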
09-08-2019
04:28 PM
@ilia987, The message "Driver grid-05.test.com:36315 disassociated! Shutting down" sounds like the AM had trouble getting back to the Driver. Can you share the info below:
- Did you run Spark in cluster or client mode?
- What is the full command?
- What is the error on the client side where you ran spark-submit?
- What is the error on the YARN side? As suggested by @AKR, please share the entire application logs.
Cheers, Eric
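For the YARN side, the full aggregated application log can usually be pulled with the command below (the application id is a placeholder):

```
# Collect the aggregated container logs for the failed application
yarn logs -applicationId application_1234567890123_0001 > app_logs.txt
```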