Member since: 09-02-2016
Posts: 523
Kudos Received: 89
Solutions: 42
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2337 | 08-28-2018 02:00 AM
 | 2187 | 07-31-2018 06:55 AM
 | 5102 | 07-26-2018 03:02 AM
 | 2460 | 07-19-2018 02:30 AM
 | 5911 | 05-21-2018 03:42 AM
05-23-2017
06:35 PM
Managed to solve it by copying the config.ini file from other servers in the same cluster and installing the cloudera-scm-agent package with apt-get install cloudera-scm-agent.
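A minimal sketch of those steps, assuming a Debian/Ubuntu node and the default agent config path; "healthy-node" is a placeholder hostname, not from the original post:

```bash
# Copy the working agent config from another node in the same cluster
# ("healthy-node" is a placeholder; the path is the default location
# of the Cloudera Manager agent config).
scp root@healthy-node:/etc/cloudera-scm-agent/config.ini /etc/cloudera-scm-agent/config.ini

# Install the agent and restart it
sudo apt-get install cloudera-scm-agent
sudo service cloudera-scm-agent restart
```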
05-10-2017
04:08 AM
"I am still exploring how to get external DB done on hive, oozie and those monitoring DB " Can you please elborate more sorry , I can certainly give you more information.
05-08-2017
03:31 AM
Hi! Where could I find the new syllabus materials (VM and PDFs)? I have had my exam pending since January... thanks in advance.
05-05-2017
05:42 AM
Got it fixed! The '/dfs/nn' directory had been removed before, but '/dfs/dn' was still there. After I removed all these directories, it worked!
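For anyone hitting the same thing, a rough sketch of the cleanup; the paths follow the post above, but double-check them against dfs.namenode.name.dir / dfs.datanode.data.dir in your configuration before deleting anything:

```bash
# Confirm which stale directories are still present before deleting
ls -ld /dfs/nn /dfs/dn

# Destructive: wipes the NameNode/DataNode storage dirs on this host,
# so only run this on a node you intend to re-initialize.
sudo rm -rf /dfs/nn /dfs/dn
```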
05-01-2017
11:31 AM
@bdgreek I tried the queries below and they show the column as created (Sales$). Not sure if it is due to your version.

```sql
create table mydb.test1 (`Sales$` float);
describe formatted mydb.test1;
describe mydb.test1;
```
04-26-2017
02:01 AM
My bad, that was the only thing I didn't try, lol. BTW, this is the solution. Thanks!
04-25-2017
08:36 AM
It is an internal table. It was created using the HUE GUI's 'Create a new table manually' option in the Metastore Manager for the Hive default database. I didn't choose the 'Create a new table from a file' option, which allows a user to specify whether it should be an external table.

I updated my reply to saranvisa's use cases, and the underlying HDFS files were deleted only if the HUE user who dropped the table was its creator. Fortunately, I do have HDFS superuser access via the command line and was able to delete the table from my prior incident. Thanks for providing an alternative in case that is not possible, especially since, when deployed, most users won't have command-line access, let alone HDFS superuser access. Sounds like the trade-off is ease of use vs. level of security.
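For reference, a hedged sketch of the external-table alternative; the table name and LOCATION below are illustrative, not from the original thread. With EXTERNAL, a later DROP TABLE removes only the metastore entry and leaves the HDFS files in place:

```bash
# EXTERNAL means DROP TABLE removes only the metastore entry;
# table name and LOCATION are made up for this example.
hive -e "CREATE EXTERNAL TABLE default.sample_ext (id INT, name STRING)
         LOCATION '/user/demo/sample_ext';"

hive -e "DROP TABLE default.sample_ext;"

# The data files survive the drop:
hdfs dfs -ls /user/demo/sample_ext
```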
04-24-2017
02:06 PM
Thanks a lot for the above steps; I really was wondering how I should install a different version of the connector. I'll follow the steps soon and report back with the results. 🙂
04-21-2017
06:13 AM
That is a fair question of what is 'appropriate'. I was hoping there would be an option to select a default behavior to do so. For example, upon 'usr1' creating an index, the following permission would be generated: `collection='the_new_idx'->user=usr1->action=*`. I imagine other global default behaviors could exist, such that the auto-generated permission sets access for new collections at the role level instead of the user level.
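To make the idea concrete, here is a hypothetical sketch of what such an auto-generated rule might look like as Sentry policy-file entries. The role and group names are my assumptions, since Sentry actually grants privileges to roles and roles to groups rather than directly to users:

```bash
# Hypothetical auto-generated entries in a Sentry policy file;
# "usr1_role" and the usr1 group mapping are illustrative only.
cat >> sentry-provider.ini <<'EOF'
[roles]
usr1_role = collection=the_new_idx->action=*

[groups]
usr1 = usr1_role
EOF
```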
04-18-2017
05:38 PM
On the Impala dev team we do plenty of testing on machines with 16GB-32GB of RAM (my development machine, for example, has 32GB). So Impala definitely works with that amount of memory; it's just not hard to run into capacity problems once you have a reasonable number of concurrent queries, larger data sizes, or more complex queries. It sounds like the smaller-memory instances may work well for your workload.