Member since: 01-19-2015
Posts: 25
Kudos Received: 0
Solutions: 0
01-28-2015
07:28 AM
I can download gigs of data from Google Drive or file hosting websites using my browser, so why wouldn't it be possible here? This means my only alternative is to tell users to install Hive and run something like beeline -u jdbc:hive2://bla:10000 -n user -p password -f yourscript.q > yourresults.txt, which is a bit crap... (not to mention that until Hive 13, beeline doesn't report any progress on the operation). Or let them log in to my server directly and wreak havoc there 😕 All that Hue gives you already is awesome, but it needs to do more!
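For the record, the workaround I point users at looks roughly like this (host, credentials and file names are placeholders, not my real setup):

# Run a saved HiveQL script over JDBC and dump the rows to a local file.
# --outputformat=csv is optional, if comma-separated output is wanted.
beeline -u jdbc:hive2://bla:10000 -n user -p password \
    --outputformat=csv \
    -f yourscript.q > yourresults.txt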
01-28-2015
07:05 AM
But I don't need to see that data in a browser, I just want to download it to my PC...
01-28-2015
05:59 AM
I often make typos, and then there is no way to correct them other than writing another post...
01-28-2015
05:58 AM
I read the documentation about permissions in HDFS and it says: 1) by default Hadoop checks the output of the Linux "groups" command for a user, and 2) "supergroup" is the default superuser group.

My root directory looks like this:

drwxr-xr-x - hdfs supergroup 0 2015-01-27 23:08 /

So I would assume only hdfs and users belonging to supergroup would be able to create directories under it. But there is no "supergroup" group on any of my boxes! There is only a "hadoop" one, which contains hdfs, yarn and mapred. And basically any user that I create can execute HDFS commands, like hdfs dfs -mkdir /blabla, and do whatever he wants. The files he creates have him set as the owner and supergroup as the group, even though he belongs neither to "supergroup" nor to "hadoop".

How does it work then? And is there some simple way to prevent it and make it work as in the docs, i.e. make Hadoop respect the Linux permissions? (The only access to the cluster is through a box managed by us anyway, so that would be enough.)
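For reference, these are roughly the checks I ran; I'm assuming the standard Hadoop 2 property names here (dfs.permissions.enabled and dfs.permissions.superusergroup), and "someuser" is just a placeholder:

# Is permission checking enabled at all? The default should be "true".
hdfs getconf -confKey dfs.permissions.enabled

# Which Linux group is treated as the HDFS superuser group? Default is "supergroup".
hdfs getconf -confKey dfs.permissions.superusergroup

# What groups does the NameNode resolve for a given user...
hdfs groups someuser

# ...compared to what the local OS says.
groups someuser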
Labels: HDFS
01-28-2015
05:34 AM
Hi,
If I run a query in Hue that returns a huge number of rows, is it possible to download them all through the UI? I tried it with a Hive query and .csv; the download was successful, but it turned out the file had exactly 100000001 rows, while the actual result should be bigger. Is 100 million some kind of limit, and if so, could it be lifted?
I was also thinking about storing the results in HDFS and downloading them through the file browser, but the problem is that when you click "save in HDFS", the whole query runs again from scratch, so effectively you need to run it twice (and I haven't checked whether the result would be stored as one file and whether Hue could download it).
In short, is such a use case possible in Hue?
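In case it clarifies what I mean: the manual route outside of Hue that I have in mind would look roughly like this (query, paths and connection details are made up):

# Write the result set to an HDFS directory from Hive
# (output uses Hive's default field delimiter, not CSV).
beeline -u jdbc:hive2://bla:10000 -n user -p password \
    -e "INSERT OVERWRITE DIRECTORY '/tmp/myquery_results' SELECT * FROM some_table;"

# Merge the per-reducer output files into a single local file on my PC.
hdfs dfs -getmerge /tmp/myquery_results ./myquery_results.txt

What I'd like is something equivalent using just the Hue UI.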
Labels: Cloudera Hue
01-27-2015
05:46 AM
Ah, the tarballs section. Thanks!
01-26-2015
01:37 PM
Hi, I want to check which version of each Hadoop component (i.e. which version of Hive, HDFS, HBase, etc.) comes in the newest CDH, but I can't seem to find it. I looked in the release notes, but no luck; is it written down anywhere?
01-26-2015
06:52 AM
I see. I was asking because in 0.14 they fixed a bug which was apparently introduced in 0.12 or earlier (https://issues.apache.org/jira/browse/PIG-3985) and which I spent a whole day fighting with. I tried to upgrade myself and built 0.14 with the flag for Hadoop 2, but I got a lot of warnings and then ant test wasn't passing. Therefore I decided that for now we will stick to the Cloudera-approved 0.12 and just use the workaround described in that JIRA. Thanks for your help!
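For context, my build attempt looked roughly like this (as far as I understand, -Dhadoopversion=23 is the ant property Pig uses to select the Hadoop 2 build; exact targets may differ):

# Build Pig 0.14 from source against Hadoop 2.
cd pig-0.14.0-src
ant clean jar -Dhadoopversion=23

# The test suite is where things started failing for me.
ant test -Dhadoopversion=23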