Member since: 11-02-2015 · Posts: 4 · Kudos Received: 0 · Solutions: 0
12-05-2016 05:09 PM
We're using Hue 3.11 in our project. We want to give users links that allow them to access specific HDFS folders after they log in. We found this works very well if the user is already logged on to Hue, but it doesn't work when the user needs to log in after clicking the link.

For example, clicking this link before logging on: https://hue.ourwebserver.com/filebrowser/#/data/projectfoldername/

The user is redirected to this address to log on: https://hue.ourwebserver.com/accounts/login/?next=/filebrowser/#/data/projectfoldername/

After successful login, the /data/projectfoldername path is unfortunately forgotten by the system, and the user is instead sent to his home directory at: https://hue.ourwebserver.com/filebrowser/

Is there a workaround for this? If it's a few simple lines of Python code to change, we can do that. Thanks!
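For what it's worth, my guess at the root cause is that everything after the "#" is a browser-side fragment: it is never sent to the server, so by the time Hue builds the next= parameter for the login page, the /data/projectfoldername/ part is already gone. One possible workaround (an untested sketch, assuming Hue's standard Django login flow passes the decoded next value straight through to the post-login redirect) would be to hand out the login URL directly, with the target path percent-encoded so the "#" survives as part of the next parameter:

    import java.net.URLEncoder;

    public class HueDeepLink {
        public static void main(String[] args) throws Exception {
            String hueBase = "https://hue.ourwebserver.com";           // our Hue host
            String target  = "/filebrowser/#/data/projectfoldername/"; // folder the user should land in

            // Percent-encode the whole path so the "#" becomes %23 and stays inside
            // the "next" query parameter instead of being dropped by the browser as
            // a client-side fragment.
            String encoded = URLEncoder.encode(target, "UTF-8");

            // Give users this link instead of the bare filebrowser URL.
            System.out.println(hueBase + "/accounts/login/?next=" + encoded);
            // -> https://hue.ourwebserver.com/accounts/login/?next=%2Ffilebrowser%2F%23%2Fdata%2Fprojectfoldername%2F
        }
    }

If the login view URL-decodes next and redirects to it verbatim, the browser should then open the file browser with the original #/data/... fragment intact.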
12-05-2016 04:58 PM
Thanks. Ultimately, we decided to pin each user to a fixed YARN queue using a Hive hook. This is because our application requires hive.server2.enable.doAs=false, which doesn't play nicely with a dynamic Tez YARN queue setting.
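For reference, the hook approach looks roughly like the sketch below: a semantic-analyzer hook registered through hive.semantic.analyzer.hook in hive-site.xml. The class name and the user-to-queue mapping here are placeholders for our real ones, and whether the setting takes effect can depend on when the Tez session for the query is created.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hive.ql.parse.ASTNode;
    import org.apache.hadoop.hive.ql.parse.AbstractSemanticAnalyzerHook;
    import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContext;
    import org.apache.hadoop.hive.ql.parse.SemanticException;

    // Pins every query to a YARN queue derived from the end user, so the queue
    // stays fixed even though hive.server2.enable.doAs=false runs all work as
    // the hive service user.
    public class QueuePinningHook extends AbstractSemanticAnalyzerHook {
        @Override
        public ASTNode preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
                throws SemanticException {
            Configuration conf = context.getConf();
            String user = context.getUserName();          // the end user who submitted the query
            conf.set("tez.queue.name", "root." + user);   // placeholder mapping: one queue per user
            return ast;                                   // leave the query itself untouched
        }
    }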
11-07-2016 04:26 PM
How can I submit a Hive query to a specific Tez queue using Hue (either Oozie or the Hive Notebook)?
I tried:
set tez.queue.name="myqueue"; select mycolumn, count(1) from mytable group by mycolumn;
Unfortunately this didn't work. The select query still ran in the default queue, which Hue takes to be the primary Linux group name of the user I used to log in to Hue.
I also tried setting tez.queue.name to myqueue under the "Settings" screen before running my query. Again, Hue accepted the setting, but the queue didn't change.
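For comparison, outside of Hue the equivalent session-level setting would be passed on the HiveServer2 JDBC URL, along the lines of the sketch below (hostname, queue, and credentials are placeholders; it needs the Hive JDBC driver on the classpath and is still subject to whatever configuration restrictions HiveServer2 enforces):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TezQueueSession {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC URL format: jdbc:hive2://host:port/db;sessionVars?hiveConfList
            // Everything after '?' is applied as session-level Hive/Tez configuration.
            String url = "jdbc:hive2://hiveserver.ourwebserver.com:10000/default"
                       + "?tez.queue.name=myqueue";

            try (Connection conn = DriverManager.getConnection(url, "myuser", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "select mycolumn, count(1) from mytable group by mycolumn")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }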
Thanks!
11-02-2015 11:21 AM
I'm trying to connect a Cloudera-installed Hadoop to Amazon's S3 service in Amazon GovCloud. I'm using Cloudera 5.4. I set the GovCloud endpoint in hdfs-site.xml to:

    <property>
      <name>fs.s3a.endpoint</name>
      <description>AWS S3 endpoint to connect to. An up-to-date list is provided in the AWS Documentation: regions and endpoints. Without this property, the standard region (s3.amazonaws.com) is assumed.</description>
      <value>s3-us-gov-west-1.amazonaws.com</value>
    </property>

I also put the appropriate access key and secret key into hdfs-site.xml. Then, when I run "hadoop fs -ls s3a://bucketname/", I get the error "The AWS Access Key Id you provided does not exist in our records."

By default, fs.s3a.endpoint points to the non-GovCloud endpoint of the S3 service. I have tested the non-GovCloud endpoint to be working: if I put in an access key, secret key, and bucket name that exist in my non-GovCloud account, the connection is fine and I can list files. So, basically, Hadoop is ignoring the s3-us-gov-west-1.amazonaws.com endpoint I specified in hdfs-site.xml and always going directly to the non-GovCloud endpoint. It is reading the hdfs-site.xml file, though, since it successfully reads the access key and secret key from there.

Any thoughts on how to fix this? Thanks!
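To narrow it down, I was thinking of setting the endpoint programmatically through the Hadoop FileSystem API, bypassing hdfs-site.xml entirely, along the lines of this sketch (the keys are placeholders, and whether fs.s3a.endpoint is honored at all depends on the s3a code shipped in this CDH build). I also understand fs.s3a.* settings conventionally live in core-site.xml rather than hdfs-site.xml, though I'm not sure that matters here.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3aGovCloudCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Set the GovCloud endpoint and credentials directly so no *-site.xml
            // lookup is involved; if listing still hits the commercial endpoint,
            // the property itself is being ignored by this s3a version.
            conf.set("fs.s3a.endpoint", "s3-us-gov-west-1.amazonaws.com");
            conf.set("fs.s3a.access.key", "YOUR_GOVCLOUD_ACCESS_KEY");  // placeholder
            conf.set("fs.s3a.secret.key", "YOUR_GOVCLOUD_SECRET_KEY");  // placeholder

            FileSystem fs = FileSystem.get(URI.create("s3a://bucketname/"), conf);
            for (FileStatus status : fs.listStatus(new Path("s3a://bucketname/"))) {
                System.out.println(status.getPath());
            }
        }
    }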