Member since: 04-13-2015
Posts: 11
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3449 | 10-25-2015 09:21 PM
 | 2875 | 07-15-2015 08:37 PM
11-14-2016 03:34 AM
It should currently go to both of those valves with the same value, until HDFS-10289 is completed in a future release.
11-06-2015 03:44 PM
Thank you.

@venu123 wrote: Thank you, I will try the second option.

@schuberth wrote: Hello Venu, the spill messages and the log snippet indicate that Hive's MapReduce task is using disk to sort data because the buffer allocated for sorting is full. There are a couple of things you can tune:
1. Increase the container memory allocated to map tasks (remember to increase the heap size of the map task too!).
2. Increase the sort buffer size (mapreduce.task.io.sort.mb).
Hope this helps.

I will try the second option.
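As a hedged illustration of those two knobs, the properties below could be passed to a single Hive CLI invocation; the specific values and the query are placeholders, not recommendations from this thread.

# Minimal sketch, assuming the Hive CLI; all values are illustrative only.
# Raise the map container memory and its heap together, then enlarge the sort buffer
# so less data spills to disk during the sort phase.
hive --hiveconf mapreduce.map.memory.mb=2048 \
     --hiveconf mapreduce.map.java.opts=-Xmx1638m \
     --hiveconf mapreduce.task.io.sort.mb=512 \
     -e "SELECT ..."   # the query that was spilling

The same properties can also be set per session with SET inside Beeline or the Hive shell.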
10-21-2015 08:34 AM
Try adding the shell script to the distributed cache:

<workflow-app name="Locus_Ingest_to_Staging_to_HDFS_-_Shell" xmlns="uri:oozie:workflow:0.5">
    <start to="shell-5cb7"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="shell-5cb7">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>ssh_local_to_hdfs.sh</exec>
            <argument>"/opt/ingest/locus/"</argument>
            <argument>"/data/ingest/locus/"</argument>
            <argument>"vax;1"</argument>
            <argument>"vaxtoken"</argument>
            <file>hdfspath/ssh_local_to_hdfs.sh</file>
            <capture-output/>
        </shell>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>
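For the <file> element to work, the script has to be staged in HDFS at the path the workflow references before the job is submitted. A minimal sketch, keeping the placeholder path "hdfspath" and using an illustrative Oozie URL and job.properties:

# Sketch only; substitute your own HDFS path, Oozie server URL, and properties file.
hadoop fs -put -f ssh_local_to_hdfs.sh hdfspath/ssh_local_to_hdfs.sh           # stage the script so Oozie can ship it to the distributed cache
oozie job -oozie http://oozie-server:11000/oozie -config job.properties -run   # submit the workflow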
09-15-2015 04:57 PM
But this error will not cause a role crash as your title describes. It likely went down either because framed transport is not enabled and a bad request reached the Thrift server, or because of an OOME (see the stdout of the crashed process).
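As a hedged pointer for finding that stdout on a Cloudera Manager managed host, the role's process directory usually sits under the agent's runtime directory; the exact directory name below is an assumption and depends on the process ID, service, and role.

# Path pattern is an assumption; adjust the directory name to match your deployment.
ls -t /var/run/cloudera-scm-agent/process/ | head                                # newest process directories first
less /var/run/cloudera-scm-agent/process/<id>-<service>-<ROLE>/logs/stdout.log   # stdout of the crashed role instance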
07-28-2015 05:54 AM
Hey venu123, Hive does not pass through Sentry, so it will not adhere to any rules you set directly in Sentry; it only looks at FACLs. To manage HDFS permissions with Sentry you have to enable the plugin for HDFS/Sentry sync and configure it appropriately. With the sync enabled, Hive checks the configuration and references the group in Sentry, but the group will be applied automatically as a FACL by Sentry. To get things working, use the "hadoop fs -setfacl" command to add the user as a FACL. To have the user added automatically as files are deleted and created, add them to the default ACL on the root folder. (Please note this was hit and miss for me; sometimes it worked, other times it did not.) Example of adding to the default ACL:

hadoop fs -setfacl -R -m default:user:username:r-x /<path>
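A minimal sketch of the two steps together, using a hypothetical user venu123 and directory /data/ingest (both placeholders, not values from this thread):

# Grant the user access to the existing tree, then add a default entry on the root
# folder so newly created children inherit it; finally verify the resulting entries.
hadoop fs -setfacl -R -m user:venu123:r-x /data/ingest
hadoop fs -setfacl -m default:user:venu123:r-x /data/ingest
hadoop fs -getfacl /data/ingest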
07-15-2015 08:37 PM
Thank you, Romain, for the quick reply and solution.