Member since: 05-20-2016
Posts: 155
Kudos Received: 220
Solutions: 30

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7180 | 03-23-2018 04:54 AM |
| | 2628 | 10-05-2017 02:34 PM |
| | 1465 | 10-03-2017 02:02 PM |
| | 8373 | 08-23-2017 06:33 AM |
| | 3195 | 07-27-2017 10:20 AM |
12-16-2016
06:20 AM
This is good, but we need the complete exception trace to find the root cause, i.e. the full logs around the error below:
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
12-16-2016
05:16 AM
@Rajesh AJ Can you please share the complete stack trace?
12-09-2016
01:16 PM
1 Kudo
Can you check the browser network trace to see if any of the calls return errors?
12-09-2016
11:59 AM
1 Kudo
HDP reads hive-site.xml from /etc/hive/conf/hive-site.xml, which is a symlink. The actual path can be found by issuing the command below:
ls -lrt /usr/hdp/current/hive-client/conf
lrwxrwxrwx. 1 root root 23 Dec 6 06:48 /usr/hdp/current/hive-client/conf -> /etc/hive/2.5.0.0-154/0
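If it helps, the whole symlink chain can be resolved in one step with readlink (standard GNU coreutils; the path below is the HDP client config directory from the listing above):

```shell
# Follow the symlink chain and print the canonical config directory.
readlink -f /usr/hdp/current/hive-client/conf
```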
12-09-2016
10:13 AM
@justlearning I believe the comments in the link below provide the answer: https://community.hortonworks.com/questions/70618/checklist-to-get-started-with-oozie-on-hadoop.html
12-09-2016
08:32 AM
2 Kudos
@Ramesh M You can use the YARN ResourceManager UI to see which applications are running: http://<RMHOSTNAME>:8088/cluster/apps/RUNNING
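The same information is available from the YARN CLI shipped with HDP; a quick sketch (the application ID below is a placeholder, not from the thread):

```shell
# List all applications currently in the RUNNING state.
yarn application -list -appStates RUNNING

# Show the status of one specific application (placeholder ID).
yarn application -status application_1480000000000_0001
```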
11-24-2016
05:58 AM
Thanks @Sagar Shimpi for looking into this. I did follow the HDFS ACL link; however, I could not get it working for the newly created file. Basically, the file does not inherit the default ACL of the parent directory (as per my comment #1). I am not sure whether ACLs are inherited by files at all. Is Ranger the only solution now?
11-21-2016
03:04 PM
Can you please share the snippet that does the Kafka spout configuration? Also, please check the Storm UI for the host on which the Kafka spout was spawned, and look at the corresponding logs there.
11-17-2016
09:11 AM
I did try the command below:
hdfs dfs -setfacl -m default:user:santhosh:rwx /user/santhosh/another1
which basically means user "santhosh" will have "rwx" on sub-directories. I then created a sub-directory under this directory, and a file, as the hive user, using the commands below:
hdfs dfs -mkdir /user/santhosh/another1/test1
hdfs dfs -put sample1 /user/santhosh/another1/test1
However, getfacl returns the result below:
[hrt_qa@santhosh-blueprint-test-13 ~]$ hdfs dfs -getfacl -R /user/santhosh/another1
# file: /user/santhosh/another1
# owner: santhosh
# group: hadoop
user::rwx
group::rwx
other::r-x
default:user::rwx
default:user:santhosh:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
# file: /user/santhosh/another1/test1
# owner: hive
# group: hadoop
user::rwx
user:santhosh:rwx #effective:r-x
group::r-x
mask::r-x
other::r-x
default:user::rwx
default:user:santhosh:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
# file: /user/santhosh/another1/test1/sample
# owner: hive
# group: hadoop
user::rw-
user:santhosh:rwx #effective:r--
group::r-x #effective:r--
mask::r--
other::r--
Whereas for file "sample1" it shows the entry below, so user santhosh has effective read permission only and hence is unable to delete it:
user:santhosh:rwx #effective:r--
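For what it's worth, a hedged sketch of one way to restore access on the already-created file: the ACL mask entry caps the effective permissions of named users and groups, so widening the mask lifts the r-- cap shown above (path taken from the output; new files will still be created with a narrowed mask, since create-time permission bits filter the default mask):

```shell
# Widen the mask so the named user's rwx entry takes full effect.
hdfs dfs -setfacl -m mask::rwx /user/santhosh/another1/test1/sample1

# Verify the effective permissions afterwards.
hdfs dfs -getfacl /user/santhosh/another1/test1/sample1
```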
11-17-2016
08:22 AM
Hello, I need to grant privileges on an HDFS directory so that any sub-directories and files created under it give a specific user (or other users) write access. For example, given a directory /home/santhosh/work, any new files or sub-directories created under it should carry permissions such that anybody (other users) can read and write them. Can this be achieved with HDFS ACLs, and if so, how?
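The intended setup can be sketched with default ACL entries, which are copied to new children of the directory (the user name "otheruser" below is a hypothetical placeholder, not from the thread):

```shell
# Default entries apply to files and sub-directories created later.
hdfs dfs -setfacl -m default:user:otheruser:rwx /home/santhosh/work

# Inspect the resulting access and default ACLs.
hdfs dfs -getfacl /home/santhosh/work
```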
Labels:
- Apache Hadoop