Member since: 04-16-2019
Posts: 373
Kudos Received: 7
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 23793 | 10-16-2018 11:27 AM
 | 7872 | 09-29-2018 06:59 AM
 | 1206 | 07-17-2018 08:44 AM
 | 6666 | 04-18-2018 08:59 AM
07-11-2018
09:41 AM
@Jay Kumar SenSharma Thanks, Jay, for your response. In the cluster where I am experiencing the issue, fs.permissions.umask-mode is set to 022, but in the other cluster, where we do not see the issue, the same value is 077.
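For reference, a minimal sketch of how the created directory mode relates to fs.permissions.umask-mode, assuming it behaves like a POSIX umask (new directory mode = 0777 & ~umask). Note that 022 yields 755, 077 yields 700, and a umask of 027 is what would produce the 750 seen above:

```shell
# Sketch: derive the HDFS directory mode from a given umask value,
# assuming the standard rule: new dir mode = 0777 & ~umask.
perm_for_umask() {
  local umask_oct=$1
  printf '%03o\n' $(( 0777 & ~0$umask_oct ))
}

perm_for_umask 022   # 755
perm_for_umask 027   # 750
perm_for_umask 077   # 700
```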
07-11-2018
09:03 AM
I am creating a directory under /user with some name, say abcd, but the permission for this directory is being set to 750. When the same operation is performed in a different environment, the permission is set to 700. However, the permission on /user itself is drwxr-xr-x in both environments. Please find below the commands used to create the HDFS directory:
hdfs dfs -mkdir /user/abcd
hdfs dfs -chown abcd:hdfs /user/abcd
Thanks
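To narrow this down, it may help to compare the effective umask on both environments, since the directory mode is derived from the client's fs.permissions.umask-mode at create time. A sketch (the directory name is the one from the question; these commands require a live cluster):

```shell
# Check the client-side umask the HDFS client will apply:
hdfs getconf -confKey fs.permissions.umask-mode

# Create the directory and inspect the resulting mode:
hdfs dfs -mkdir /user/abcd
hdfs dfs -ls -d /user/abcd
```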
Labels:
- Apache Hadoop
- Cloudera DataFlow (CDF)
07-10-2018
10:16 AM
I am trying to access Hive from Beeline with the command below: !connect jdbc:hive2://<host>:10000/default;principal=hive/<host>@SOLON.PRD but it throws the error below: WARN jdbc.HiveConnection: Failed to connect to host:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://host:10000/default;principal=hive/host@SOLON.PRD: GSS initiate failed (state=08S01,code=0)
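"GSS initiate failed" on a Kerberized cluster usually means the client has no valid Kerberos ticket. A sketch of the usual check, assuming the failure is ticket-related (the keytab path, user principal, and realm below are placeholders for illustration):

```shell
# Check whether a valid TGT exists for the current user:
klist

# If not, obtain one (keytab path and principal are illustrative):
kinit -kt /etc/security/keytabs/myuser.keytab myuser@SOLON.PRD

# Then retry the connection; the principal in the URL must be the
# HiveServer2 service principal, not the connecting user's:
beeline -u "jdbc:hive2://<host>:10000/default;principal=hive/<host>@SOLON.PRD"
```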
Labels:
- Apache Hive
07-09-2018
07:11 AM
Is there a CLI command to list all the YARN queues? I can check the queues from Ambari, but is there a command-line way to do the same?
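A couple of options I am aware of (a sketch; the queue name and ResourceManager host are illustrative, and these need a live cluster):

```shell
# Lists all queues with their capacity and state:
mapred queue -list

# Shows the status of one named queue:
yarn queue -status default

# The ResourceManager REST API also exposes the whole scheduler tree as JSON:
curl -s "http://<rm-host>:8088/ws/v1/cluster/scheduler"
```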
Labels:
- Apache YARN
07-06-2018
06:08 AM
While running a Hive query I am experiencing the error below. A simple SELECT works, but when I execute a query with a GROUP BY clause it throws: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 8, vertexId=vertex_1530387194612_0030_4_00, diagnostics=[Vertex vertex_1530387194612_0030_4_00 [Map 8] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: tbtbotabel_db2_orc initializer failed, vertex=vertex_1530387194612_0030_4_00 [Map 8], java.io.IOException: java.util.concurrent.ExecutionException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 214816 for hive) can't be found in cache
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:502)
at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateOldSplits(MRInputHelpers.java:446)
at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateInputSplitsToMem(MRInputHelpers.java:300)
at org.apache.tez.mapreduce.common.MRInputAMSplitGenerator.initialize(MRInputAMSplitGenerator.java:123)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 214816 for hive) can't be found in cache
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:490)
... 14 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 214816 for hive) can't be found in cache
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy13.getListing(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:620)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
at com.sun.proxy.$Proxy14.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2143)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2126)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:919)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:985)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:981)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:981)
at org.apache.hadoop.hive.ql.io.AcidUtils.isAcid(AcidUtils.java:459)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.shouldSkipCombine(OrcInputFormat.java:179)
at
Labels:
- Apache Hive
- Cloudera DataFlow (CDF)
07-05-2018
07:40 AM
@Sandeep Nemuri Hi Sandeep, thanks for your response, but could you please explain the root cause of this issue? I tried reading HDFS as the falcon user as well, and I got the same error as with the hive user; however, in Falcon, reading HDFS through WebHDFS solved the issue. I was creating a Falcon cluster entity from the Falcon CLI. I want to understand the root cause because I am experiencing the same issue with other users too. Thanks in advance.
07-04-2018
01:17 PM
I am executing a Hive query, but when I do so I get the error below: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 8, vertexId=vertex_1530387194612_0030_4_00, diagnostics=[Vertex vertex_1530387194612_0030_4_00 [Map 8] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: tbtbotabel_db2_orc initializer failed, vertex=vertex_1530387194612_0030_4_00 [Map 8], java.io.IOException: java.util.concurrent.ExecutionException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 214816 for hive) can't be found in cache I can see the error is because the HDFS_DELEGATION_TOKEN for hive cannot be found. What are the resolution steps, and what leads to this issue?
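For context, an InvalidToken "can't be found in cache" error typically means the delegation token expired or was cancelled before the query finished, e.g. a long-running session outliving the token, or HiveServer2 being restarted mid-session; reconnecting the Beeline session forces a fresh token to be issued. A sketch of inspecting token handling manually with `hdfs fetchdt` (the renewer name and output path are illustrative; requires a live cluster):

```shell
# Fetch a delegation token into a local file, naming its renewer:
hdfs fetchdt --renewer hive /tmp/hive.dt

# Print the token's identifier, which includes its kind and renewer:
hdfs fetchdt --print /tmp/hive.dt
```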
Labels:
- Apache Hive
06-22-2018
05:40 AM
Is there some REST API I can use to fetch the list of users who have not logged in to Ambari for some number of days, say n days? Thanks
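As far as I know, the standard Ambari users endpoint enumerates users but does not expose a "last login" field directly; login activity would come from Ambari's audit log. A sketch of listing users via the REST API (host and admin credentials are placeholders):

```shell
# Enumerate all Ambari users (the X-Requested-By header is required
# by Ambari for API calls):
curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  "http://<ambari-host>:8080/api/v1/users"
```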
Labels:
- Apache Ambari
05-31-2018
07:58 AM
What prerequisites should I complete before importing Ranger policies from a different cluster? For example, I have exported the Ranger policies, say clusterA_hdfs, from clusterA, and now I am importing this policy into clusterB as clusterB_hdfs. Is it required that all the users belonging to the clusterA Ranger policy for HDFS already exist? What could cause the import to fail — for example, if a policy ID or name is the same? Thanks
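For reference, a sketch of the export/import flow using the Ranger REST API, assuming these endpoints are available in your Ranger version (hosts, ports, and credentials below are placeholders):

```shell
# Export the policies of one service from clusterA as JSON:
curl -s -u admin:admin \
  "http://<clusterA-ranger>:6080/service/plugins/policies/exportJson?serviceName=clusterA_hdfs" \
  -o policies.json

# After adjusting the service name in the file (clusterA_hdfs -> clusterB_hdfs),
# import it into clusterB's Ranger:
curl -s -u admin:admin -X POST \
  "http://<clusterB-ranger>:6080/service/plugins/policies/importPoliciesFromFileJson" \
  -F 'file=@policies.json'
```

Regarding failures: an import generally needs the target service to exist, and users or groups referenced in the policies must be known to the target Ranger for the policy items to apply to them.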
Labels:
- Apache Hadoop
- Apache Ranger
04-27-2018
08:35 AM
Hi, I have created a partitioned table in the source cluster and loaded data into the table's HDFS location. With the help of distcp I am moving these files to the destination cluster and trying to fetch the records there, but I am not able to fetch any records — it shows me 0 rows. However, when I do the same with a non-partitioned table, I am able to fetch the data. Please find the commands below.

Source cluster:

create external table parti (id int, name string)
partitioned by (dept string)
row format delimited
fields terminated by ','
location '/part1';

load data local inpath '/tmp/hv1.txt' into table parti partition(dept='sat');
load data local inpath '/tmp/hv2.txt' into table parti partition(dept='jbp');

select * from parti where dept='sat';   -- able to get the records

Destination cluster:

create external table parti (id int, name string)
partitioned by (dept string)
row format delimited
fields terminated by ','
location '/part2';

distcp /part1 (source cluster) to / (destination cluster)
hdfs dfs -mv /part2 /part2_old
hdfs dfs -mv /part1 /part2

Now in the destination cluster I am trying to fetch the records:

select * from parti where dept='sat';   -- no records

Note: I face this issue only for partitioned tables; for non-partitioned external/managed tables I do not face this issue while doing the same activity of renaming the folder.
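One likely explanation: copying the partition directories with distcp does not register them in the destination metastore, so Hive sees no partitions and returns 0 rows even though the files exist. A sketch of the usual fix, run on the destination cluster (the host is a placeholder):

```shell
# Scan the table's location for partition directories (dept=sat, dept=jbp)
# and add the missing ones to the metastore:
beeline -u "jdbc:hive2://<host>:10000/default" \
  -e "MSCK REPAIR TABLE parti;"

# Alternatively, register a partition explicitly:
#   ALTER TABLE parti ADD PARTITION (dept='sat') LOCATION '/part2/dept=sat';
```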
Labels:
- Apache Hive