Member since: 04-16-2019
Posts: 373
Kudos Received: 7
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 23761 | 10-16-2018 11:27 AM |
| | 7819 | 09-29-2018 06:59 AM |
| | 1198 | 07-17-2018 08:44 AM |
| | 6643 | 04-18-2018 08:59 AM |
10-01-2018
12:09 PM
When we Kerberize a cluster from Ambari, keytabs are generated automatically for the users; we do not provide any password, but Ambari does. I want to know how Ambari does this. For example, if I have a user for whom I want to generate a keytab, I would do the following steps:

kadmin.local: addprinc user1@TEST.COM
WARNING: no policy specified for user1@TEST.COM; defaulting to no policy
Enter password for principal "user1@TEST.COM":
Re-enter password for principal "user1@TEST.COM":
Principal "user1@TEST.COM" created.

Here we provide the password ourselves, but when Ambari does the same for a service user like hdfs, what password does it set and how does it do it? Is there some script on the server that enables this?
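For comparison, a minimal sketch of how a principal and keytab can be created without an interactive password, using the standard kadmin randomize-key workflow. This is only an illustration, not a claim about Ambari's internal scripts, and the keytab path is a placeholder:

kadmin.local -q "addprinc -randkey user1@TEST.COM"                                # create the principal with a randomly generated key
kadmin.local -q "ktadd -k /etc/security/keytabs/user1.keytab user1@TEST.COM"      # export the key to a keytab; no password is ever typed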
Labels:
- Apache Ambari
09-29-2018
07:01 AM
Please follow the link below: http://hbase.apache.org/0.94/book/ops_mgt.html#copytable
09-29-2018
06:59 AM
1 Kudo
Dhiraj, there are many methods to achieve this, such as CopyTable, the Import/Export utility, and snapshots. I would prefer the snapshot method, but it only works if both clusters run the same HBase version. If the HBase versions of the two clusters differ, you can use the CopyTable method.

Snapshot method:

Step 1: Go to the HBase shell and take a snapshot of the table:
> hbase shell
> snapshot "SOURCE_TABLE_NAME","SNAPSHOT_TABLE_NAME"

Step 2: Export that snapshot to the other cluster:
> bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot SNAPSHOT_TABLE_NAME -copy-to hdfs://DESTINATION_CLUSTER_ACTIVE_NAMENODE_ADDRESS:8020/hbase -mappers 16

Step 3: Restore the table on the destination cluster:
> hbase shell
> disable "DEST_TABLENAME"
> restore_snapshot "SNAPSHOT_TABLE_NAME"
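For completeness, a hedged sketch of the CopyTable alternative mentioned above (the ZooKeeper quorum address and the table names are placeholders):

# copies rows from SOURCE_TABLE_NAME on the local cluster into DEST_TABLENAME on the destination cluster
bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --peer.adr=DEST_ZK_QUORUM:2181:/hbase \
    --new.name=DEST_TABLENAME \
    SOURCE_TABLE_NAME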
09-24-2018
06:50 AM
I am trying to run the distcp command on the secure cluster. My purpose is to move HDFS files from an insecure cluster to a secure cluster, but I am getting errors.

hadoop distcp -Dipc.client.fallback-to-simple-auth-allowed=true hdfs://<in-secure-hdfsnamenode>:8020/distsecure/f1.txt hdfs://<securenamenode>:8020/distdest/

java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1537527981132_0012 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: <insecurenamenode>:8020, Ident: (HDFS_DELEGATION_TOKEN token 0 for hdfs)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:317)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:193)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1537527981132_0012 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: <insecurehdfs>:8020, Ident: (HDFS_DELEGATION_TOKEN token 0 for hdfs)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:272)
at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:302)

I do not understand why the error log shows a failure to renew a token for the insecure cluster. I have also added ipc.client.fallback-to-simple-auth-allowed=true to the custom hdfs-site on the secure cluster.
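For reference, a hedged sketch of one command shape sometimes suggested for this secure-to-insecure distcp scenario. The host names and paths are the same placeholders used above, and the token-renewal exclusion property is an assumption about one possible workaround (telling the job not to request renewable delegation tokens from the insecure namenode), not a confirmed fix for this error; reading the source over webhdfs:// is another commonly used option.

hadoop distcp \
    -Dipc.client.fallback-to-simple-auth-allowed=true \
    -Dmapreduce.job.hdfs-servers.token-renewal.exclude=<in-secure-hdfsnamenode> \
    hdfs://<in-secure-hdfsnamenode>:8020/distsecure/f1.txt \
    hdfs://<securenamenode>:8020/distdest/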
Labels:
- Apache Hadoop
09-16-2018
03:41 PM
Thanks for your reply. I have already mentioned in the post that the value of hive.execution.engine is tez. Also, since it is taking more time, it definitely seems to be a resource issue. I am more curious about how best to tune Hive with the given configuration.
09-16-2018
08:59 AM
But why does it take so much time, even though this is the first insert? Is there something wrong with the memory tuning?
09-15-2018
06:57 AM
I have created a simple table and am trying to insert data, but it is taking too much time, even more than 5 minutes.

Create table command:

hive> create table poc(id int);
OK
Time taken: 1.578 seconds

But when I try to insert data it takes a very long time:
hive> insert into poc values(1);
Query ID = hive_20180915064819_83183eef-8dcc-463a-872e-fd8c58453af5
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1536988895253_0006)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 INITED 1 0 0 1 0 0
--------------------------------------------------------------------------------
VERTICES: 00/01 [>>--------------------------] 0% ELAPSED TIME: 159.96 s

Execution engine: Tez
Tez container size: 5120 MB
Memory allocated for all YARN containers on a node: 15 GB
Minimum container size (memory): 1024 MB
Maximum container size (memory): 15 GB
Single node cluster

Snapshot of the RM UI:

| Metric | Value |
|---|---|
| Apps Submitted | 7 |
| Apps Pending | 1 |
| Apps Running | 2 |
| Apps Completed | 4 |
| Containers Running | 2 |
| Memory Used | 6 GB |
| Memory Total | 15 GB |
| Memory Reserved | 0 B |
| VCores Used | 2 |
| VCores Total | 6 |
| VCores Reserved | 0 |
| Active Nodes | 1 |
| Decommissioned Nodes | 0 |
| Lost Nodes | 0 |
| Unhealthy Nodes | 0 |
| Rebooted Nodes | 0 |
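As a point of reference, a hedged sketch of the session-level settings that are often inspected when a small Tez query sits at 0% on a small node. The values are illustrative placeholders for a 15 GB node, not a tuning recommendation for this cluster:

hive> set hive.execution.engine=tez;
hive> set hive.tez.container.size=2048;        -- smaller containers so more of them fit into the 15 GB node
hive> set hive.tez.java.opts=-Xmx1638m;        -- task heap roughly 80% of the container size
hive> set tez.am.resource.memory.mb=1024;      -- keep the Tez application master small on a single-node cluster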
Labels:
- Apache Hive
- Apache Tez
- Apache YARN
09-03-2018
08:54 AM
1 Kudo
It started working; I had forgotten to start the ambari-agent.
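For reference, a minimal sketch of the commands involved, run on the affected node (standard ambari-agent service commands, shown here only as an illustration):

ambari-agent status    # check whether the agent process is running
ambari-agent start     # start the agent so the host can report back to ambari-server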
09-03-2018
08:03 AM
@Jay Kumar SenSharma Thanks for the reply! /usr/bin/hdp-select is working now after changing the Python version, but the services installed on the ambari-server node are still not starting.

For Zeppelin:

2018-09-03 09:58:17,713 - Could not determine stack version for component zeppelin-server by calling '/usr/bin/hdp-select status zeppelin-server > /tmp/tmps5ZaqL'. Return Code: 1, Output: .
2018-09-03 09:58:17,754 - The 'zeppelin-server' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.5.0.0-1245). This is the version that will be reported.
2018-09-03 09:58:17,902 - Could not determine stack version for component zeppelin-server by calling '/usr/bin/hdp-select status zeppelin-server > /tmp/tmpMyhYQS'. Return Code: 1, Output: .
2018-09-03 09:58:17,937 - The 'zeppelin-server' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.5.0.0-1245). This is the version that will be reported.

For HBase Phoenix:

2018-09-03 09:55:49,747 - Could not determine stack version for component phoenix-server by calling '/usr/bin/hdp-select status phoenix-server > /tmp/tmp2TCR7n'. Return Code: 1, Output: .
2018-09-03 09:55:49,785 - The 'phoenix-server' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.5.0.0-1245). This is the version that will be reported.
2018-09-03 09:55:49,915 - Could not determine stack version for component phoenix-server by calling '/usr/bin/hdp-select status phoenix-server > /tmp/tmpekQ6yI'. Return Code: 1, Output: .
2018-09-03 09:55:49,953 - The 'phoenix-server' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.5.0.0-1245). This is the version that will be reported.

The output of /usr/bin/hdp-select status phoenix-server is: phoenix-server - 2.5.0.0-1245. However, /usr/hdp/current/phoenix-server is pointing to /usr/hdp/2.5.0.0-1245/phoenix/bin.
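For reference, a minimal sketch of the checks one might run on the ambari-server node to compare what hdp-select and the symlinks report (standard hdp-select invocations, shown as an illustration rather than a confirmed fix):

/usr/bin/hdp-select status phoenix-server    # expected: phoenix-server - 2.5.0.0-1245
/usr/bin/hdp-select versions                 # list the HDP versions hdp-select knows about
ls -l /usr/hdp/current/phoenix-server        # confirm where the 'current' symlink points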
09-03-2018
07:28 AM
On the other nodes hdp-select is working; it is only on the ambari-server node that it is causing an issue.