Member since: 04-05-2016
Posts: 18
Kudos Received: 4
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 6770 | 07-05-2016 07:47 PM |
08-12-2019 02:31 PM
The MapReduce job created by the Sqoop task is stuck at 0%. Any ideas what I am doing wrong? The application master log follows:
2019-08-12 07:12:20,253 INFO [Socket Reader #1 for port 37566] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 37566
2019-08-12 07:12:20,256 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-08-12 07:12:20,257 INFO [IPC Server listener on 37566] org.apache.hadoop.ipc.Server: IPC Server listener on 37566: starting
2019-08-12 07:12:20,281 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2019-08-12 07:12:20,281 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2019-08-12 07:12:20,281 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2019-08-12 07:12:20,284 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: 0% of the mappers will be scheduled using OPPORTUNISTIC containers
2019-08-12 07:12:20,311 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at vm-cloudera-6x.dag.com/192.168.42.44:8030
2019-08-12 07:12:20,468 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: maxContainerCapability: <memory:8192, vCores:2>
2019-08-12 07:12:20,468 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: queue: root.users.cloudera
2019-08-12 07:12:20,473 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2019-08-12 07:12:20,473 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2019-08-12 07:12:20,497 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1565618944746_0001Job Transitioned from INITED to SETUP
2019-08-12 07:12:20,522 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2019-08-12 07:12:20,575 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1565618944746_0001Job Transitioned from SETUP to RUNNING
2019-08-12 07:12:20,605 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1565618944746_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2019-08-12 07:12:20,607 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1565618944746_0001_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2019-08-12 07:12:20,619 INFO [Thread-58] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:4096, vCores:2>
2019-08-12 07:12:20,627 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1565618944746_0001, File: hdfs://vm-cloudera-6x.dag.com:8020/user/cloudera/.staging/job_1565618944746_0001/job_1565618944746_0001_1.jhist
2019-08-12 07:12:21,476 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2019-08-12 07:12:21,542 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1565618944746_0001: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:5120, vCores:1> knownNMs=1
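A hedged reading of the log above: the allocator reports resourcelimit=<memory:5120, vCores:1> while the map request is <memory:4096, vCores:2>, and there is only one NodeManager (knownNMs=1). After the ApplicationMaster container is placed, the queue has just one vCore of headroom, so a two-vCore map container can never be allocated and the job sits at 0%. Two possible fixes, assuming a small single-node cluster: shrink the per-map request, or raise yarn.nodemanager.resource.cpu-vcores on the NodeManager. A sketch of the first option on the Sqoop command line (the connection details are placeholders, not taken from the thread):

sqoop import \
    -Dmapreduce.map.cpu.vcores=1 \
    -Dmapreduce.map.memory.mb=2048 \
    --connect jdbc:mysql://db-host/mydb --table mytable --target-dir /user/cloudera/mytable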
Labels: Apache Sqoop
08-12-2019 11:52 AM
Thanks.
08-12-2019 10:36 AM
Can't open /var/run/cloudera-scm-agent/process/67-hdfs-NAMENODE-format/supervisor_status: Permission denied.
+ make_scripts_executable
+ find /var/run/cloudera-scm-agent/process/67-hdfs-NAMENODE-format -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ '[' mkdir '!=' format-namenode ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ KERBEROS_PRINCIPAL=
+ '[' '!' -z '' ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = format-namenode ']'
+ '[' file-operation = format-namenode ']'
+ '[' bootstrap = format-namenode ']'
+ '[' failover = format-namenode ']'
+ '[' transition-to-active = format-namenode ']'
+ '[' initializeSharedEdits = format-namenode ']'
+ '[' initialize-znode = format-namenode ']'
+ '[' format-namenode = format-namenode ']'
+ '[' -z /dfs/nn ']'
+ for dfsdir in '$DFS_STORAGE_DIRS'
+ '[' -e /dfs/nn ']'
+ '[' '!' -d /dfs/nn ']'
+ CLUSTER_ARGS=
+ '[' 2 -eq 2 ']'
+ CLUSTER_ARGS='-clusterId cluster19'
+ '[' 3 = 6 ']'
+ '[' -3 = 6 ']'
+ exec /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/67-hdfs-NAMENODE-format namenode -format -clusterId cluster19 -nonInteractive
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: HADOOP_NAMENODE_OPTS has been replaced by HDFS_NAMENODE_OPTS. Using value of HADOOP_NAMENODE_OPTS.
Running in non-interactive mode, and data appears to exist in Storage Directory /dfs/nn. Not formatting.
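The last line is the actual failure: the format step refuses to run because /dfs/nn already contains data, most likely from an earlier failed attempt. A hedged recovery sketch for a fresh cluster with no namespace worth keeping (destructive, so verify first):

ls /dfs/nn/current                 # inspect what is already there
sudo rm -rf /dfs/nn/current        # erases the existing namespace metadata
# then re-run "Format NameNode" from Cloudera Manager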
08-08-2019 05:53 PM
The HDFS installation fails with the following error while installing CDH 6.2:
Can't open /var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE-format/supervisor_status:Permission denied
What would cause this error?
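A hedged first diagnostic, not a confirmed fix: check whether the process user can actually read the file the script complains about, and who owns the agent's process directory. Ownership or umask problems under /var/run/cloudera-scm-agent/process are one plausible cause.

ls -ld /var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE-format
ls -l  /var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE-format/supervisor_status
sudo -u hdfs cat /var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE-format/supervisor_status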
07-05-2016 07:47 PM
The code below worked. @Shishir Saxena

package hadoop.test;

import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hive.hcatalog.api.HCatClient;
import org.apache.hive.hcatalog.api.HCatTable;
import org.apache.hive.hcatalog.common.HCatConstants;
import org.apache.hive.hcatalog.common.HCatException;
import org.apache.hive.hcatalog.data.schema.HCatFieldSchema;
import org.apache.hive.hcatalog.data.schema.HCatSchema;

public class ListDBs1 {

    public static void main(String[] args) {
        HCatClient hcatClient = null;
        try {
            String principal = "hive/quickstart.cloudera@XXX.COM";
            String keytab = "E:\\apps\\metacenter_home\\hadoop\\hive.keytab";
            System.setProperty("sun.security.krb5.debug", "true");
            System.setProperty("java.security.krb5.conf", "E:\\apps\\hadoop\\krb5.conf");
            System.setProperty("java.security.auth.login.config", "E:\\apps\\hadoop\\jaas.conf");
            HiveConf hcatConf = new HiveConf();
            hcatConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://server:9083");
            hcatConf.set("hadoop.security.authentication", "kerberos");
            hcatConf.set(HCatConstants.HCAT_HIVE_CLIENT_DISABLE_CACHE, "true");
            hcatConf.setVar(HiveConf.ConfVars.METASTORE_KERBEROS_PRINCIPAL, principal);
            hcatConf.setVar(HiveConf.ConfVars.METASTORE_KERBEROS_KEYTAB_FILE, keytab);
            hcatConf.setVar(HiveConf.ConfVars.METASTORE_USE_THRIFT_SASL, "true");
            // log in from the keytab before creating any metastore clients
            UserGroupInformation.setConfiguration(hcatConf);
            UserGroupInformation.loginUserFromKeytab(principal, keytab);
            hcatClient = HCatClient.create(new Configuration(hcatConf));
            HiveMetaStoreClient hiveMetastoreClient = new HiveMetaStoreClient(hcatConf);
            list(hcatClient, hiveMetastoreClient);
        } catch (Throwable t) {
            t.printStackTrace();
        } finally {
            if (hcatClient != null) {
                try {
                    hcatClient.close();
                } catch (HCatException e) {
                }
            }
        }
    }

    private static void list(HCatClient hcatClient, HiveMetaStoreClient hiveMetastoreClient) throws Exception {
        List<String> dbs = hcatClient.listDatabaseNamesByPattern("*");
        for (String db : dbs) {
            System.out.println(db);
            List<String> tables = hcatClient.listTableNamesByPattern(db, "*");
            for (String tableString : tables) {
                HCatTable tbl = hcatClient.getTable(db, tableString);
                String tableType = tbl.getTabletype();
                String tableName = tbl.getTableName();
                System.out.println(tableType + " - " + tableName);
                System.out.println("Table Name is: " + tableName);
                System.out.println("Table Type is: " + tbl.getTabletype());
                System.out.println("Table Props are: " + tbl.getTblProps());
                List<HCatFieldSchema> fields = tbl.getCols();
                for (HCatFieldSchema f : fields) {
                    System.out.println("Field Name is: " + f.getName());
                    System.out.println("Field Type String is: " + f.getTypeString());
                    System.out.println("Field Type Category is: " + f.getCategory());
                    if (f.getCategory().equals(HCatFieldSchema.Category.STRUCT)) {
                        HCatSchema schema = f.getStructSubSchema();
                        List<String> structFields = schema.getFieldNames();
                        for (String fieldName : structFields) {
                            System.out.println("Struct Field Name is: " + fieldName);
                        }
                    }
                }
                if (tableType.equalsIgnoreCase("View") || tableType.equalsIgnoreCase("VIRTUAL_VIEW")) {
                    org.apache.hadoop.hive.metastore.api.Table viewMetastoreObject = hiveMetastoreClient.getTable(db, tableName);
                    String sql = viewMetastoreObject.getViewOriginalText();
                    System.out.println(sql);
                }
            }
        }
    }
}
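The code sets java.security.auth.login.config, so a JAAS file is expected at E:\apps\hadoop\jaas.conf. For completeness, a minimal sketch of what such a file might contain, assuming the same principal and keytab as above (the entry name and options are the usual JGSS defaults, not taken from this thread):

com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    doNotPrompt=true
    keyTab="E:/apps/metacenter_home/hadoop/hive.keytab"
    principal="hive/quickstart.cloudera@XXX.COM";
};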
03-22-2016 04:38 PM
Current error:

12:14:39,073 ERROR TSaslTransport:296 - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:336)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:214)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:154)
at org.apache.hive.hcatalog.common.HiveClientCache.getNonCachedHiveClient(HiveClientCache.java:80)
at org.apache.hive.hcatalog.common.HCatUtil.getHiveClient(HCatUtil.java:557)
at org.apache.hive.hcatalog.api.HCatClientHMSImpl.initialize(HCatClientHMSImpl.java:595)
at org.apache.hive.hcatalog.api.HCatClient.create(HCatClient.java:66)
at .....
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
... 23 more
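Before digging into the JVM side, it can help to prove the keytab works at all. A hedged check, assuming MIT Kerberos tools are installed on the Windows client, using the principal and keytab that appear in the code elsewhere in this thread:

REM list the principals stored in the keytab, then obtain and verify a TGT
klist -kt E:\apps\metacenter_home\hadoop\hive.keytab
kinit -kt E:\apps\metacenter_home\hadoop\hive.keytab hive/quickstart.cloudera@XXX.COM
klist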
03-21-2016 08:54 PM
@Shishir Saxena Do I keep the original properties?

package com.dag.mc.biz.activelinx.emf.snapshot.hadoop;
//import javax.jdo.JDOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hive.hcatalog.api.HCatClient;
import org.apache.hive.hcatalog.api.HCatTable;
import org.apache.hive.hcatalog.common.HCatConstants;
import org.apache.hive.hcatalog.common.HCatException;
public class ListDBs {
/**
* @param args
*/
public static void main(String[] args) {
HCatClient hcatClient = null;
try {
String principal = "hive/_HOST@EXAMPLE.COM";
String keytab = "<keytab location>";
HiveConf hcatConf = new HiveConf();
hcatConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://192.168.42.154:9083");
hcatConf.set("hadoop.security.authentication", "Kerberos");
hcatConf.set(HCatConstants.HCAT_HIVE_CLIENT_DISABLE_CACHE, "true");
hcatConf.addResource(new Path("c:/temp/hive-site.xml"));
hcatConf.setVar(HiveConf.ConfVars.METASTORE_KERBEROS_PRINCIPAL, principal);
hcatConf.setVar(HiveConf.ConfVars.METASTORE_KERBEROS_KEYTAB_FILE, keytab);
hcatConf.setVar(HiveConf.ConfVars.METASTORE_USE_THRIFT_SASL, "true");
hcatClient = HCatClient.create(new Configuration(hcatConf));
UserGroupInformation.setConfiguration(hcatConf);
UserGroupInformation.loginUserFromKeytab(principal, keytab);
HiveMetaStoreClient hiveMetastoreClient = new HiveMetaStoreClient(hcatConf);
List<String> dbs = hcatClient.listDatabaseNamesByPattern("*");
for (String db : dbs) {
System.out.println(db);
List<String> tables = hcatClient.listTableNamesByPattern(db, "*");
for (String tableString: tables) {
HCatTable tbl = hcatClient.getTable(db, tableString);
String tableType = tbl.getTabletype();
String tableName = tbl.getTableName();
if (tableType.equalsIgnoreCase("View")) {
org.apache.hadoop.hive.metastore.api.Table viewMetastoreObject = hiveMetastoreClient.getTable(db, tableName);
String sql = viewMetastoreObject.getViewOriginalText();
System.out.println(sql);
}
}
}
} catch (Throwable t) {
t.printStackTrace();
} finally {
if (hcatClient != null)
try {
hcatClient.close();
} catch (HCatException e) {
}
}
}
}
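Comparing this with the working version in the 07-05-2016 post above, one difference stands out: here HCatClient.create() runs before UserGroupInformation.setConfiguration()/loginUserFromKeytab(), so the first metastore connection is attempted without the keytab login in place. A minimal reordering sketch, using the same names as above:

// log in from the keytab first, then create the clients
UserGroupInformation.setConfiguration(hcatConf);
UserGroupInformation.loginUserFromKeytab(principal, keytab);
hcatClient = HCatClient.create(new Configuration(hcatConf));
HiveMetaStoreClient hiveMetastoreClient = new HiveMetaStoreClient(hcatConf);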
03-16-2016 07:42 PM
I am running my program from a Windows machine. I used -Djava.security.auth.login.config="path-to-jaas-file" -Djava.security.krb5.conf="path-to-krb5.ini"

SEVERE: Error creating Hive objects: Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:221)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:297)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:336)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:214)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:154)

Error in hivemetastore.log:

2016-03-16 13:31:09,808 ERROR [pool-5-thread-200]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
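For reference, a hedged sketch of how those -D flags sit on the launch command on Windows; the classpath and file paths are placeholders, not taken from the thread:

java -Djava.security.auth.login.config="path-to-jaas-file" ^
     -Djava.security.krb5.conf="path-to-krb5.ini" ^
     -cp "path-to-app-and-hive-client-jars" com.dag.mc.biz.activelinx.emf.snapshot.hadoop.ListDBs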
03-14-2016 09:14 PM
We are currently getting this error...

16:28:11,820 INFO metastore:297 - Trying to connect to metastore with URI thrift://192.168.42.154:9083
16:28:11,851 ERROR TSaslTransport:296 - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:336)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:214)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:154)
......
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)