Member since: 12-16-2018
Posts: 23
Kudos Received: 0
Solutions: 0
06-21-2019
12:58 PM
Hi all, I am facing an issue (logs attached: logs.txt): when we kill the YARN job, the Spark2 Thrift Server goes down. @Jay Kumar SenSharma, @Geoffrey Shelton Okot
ERROR TransportClient: Failed to send RPC 6490982396871519029 to /10.237.14.26:56156: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
Please help. Regards, Vishal Bohra
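One thing worth checking (a diagnostic sketch, assuming the Thrift Server runs as its own YARN application, as it normally does on HDP): whether the job being killed is actually the Thrift Server's own application, in which case the server going down is expected. The stock YARN CLI can confirm this:

yarn application -list -appStates RUNNING | grep -i thrift   # find the Thrift Server's application id
yarn application -status <application_id>                    # check the state of the application that was killed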
06-21-2019
07:25 AM
@Sergey Sheypak Did you find a solution to the above? I am facing the same issue.
06-11-2019
12:35 PM
Hi All, @Jay Kumar SenSharma, @Geoffrey Shelton Okot, @Chiran Ravani please help with this; I have been struggling with this issue for a long time. I am facing an issue while setting up the Hive ODBC DSN connection with Kerberos. I am able to ping the server from the developer desktop, and we have the MIT Kerberos ticket manager on the desktop to obtain tickets. hive.server2.transport.mode is binary, HS2 is running on port 10001, and hive.server2.authentication is Kerberos. We are getting the below error (logs attached: hive_logs.txt):
[Hortonworks][DriverSupport] (1170) Unexpected response received from server. Please ensure the server host and port specified for the connection are correct.
Thanks, Vishal Bohra
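Before retesting the DSN, it may help to confirm that HS2 really answers a Kerberized binary-transport connection on port 10001, for example with beeline from a cluster node (a sketch; the host is a placeholder and the principal must match the hive.server2 Kerberos principal):

kinit <user>@production.local
beeline -u "jdbc:hive2://<hs2-host>:10001/default;principal=hive/_HOST@production.local"

If beeline connects but the ODBC driver does not, the DSN's transport mode, port, or service principal setting is the likely mismatch.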
06-10-2019
01:33 PM
@Jay Kumar SenSharma The Spark2 Thrift Server now starts, but I am getting the errors below in the spark2 logs (logs attached: spark2_logs.txt):
java.lang.RuntimeException: Could not load shims in class org.apache.hadoop.hive.schshim.FairSchedulerShim
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.schshim.FairSchedulerShim
If I follow the link below, I end up back in the same situation as before. Please help.
https://community.hortonworks.com/content/supportkb/150164/errorjavalangruntimeexception-could-not-load-shims.html
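A quick way to narrow this down (purely a diagnostic sketch): check whether any jar on the Spark2 classpath actually bundles the missing class; in stock Hive it lives in the hive-shims-scheduler artifact, which may simply be absent from spark2/jars:

# print every jar under spark2/jars that contains FairSchedulerShim
for j in /usr/hdp/2.6.5.0-292/spark2/jars/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'FairSchedulerShim' && echo "$j"
done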
06-10-2019
10:50 AM
@Jay Kumar SenSharma Thanks for the reply. hive-exec-1.21.2.2.6.5.0-292.jar is present at the specified location. Could you please suggest the next step?
lhdcsi02v spark2]# locate hive-exec | grep jar
/u01/tmp/hadoop-unjar9078871880138055540/META-INF/maven/org.apache.hive/hive-exec
/u01/tmp/hadoop-unjar9078871880138055540/META-INF/maven/org.apache.hive/hive-exec/pom.properties
/u01/tmp/hadoop-unjar9078871880138055540/META-INF/maven/org.apache.hive/hive-exec/pom.xml
/usr/hdp/2.6.5.0-292/hive/lib/hive-exec-1.2.1000.2.6.5.0-292.jar
/usr/hdp/2.6.5.0-292/hive/lib/hive-exec.jar
/usr/hdp/2.6.5.0-292/hive2/lib/hive-exec-2.1.0.2.6.5.0-292.jar
/usr/hdp/2.6.5.0-292/hive2/lib/hive-exec.jar
/usr/hdp/2.6.5.0-292/oozie/oozie-server/webapps/oozie/WEB-INF/lib/hive-exec-1.2.1000.2.6.5.0-292.jar
/usr/hdp/2.6.5.0-292/pig/lib/hive-exec-1.2.1000.2.6.5.0-292-core.jar
/usr/hdp/2.6.5.0-292/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/hive/hive-exec-1.2.1000.2.6.5.0-292.jar
/usr/hdp/2.6.5.0-292/spark2/jars/hive-exec-1.21.2.2.6.5.0-292.jar
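One possible next step (a diagnostic sketch): dump the signatures of HadoopThriftAuthBridge$Server from the jar Spark actually ships, to see whether it exposes the startDelegationTokenSecretManager variant the Thrift Server fails on in the original post below:

javap -cp /usr/hdp/2.6.5.0-292/spark2/jars/hive-exec-1.21.2.2.6.5.0-292.jar \
  'org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server' | grep startDelegationTokenSecretManager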
06-08-2019
02:00 PM
Hi All, when we start the Spark2 Thrift Server, it stays up for a short time (about 30 seconds) and then fails. I have attached the spark2 logs.
19/06/07 11:22:16 INFO HiveThriftServer2: HiveThriftServer2 started
19/06/07 11:22:16 INFO UserGroupInformation: Login successful for user hive/lhdcsi02v.production.local@production.local using keytab file /etc/security/keytabs/hive.service.keytab
19/06/07 11:22:16 ERROR ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
java.lang.NoSuchMethodError: org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server.startDelegationTokenSecretManager(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/Object;Lorg/apache/hadoop/hive/thrift/HadoopThriftAuthBridge$Server$ServerMode;)V
at org.apache.hive.service.auth.HiveAuthFactory.<init>(HiveAuthFactory.java:125)
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:57)
at java.lang.Thread.run(Thread.java:748)
19/06/07 11:22:16 INFO HiveServer2: Shutting down HiveServer2
19/06/07 11:22:16 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
19/06/07 11:22:16 INFO AbstractService: Service:OperationManager is stopped.
19/06/07 11:22:16 INFO AbstractService: Service:SessionManager is stopped.
19/06/07 11:22:16 INFO SparkUI: Stopped Spark web UI at http://lhdcsi02v.production.local:4041
19/06/07 11:22:26 WARN ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
19/06/07 11:22:26 ERROR Utils: Uncaught exception in thread pool-1-thread-1
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.spark.scheduler.AsyncEventQueue.stop(AsyncEventQueue.scala:133)
at org.apache.spark.scheduler.LiveListenerBus$$anonfun$stop$1.apply(LiveListenerBus.scala:219)
at org.apache.spark.scheduler.LiveListenerBus$$anonfun$stop$1.apply(LiveListenerBus.scala:219)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.scheduler.LiveListenerBus.stop(LiveListenerBus.scala:219)
at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1922)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1357)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1921)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.stop(SparkSQLEnv.scala:66)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$$anonfun$main$1.apply$mcV$sp(HiveThriftServer2.scala:82)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1988)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19/06/07 11:22:26 INFO AbstractService: Service:CLIService is stopped.
19/06/07 11:22:26 INFO AbstractService: Service:HiveServer2 is stopped.
@Jay Kumar SenSharma, @Geoffrey Shelton Okot, @Neeraj Sabharwal, @Akhil S Naik Please help. Attachments: yarn-site.xml.txt, spark2_logs.txt
Thanks, Vishal Bohra
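A NoSuchMethodError at runtime usually means two different copies of the same class sit on the classpath and an incompatible one wins. It may be worth listing every jar under spark2/jars that bundles HadoopThriftAuthBridge (a diagnostic sketch, not a fix):

for j in /usr/hdp/2.6.5.0-292/spark2/jars/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'hive/thrift/HadoopThriftAuthBridge' && echo "$j"
done

If more than one jar prints, the odd one out is the usual suspect.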
05-13-2019
10:32 AM
@Chiran Ravani, @Jay Kumar SenSharma, @Geoffrey Shelton Okot, @Vipin Rathor, @Neeraj Sabharwal I am also facing an issue while setting up the Hive ODBC DSN connection. Error (logs attached: logs.txt):
[Hortonworks][DriverSupport] (1170) Unexpected response received from server. Please ensure the server host and port specified for the connection are correct.
The Ambari host is SSL-enabled, and we are using MIT Kerberos. HDP 2.6.5, Hive ODBC DSN 64-bit. Please help.
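Before digging into the driver, it may be worth confirming the desktop actually holds a valid TGT, since error 1170 also shows up when the Kerberos handshake never happens at all (a sketch; the realm is taken from the cluster's krb5.conf):

klist                           # should list a non-expired krbtgt ticket for production.local
kinit <user>@production.local   # re-acquire one if the cache is empty or expired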
04-25-2019
12:50 PM
Hi All, I am facing an issue while running any program in Zeppelin. Zeppelin 0.7.3, HDP 2.6.
%pyspark
print sc.version
java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:348)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:337)
at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:142)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:790)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.spark.PySparkInterpreter.getSparkInterpreter(PySparkInterpreter.java:567)
at org.apache.zeppelin.spark.PySparkInterpreter.createGatewayServerAndStartScript(PySparkInterpreter.java:210)
at org.apache.zeppelin.spark.PySparkInterpreter.open(PySparkInterpreter.java:163)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:493)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Please assist. @Jay Kumar SenSharma, @Akhil S Naik, @mrizvi, @geoffery and @Ankit Singhal
Attachments: zeppelin_logs.txt, zeppelin_env.sh.txt
Regards, Vishal Bohra
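This NullPointerException in createSparkContext usually means the interpreter never obtained a working Spark context. As a first check (a sketch; the exact log file names vary by install), it can help to confirm SPARK_HOME in zeppelin-env.sh and read the Spark interpreter's own log rather than the Zeppelin server log:

grep -n 'SPARK_HOME' /etc/zeppelin/conf/zeppelin-env.sh
tail -n 100 /var/log/zeppelin/zeppelin-interpreter-spark*.log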
03-29-2019
06:26 PM
Hi All, I am getting this error while enabling Kerberos from the UI. Please assist.
stderr:
2019-03-29 13:15:42,565 - Failed to create principal, pruuk_cluster-032919@production.local - Failed to create service principal for pruuk_cluster-032919@production.local
STDOUT: Authenticating as principal admin/admin@production.local with password.
Password for admin/admin@production.local:
Enter password for principal "pruuk_cluster-032919@production.local":
Re-enter password for principal "pruuk_cluster-032919@production.local":
STDERR: WARNING: no policy specified for pruuk_cluster-032919@production.local; defaulting to no policy
add_principal: Operation requires ``add'' privilege while creating "pruuk_cluster-032919@production.local".
stdout:
2019-03-29 13:15:42,348 - Processing identities...
2019-03-29 13:15:42,511 - Processing principal, pruuk_cluster-032919@production.local
Steps done:
[root@lhdcsi02v ~]# /usr/sbin/kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@production.local with password.
WARNING: no policy specified for admin/admin@production.local; defaulting to no policy
Enter password for principal "admin/admin@production.local":
Re-enter password for principal "admin/admin@production.local":
Principal "admin/admin@production.local" created.
[root@lhdcsi02v ~]#
[x1224789@lhdcsi02v ~]$ cat /etc/krb5.conf
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = production.local
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
production.local = production.local
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
production.local = {
admin_server = lhdcsi02v.production.local
kdc = lhdcsi02v.production.local
}
[x1224789@lhdcsi02v ~]$ cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
renew_lifetime = 7d
[realms]
production.local = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
[x1224789@lhdcsi02v ~]$ kdb5_util create -s
[x1224789@lhdcsi02v ~]$ cat /var/kerberos/krb5kdc/kadm5.acl
#*/admin@EXAMPLE.COM *
/admin@lhdcsi02v.production.local
#*x1224789/admin@lhdcsi02v.production.local*
Regards, Vishal Bohra
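The "Operation requires ``add'' privilege" message points at kadm5.acl: the admin/admin principal that Ambari authenticates as is not matched by any ACL entry, and the entries shown above reference the KDC hostname rather than the realm. A minimal ACL for the production.local realm could look like this (a sketch; kadmind must be restarted after editing):

# /var/kerberos/krb5kdc/kadm5.acl
*/admin@production.local    *

systemctl restart krb5kdc kadmin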
03-22-2019
10:18 PM
Hi All, I am getting the below error while running a Hive query.
0: jdbc:hive2://lhdcsi03v.production.local:21 > select count(*) from ff_ops_mi_sal_journey_data_out;
INFO : Number of reduce tasks determined at compile time: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
INFO : number of splits:1
INFO : Submitting tokens for job: job_1553148534656_0004
INFO : The url to track the job: http://lhdcsi04v.production.local:8088/proxy/application_1553148534656_0004/
INFO : Starting Job = job_1553148534656_0004, Tracking URL = http://lhdcsi04v.production.local:8088/proxy/application_1553148534656_0004/
INFO : Kill Command = /usr/hdp/2.6.5.0-292/hadoop/bin/hadoop job -kill job_1553148534656_0004
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2019-03-21 09:52:09,621 Stage-1 map = 0%, reduce = 0%
INFO : 2019-03-21 09:52:26,019 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.82 sec
INFO : 2019-03-21 09:53:26,419 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 6.81 sec
INFO : 2019-03-21 09:53:49,207 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 7.54 sec
INFO : 2019-03-21 09:53:54,661 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.82 sec
INFO : 2019-03-21 09:54:55,575 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 7.56 sec
INFO : 2019-03-21 09:55:11,821 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 8.12 sec
INFO : 2019-03-21 09:55:17,259 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.82 sec
INFO : 2019-03-21 09:56:17,416 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 7.49 sec
INFO : 2019-03-21 09:56:29,207 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 7.76 sec
INFO : 2019-03-21 09:56:34,514 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.82 sec
INFO : 2019-03-21 09:57:35,146 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 7.67 sec
INFO : 2019-03-21 09:57:47,033 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 8.05 sec
INFO : MapReduce Total cumulative CPU time: 8 seconds 50 msec
ERROR : Ended Job = job_1553148534656_0004 with errors
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
PFB the Hive configuration files.
[hadoop@lhdcsi02v ~]$ cat /etc/spark/2.6.5.0-292/0/hive-site.xml <configuration> <property> <name>hive.metastore.client.connect.retry.delay</name> <value>5</value> </property> <property> <name>hive.metastore.client.socket.timeout</name> <value>1800</value> </property> <property> <name>hive.metastore.uris</name> <value> thrift://lhdcsi02v.production.local:9083 </value> </property> <property> <name>hive.server2.enable.doAs</name> <value>false</value> </property> <property> <name>hive.server2.thrift.port</name> <value>10015</value> </property> <property> <name>hive.server2.transport.mode</name> <value>binary</value> </property> </configuration>[hadoop@lhdcsi02v ~]$ [hadoop@lhdcsi02v ~]$ cat /etc/hive/2.6.5.0-292/0/hive-site.xml <configuration> <property> <name>ambari.hive.db.schema.name</name> <value>hive</value> </property> <property> <name>atlas.hook.hive.maxThreads</name> <value>1</value> </property> <property> <name>atlas.hook.hive.minThreads</name> <value>1</value> </property> <property> <name>datanucleus.autoCreateSchema</name> <value>false</value> </property> <property> <name>datanucleus.cache.level2.type</name> <value>none</value> </property> <property> <name>datanucleus.fixedDatastore</name> <value>true</value> </property> <property> <name>hive.auto.convert.join</name> <value>true</value> </property> <property> <name>hive.auto.convert.join.noconditionaltask</name> <value>true</value> </property> <property> <name>hive.auto.convert.join.noconditionaltask.size</name> <value>858993459</value> </property> <property> <name>hive.auto.convert.sortmerge.join</name> <value>false</value> </property> <property> <name>hive.auto.convert.sortmerge.join.to.mapjoin</name> <value>false</value> </property> <property> <name>hive.cbo.enable</name> <value>true</value> </property> <property> <name>hive.cli.print.header</name> <value>false</value> </property> <property> <name>hive.cluster.delegation.token.store.class</name> <value>org.apache.hadoop.hive.thrift.ZooKeeperTokenStore</value> </property> <property> <name>hive.cluster.delegation.token.store.zookeeper.connectString</name> <value> lhdcsi03v.production.local:2181,lhdcsi02v.production.local:2181,lhdcsi04v.production.local:2181 </value> </property> <property> <name>hive.cluster.delegation.token.store.zookeeper.znode</name> <value>/hive/cluster/delegation</value> </property> <property> <name>hive.compactor.abortedtxn.threshold</name> <value>1000</value> </property> <property> <name>hive.compactor.check.interval</name> <value>300L</value> </property> <property> <name>hive.compactor.delta.num.threshold</name> <value>10</value> </property> <property> <name>hive.compactor.delta.pct.threshold</name> <value>0.1f</value> </property> <property> <name>hive.compactor.initiator.on</name> <value>true</value> </property> <property> <name>hive.compactor.worker.threads</name> <value>1</value> </property> <property> <name>hive.compactor.worker.timeout</name> <value>86400L</value> </property> <property> <name>hive.compute.query.using.stats</name> <value>true</value> </property> <property> <name>hive.conf.restricted.list</name> <value>hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role</value> </property> <property> <name>hive.convert.join.bucket.mapjoin.tez</name> <value>false</value> </property> <property> <name>hive.default.fileformat</name> <value>TextFile</value> </property> <property> <name>hive.default.fileformat.managed</name> <value>TextFile</value> </property> <property> <name>hive.enforce.bucketing</name> 
<value>true</value> </property> <property> <name>hive.enforce.sorting</name> <value>true</value> </property> <property> <name>hive.enforce.sortmergebucketmapjoin</name> <value>true</value> </property> <property> <name>hive.exec.compress.intermediate</name> <value>false</value> </property> <property> <name>hive.exec.compress.output</name> <value>false</value> </property> <property> <name>hive.exec.dynamic.partition</name> <value>true</value> </property> <property> <name>hive.exec.dynamic.partition.mode</name> <value>nonstrict</value> </property> <property> <name>hive.exec.failure.hooks</name> <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value> </property> <property> <name>hive.exec.max.created.files</name> <value>100000</value> </property> <property> <name>hive.exec.max.dynamic.partitions</name> <value>10000</value> </property> <property> <name>hive.exec.max.dynamic.partitions.pernode</name> <value>1000</value> </property> <property> <name>hive.exec.orc.compression.strategy</name> <value>SPEED</value> </property> <property> <name>hive.exec.orc.default.compress</name> <value>SNAPPY</value> </property> <property> <name>hive.exec.orc.default.stripe.size</name> <value>67108864</value> </property> <property> <name>hive.exec.orc.encoding.strategy</name> <value>SPEED</value> </property> <property> <name>hive.exec.parallel</name> <value>false</value> </property> <property> <name>hive.exec.parallel.thread.number</name> <value>8</value> </property> <property> <name>hive.exec.post.hooks</name> <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value> </property> <property> <name>hive.exec.pre.hooks</name> <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value> </property> <property> <name>hive.exec.reducers.bytes.per.reducer</name> <value>550397542</value> </property> <property> <name>hive.exec.reducers.max</name> <value>1009</value> </property> <property> <name>hive.exec.scratchdir</name> <value>/tmp/hive</value> </property> <property> <name>hive.exec.submit.local.task.via.child</name> <value>true</value> </property> <property> <name>hive.exec.submitviachild</name> <value>false</value> </property> <property> <name>hive.execution.engine</name> <value>mr</value> </property> <property> <name>hive.fetch.task.aggr</name> <value>false</value> </property> <property> <name>hive.fetch.task.conversion</name> <value>more</value> </property> <property> <name>hive.fetch.task.conversion.threshold</name> <value>1073741824</value> </property> <property> <name>hive.limit.optimize.enable</name> <value>true</value> </property> <property> <name>hive.limit.pushdown.memory.usage</name> <value>0.04</value> </property> <property> <name>hive.map.aggr</name> <value>true</value> </property> <property> <name>hive.map.aggr.hash.force.flush.memory.threshold</name> <value>0.9</value> </property> <property> <name>hive.map.aggr.hash.min.reduction</name> <value>0.5</value> </property> <property> <name>hive.map.aggr.hash.percentmemory</name> <value>0.5</value> </property> <property> <name>hive.mapjoin.bucket.cache.size</name> <value>10000</value> </property> <property> <name>hive.mapjoin.optimized.hashtable</name> <value>true</value> </property> <property> <name>hive.mapred.reduce.tasks.speculative.execution</name> <value>false</value> </property> <property> <name>hive.merge.mapfiles</name> <value>true</value> </property> <property> <name>hive.merge.mapredfiles</name> <value>false</value> </property> <property> <name>hive.merge.orcfile.stripe.level</name> <value>true</value> </property> <property> 
<name>hive.merge.rcfile.block.level</name> <value>true</value> </property> <property> <name>hive.merge.size.per.task</name> <value>256000000</value> </property> <property> <name>hive.merge.smallfiles.avgsize</name> <value>16000000</value> </property> <property> <name>hive.merge.tezfiles</name> <value>true</value> </property> <property> <name>hive.metastore.authorization.storage.checks</name> <value>false</value> </property> <property> <name>hive.metastore.cache.pinobjtypes</name> <value>Table,Database,Type,FieldSchema,Order</value> </property> <property> <name>hive.metastore.client.connect.retry.delay</name> <value>5s</value> </property> <property> <name>hive.metastore.client.socket.timeout</name> <value>1800s</value> </property> <property> <name>hive.metastore.connect.retries</name> <value>24</value> </property> <property> <name>hive.metastore.execute.setugi</name> <value>true</value> </property> <property> <name>hive.metastore.failure.retries</name> <value>24</value> </property> <property> <name>hive.metastore.kerberos.keytab.file</name> <value>/etc/security/keytabs/hive.service.keytab</value> </property> <property> <name>hive.metastore.kerberos.principal</name> <value>hive/_HOST@EXAMPLE.COM</value> </property> <property> <name>hive.metastore.pre.event.listeners</name> <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value> </property> <property> <name>hive.metastore.sasl.enabled</name> <value>false</value> </property> <property> <name>hive.metastore.server.max.threads</name> <value>100000</value> </property> <property> <name>hive.metastore.uris</name> <value> thrift://lhdcsi02v.production.local:9083 </value> </property> <property> <name>hive.metastore.warehouse.dir</name> <value>/apps/hive/warehouse</value> </property> <property> <name>hive.optimize.bucketmapjoin</name> <value>true</value> </property> <property> <name>hive.optimize.bucketmapjoin.sortedmerge</name> <value>false</value> </property> <property> <name>hive.optimize.constant.propagation</name> <value>true</value> </property> <property> <name>hive.optimize.index.filter</name> <value>true</value> </property> <property> <name>hive.optimize.metadataonly</name> <value>true</value> </property> <property> <name>hive.optimize.null.scan</name> <value>true</value> </property> <property> <name>hive.optimize.reducededuplication</name> <value>true</value> </property> <property> <name>hive.optimize.reducededuplication.min.reducer</name> <value>4</value> </property> <property> <name>hive.optimize.sort.dynamic.partition</name> <value>false</value> </property> <property> <name>hive.orc.compute.splits.num.threads</name> <value>10</value> </property> <property> <name>hive.orc.splits.include.file.footer</name> <value>false</value> </property> <property> <name>hive.prewarm.enabled</name> <value>false</value> </property> <property> <name>hive.prewarm.numcontainers</name> <value>3</value> </property> <property> <name>hive.security.authenticator.manager</name> <value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value> </property> <property> <name>hive.security.authorization.enabled</name> <value>true</value> </property> <property> <name>hive.security.authorization.manager</name> <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory</value> </property> <property> <name>hive.security.metastore.authenticator.manager</name> <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value> </property> <property> 
<name>hive.security.metastore.authorization.auth.reads</name> <value>true</value> </property> <property> <name>hive.security.metastore.authorization.manager</name> <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value> </property> <property> <name>hive.server2.allow.user.substitution</name> <value>true</value> </property> <property> <name>hive.server2.authentication</name> <value>NONE</value> </property> <property> <name>hive.server2.authentication.spnego.keytab</name> <value>HTTP/_HOST@EXAMPLE.COM</value> </property> <property> <name>hive.server2.authentication.spnego.principal</name> <value>/etc/security/keytabs/spnego.service.keytab</value> </property> <property> <name>hive.server2.enable.doAs</name> <value>false</value> </property> <property> <name>hive.server2.logging.operation.enabled</name> <value>true</value> </property> <property> <name>hive.server2.logging.operation.log.location</name> <value>/tmp/hive/operation_logs</value> </property> <property> <name>hive.server2.max.start.attempts</name> <value>5</value> </property> <property> <name>hive.server2.support.dynamic.service.discovery</name> <value>true</value> </property> <property> <name>hive.server2.table.type.mapping</name> <value>CLASSIC</value> </property> <property> <name>hive.server2.tez.default.queues</name> <value>default</value> </property> <property> <name>hive.server2.tez.initialize.default.sessions</name> <value>false</value> </property> <property> <name>hive.server2.tez.sessions.per.default.queue</name> <value>1</value> </property> <property> <name>hive.server2.thrift.http.path</name> <value>cliservice</value> </property> <property> <name>hive.server2.thrift.http.port</name> <value>10001</value> </property> <property> <name>hive.server2.thrift.max.worker.threads</name> <value>500</value> </property> <property> <name>hive.server2.thrift.port</name> <value>10000</value> </property> <property> <name>hive.server2.thrift.sasl.qop</name> <value>auth</value> </property> <property> <name>hive.server2.transport.mode</name> <value>binary</value> </property> <property> <name>hive.server2.use.SSL</name> <value>false</value> </property> <property> <name>hive.server2.zookeeper.namespace</name> <value>hiveserver2</value> </property> <property> <name>hive.smbjoin.cache.rows</name> <value>10000</value> </property> <property> <name>hive.start.cleanup.scratchdir</name> <value>false</value> </property> <property> <name>hive.stats.autogather</name> <value>true</value> </property> <property> <name>hive.stats.dbclass</name> <value>fs</value> </property> <property> <name>hive.stats.fetch.column.stats</name> <value>true</value> </property> <property> <name>hive.stats.fetch.partition.stats</name> <value>true</value> </property> <property> <name>hive.support.concurrency</name> <value>true</value> </property> <property> <name>hive.tez.auto.reducer.parallelism</name> <value>true</value> </property> <property> <name>hive.tez.container.size</name> <value>3072</value> </property> <property> <name>hive.tez.cpu.vcores</name> <value>-1</value> </property> <property> <name>hive.tez.dynamic.partition.pruning</name> <value>true</value> </property> <property> <name>hive.tez.dynamic.partition.pruning.max.data.size</name> <value>104857600</value> </property> <property> <name>hive.tez.dynamic.partition.pruning.max.event.size</name> <value>1048576</value> </property> <property> <name>hive.tez.input.format</name> <value>org.apache.hadoop.hive.ql.io.HiveInputFormat</value> </property> <property> 
<name>hive.tez.java.opts</name> <value>-server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps</value> </property> <property> <name>hive.tez.log.level</name> <value>INFO</value> </property> <property> <name>hive.tez.max.partition.factor</name> <value>2.0</value> </property> <property> <name>hive.tez.min.partition.factor</name> <value>0.25</value> </property> <property> <name>hive.tez.smb.number.waves</name> <value>0.5</value> </property> <property> <name>hive.txn.manager</name> <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value> </property> <property> <name>hive.txn.max.open.batch</name> <value>1000</value> </property> <property> <name>hive.txn.timeout</name> <value>300</value> </property> <property> <name>hive.user.install.directory</name> <value>/user/</value> </property> <property> <name>hive.vectorized.execution.enabled</name> <value>true</value> </property> <property> <name>hive.vectorized.execution.reduce.enabled</name> <value>true</value> </property> <property> <name>hive.vectorized.groupby.checkinterval</name> <value>4096</value> </property> <property> <name>hive.vectorized.groupby.flush.percent</name> <value>0.1</value> </property> <property> <name>hive.vectorized.groupby.maxentries</name> <value>100000</value> </property> <property> <name>hive.warehouse.subdir.inherit.perms</name> <value>true</value> </property> <property> <name>hive.zookeeper.client.port</name> <value>2181</value> </property> <property> <name>hive.zookeeper.namespace</name> <value>hive_zookeeper_namespace</value> </property> <property> <name>hive.zookeeper.quorum</name> <value> lhdcsi03v.production.local:2181,lhdcsi02v.production.local:2181,lhdcsi04v.production.local:2181 </value> </property> <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>com.mysql.jdbc.Driver</value> </property> <property> <name>javax.jdo.option.ConnectionURL</name> <value> jdbc:mysql://lhdcsi02v.production.local/hive </value> </property> <property> <name>javax.jdo.option.ConnectionUserName</name> <value>hive</value> </property> </configuration>[hadoop@lhdcsi02v ~]$ Regards, Vishal Bohra
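Since the reducer repeatedly falls back to 0% and the job finally dies with return code 2, the underlying cause will be in the task logs rather than in the beeline output. They can be pulled with the stock YARN CLI (application id taken from the output above):

yarn logs -applicationId application_1553148534656_0004 | less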
01-08-2019
08:31 AM
ambari-serverout.txt
@Jay SenSharma File attached. I have made the changes and restarted the Ambari server.
01-08-2019
05:38 AM
ambari-serverlogs.txt namenodelogstxt.txt namnode-logs.txt
@Jay SenSharma, @Geoffrey Shelton Okot
Hi All,
HDP 2.6.5, Ambari 2.6.2.2, openjdk version "1.8.0_181"
This is regarding the SSL configuration on all the servers. My Ambari server is working fine with HTTPS. I am using only a .key file and a .cer file (I am using the .cer file as the certificate). The Hadoop components are not coming up over HTTPS: NameNode UI, YARN ResourceManager UI, MapReduce JobHistory UI, Zeppelin UI.
[root@xxxxxxxx ~]# ambari-server setup-security
Using python /usr/bin/python
Security setup options...
===========================================================================
Choose one of the following options:
[1] Enable HTTPS for Ambari server.
[2] Encrypt passwords stored in ambari.properties file.
[3] Setup Ambari kerberos JAAS configuration.
[4] Setup truststore.
[5] Import certificate to truststore.
===========================================================================
Enter choice, (1-5): 1
Do you want to configure HTTPS [y/n] (y)? y
SSL port [8443] ? 8443
Enter path to Certificate: /hadoop/certs/xxxxx.localhost.cer
Enter path to Private Key: /hadoop/certs/xxxxx.localhost.key
Please enter password for Private Key:
Importing and saving Certificate...done.
Ambari server URL changed. To make use of the Tez View in Ambari please update the property tez.tez-ui.history-url.base in tez-site
Adjusting ambari-server permissions and ownership...
NOTE: Restart Ambari Server to apply changes ("ambari-server restart|stop+start")
[root@lhdcsi02v ~]# ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start............................................................
DB configs consistency check found warnings. See /var/log/ambari-server/ambari-server-check-database.log for more details.
ERROR: Exiting with exit code 1. REASON: Server not yet listening on http port 8443 after 50 seconds. Exiting.
-------------------------------------------------------------------------------------------------------------------------------------------------
keytool -import -noprompt -alias OwnCA -file xxxx.localhost.cer -storepass changeit -keystore /etc/pki/java/cacerts
------------------------------------------------------------------------------------------------------------------------------------------------------
Setup truststore:
[root@xxxxxxxx ~]# ambari-server setup-security
Using python /usr/bin/python
Security setup options...
===========================================================================
Choose one of the following options:
[1] Enable HTTPS for Ambari server.
[2] Encrypt passwords stored in ambari.properties file.
[3] Setup Ambari kerberos JAAS configuration.
[4] Setup truststore.
[5] Import certificate to truststore.
===========================================================================
Enter choice, (1-5): 4
Do you want to configure a truststore [y/n] ? y
The truststore is already configured. Do you want to re-configure the truststore [y/n] ? y
TrustStore type [jks/jceks/pkcs12] (jks): jks
Path to TrustStore file : /etc/pki/java/cacerts
Password for TrustStore: changeit
Re-enter password: changeit
Ambari Server 'setup-security' completed successfully.
[root@xxxxx ~]#
[root@xxxxx conf]# keytool -import -noprompt -alias OwnCA -file /hadoop/certs/xxxx.localhost.cer -storepass changeit -keypass changeit -keystore /etc/hadoop/conf/hadoop-private-keystore.jks
Certificate was added to keystore /hadoop/certs/hadoop-private-keystore.jks
I have copied it to all the DataNodes as well.
ERROR:
NameNode Web UI Connection failed to https://xxxxxxx.localhost:50470 (<urlopen error EOF occurred in violation of protocol (_ssl.c:579)>)
The Ambari server host has the certificates; this is what it shows.
DataNode:
[hdfs@xxxxx hdfs]$ openssl s_client -connect xxxx.localhost:50470 -tls1_2
CONNECTED(00000003)
140047696471952:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:659:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1546675079
Timeout : 7200 (sec)
Verify return code: 0 (ok)
Below are the things I have tried:
/etc/ambari-agent/conf/ambari-agent.ini on all the hosts in the cluster (done):
[security]
force_https_protocol=PROTOCOL_TLSv1_2
ambari.properties (done):
security.server.disabled.protocols=SSL|SSLv2|SSLv2Hello|SSLv3|TLSv1
python /tmp/testPythonProtocols.py
PROTOCOL_SSLv2
PROTOCOL_SSLv23
PROTOCOL_SSLv3
PROTOCOL_TLSv1
PROTOCOL_TLSv1_1
PROTOCOL_TLSv1_2
---
NameNode:
[root@xxxxx certs]# openssl s_client -connect xxx.production.local:50470
CONNECTED(00000003)
140713499117456:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 289 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1546923639
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
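One detail that would explain "no peer certificate available": keytool -import with only a .cer file adds a trusted-certificate entry, so the keystore the daemons read contains no private key to present during the handshake. Building the keystore from the key pair instead (a sketch using the paths above; the alias and export password are placeholders) would look like:

# bundle the certificate and its private key into a PKCS#12 file
openssl pkcs12 -export -in /hadoop/certs/xxxxx.localhost.cer -inkey /hadoop/certs/xxxxx.localhost.key \
  -name namenode -out /hadoop/certs/namenode.p12
# convert it into the JKS keystore the HDFS/YARN daemons point at
keytool -importkeystore -srckeystore /hadoop/certs/namenode.p12 -srcstoretype PKCS12 \
  -destkeystore /etc/hadoop/conf/hadoop-private-keystore.jks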
12-26-2018
11:28 AM
It is still not installing.
[root@lhdcsi02v ~]# ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz --purge --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack /tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.1.2.0-7/
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: AMBARI_SERVER_LIB is not set, using default /usr/lib/ambari-server
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: about to run command: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/mysql-connector-java.jar' org.apache.ambari.server.checks.MpackInstallChecker --mpack-stacks HDF
INFO: process_pid=9492
ERROR: This Ambari instance is already managing the cluster pruuk_cluster that has the HDP-2.6 stack installed on it. The management pack you are attempting to install only contains stack definitions for [HDF]. Since this management pack does not contain a stack that has already being deployed by Ambari, the --purge option would cause your existing Ambari installation to be unusable. Due to that we cannot install this management pack. Mpack installation checker failed!
ERROR: Exiting with exit code 1. REASON: This Ambari instance is already managing the cluster pruuk_cluster that has the HDP-2.6 stack installed on it. The management pack you are attempting to install only contains stack definitions for [HDF]. Since this management pack does not contain a stack that has already being deployed by Ambari, the --purge option would cause your existing Ambari installation to be unusable. Due to that we cannot install this management pack. Mpack installation checker failed!
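As the checker itself explains, --purge would delete the existing HDP stack definitions, which is exactly why it refuses to proceed. On a cluster already running HDP, the HDF mpack would normally be installed without that flag (a sketch):

ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz --verbose
ambari-server restart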
12-23-2018
05:07 PM
@Akhil S Naik Do I need to install http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.1.1.0/HDF-3.1.1.0-centos6-tars-tarball.tar.gz over HDP 2.6 and then http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.1.1.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.1.1.0-35.tar.gz? Or will just adding the 3.1 mpack work?
12-21-2018
02:30 PM
HDF 3.2 is not compatible with HDP 2.6.5, so I downloaded the HDF 3.1 mpack, but I am still getting the above error.
12-21-2018
02:30 PM
Thanks for the reply... I am getting the error below while installing hdf-ambari-mpack-3.1.2.0-7.tar.gz.tar:
[root@lhdcsi02v ~]# /usr/sbin/ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz.tar --purge --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack /tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz.tar
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz.tar
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Traceback (most recent call last):
  File "/usr/sbin/ambari-server.py", line 952, in <module>
    mainBody()
  File "/usr/sbin/ambari-server.py", line 922, in mainBody
    main(options, args, parser)
  File "/usr/sbin/ambari-server.py", line 874, in main
    action_obj.execute()
  File "/usr/sbin/ambari-server.py", line 78, in execute
    self.fn(*self.args, **self.kwargs)
  File "/usr/lib/ambari-server/lib/ambari_server/setupMpacks.py", line 896, in install_mpack
    (mpack_metadata, mpack_name, mpack_version, mpack_staging_dir, mpack_archive_path) = _install_mpack(options, replay_mode)
  File "/usr/lib/ambari-server/lib/ambari_server/setupMpacks.py", line 697, in _install_mpack
    tmp_root_dir = expand_mpack(tmp_archive_path)
  File "/usr/lib/ambari-server/lib/ambari_server/setupMpacks.py", line 150, in expand_mpack
    archive_root_dir = get_archive_root_dir(archive_path)
  File "/usr/lib/ambari-server/lib/resource_management/libraries/functions/tar_archive.py", line 85, in get_archive_root_dir
    if archive.endswith('.tar.gz') or path.endswith('.tgz'):
NameError: global name 'path' is not defined
[root@lhdcsi02v ~]#
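The traceback dies inside get_archive_root_dir on a branch that is only reached because the file name ends in .tar.gz.tar rather than .tar.gz. Renaming the archive to the expected extension should sidestep the crash entirely (a workaround sketch):

mv /tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz.tar /tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz
/usr/sbin/ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz --verbose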
12-21-2018
07:24 AM
How did you fix the issue java.lang.RuntimeException: Error in library("knitr"): there is no package called ‘knitr’?
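For anyone else hitting this: the message just means the knitr R package is not installed in any library on the interpreter's R path. Installing it is one line from the shell (a sketch, assuming internet access on the node and a writable site library):

R -e 'install.packages("knitr", repos = "https://cran.r-project.org")'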
12-21-2018
01:24 AM
Hi All, could anybody please let me know the steps to install HDF 3.2 on top of HDP 2.6? Do I need to reinstall the whole cluster, or is installing the HDF 3.2 mpack enough? Please clarify.
12-20-2018
01:35 PM
Hi All, could anyone please answer? I am running the command below in Zeppelin. Zeppelin version 0.7, Spark version 2.0. Spark2 works from the command line, and the knitr library is already there:
/usr/hdp/current/spark2-client/R/lib/SparkR
drwxrwxr-x. 9 zeppelin zeppelin 4096 Dec 10 23:00 knitr
%spark2.r 6+6
INFO [2018-12-20 13:08:52,807] ({pool-3-thread-9} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1545311332806 started by scheduler org.apache.zeppelin.spark.SparkRInterpreter1498089066
INFO [2018-12-20 13:08:52,810] ({pool-3-thread-9} ZeppelinR.java[createRScript]:353) - File /tmp/zeppelin_sparkr-5383790228296778176.R created
ERROR [2018-12-20 13:09:02,817] ({pool-3-thread-9} Job.java[run]:188) - Job failed
org.apache.zeppelin.interpreter.InterpreterException: sparkr is not responding
R version 3.5.0 (2018-04-23) -- "Joy in Playing"
Copyright (C) 2018 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> #
> # Licensed to the Apache Software Foundation (ASF) under one
> # or more contributor license agreements. See the NOTICE file
> # distributed with this work for additional information
> # regarding copyright ownership. The ASF licenses this file
> # to you under the Apache License, Version 2.0 (the
> # "License"); you may not use this file except in compliance
> # with the License. You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
>
> args <- commandArgs(trailingOnly = TRUE)
>
> hashCode <- as.integer(args[1])
> port <- as.integer(args[2])
> libPath <- args[3]
> version <- as.integer(args[4])
> rm(args)
>
> print(paste("Port ", toString(port)))
[1] "Port 34239"
> print(paste("LibPath ", libPath))
[1] "LibPath /usr/hdp/current/spark2-client//R/lib"
>
> .libPaths(c(file.path(libPath), .libPaths()))
> library(SparkR)
Attaching package: ‘SparkR’
>
>
> SparkR:::connectBackend("localhost", port, 6000)
The following objects are masked from ‘package:stats’:
cov, filter, lag, na.omit, predict, sd, var, window
The following objects are masked from ‘package:base’:
as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
rank, rbind, sample, startsWith, subset, summary, transform, union
A connection with
description "->localhost:34239"
class "sockconn"
mode "wb"
text "binary"
opened "opened"
can read "yes"
can write "yes"
>
> # scStartTime is needed by R/pkg/R/sparkR.R
> assign(".scStartTime", as.integer(Sys.time()), envir = SparkR:::.sparkREnv)
>
> # getZeppelinR
> .zeppelinR = SparkR:::callJStatic("org.apache.zeppelin.spark.ZeppelinR", "getZeppelinR", hashCode)
at org.apache.zeppelin.spark.ZeppelinR.waitForRScriptInitialized(ZeppelinR.java:285)
at org.apache.zeppelin.spark.ZeppelinR.request(ZeppelinR.java:227)
at org.apache.zeppelin.spark.ZeppelinR.eval(ZeppelinR.java:176)
at org.apache.zeppelin.spark.ZeppelinR.open(ZeppelinR.java:165)
at org.apache.zeppelin.spark.SparkRInterpreter.open(SparkRInterpreter.java:90)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:493)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
INFO [2018-12-20 13:09:02,820] ({pool-3-thread-9} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1545311332806 finished by scheduler org.apache.zeppelin.spark.SparkRInterpreter1498089066
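To separate Zeppelin problems from R problems, the same startup sequence can be replayed from a plain shell using the library path the log prints above; if this hangs or errors too, the issue is on the R side (a diagnostic sketch):

R -e '.libPaths(c("/usr/hdp/current/spark2-client/R/lib", .libPaths())); library(SparkR); library(knitr)'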
12-18-2018
10:40 AM
export HADOOP_CONF_DIR={{hadoop_conf_dir}} is already present in the Advanced slider-env section, but I am still getting the same error.
12-17-2018
03:22 PM
We are using HDP 2.6.
1) There are no logs for the HiveServer2 Interactive service in the hive or hive2 folders.
2) We have enabled LLAP.
3) We have not configured any Slider client.
4) HA is not enabled.
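The service check below dies because yarn.resourcemanager.address resolves to 0.0.0.0:8032, which Slider rejects. Pointing the property at the actual ResourceManager host in yarn-site.xml should clear it (a sketch; the hostname is a guess based on the ResourceManager URL in my other posts):

<property>
  <name>yarn.resourcemanager.address</name>
  <value>lhdcsi04v.production.local:8032</value>
</property>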
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/SLIDER/0.60.0.2.2/package/scripts/service_check.py", line 60, in <module>
SliderServiceCheck().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/SLIDER/0.60.0.2.2/package/scripts/service_check.py", line 55, in service_check
logoutput=True,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of ' /usr/hdp/current/slider-client/bin/slider list' returned 56. 2018-12-17 07:08:33,748 [main] INFO service.AbstractService - Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl failed in state INITED; cause: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.slider.client.SliderYarnClientImpl.serviceInit(SliderYarnClientImpl.java:81)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.client.SliderClient.initHadoopBinding(SliderClient.java:495)
at org.apache.slider.client.SliderClient.serviceInit(SliderClient.java:319)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:182)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
2018-12-17 07:08:33,756 [main] INFO service.AbstractService - Service Slider Client failed in state INITED; cause: org.apache.hadoop.service.ServiceStateException: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
org.apache.hadoop.service.ServiceStateException: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.slider.client.SliderClient.initHadoopBinding(SliderClient.java:495)
at org.apache.slider.client.SliderClient.serviceInit(SliderClient.java:319)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:182)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
Caused by: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.slider.client.SliderYarnClientImpl.serviceInit(SliderYarnClientImpl.java:81)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 8 more
Exception: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
2018-12-17 07:08:33,757 [main] ERROR main.ServiceLauncher - Exception: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
org.apache.hadoop.service.ServiceStateException: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.slider.client.SliderClient.initHadoopBinding(SliderClient.java:495)
at org.apache.slider.client.SliderClient.serviceInit(SliderClient.java:319)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:182)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
Caused by: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.slider.client.SliderYarnClientImpl.serviceInit(SliderYarnClientImpl.java:81)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 8 more
2018-12-17 07:08:33,759 [main] INFO util.ExitUtil - Exiting with status 56
stdout:
2018-12-17 07:08:15,313 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-12-17 07:08:15,318 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-12-17 07:08:15,324 - Called copy_to_hdfs tarball: slider
2018-12-17 07:08:15,324 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-12-17 07:08:15,324 - Tarball version was calcuated as 2.6.5.0-292. Use Command Version: True
2018-12-17 07:08:15,324 - Source file: /usr/hdp/2.6.5.0-292/slider/lib/slider.tar.gz , Dest file in HDFS: /hdp/apps/2.6.5.0-292/slider/slider.tar.gz
2018-12-17 07:08:15,325 - HdfsResource['/hdp/apps/2.6.5.0-292/slider'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.5.0-292/hadoop/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://lhdcsi04v.production.local:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0555}
2018-12-17 07:08:15,331 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://lhdcsi04v.production.local:50070/webhdfs/v1/hdp/apps/2.6.5.0-292/slider?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpMONGEX 2>/tmp/tmpC954Tc''] {'logoutput': None, 'quiet': False}
2018-12-17 07:08:15,937 - call returned (0, '')
2018-12-17 07:08:15,941 - HdfsResource['/hdp/apps/2.6.5.0-292/slider/slider.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.5.0-292/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/2.6.5.0-292/slider/lib/slider.tar.gz', 'default_fs': 'hdfs://lhdcsi04v.production.local:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0444}
2018-12-17 07:08:15,944 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://lhdcsi04v.production.local:50070/webhdfs/v1/hdp/apps/2.6.5.0-292/slider/slider.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpdLrlDi 2>/tmp/tmpN7x12b''] {'logoutput': None, 'quiet': False}
2018-12-17 07:08:16,378 - call returned (0, '')
2018-12-17 07:08:16,382 - DFS file /hdp/apps/2.6.5.0-292/slider/slider.tar.gz is identical to /usr/hdp/2.6.5.0-292/slider/lib/slider.tar.gz, skipping the copying
2018-12-17 07:08:16,382 - Will attempt to copy slider tarball from /usr/hdp/2.6.5.0-292/slider/lib/slider.tar.gz to DFS at /hdp/apps/2.6.5.0-292/slider/slider.tar.gz.
2018-12-17 07:08:16,384 - Execute[' /usr/hdp/current/slider-client/bin/slider list'] {'logoutput': True, 'tries': 3, 'user': 'ambari-qa', 'try_sleep': 5}
2018-12-17 07:08:18,749 [main] INFO service.AbstractService - Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl failed in state INITED; cause: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.slider.client.SliderYarnClientImpl.serviceInit(SliderYarnClientImpl.java:81)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.client.SliderClient.initHadoopBinding(SliderClient.java:495)
at org.apache.slider.client.SliderClient.serviceInit(SliderClient.java:319)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:182)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
2018-12-17 07:08:18,771 [main] INFO service.AbstractService - Service Slider Client failed in state INITED; cause: org.apache.hadoop.service.ServiceStateException: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
org.apache.hadoop.service.ServiceStateException: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.slider.client.SliderClient.initHadoopBinding(SliderClient.java:495)
at org.apache.slider.client.SliderClient.serviceInit(SliderClient.java:319)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:182)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
Caused by: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.slider.client.SliderYarnClientImpl.serviceInit(SliderYarnClientImpl.java:81)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 8 more
Exception: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
2018-12-17 07:08:18,772 [main] ERROR main.ServiceLauncher - Exception: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
org.apache.hadoop.service.ServiceStateException: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.slider.client.SliderClient.initHadoopBinding(SliderClient.java:495)
at org.apache.slider.client.SliderClient.serviceInit(SliderClient.java:319)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:182)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
Caused by: java.net.BindException: Invalid yarn.resourcemanager.address value:0.0.0.0:8032 - see https://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.slider.client.SliderYarnClientImpl.serviceInit(SliderYarnClientImpl.java:81)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 8 more
2018-12-17 07:08:18,774 [main] INFO util.ExitUtil - Exiting with status 56
2018-12-17 07:08:18,877 - Retrying after 5 seconds. Reason: Execution of ' /usr/hdp/current/slider-client/bin/slider list' returned 56. [the Retrying message embeds the same java.net.BindException output shown above]
[attempt 2 at 07:08:26 fails with the identical java.net.BindException stack trace and exits with status 56]
2018-12-17 07:08:26,487 - Retrying after 5 seconds. Reason: Execution of ' /usr/hdp/current/slider-client/bin/slider list' returned 56. [same embedded output]
[attempt 3 at 07:08:33 fails with the identical java.net.BindException stack trace and exits with status 56]
Command failed after 1 tries
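The repeated java.net.BindException names the root cause directly: yarn.resourcemanager.address resolves to the wildcard 0.0.0.0:8032, and the Slider client refuses to submit to a wildcard address (see the UnsetHostnameOrPort wiki link in the trace). A minimal sketch of the usual fix, assuming the ResourceManager runs on a placeholder host rm-host.production.local (substitute the real RM host for this cluster), is to set the hostname explicitly so the client-side yarn-site.xml carries a real address:

<!-- yarn-site.xml: sketch only; rm-host.production.local is a placeholder, not a host from this cluster -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.production.local</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.production.local:8032</value>
</property>

On an Ambari-managed cluster this would normally be changed under YARN > Configs so Ambari redistributes yarn-site.xml to all clients, rather than by hand-editing the file. To confirm what the Slider client actually sees, inspect the conf dir the log itself reports:

grep -A1 'yarn.resourcemanager' /usr/hdp/2.6.5.0-292/hadoop/conf/yarn-site.xml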