Member since: 07-05-2018
Posts: 119
Kudos Received: 3
Solutions: 0
10-17-2018
09:12 AM
1 Kudo
@Tomas79 If I run the below command before starting spark-shell, it works:

export SPARK_DIST_CLASSPATH=`hadoop classpath`

But I do not want to run this command by hand every time. Kindly suggest? - Vijay M
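One way to avoid exporting the variable by hand is to persist it in spark-env.sh, which spark-shell sources on startup. A minimal sketch, assuming a writable spark-env.sh on the gateway host (the real conf dir would be something like /etc/spark2/conf; a temp dir is used here so the sketch is safe to run anywhere):

```shell
# Sketch: persist SPARK_DIST_CLASSPATH so every spark-shell picks it up.
# In practice SPARK_CONF_DIR would be the gateway's Spark conf dir
# (e.g. /etc/spark2/conf); a temp dir stands in for it here.
SPARK_CONF_DIR=$(mktemp -d)

# Append the export once; `hadoop classpath` is then evaluated each time
# spark-env.sh is sourced, so the Hadoop jars always land on the classpath.
grep -q SPARK_DIST_CLASSPATH "$SPARK_CONF_DIR/spark-env.sh" 2>/dev/null || \
  echo 'export SPARK_DIST_CLASSPATH=$(hadoop classpath)' >> "$SPARK_CONF_DIR/spark-env.sh"

cat "$SPARK_CONF_DIR/spark-env.sh"
```

On a CM-managed cluster the same effect can usually be had by putting the export into the Spark client configuration (spark-env.sh safety valve) so it is deployed to all gateways, rather than editing files per host.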
10-16-2018
08:35 AM
@bgooley, I have a TLS-enabled HiveServer2 with 2 instances running on 2 different hosts. haproxy is installed and configured on the same server where one HiveServer2 instance is running. Kindly confirm the below:
1. Do I need to define the TLS cert anywhere in the haproxy config? If yes, is there any documentation for it?
2. Does haproxy also need to be configured with TLS?
Is there any documentation for installing and configuring a load balancer for a TLS-enabled HiveServer2? - Vijay Mishra
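For reference, a minimal haproxy fragment for the TLS-passthrough approach (hostnames hypothetical, not from this cluster). In `mode tcp` haproxy does not terminate TLS, so the handshake happens directly between beeline and HiveServer2 and no certificate needs to be configured in haproxy itself; the beeline truststore only needs to trust the HiveServer2 certificates:

```
# haproxy.cfg sketch (hypothetical hostnames): TCP passthrough for TLS-enabled HS2
listen hiveserver2-tls
    bind :10001
    mode tcp
    option tcplog
    balance source
    server hs2_1 hs2-host1.example.com:10000 check
    server hs2_2 hs2-host2.example.com:10000 check
```

If haproxy were instead made to terminate TLS (`bind ... ssl crt <pem>`), it would need its own certificate and would then talk to the backends separately, which complicates an already TLS-enabled HiveServer2 setup; passthrough is the simpler option here.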
10-15-2018
07:58 AM
Hello Team, We have CDH 5.15.3 installed with Spark2 version 2.3. Kindly confirm the below:
1. Does CDH 5.15.3 support Spark2 2.3?
2. Running spark-shell on a server where the Gateway role is installed throws the below error:

[cloudera-scm@a302-0144-2944 cloudera-scm-agent]$ spark-shell
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
    at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:123)
    at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:123)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:123)
    at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:109)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:114)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

3. Is any configuration needed to use Spark on YARN?
Kindly help to fix the issue. - Vijay M
Labels:
- Apache Spark
10-08-2018
12:08 AM
1 Kudo
Hello Team,
We have a CDH 5.15 cluster running with Kerberos and TLS enabled for all services in the cluster.
We would like to enable HA for HiveServer2 using a haproxy load balancer.
We have enabled HA for the Hive Metastore using the below link; 2 instances of the Hive Metastore are up and running.
https://www.cloudera.com/documentation/enterprise/5-15-x/topics/admin_ha_hivemetastore.html
Referring to the below link for HiveServer2 HA.
https://www.cloudera.com/documentation/enterprise/5-15-x/topics/admin_ha_hiveserver2.html
haproxy, 1 instance of the Hive Metastore, and 1 instance of HiveServer2 are installed on the same node.
beeline throws the below error:
beeline> !connect jdbc:hive2://abc:10001/default;ssl=true;sslTrustStore=/app/bds/security/pki/cloudera_truststore.jks;sslTrustPassword=xxxxx;principal=hive/aabc@REALM
Connecting to jdbc:hive2://abc:10001/default;ssl=true;sslTrustStore=/app/bds/security/pki/cloudera_truststore.jks;sslTrustPassword=xxxxx;principal=hive/aabc@REALM
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://abc:10001/default;ssl=true;sslTrustStore=/app/bds/security/pki/cloudera_truststore.jks;sslTrustPassword=xxxxxx;principal=hive/aabc@REALM: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake (state=08S01,code=0)
Below is a snippet of the haproxy config:
# This is the setup for HS2. beeline clients connect to load_balancer_host:10001.
# HAProxy will balance connections among the list of servers listed below.
listen hiveserver2 :10001
    mode tcp
    option tcplog
    balance source
    server hiveserver2_1 abc:10000
    server hiveserver2_2 xyz:10000
Kindly suggest?
- Vijay M
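The handshake error above is consistent with the TLS session breaking somewhere between beeline and HiveServer2: with the config shown, haproxy passes TCP straight through, so ports 10000 on abc/xyz must really be the TLS-enabled HS2 Thrift ports. For completeness, a hedged sketch of the alternative where haproxy terminates TLS itself (certificate path hypothetical; beeline's truststore would then need to trust this certificate, and the backends would be reached without TLS, which is usually undesirable on a cluster with TLS enforced):

```
# haproxy.cfg sketch (hypothetical cert path): terminate TLS at the load balancer
listen hiveserver2
    bind :10001 ssl crt /etc/haproxy/hs2-lb.pem   # LB presents its own cert
    mode tcp
    option tcplog
    balance source
    # Backends are reached in plaintext from here, so HS2 would have to
    # accept non-TLS connections -- passthrough is normally preferable.
    server hiveserver2_1 abc:10000
    server hiveserver2_2 xyz:10000
```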
Labels:
- Apache Hive
- Kerberos
07-05-2018
06:35 AM
Hello team, I have integrated Cloudera Manager with Windows AD and granted a few users Admin permissions. Now I want to disable the built-in admin account, since it is not present in OpenLDAP. How can I disable the admin account, which still authenticates locally? I cannot create an admin account on the OpenLDAP server. - Vijay Mishra
Labels:
- Cloudera Manager
02-14-2018
04:36 AM
@Sandeep Nemuri I have enabled and then disabled Kerberos, which fixed the issue of Kafka not coming up after disabling Kerberos. Your solution also looks good; I will try it if I hit the error again. The other problem I have is that the Kafka principals and keytabs are not getting created after enabling Kerberos again on the same cluster. Is there anything you can suggest? - Vijay Mishra
02-13-2018
08:46 AM
@Sandeep Nemuri, No, there is no important data inside the Kafka brokers. Kindly suggest. - Vijay Mishra
02-13-2018
04:56 AM
Team, After disabling Kerberos, the Kafka brokers fail to start with the below error. I am using HDP 2.6 and Ambari 2.6.

Error:
[2018-02-13 04:44:11,112] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.SecurityException: zookeeper.set.acl is true, but the verification of the JAAS login file failed.
at kafka.server.KafkaServer.initZk(KafkaServer.scala:314)
at kafka.server.KafkaServer.startup(KafkaServer.scala:200)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2018-02-13 04:44:11,113] INFO shutting down (kafka.server.KafkaServer)

Below is server.properties from the Kafka broker:
[root@vijayhdf-1 conf]# cat server.properties
# Generated by Apache Ambari. Tue Feb 13 04:53:24 2018
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec,kafka.server.KafkaServer.ClusterId
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.hosts=vijayhdp-1.novalocal
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.protocol=http
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=bigdata
kafka.timeline.metrics.truststore.path=/etc/security/clientKeys/all.jks
kafka.timeline.metrics.truststore.type=jks
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://vijayhdf-1.novalocal:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
port=6667
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](ambari-qa-hdptest@NOVALOCAL.COM)s/.*/ambari-qa/,RULE:[1:$1@$0](hbase-hdptest@NOVALOCAL.COM)s/.*/hbase/,RULE:[1:$1@$0](hdfs-hdptest@NOVALOCAL.COM)s/.*/hdfs/,RULE:[1:$1@$0](.*@NOVALOCAL.COM)s/@.*//,RULE:[2:$1@$0](activity_analyzer@NOVALOCAL.COM)s/.*/activity_analyzer/,RULE:[2:$1@$0](activity_explorer@NOVALOCAL.COM)s/.*/activity_explorer/,RULE:[2:$1@$0](amshbase@NOVALOCAL.COM)s/.*/ams/,RULE:[2:$1@$0](amszk@NOVALOCAL.COM)s/.*/ams/,RULE:[2:$1@$0](dn@NOVALOCAL.COM)s/.*/hdfs/,RULE:[2:$1@$0](hbase@NOVALOCAL.COM)s/.*/hbase/,RULE:[2:$1@$0](hive@NOVALOCAL.COM)s/.*/hive/,RULE:[2:$1@$0](jhs@NOVALOCAL.COM)s/.*/mapred/,RULE:[2:$1@$0](jn@NOVALOCAL.COM)s/.*/hdfs/,RULE:[2:$1@$0](knox@NOVALOCAL.COM)s/.*/knox/,RULE:[2:$1@$0](nifi@NOVALOCAL.COM)s/.*/nifi/,RULE:[2:$1@$0](nm@NOVALOCAL.COM)s/.*/yarn/,RULE:[2:$1@$0](nn@NOVALOCAL.COM)s/.*/hdfs/,RULE:[2:$1@$0](rangeradmin@NOVALOCAL.COM)s/.*/ranger/,RULE:[2:$1@$0](rangertagsync@NOVALOCAL.COM)s/.*/rangertagsync/,RULE:[2:$1@$0](rangerusersync@NOVALOCAL.COM)s/.*/rangerusersync/,RULE:[2:$1@$0](rm@NOVALOCAL.COM)s/.*/yarn/,RULE:[2:$1@$0](yarn@NOVALOCAL.COM)s/.*/yarn/,DEFAULT
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=vijayhdp-3.novalocal:2181,vijayhdp-2.novalocal:2181,vijayhdp-1.novalocal:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.set.acl=true
zookeeper.sync.time.ms=2000

Kindly help me to fix the issue.
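The FATAL error points at `zookeeper.set.acl=true` having survived the Kerberos disable: the broker tries to verify a JAAS login file that no longer exists. A common remedy (a sketch, not verified against this cluster) is to turn the flag off in the broker config and then, if secure ACLs were already written to the Kafka znodes while Kerberos was on, reset them with Kafka's `zookeeper-security-migration.sh --zookeeper.acl=unsecure` tool before restarting the brokers:

```
# server.properties fragment -- with Kerberos disabled the broker has no JAAS
# login to present to ZooKeeper, so stop asking it to set secure ACLs:
zookeeper.set.acl=false
```

On an Ambari-managed cluster this property should be changed through the Kafka service config rather than by editing the file directly, since Ambari regenerates server.properties on restart.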
02-06-2018
04:14 AM
@Aditya Is there anything more you can suggest to fix the issue? None of the Hive tables or Kafka topics are visible in Atlas. Also, the import_hive script is not present on the cluster. - Vijay Mishra