Member since: 09-10-2016
82 Posts · 6 Kudos Received · 9 Solutions
My Accepted Solutions
Views | Posted |
---|---|
3831 | 08-28-2019 11:07 AM |
3539 | 12-21-2018 05:59 PM |
1373 | 12-10-2018 05:16 PM |
1249 | 12-10-2018 01:03 PM |
856 | 12-07-2018 08:04 AM |
11-30-2019
05:02 AM
@ManuelCalvo Changed WARN to DEBUG and ran the Kafka producer. Please find the details below:

jaas.conf:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/user.keytab"
storeKey=true
useTicketCache=false
debug=true
serviceName="kafka"
principal="user@domain.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/<user>/user.keytab"
storeKey=true
useTicketCache=false
debug=true
serviceName="zookeeper"
principal="user@domain.COM";
};

client.properties:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

kafka-producer:
[<user>@server ~]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/<user>/jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"
[<user>@server ~]$ $KAFKA_HOME/bin/kafka-console-producer.sh --broker-list $BROKER_LIST --producer.config /home/<user>/client.properties --topic testtopic

full error log:
[<user>@server ~]$ $KAFKA_HOME/bin/kafka-console-producer.sh --broker-list $BROKER_LIST --producer.config /home/<user>/client.properties --topic testtopic
[2019-11-30 12:52:57,917] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-11-30 12:52:57,977] INFO ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [server1:9092, server2:9092, server3:9092, server4:9092]
buffer.memory = 33554432
client.id = console-producer
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 1000
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 1500
retries = 3
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SASL_PLAINTEXT
send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2019-11-30 12:52:57,997] DEBUG Added sensor with name bufferpool-wait-time (org.apache.kafka.common.metrics.Metrics)
[2019-11-30 12:52:58,000] DEBUG Added sensor with name buffer-exhausted-records (org.apache.kafka.common.metrics.Metrics)
[2019-11-30 12:52:58,191] DEBUG Updated cluster metadata version 1 to Cluster(id = null, nodes = [server1:9092 (id: -2 rack: null), server2:9092 (id: -1 rack: null), server3:9092 (id: -4 rack: null), server4:9092 (id: -3 rack: null)], partitions = [], controller = null) (org.apache.kafka.clients.Metadata)
[2019-11-30 12:52:58,210] INFO [Producer clientId=console-producer] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-11-30 12:52:58,211] DEBUG [Producer clientId=console-producer] Kafka producer has been closed (org.apache.kafka.clients.producer.KafkaProducer)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:457)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:304)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:45)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:153)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:414)
... 3 more
Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:940)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:103)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:65)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:125)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:142)
... 7 more
[<user>@server ~]$
11-25-2019
11:44 AM
Hi @ManuelCalvo, Yes, the keytab has the right permissions to get a valid ticket. I tried obtaining the ticket manually and it works fine. What I observed here is that the KAFKA_OPTS environment variable is being ignored by the Kafka clients. The console producer/consumer should work with the KAFKA_OPTS environment variable, which is expected to take priority over the system properties; I exported KAFKA_OPTS pointing to the JAAS file and the Kerberos client configuration file, but it's not working.

Kafka version: 2.0.0.3

export KAFKA_OPTS="-Djava.security.auth.login.config=/home/<user>/jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf"

error:
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:153)

If I pass the SASL parameters in client.properties as below, I'm able to produce/consume data from topics without any issue.

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list $BROKER_LIST --topic testtopic --producer.config /home/<user>/client.properties

$cat client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
#sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/home/<user>/<user>.keytab" \
useTicketCache=false \
serviceName="kafka" \
principal="user@domain.COM";

Any idea why export KAFKA_OPTS is not working here? Thank you.
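One way to narrow this down (a sketch, not a confirmed fix; all paths are the placeholders used above): check whether the HDP wrapper scripts overwrite KAFKA_OPTS before launching the JVM, and inspect what the running producer process actually received.

# Look for a script-level override of the JAAS system property
grep -n 'KAFKA_OPTS\|KAFKA_CLIENT_KERBEROS_PARAMS' \
  $KAFKA_HOME/bin/kafka-console-producer.sh $KAFKA_HOME/bin/kafka-run-class.sh
# While a producer is running, confirm the flag reached the JVM
ps -ef | grep ConsoleProducer | tr ' ' '\n' | grep java.security.auth.login.config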
11-23-2019
02:57 AM
I'm getting the below error message while trying to produce data to a Kafka topic in a Kerberized HDP cluster.
Error:
DEBUG [Producer clientId=console-producer] Kafka producer has been closed (org.apache.kafka.clients.producer.KafkaProducer)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:457)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
Stack:
HDP 3.1.0
Kafka 1.0.0.3.1

$KAFKA_HOME="/usr/hdp/3.1.0.0-78/kafka"
$BROKER_LIST="<broker-list>"
$ZK_HOSTS="<zk-host-list>:2181/kafka"
$export KAFKA_OPTS="-Djava.security.auth.login.config=/home/<user>/jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=true -Dsun.security.krb5.debug=true"
$export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/home/<user>/jaas.conf -Dsun.security.krb5.debug=true"
$cat jaas.conf
---using user keytab & principal for authentication and disabled useTicketCache---
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/<user>/user.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="user@domain.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/<user>/user.keytab"
storeKey=true
useTicketCache=false
serviceName="zookeeper"
principal="user@domain.COM";
};
$cat client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
$klist
klist: No credentials cache found (filename: /tmp/krb5cc_121852)
$kafka-console-producer.sh
$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list <broker-list>:9092 --topic testtopic --producer.config /home/<user>/client.properties
full error log:
[2019-11-23 10:05:45,614] DEBUG Added sensor with name bufferpool-wait-time (org.apache.kafka.common.metrics.Metrics)
[2019-11-23 10:05:45,617] DEBUG Added sensor with name buffer-exhausted-records (org.apache.kafka.common.metrics.Metrics)
[2019-11-23 10:05:45,620] DEBUG Updated cluster metadata version 1 to Cluster(id = null, nodes = [sl975iaehdp0401.visa.com:9092 (id: -1 rack: null)], partitions = [], controller = null) (org.apache.kafka.clients.Metadata)
[2019-11-23 10:05:45,637] INFO [Producer clientId=console-producer] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-11-23 10:05:45,638] DEBUG [Producer clientId=console-producer] Kafka producer has been closed (org.apache.kafka.clients.producer.KafkaProducer)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:457)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:304)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:45)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:153)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:414)
... 3 more
Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:940)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:103)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:65)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:125)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:142)
... 7 more
Could you please help with this.
Thank you.
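Since the JAAS entry uses the keytab directly (useTicketCache=false), the empty ticket cache above should not matter; what is worth verifying is that the keytab itself can log in. A minimal check, assuming the same principal and keytab path quoted in jaas.conf:

# A failure here means the problem is the keytab/principal itself,
# not the Kafka client configuration.
kinit -kt /home/<user>/user.keytab user@domain.COM
klist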
11-08-2019
05:58 AM
Can you please assist with this? Thanks
11-06-2019
02:58 AM
Hi,
We are getting the below error while executing a Hive query with Spark as the execution engine.
Hive version: 1.2.1, Spark version : 1.6
set hive.execution.engine=spark;
set spark.home=/usr/hdp/current/spark-client;
set spark.master=yarn-client;
set spark.eventLog.enabled=true;
set spark.executor.memory=512m;
set spark.executor.cores=2;
set spark.driver.extraClassPath=/usr/hdp/current/hive-client/lib/hive-exec.jar;
Query ID = svchdpir2d_20191106105445_a9ebc8a2-9c28-4a3d-ac5e-0a8609e56fd5
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number>
In order to set a constant number of reducers: set mapreduce.job.reduces=<number>
Starting Spark Job = c6cc1641-20ad-4073-ab62-4f621ae595c8
Status: SENT
Failed to execute spark task, with exception 'java.lang.IllegalStateException(RPC channel is closed.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
Could you please help with this.
Thank you
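"RPC channel is closed" from SparkTask typically means the remote Spark driver never connected back to the Hive client before the handshake timed out. One thing worth trying (the property names are real Hive-on-Spark settings; the values here are only illustrative) is raising the connect timeouts, to see whether the failure is a slow YARN container allocation rather than a hard error:

set hive.spark.client.connect.timeout=30000ms;
set hive.spark.client.server.connect.timeout=300000ms;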
08-28-2019
11:07 AM
Hi @jsensharma, After changing SPARK_HISTORY_OPTS in Advanced spark2-env as below, the Spark2 History Server UI started working. Do you think the below config change is the fix for this issue? Please advise. Thanks

From:
export SPARK_HISTORY_OPTS='-Dspark.ui.filters=org.apache.hadoop.security.authentication.server.AuthenticationFilter -Dspark.org.apache.hadoop.security.authentication.server.AuthenticationFilter.params="type=kerberos,kerberos.principal={{spnego_principal}},kerberos.keytab={{spnego_keytab}}"'

To:
export SPARK_HISTORY_OPTS='-Dspark.org.apache.hadoop.security.authentication.server.AuthenticationFilter.params="type=kerberos,kerberos.principal={{spnego_principal}},kerberos.keytab={{spnego_keytab}}"'
08-28-2019
06:50 AM
Hi @jsensharma, This is the first time we are deploying Spark2 in our cluster. Thanks
08-28-2019
06:19 AM
Hi @jsensharma, Please find the Java version below:

lrwxrwxrwx. 1 root root 28 Jul 31 16:20 latest -> /usr/java/jdk1.8.0_221-amd64

$ ps -ef | grep spark | grep -v grep
/usr/java/jdk1.8.0_221-amd64/bin/java -Dhdp.version=3.1.0.0-78 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/3.1.0.0-78/hadoop/conf/ -Dspark.ui.filters=org.apache.hadoop.security.authentication.server.AuthenticationFilter -Dspark.org.apache.hadoop.security.authentication.server.AuthenticationFilter.params=type=kerberos,kerberos.principal=HTTP/xxxx.xxxx.com@CORPxx.xxx.COM,kerberos.keytab=/etc/security/keytabs/spnego.service.keytab -Xmx2048m org.apache.spark.deploy.history.HistoryServer

$ java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b27, mixed mode)

Thanks
08-28-2019
06:07 AM
We have installed Spark2 on HDP 3.1, but when we try to access the Spark2 History Server UI, we get the below issue.

HTTP ERROR 403
Problem accessing /. Reason:
java.lang.IllegalArgumentException

log: spark-spark-org.apache.spark.deploy.history.HistoryServer-1-xxxxxx.visa.com.out

19/08/28 13:01:21 DEBUG AuthenticationFilter: Request [http://xxxxxxxx:18081/favicon.ico] triggering authentication. handler: class org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler
19/08/28 13:01:21 DEBUG AuthenticationFilter: Authentication exception: java.lang.IllegalArgumentException
org.apache.hadoop.security.authentication.client.AuthenticationException: java.lang.IllegalArgumentException
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:306)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:536)
at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:448)
at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.spark_project.jetty.server.Server.handle(Server.java:539)
at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at org.apache.hadoop.security.authentication.util.KerberosUtil$DER.<init>(KerberosUtil.java:365)
at org.apache.hadoop.security.authentication.util.KerberosUtil$DER.<init>(KerberosUtil.java:358)
at org.apache.hadoop.security.authentication.util.KerberosUtil.getTokenServerName(KerberosUtil.java:291)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:285)
... 22 more

HDP 3.1
Spark 2.3
Kerberized cluster

Could you please help with this. Thank you
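To separate a browser-side token problem from a server-side filter problem, one simple check (a sketch; the host and principal are placeholders) is to hit the UI with SPNEGO from a host that already holds a valid Kerberos ticket:

# A 200 here but 403 from the browser points at the browser's Kerberos
# configuration rather than the History Server itself.
kinit user@REALM
curl --negotiate -u : -s -o /dev/null -w '%{http_code}\n' http://<history-server-host>:18081/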
07-15-2019
04:13 PM
Hi, The below properties work in the Hive shell but not in Beeline, where we get:

Error: Error while processing statement: Cannot modify mapred.compress.map.output at runtime. It is not in list of params that are allowed to be modified at runtime

SET mapred.output.compress
SET mapred.compress.map.output

How do we whitelist these properties for Beeline? Thank you.
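HiveServer2 (which Beeline talks to, unlike the Hive shell) only allows session-level changes to parameters matching its whitelist. The append property below is a real HiveServer2 setting that takes a '|'-separated regex; the exact value shown is a sketch of how these two parameters could be added (set it in hive-site.xml via Ambari, then restart HiveServer2):

hive.security.authorization.sqlstd.confwhitelist.append=mapred\.output\.compress|mapred\.compress\.map\.output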
Labels: Apache Hive
07-15-2019
04:09 PM
Hi,

SET hive.warehouse.data.skipTrash = true;
Query returned non-zero code: 1, cause: hive configuration hive.warehouse.data.skipTrash does not exists

Hive version: 1.2

Is this property deprecated? Please let me know. Thanks
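For context (not part of the original post): hive.warehouse.data.skipTrash is a vendor-specific property and is not in stock Apache Hive 1.2, which is why the shell rejects it rather than it being deprecated. A comparable per-table effect in Apache Hive is the auto.purge table property; the table name below is a placeholder:

ALTER TABLE tablename SET TBLPROPERTIES ('auto.purge'='true');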
Labels: Apache Hive
07-12-2019
06:38 PM
Hi @Shu, Thanks for your response. Is there any way we can enable DFS commands when Ranger authorization is enabled? As of now, dfs commands work in the Hive shell but not in Beeline. Thank you.
07-11-2019
02:03 PM
Hi, In a Kerberized cluster, after enabling the Ranger Hive plug-in on HDP 2.6.5, I am not able to run dfs commands in Beeline.

jdbc:hive2://test.xyz.com:10000> dfs -ls /;
Error: Error while processing statement: Permission denied: user [user1] does not have privilege for [DFS] command (state=,code=1)

Please help with this. Thank you.
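With the Ranger Hive plug-in active, HiveServer2 blocks the in-session DFS command by design, since it would bypass Ranger's policies. The usual workaround (a sketch; the principal is a placeholder) is to run the same listing directly against HDFS from a shell:

kinit user1@REALM
hdfs dfs -ls /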
Labels: Apache Hadoop, Apache Hive
06-20-2019
06:55 PM
Hi, We are getting the below warnings while starting the Hive shell from the CLI.

$hive
19/06/20 07:53:14 WARN conf.HiveConf: HiveConf of name hive.mapred.strict does not exist
19/06/20 07:53:14 WARN conf.HiveConf: HiveConf of name hive.mapred.supports.subdirectories does not exist
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender
hive>

Hive version: 1.2

As far as I know, we can safely ignore these warnings, but we need to get rid of them. Could you please help with this. Thanks
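The "HiveConf of name ... does not exist" warnings mean those properties are still set in a hive-site.xml but are unknown to this Hive build, so removing the stale entries silences the warnings. One way to locate them (a sketch; /etc/hive/conf is the usual HDP client path and may differ):

grep -rl 'hive.mapred.strict\|hive.mapred.supports.subdirectories' /etc/hive/conf/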
Labels: Apache Ambari, Apache Hive
06-18-2019
06:43 PM
@Shu: Thank you. Could you please let us know how to handle NULLs for a timestamp field if we are using a file format other than text format or CSV?
06-15-2019
05:38 PM
Hi, The data in the files is in the following format: 2019-06-15T15:43:12 (yyyy-MM-dd'T'HH:mm:ss). When I do a SELECT * on the table, the column is displayed as NULL.

Hive version: 1.2.1

ALTER TABLE table SET SERDEPROPERTIES ("timestamp.formats"="yyyy-MM-dd'T'HH:mm:ss"); -- did not help

I followed these links:
https://issues.apache.org/jira/browse/HIVE-9298
https://www.ericlin.me/2017/03/alternative-timestamp-support-in-hive-iso-8601/

Could you please help with this. Thanks
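If the SerDe-level timestamp.formats route does not work on this build, a common fallback (a sketch; ts_col and t are placeholder names) is to declare the column as STRING and parse it at query time with unix_timestamp, which accepts an explicit pattern:

SELECT CAST(from_unixtime(unix_timestamp(ts_col, "yyyy-MM-dd'T'HH:mm:ss")) AS timestamp) FROM t;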
04-01-2019
09:29 AM
Hi, We have to reset the Ambari admin password and have tried "ambari-admin-password-reset", but I do not see it on the server. Please advise how I can reset the password. Ambari database: MySQL, Ambari version: 2.7.3, HDP 3.1. Thank you, Sampath
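For reference (not part of the original post): ambari-admin-password-reset ships with some sandbox images rather than a standard Ambari server install. A commonly cited recovery path on Ambari 2.7.x is to reset the stored hash for the admin user directly in the Ambari database, then change the password from the UI. The table and column names below reflect the Ambari 2.7 schema as I understand it, but verify against your own database and Cloudera's documentation before running anything like this; the hash value is deliberately left as a placeholder.

UPDATE user_authentication SET authentication_key = '<hash-of-a-known-password>'
WHERE user_id = (SELECT user_id FROM users WHERE user_name = 'admin');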
Labels: Apache Ambari
02-16-2019
08:57 PM
@Jay Kumar SenSharma: Thanks for the info. This helps a lot.
02-14-2019
07:05 AM
Hi, Is it possible to perform NameNode and Resource Manager failover through the REST API in Ambari? Any help and suggestions will be greatly appreciated. Thank you.
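As far as I know, Ambari's REST API has no explicit "failover" action; an Ambari-driven restart of the active component forces a failover indirectly, while explicit transitions are done with the Hadoop CLIs. A sketch, with nn1/nn2 and rm1 as placeholder HA service IDs:

# Make nn2 the active NameNode (run with HDFS admin credentials)
hdfs haadmin -failover nn1 nn2
# Demote the active ResourceManager (add --forcemanual if automatic failover is enabled)
yarn rmadmin -transitionToStandby rm1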
02-13-2019
08:30 PM
Okay sure @Geoffrey Shelton Okot, I will talk to the AD team about this and let you know the status. Thanks.
02-11-2019
03:20 PM
It's just a workaround @Geoffrey Shelton Okot. Thanks.
02-11-2019
06:17 AM
Hi @Geoffrey Shelton Okot, Thanks for your time. I have set the below two properties in core-site.xml from Ambari. Now the NN, RM, and History Server UIs are working fine.

hadoop.http.authentication.simple.anonymous.allowed=true
hadoop.http.authentication.type=simple

Regards, Sampath
02-09-2019
03:01 PM
Hi @Geoffrey Shelton Okot, Thanks for the response. I have updated krb5.conf with the below properties.

# grep "enctypes" /etc/krb5.conf
default_tgs_enctypes = des3-cbc-sha1 aes256-cts-hmac-sha1-96 arcfour-hmac aes128-cts-hmac-sha1-96 des-cbc-md5
default_tkt_enctypes = des3-cbc-sha1 aes256-cts-hmac-sha1-96 arcfour-hmac aes128-cts-hmac-sha1-96 des-cbc-md5

# klist -aef
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HTTP/hostname_fqdn@realm
Valid starting Expires Service principal
02/09/2019 14:44:22 02/10/2019 00:44:22 krbtgt/realm@realm
renew until 02/16/2019 14:44:22, Flags: FRIA
Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
Addresses: (none)
I don't have access to check the encryption types mapped in the AD server. Is there any way I can check this from my Linux host? Thank you.
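If LDAP queries against AD are allowed from the host, the per-account encryption types can be read from the msDS-SupportedEncryptionTypes attribute (a bitmask, e.g. 0x18 = AES128 + AES256). A sketch with placeholder host, bind user, base DN, and SPN:

ldapsearch -H ldap://ad.example.com -D 'binduser@EXAMPLE.COM' -W \
  -b 'dc=example,dc=com' '(servicePrincipalName=HTTP/hostname_fqdn)' msDS-SupportedEncryptionTypes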
02-09-2019
01:20 PM
Hi, [Ambari 2.7.3, HDP 3.1] In an Active Directory Kerberized environment, I'm getting the below issue when I try to access the NameNode UI, RM UI, and Job History UI from Ambari.

Error:
HTTP ERROR 403
Problem accessing /index.html. Reason:
GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - AES256 CTS mode with HMAC SHA1-96)

krb5.conf:
max_life = 30d
default_tgs_enctypes = aes128-cts arcfour-hmac-md5 des-cbc-crc des-cbc-md5 des-hmac-sha1 aes256-cts
default_tkt_enctypes = aes128-cts arcfour-hmac-md5 des-cbc-crc des-cbc-md5 des-hmac-sha1 aes256-cts
permitted_enctypes = aes256-cts-hmac-sha1-96 des3-cbc-sha1 arcfour-hmac-md5 des-cbc-crc des-cbc-md5 des-cbc-md4
allow_weak_crypto = yes

klist:
$ls -lrt /etc/security/keytabs/spnego.service.keytab
-r--r-----. 1 root hadoop 433 Feb 9 11:59 /etc/security/keytabs/spnego.service.keytab
$klist -ket /etc/security/keytabs/spnego.service.keytab
Keytab name: FILE:/etc/security/keytabs/spnego.service.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
0 02/09/2019 07:40:04 HTTP/hostname_fqdn@realm (arcfour-hmac)
0 02/09/2019 07:40:04 HTTP/hostname_fqdn@realm (des-cbc-md5)
0 02/09/2019 07:40:04 HTTP/hostname_fqdn@realm (aes256-cts-hmac-sha1-96)
0 02/09/2019 07:40:04 HTTP/hostname_fqdn@realm (des3-cbc-sha1)
0 02/09/2019 07:40:04 HTTP/hostname_fqdn@realm (aes128-cts-hmac-sha1-96)

kinit:
$kinit -kt /etc/security/keytabs/spnego.service.keytab $(klist -kt /etc/security/keytabs/spnego.service.keytab|sed -n "4p"|cut -d" " -f7)
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HTTP/hostname_fqdn@realm
Valid starting Expires Service principal
02/09/2019 12:53:05 02/09/2019 22:53:05 krbtgt/realm@realm
renew until 02/16/2019 12:53:05

I have regenerated the spnego keytab on all the hosts from the Ambari UI, but it did not help. Would you please help with this. Thank you.
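"Cannot find key of appropriate type to decrypt AP REP" generally means the key version or enctype in the keytab does not match what the KDC issued (and the KVNO of 0 in the keytab listing above is itself suspicious). A quick comparison, assuming MIT Kerberos client tools:

# KVNO/enctypes stored in the keytab...
klist -kte /etc/security/keytabs/spnego.service.keytab
# ...versus the KVNO the KDC currently issues for the same SPN
kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/$(hostname -f)
kvno HTTP/$(hostname -f)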
Labels: Apache Hadoop, Cloudera Manager
02-07-2019
06:44 AM
Hi, I'm getting the below issue while starting Timeline Service V2.0 in HDP 3.1.

org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /atsv2-hbase-secure1/tokenauth/keys
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:549)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:528)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1199)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1177)
2019-02-07 06:10:29,463 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=36, started=6318 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server sl73caeapd044.visa.com,17020,1549478310629 is not running yet
at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1487)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2443)
at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sl73caeapd044.visa.com,17020,1549462163537, seqNum=-1
2019-02-07 06:10:33,487 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=7, retries=36, started=10342 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server sl73caeapd044.visa.com,17020,1549478310629 is not running yet
at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1487)
Please help with this. Thank you. Regards, Sampath
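NoAuth on the atsv2 znode points at ZooKeeper ACLs rather than HBase itself, so inspecting who may write to that path is a reasonable first step. A sketch using the usual HDP zkCli path, authenticated as the service principal that owns the znode before reading:

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zk-host>:2181
# then, at the zkCli prompt:
getAcl /atsv2-hbase-secure1
getAcl /atsv2-hbase-secure1/tokenauth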
Labels: Apache YARN
02-04-2019
05:16 PM
Hi, To identify the ZooKeeper leader, I can use the below command in the CLI.

$echo stat | nc zk_hostname 2181 | grep Mode
Mode: follower
Is it possible to achieve this using REST API calls? Thank you. Regards, Sampath
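ZooKeeper 3.5 and later expose this over HTTP through the embedded AdminServer (default port 8080); older 3.4.x releases, as shipped with most HDP versions, only offer the four-letter words over the client port as shown above. A sketch against a 3.5+ server:

# "server_state" in the JSON response reads leader or follower
curl http://zk_hostname:8080/commands/stat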
12-26-2018
08:11 AM
Hi @Rajeswaran Govindan, Since Ambari runs as a non-privileged user, it is possible that the chown of the keytab file failed due to permission issues. Make sure that the sudoers file is set up properly. Please refer to the documentation below. http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-security/content/sudoer_configuration_server.html Hope this helps!
12-26-2018
07:37 AM
Hi Vinay, From HDP 3.0 onwards, to work with Hive databases from Spark you should use the HiveWarehouseConnector library. Please refer to the documentation below. https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/integrating-hive/content/hive_configure_a_spark_hive_connection.html Hope this helps!
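A minimal spark-shell sketch of the connector the linked docs describe (the session builder and executeQuery are part of the HWC API; the database and table names are placeholders):

import com.hortonworks.hwc.HiveWarehouseSession
val hive = HiveWarehouseSession.session(spark).build()
hive.executeQuery("SELECT * FROM mydb.mytable").show()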
12-21-2018
05:59 PM
Hi @IMRAN KHAN, Could you please check that the repos are correct on the host where the installation is failing:

# grep 'baseurl' /etc/yum.repos.d/* | grep -i HDP

Try cleaning the yum cache by running:

# yum clean all

Please check whether multiple "ambari-hdp-<repoid>.repo" files are present inside "/etc/yum.repos.d/". If so, move the unwanted files to a backup folder. Then try installing the "hdp-select" package from the host where it is failing:

# yum install hdp-select -y

Hope this helps!
12-21-2018
05:40 PM
Hi @Andreas Kühnert, Could you refer to the thread below? Hope this helps! https://community.hortonworks.com/questions/224670/yarn-registry-dns-start-failed-hortonworks-3.html