Member since: 04-30-2015
Posts: 61
Kudos Received: 4
Solutions: 0
12-18-2017
05:17 AM
The HiveServer2 Interactive UI is not showing query summaries: active sessions, open queries, and the last 25 queries do not appear in the HS2I UI. Are there any properties that need to be set for this? Thanks, Sathish
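A hedged thing to check rather than a confirmed fix: Hive 2.x exposes hive.server2.webui.* settings for the HiveServer2 web UI, including hive.server2.webui.max.historic.queries (default 25, which matches the "last 25 queries" view). Whether the Interactive/LLAP UI reads these, and the config path below, are assumptions to verify against your HDP version.
grep -A1 'hive.server2.webui' /etc/hive2/conf/hive-site.xml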
... View more
Labels:
- Apache Ambari
- Apache Hive
09-22-2017
05:17 AM
We are using a Postgres database and no logs are being captured in /var/log/superset. See the warning and error messages below from Ambari. Any help is much appreciated!
resource_management.core.exceptions.ExecutionFailed: Execution of 'source /etc/superset/conf/superset-env.sh ; /usr/hdp/current/druid-superset/bin/superset init' returned 1.
/usr/hdp/2.6.2.0-205/superset/lib/python3.4/importlib/_bootstrap.py:1161: ExtDeprecationWarning: Importing flask.ext.sqlalchemy is deprecated, use flask_sqlalchemy instead.
Trying to perform kerberos login via command: /usr/bin/kinit -r 3600s -kt /etc/security/keytabs/druid.service.keytab druid@HADOOP_xyz.COM
Exception in thread Kerberos-Login-Thread:
2017-09-22 04:54:28,483:WARNING:flask_appbuilder.models.filters:Filter type not supported for column: password
2017-09-22 04:54:28,577:WARNING:flask_appbuilder.models.filters:Filter type not supported for column: password
2017-09-22 04:54:28,606:WARNING:flask_appbuilder.models.filters:Filter type not supported for column: password
Traceback (most recent call last):
  File "/usr/hdp/2.6.2.0-205/superset/lib/python3.4/base64.py", line 90, in b64decode
    return binascii.a2b_base64(s)
binascii.Error: Incorrect padding
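A hedged first debugging step rather than a fix: the base64 "Incorrect padding" error is raised while the Kerberos login thread handles the keytab/principal, so confirm the keytab itself works outside Superset using the same principal and path shown in the log above (kinit/klist are standard Kerberos tools).
kinit -kt /etc/security/keytabs/druid.service.keytab druid@HADOOP_xyz.COM
klist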
... View more
Labels:
- Druid
07-17-2017
09:29 AM
This is the output and it hangs; it does not move on to the CLI prompt:
17/07/17 09:28:29 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
17/07/17 09:28:29 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
Logging initialized using configuration in file:/etc/hive/2.5.3.0-37/0/hive-log4j.properties
This is the log from /tmp/userid/logs:
2017-07-17 09:38:39,482 INFO [main]: hive.metastore (HiveMetaStoreClient.java:open(402)) - Trying to connect to metastore with URI thrift://hostname:9084
2017-07-17 09:38:39,657 INFO [main]: hive.metastore (HiveMetaStoreClient.java:open(498)) - Connected to metastore.
So it does connect to the metastore, but it takes too long to launch the CLI. Thanks, Sathish
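A hedged debugging step rather than a fix: rerun the CLI with DEBUG logging sent to the console so the step it stalls on after the metastore connection becomes visible; the logger override below is a standard Hive CLI option.
hive --hiveconf hive.root.logger=DEBUG,console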
... View more
07-17-2017
07:28 AM
The Hive command line hangs and gives me the warnings below:
WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
I checked hive-site.xml and both parameters exist and are set to "true". I am not finding any details in the logs. Please help me fix this issue. Thanks, Sathish
... View more
Labels:
- Apache Hive
07-13-2017
05:59 AM
Are there any docs/links that cover Hive statistics in more detail? Please let me know.
... View more
07-04-2017
09:08 AM
Thanks for the reply. We are running auto stats on Hive tables, i.e. table stats are calculated by default on create or insert (hive.stats.autogather=true).
Computing stats for a table calculates the number of rows by scanning the table; there is no significant impact on the cluster and the ANALYZE job does not run for long.
Computing stats for columns has to calculate the number of distinct values, nulls, average/min/max length of each column, and so on, so those ANALYZE jobs run much longer with more mappers and reducers (depending on the size of the table and the number of columns). In such situations the impact on the cluster and resource utilisation is high.
Are there any best practices to follow before computing stats on table columns? Even though the stats task runs as a batch job, we want it to execute as efficiently as possible. Basically, we expect to compute statistics on terabytes of data, or on a large number of columns, at a given time.
Also, as part of the stats calculation, which metastore tables are involved, referred to, or updated?
Thanks, Sathish
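For reference, a hedged illustration of the two operations described above on a hypothetical non-partitioned table: the first gathers basic table stats (row count, size), while the second computes column-level stats (distinct values, nulls, min/max/length) and launches the much heavier job.
hive -e "ANALYZE TABLE sales COMPUTE STATISTICS;"
hive -e "ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS;"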
... View more
07-04-2017
08:17 AM
What will be the impact on the cluster if we turn on auto stats, and how can we estimate that impact?
... View more
Tags:
- Data Processing
- Hive
Labels:
- Apache Hive
01-03-2017
08:56 AM
I should have been more specific: I'm looking for a Kafka connector class with an RDBMS as the source. Can you please let me know the connector class, or point me to a doc for reference? Thanks, Sathish
... View more
01-03-2017
08:48 AM
It's not specific to MySQL; I need the connector class for an "RDBMS" source. Thanks, Sathish
... View more
01-03-2017
06:57 AM
@Sandeep Nemuri Right now I'm testing with an RDBMS source (MySQL) and Kafka Connect is failing on "connector.class". How can I find the correct connector class for an RDBMS (MySQL) source? I've tried org.apache.kafka.connect.jdbc.JdbcSourceConnector and io.confluent.connect.jdbc.JdbcSourceConnector, and neither class exists on my worker. Thanks, Sathish
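For reference, a hedged sketch of a JDBC source config: the JDBC source connector is not part of Apache Kafka itself but ships with Confluent's kafka-connect-jdbc plugin, which has to be installed on the Connect worker's classpath before the class resolves. The connection URL, column name, and other values below are illustrative assumptions.
name=mysql-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://dbhost:3306/testdb?user=dbuser&password=dbpass
mode=incrementing
incrementing.column.name=id
topic.prefix=mysql-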
... View more
01-03-2017
04:35 AM
Yes, it was a problem with the security protocol. I've changed it and it is working now. Is there any link or doc for parameter reference? Thanks, Sathish
... View more
01-02-2017
10:49 AM
@Sandeep Nemuri I've set up port 6667 with the security protocol PLAINTEXTSASL, but Kafka Connect by default runs its producer with security.protocol = PLAINTEXT. How can I override these parameters for Kafka Connect? I've updated them in standalone.properties but Kafka Connect is not picking them up at startup. How should I change the producer properties for Kafka Connect? Thanks, Sathish
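A hedged sketch of the override: Kafka Connect passes any "producer."-prefixed keys in the worker properties to the producer it embeds, so the protocol can be set in the same standalone worker file and the worker restarted. The protocol spelling differs by build (PLAINTEXTSASL in older HDP Kafka, SASL_PLAINTEXT upstream), so treat the values and host placeholder below as assumptions to verify.
bootstrap.servers=<broker-host>:6667
producer.security.protocol=PLAINTEXTSASL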
... View more
01-02-2017
10:11 AM
See the error messages below; they say the connection was refused for the broker host:
DEBUG Connection with tstr400367.abc-test.com/10.246.131.35 disconnected (org.apache.kafka.common.network.Selector:307)
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:54)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:72)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:274)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
    at java.lang.Thread.run(Thread.java:745)
I've specified the proper node name and port; now I'm not sure what to check. Thanks, Sathish
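Two hedged checks for "Connection refused", using standard tools (the server.properties path is an assumption for an HDP install): confirm something is actually listening on 6667 on the broker host, and that the broker's listeners/advertised host match what the client is dialing.
netstat -tlnp | grep 6667        # or: ss -tlnp | grep 6667
grep -E 'listeners|port' /usr/hdp/current/kafka-broker/config/server.properties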
... View more
01-02-2017
10:02 AM
Yes, I've started it in debug mode. Please give me some time; I'm going through the logs now.
... View more
01-02-2017
09:58 AM
Right now I see the following parameters in connect-log4j.properties:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
Thanks, Sathish
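A hedged one-line tweak if DEBUG output is wanted from Connect itself: raise the root logger in that same file and restart the worker (the file path below is an assumption for an HDP install).
sed -i 's/^log4j.rootLogger=INFO, stdout/log4j.rootLogger=DEBUG, stdout/' /usr/hdp/current/kafka-broker/config/connect-log4j.properties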
... View more
01-02-2017
09:57 AM
There are no such parameters in Kafka's connect-log4j.properties. Can you please let me know the parameter? Thanks, Sathish
... View more
01-02-2017
09:13 AM
@Sandeep Nemuri Yup, I've tried that too, but I still get the same error. Thanks, Sathish
... View more
01-02-2017
09:06 AM
Yes, it's localhost:6667. Initially it was localhost:9092 but I changed it to 6667. Thanks, Sathish
... View more
01-02-2017
06:12 AM
Below are my source and sink connector properties.
Source:
name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/test.txt (mode 777)
topic=newtest
Sink:
name=local-file-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
file=/tmp/test.sink.txt (mode 777)
topics=newtest
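A hedged usage note: with those two files saved, the standalone worker is started with the worker config first, followed by the connector configs; the script location and file names below are assumptions for an HDP install.
/usr/hdp/current/kafka-broker/bin/connect-standalone.sh config/connect-standalone.properties /tmp/local-file-source.properties /tmp/local-file-sink.properties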
... View more
01-02-2017
06:05 AM
Hi, I'm trying the Kafka file import and export example, but it fails with a timeout:
ERROR Failed to flush WorkerSourceTask{id=local-file-source-0}, timed out while waiting for producer to flush outstanding messages, 1 left ({ProducerRecord(topic=newtest, partition=null, key=[B@63d673d3, value=[B@144e54da=ProducerRecord(topic=newtest, partition=null, key=[B@63d673d3, value=[B@144e54da}) (org.apache.kafka.connect.runtime.WorkerSourceTask:239)
[2017-01-02 05:51:08,891] ERROR Failed to commit offsets for WorkerSourceTask{id=local-file-source-0} (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
I checked both the Kafka server and ZooKeeper and they are running fine, and I see no other errors in the logs. Please help me fix this issue. Thanks, Sathish
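A hedged isolation step: before digging into Connect, confirm a plain producer can reach the broker and write to the same topic; if this also times out, the problem is the broker or its listener rather than Connect (the HDP script path is an assumption).
echo hello | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list localhost:6667 --topic newtest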
... View more
Tags:
- Kafka
- kafka-spout
Labels:
- Apache Kafka
12-30-2016
06:44 AM
@Rajkumar Singh I started receiving the error messages below while initiating the producer:
1. No config changes were made.
2. The keytab looks fine and kinit is successful.
3. I created a new topic thinking the old one was corrupt, but it gives the same error.
ERROR fetching topic metadata for topics [Set(newtest)] from broker [ArrayBuffer(BrokerEndPoint(0,localhost,6667))] failed (kafka.utils.CoreUtils$)
kafka.common.KafkaException: fetching topic metadata for topics [Set(newtest)] from broker [ArrayBuffer(BrokerEndPoint(0,localhost,6667))] failed
caused by: java.nio.channels.ClosedChannelException
Thanks, Sathish
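A hedged check for the ClosedChannelException on a Kerberized cluster: it commonly shows up when a client speaks PLAINTEXT to a SASL listener, so try the console producer with the secure protocol and the broker's actual hostname instead of localhost. The --security-protocol flag is specific to HDP's Kafka tooling and the host placeholder is an assumption, so verify both against your build.
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-host>:6667 --topic newtest --security-protocol PLAINTEXTSASL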
... View more
12-29-2016
06:50 AM
Exception from container-launch.
Container id: container_e64_1481762217559_27152_01_000002
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:371)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Shell output:
main : command provided 1
main : run as user is ajamwal
main : requested yarn user is ajamwal
Container exited with a non-zero exit code 127
... View more
12-29-2016
06:41 AM
I am also seeing the messages below from the NodeManager:
2016-12-28 08:28:26,005 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event CONTAINER_STOP for appId application_1481762217559_27152
2016-12-28 08:28:26,005 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopContainer(189)) - Stopping container container_e64_1481762217559_27152_01_000008
2016-12-28 08:28:26,481 INFO ipc.Server (Server.java:saslProcess(1441)) - Auth successful for appattempt_1481762217559_27152_000001 (auth:SIMPLE)
2016-12-28 08:28:26,491 INFO authorize.ServiceAuthorizationManager (ServiceAuthorizationManager.java:authorize(135)) - Authorization successful for appattempt_1481762217559_27152_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2016-12-28 08:28:26,491 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:stopContainerInternal(966)) - Stopping container with container Id: container_e64_1481762217559_27152_01_000008
2016-12-28 08:28:26,492 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=ajamwal IP=10.246.73.94 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1481762217559_27152 CONTAINERID=container_e64_1481762217559_27152_01_000008
2016-12-28 08:28:26,817 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:processHeartbeat(674)) - Unknown localizer with localizerId container_e64_1481762217559_27152_01_000008 is sending heartbeat. Ordering it to DIE
2016-12-28 08:28:26,818 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:processHeartbeat(674)) - Unknown localizer with localizerId container_e64_1481762217559_27152_01_000008 is sending heartbeat. Ordering it to DIE
2016-12-28 08:28:27,227 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:run(1131)) - Localizer failed
java.io.IOException: java.lang.InterruptedException
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:579)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:258)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1113)
2016-12-28 08:28:28,016 INFO nodemanager.NodeStatusUpdaterImpl (NodeStatusUpdaterImpl.java:removeOrTrackCompletedContainersFromContext(529)) - Removed completed containers from NM context: [container_e64_1481762217559_27152_01_000008]
... View more
12-29-2016
06:15 AM
I am also seeing a few thread messages:
INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on node xyz_123.com
2016-12-28 08:28:34,579 INFO [Thread-77] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2016-12-28 08:28:34,582 INFO [Thread-77] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Setting job diagnostics to Task failed task_1481762217559_27152_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
... View more
12-29-2016
06:08 AM
@gsharm
I don't see any application-specific error messages; all I can see is the following:
2016-12-28 08:28:12,219 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1481762217559_27152_m_000000_1: Exception from container-launch.
Container id: container_e64_1481762217559_27152_01_000007
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:371)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
... View more
12-28-2016
11:04 AM
I'm seeing frequent container exceptions on one particular cluster, while the same set of jobs runs fine on another cluster.
Exception from container-launch.
Container id: container_e64_1481762217559_27152_01_000002
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:371)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Shell output:
main : command provided 1
main : run as user is xyz
main : requested yarn user is xyz
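A hedged note on triage: exit code 127 is the shell's "command not found", so the container launch script on this cluster's NodeManager hosts is most likely failing to find a binary (for example, java not being on the expected path), which would also explain why the same jobs succeed on the other cluster. The full container output can be pulled with the standard YARN command:
yarn logs -applicationId application_1481762217559_27152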
... View more
Labels:
- Apache Hadoop
12-22-2016
09:14 AM
1 Kudo
@Rajkumar Singh Can you please point me to a doc that covers Kafka in more depth? I also want to understand data streaming between Kafka clusters, and how Kafka clusters communicate with each other. Thanks, Sathish
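A hedged pointer on cluster-to-cluster streaming: Apache Kafka ships MirrorMaker, which consumes from one cluster and produces into another. The config file names and topic pattern below are illustrative assumptions.
/usr/hdp/current/kafka-broker/bin/kafka-mirror-maker.sh --consumer.config source-cluster-consumer.properties --producer.config target-cluster-producer.properties --whitelist 'newtest.*'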
... View more
12-22-2016
05:26 AM
1 Kudo
@Rajkumar Singh How about Kafka Connect? Kafka Connect is the official one, I believe. How do the two (NiFi and Kafka Connect) differ from one another? Thanks, Sathish
... View more
12-22-2016
04:47 AM
@Rajkumar Singh I should have been more specific in my ask, sorry about that. I actually want to ingest/load data produced to Kafka into HDFS on the consumer side. Is there a way to do this with the standard Kafka commands, or are there tools available for it? Thanks, Sathish
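A hedged quick-and-dirty sketch rather than an official pipeline: without a dedicated HDFS sink connector, a topic can be drained with the console consumer and piped into HDFS. The flags and paths are assumptions (older Kafka builds use --zookeeper instead of --bootstrap-server), and the consumer keeps running until it is stopped.
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server localhost:6667 --topic newtest --from-beginning | hdfs dfs -put - /tmp/newtest_from_kafka.txt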
... View more
12-21-2016
06:57 AM
Hi, I want to transfer a complete file from one system to another using Kafka. Can you please help me do this? Thanks, Sathish
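A hedged minimal illustration (broker address and topic name are assumptions): for a one-off copy of a text file, the console producer can push the file into a topic line by line from the source host, and the console consumer can write it back out on the target host; Kafka Connect's file source/sink is the more structured route.
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-host>:6667 --topic filetransfer < /tmp/test.txt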
... View more
Labels:
- Apache Kafka