Member since: 05-20-2016
Posts: 155
Kudos Received: 220
Solutions: 30
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5937 | 03-23-2018 04:54 AM |
| | 2154 | 10-05-2017 02:34 PM |
| | 1140 | 10-03-2017 02:02 PM |
| | 7734 | 08-23-2017 06:33 AM |
| | 2470 | 07-27-2017 10:20 AM |
08-10-2016
06:50 PM
@Sriharsha Chintalapani thanks -- it worked after adding an ACL permission for the ANONYMOUS user!
08-10-2016
06:35 PM
Found the text below on docs.hortonworks.com: "The broker can only accept SASL (Kerberos) connections, and there is no wire encryption applied. (Note: For a non-secure cluster, <protocol> should be set to PLAINTEXT.)" https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-kafka-ambari/content/ch_secure-kafka-config-options.html
08-10-2016
06:10 PM
2 Kudos
1] Can I configure both PLAINTEXT and PLAINTEXTSASL as communication types with the Kafka broker in a Kerberized cluster? If so, how do I achieve this from the Ambari configuration?
2] If that is not possible, can I use only PLAINTEXT in a Kerberized cluster?
Thanks, Santhosh
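For reference, on Kafka versions that support multiple listeners (as HDP's does), both protocols can be served at once by listing them in the broker's `listeners` property. A minimal sketch, with illustrative hostname and ports:

```properties
# server.properties sketch (in Ambari: Kafka -> Configs -> listeners);
# hostname and ports are illustrative, not from this cluster.
# PLAINTEXTSASL is HDP's name for SASL_PLAINTEXT.
listeners=PLAINTEXT://kafka-host:6667,PLAINTEXTSASL://kafka-host:6668
```

Clients then pick a protocol by connecting to the matching port; Kerberized clients use the PLAINTEXTSASL listener, unauthenticated clients the PLAINTEXT one.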
Labels:
- Apache Kafka
08-08-2016
05:30 PM
Below are the ulimits set:

ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1030387
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1048576
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
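As a quick sanity check, note that `ulimit -a` reports the limits of the current shell, not necessarily of an already-running daemon; on Linux the kernel exposes per-process limits under /proc. A small sketch (the daemon PID would be substituted for `$$`, which here is just the current shell):

```shell
# Soft open-files limit of the current shell
ulimit -Sn

# Effective limits of a running process; $$ is this shell's PID --
# replace it with the daemon's PID (e.g. from `pgrep -f NameNode`)
grep -i "open files" "/proc/$$/limits"
```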
08-08-2016
05:26 PM
Any idea what the error below means?

Job Commit failed with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "ctr-e20-1468887904486-0003-01-000002.hwx.site/172.27.27.0"; destination host is: "ctr-e20-1468887904486-0003-01-000003.hwx.site":8020; )'
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.tez.TezTask

stdout:
Optimizing tables and computing stats
START EXECUTE etl tpcds.realschema convert
1470664040.599955183
FINISH EXECUTE etl tpcds.realschema convert
1470673710.526750959
START EXECUTE etl tpcds.realschema stats
1470673710.539773402
Labels:
- Apache Hive
08-04-2016
06:58 AM
1 Kudo
So this is what I did: since the DataNode and ZooKeeper were writing to the same disk, ZooKeeper writes were slowing down, which took down all the services dependent on ZooKeeper. Solution: brought down the DataNodes on the ZooKeeper machines and started the job -- this has solved the problem for now.
08-04-2016
05:08 AM
1 Kudo
I was doing this from Ambari. After changing the directories, the NameNode would not come up and started complaining that the namenode dir is not formatted. Should I format the dir on the NameNode machines?
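The usual alternative to formatting (which would wipe the namespace) is to copy the existing metadata into the new directory before restarting. A sketch with stand-in paths (`/tmp/demo-old-nn` and `/tmp/demo-new-nn` stand in for the old and new `dfs.namenode.name.dir`; a real run would also need correct hdfs ownership):

```shell
# Stand-in paths for the old and new dfs.namenode.name.dir
OLD_DIR=/tmp/demo-old-nn
NEW_DIR=/tmp/demo-new-nn

# Simulate the existing metadata layout (a real old dir already has this)
mkdir -p "$OLD_DIR/current"
touch "$OLD_DIR/current/VERSION"

# Copy the metadata, preserving permissions, instead of formatting
mkdir -p "$NEW_DIR"
cp -rp "$OLD_DIR/current" "$NEW_DIR/"

ls "$NEW_DIR/current"
```

On a real cluster the copy would be done with the NameNode stopped, followed by `chown -R hdfs:hadoop` on the new directory before starting it again.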
08-04-2016
05:05 AM
@Arpit Agarwal yes correct
08-03-2016
02:14 PM
1 Kudo
I currently have an existing cluster with NameNode HA -- I want to change dfs.namenode.name.dir and dfs.datanode.data.dir to values different from what is currently configured.
Labels:
- Apache Hadoop
08-03-2016
06:45 AM
I have a cluster with 10 nodes, each node having 2 TB of disk space and 250 GB of RAM. While writing 1 TB of data, the NameNode goes down [HA NameNode] with the error below. I have run this multiple times, and every time it is the same issue.

2016-08-03 05:56:43,002 WARN client.QuorumJournalManager (IPCLoggerChannel.java:call(406)) - Took 8783ms to send a batch of 4 edits (711 bytes) to remote journal 172.27.27.0:8485
2016-08-03 05:56:43,005 WARN client.QuorumJournalManager (IPCLoggerChannel.java:call(388)) - Remote journal 172.27.29.0:8485 failed to write txns 330736-330807. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 33 is less than the last promised epoch 34
    at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
    at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
    at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
    at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
    at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
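Since the log shows an edit batch taking 8783 ms to reach a remote journal, one knob sometimes adjusted as a stopgap while the slow JournalNode disks are investigated is the quorum-journal write timeout in hdfs-site.xml. A sketch only; the value is illustrative, and fixing the underlying disk contention is the real remedy:

```xml
<!-- hdfs-site.xml sketch: default is 20000 ms; raising it only buys time
     while slow JournalNode disks are being addressed -->
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>60000</value>
</property>
```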
Labels:
- Apache Hadoop