Member since: 09-15-2015
Posts: 16
Kudos Received: 43
Solutions: 3
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1139 | 07-13-2017 09:12 PM |
 | 36 | 05-25-2017 04:19 PM |
 | 79 | 04-17-2017 05:15 PM |
07-13-2017
09:12 PM
9 Kudos
In Schema Registry (HDF-3.0), a schema is immutable & you can't delete or modify it.
05-25-2017
04:19 PM
4 Kudos
I think you are using the DataNode URL; you should be using the NameNode URL (e.g. hdfs://<namenode-host>:8020).
04-18-2017
05:25 PM
@prachi bhadekar Glad that it worked.
... View more
04-17-2017
05:15 PM
4 Kudos
@prachi bhadekar In your execute() logic, instead of throwing a RuntimeException, try logging an error message; a sketch is shown below. The problem with throwing an exception is that it will bring down the worker.
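A minimal sketch of this pattern, assuming Storm 1.x packages and SLF4J logging (SafeBolt and doWork() are hypothetical names, not from the original thread):

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SafeBolt extends BaseRichBolt {
    private static final Logger LOG = LoggerFactory.getLogger(SafeBolt.class);
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            // doWork() is a hypothetical stand-in for your processing logic.
            doWork(tuple);
            collector.ack(tuple);
        } catch (Exception e) {
            // Log and fail the tuple instead of rethrowing: an uncaught
            // exception in execute() kills the whole worker JVM.
            LOG.error("Failed to process tuple: {}", tuple, e);
            collector.fail(tuple);
        }
    }

    private void doWork(Tuple tuple) {
        // ... your processing ...
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams in this sketch
    }
}
```

Failing the tuple instead of acking it lets the spout replay it, provided your topology uses anchoring and acking.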
04-03-2017
10:27 PM
2 Kudos
@yvora It would be great if you could post your findings here.
04-03-2017
10:27 PM
10 Kudos
@yvora Scoverage is a popular tool in the Scala community: http://scoverage.org/ https://github.com/scoverage I have worked with Cobertura and have looked at JaCoCo, and found that both are geared towards unit-test coverage. With some effort I was able to modify Cobertura to run in an integration-test setting; with some modifications, I think you should be able to use Scoverage the same way.
03-30-2017
06:29 PM
@Cheng Xu Are you trying to instrument the Spark jars? In that case, note that Spark is primarily written in Scala, which is not supported by Cobertura. @yvora Please confirm.
03-23-2017
09:59 PM
2 Kudos
@Sunile Manjee Please check whether host.name & advertised.host.name are the same; see the snippet below.
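For reference, the relevant entries in the broker's server.properties (the hostname here is a placeholder):

```
# Both should point to a name that clients can resolve to this broker;
# a mismatch between the two is a common cause of connection problems.
host.name=broker1.example.com
advertised.host.name=broker1.example.com
```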
03-02-2017
06:47 PM
What version of Kafka are you running?
03-02-2017
06:45 PM
3 Kudos
kafka-consumer-offset-checker.sh has been deprecated since Kafka 0.9.0.0, and it does not support secure clusters. See: https://kafka.apache.org/documentation/ A sketch of the replacement tool is shown below.
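The Kafka documentation points to kafka-consumer-groups.sh as the replacement. A sketch (the group name and broker address are placeholders; on some 0.9/0.10 releases you also need the --new-consumer flag):

```
/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh \
  --bootstrap-server broker1.example.com:6667 \
  --describe --group test-consumer-group
```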
03-02-2017
02:38 AM
3 Kudos
Yes, it does not. The default consumer.properties created by Ambari does not have any cluster-specific information. Here is what I found in my setup:
zookeeper.connect=127.0.0.1:2181
# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
#consumer group id
group.id=test-consumer-group
#consumer timeout
#consumer.timeout.ms=5000
If you wish to run the Kafka console consumer and are looking for the arguments for a command such as:
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server node1.example.com:6667 --topic topic --new-consumer --security-protocol PLAINTEXTSASL --from-beginning --timeout-ms 2000
you will be able to find these values using Ambari.
02-08-2017
11:57 PM
1 Kudo
From the HDFS NFS logs I see the below exception:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access time for hdfs is not configured. Please set dfs.namenode.accesstime.precision configuration parameter.
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setTimes(FSDirAttrOp.java:105)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1953)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:1360)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:926)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy12.setTimes(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:901)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
I have set the dfs.access.time.precision value to 360000 in hdfs-site.xml. Please let me know what I am missing here.
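For reference, a minimal hdfs-site.xml entry using the exact property name the exception asks for; note that it differs from dfs.access.time.precision, and the value shown here is just the HDFS default (one hour, in milliseconds):

```xml
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
</property>
```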
02-08-2017
11:53 PM
5 Kudos
cp /tmp/Data_src/1MB_File /tmp/tmp_mnt/NFS_DIR/1MB_File
cp: cannot create regular file ‘/tmp/tmp_mnt/NFS_DIR/1MB_File’: Input/output error
- Tags:
- Hadoop Core
- HDFS
- nfs
06-18-2016
12:27 PM
One way to stop that would be to shut down the supervisor. I am also wondering why you don't want your worker to be restarted on the same node. Is there a problem with the node? In that case you shouldn't have a supervisor running on it. Does the topology jar have all the necessary libraries & configuration files?
06-17-2016
02:48 PM
When a worker dies, it is restarted by the supervisor. Only when the failures occur on startup, and the worker is unable to heartbeat to Nimbus, will it be reassigned to another machine. http://storm.apache.org/releases/current/Fault-tolerance.html
06-17-2016
02:44 PM
The output of bolts goes to other bolts or to other storage systems; a wiring sketch is shown below. For example, HdfsBolt can write to HDFS, KafkaBolt can write to Kafka, and so on.
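A minimal sketch of wiring bolt output to downstream bolts, assuming Storm 1.x packages (the spout and bolt instances are hypothetical placeholders for whatever your topology uses):

```java
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.IRichSpout;
import org.apache.storm.topology.TopologyBuilder;

public class TopologySketch {
    public static TopologyBuilder wire(IRichSpout eventSpout, IRichBolt parseBolt,
                                       IRichBolt hdfsBolt, IRichBolt kafkaBolt) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("events", eventSpout);
        // parseBolt consumes the spout's tuples...
        builder.setBolt("parse", parseBolt).shuffleGrouping("events");
        // ...and whatever parseBolt emits feeds both storage bolts.
        builder.setBolt("to-hdfs", hdfsBolt).shuffleGrouping("parse");
        builder.setBolt("to-kafka", kafkaBolt).shuffleGrouping("parse");
        return builder;
    }
}
```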