Member since: 04-13-2017
Posts: 12
Kudos Received: 1
Solutions: 0
02-03-2018
04:08 AM
Ok, Vamshi. Thanks for letting me know. I will raise another ticket now. But they said they will reschedule only once. If you face the same issue again, your money is gone, even if it's not a problem with your setup. That's unreasonable. That's why I wanted to know whether anybody has faced the same issue a second time as well. Let me know when you take the exam this time. Thanks a ton for your response, Vamshi!! 🙂
02-03-2018
03:32 AM
Hi @Vamshi Kondabathini, did you get any response yet? Hi @Amey Latkar, please let me know if you could reschedule and take the exam successfully. It's almost a month since I scheduled the HDPCD exam, and there has been no satisfactory response from them. It's really VERY FRUSTRATING!!!! Do you guys know any other way to escalate this issue? Thanks in advance! -Prachi
01-27-2018
02:59 AM
Hi all, I received a response from them today, after 15 days! But it only says that they can issue an eCredit and I can reappear for the exam, and they want to know when. However, I am not sure whether the issue has been fixed, as I see some people still facing it recently. @Amey Latkar, please let me know if you could reschedule and take the exam successfully. I would like to hear your updates. Thanks in advance. -Prachi
01-22-2018
11:30 AM
Hi @Vamshi Kondabathini, did you get any response? I am still waiting for the same.
01-20-2018
01:27 AM
Hi @Amey Latkar, please let me know whether your rescheduled exam goes well without any issues when you take it. It's already been 10 days since I placed the request, but I haven't received any response to date. Thanks, Prachi
01-18-2018
11:51 AM
Hi Vamshi, did you get any response from the Hortonworks people?
01-17-2018
03:57 AM
Hi @William Gonzalez, I have faced a similar issue and my request id is 10287. It would be of great help if I could get an update on this from Hortonworks. Thanks in advance! - Prachi
01-17-2018
03:20 AM
Hi Ameya, I also went through a similar issue. My exam date was 10th Jan. The proctor could not launch the exam even after 2 hours of troubleshooting. I have contacted Hortonworks, but there has been no reply from them for the last week. Really disappointing.
01-17-2018
03:11 AM
Hi, I had scheduled my HDPCD exam for 10th Jan 2018. I faced issues because of which I could not start the exam at all. All hardware/software prerequisites and network speed compliance (as specified by you) were in place; the PSI proctor confirmed the same. The proctor also asked me to try out multiple things, but nothing worked. This went on for 2 hours. I was not able to see the Hortonworks VM desktop; the error said "server IP address not found". PSI has not been able to resolve this issue. I sent an email to certification@hortonworks.com 4 days ago, but there is still no response. I did not expect such things from them. My request id is 10287. Please advise what to do; there is no other way to contact them or get support. I want to take my exam ASAP. Thanks, Prachi
05-23-2017
06:42 AM
Hi, I have enabled Kerberos security on my HDP 2.6 cluster, on which the Kafka and Storm services were installed prior to enabling Kerberos. The topology I am running has a Kafka spout followed by an HDFS bolt: CSV data from a specific Kafka topic is ingested using the built-in Kafka spout and then written to an HDFS directory using the built-in HDFS bolt.

Versions: storm, storm-kafka, storm-hdfs -> 1.1.0.2.6.0.3-8; Kafka -> 0.10.1.2.6.0.3-8

When I submit this topology, I get the following error for the HDFS bolt:

    Caused by: org.apache.hadoop.ipc.RemoteException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554) ~[stormjar.jar:?]
        at org.apache.hadoop.ipc.Client.call(Client.java:1498) ~[stormjar.jar:?]
        at org.apache.hadoop.ipc.Client.call(Client.java:1398) ~[stormjar.jar:?]

I checked all the config settings in the Ambari UI; security is set to Kerberos everywhere. Solutions already tried:

1) Modify the topology code to add:

    List<String> auto_tgts = new ArrayList<String>();
    auto_tgts.add("org.apache.storm.security.auth.kerberos.AutoTGT");
    auto_tgts.add("org.apache.storm.hdfs.common.security.AutoHDFS");
    Config conf = new Config();
    conf.put(Config.TOPOLOGY_AUTO_CREDENTIALS, auto_tgts);

2) Modify the topology code to add:

    Map<String, Object> map = new HashMap<String, Object>();
    map.put("hdfs.keytab.file", "/etc/security/keytabs/storm.headless.keytab");
    map.put("hdfs.kerberos.principal", "storm-hdp26ks@XYZ.COM");
    map.put("hadoop.security.authentication", "kerberos");
    Config conf = new Config();
    conf.put("hdfs.config", map);
    HdfsBolt hdfsbolt = new HdfsBolt()
        .withFsUrl(hdfsUrl)
        .withFileNameFormat(fileNameFormat)
        .withRecordFormat(recordFormat)
        .withRotationPolicy(rotationPolicy)
        .withSyncPolicy(syncPolicy)
        .withConfigKey("hdfs.config");

Please let me know if I need to do any other steps or config-related changes for the storm-hdfs connector to work with Kerberos. Thanks in advance!
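[Editor's note] The storm-hdfs documentation for the AutoHDFS auto-credential plugin also calls for Nimbus-side settings in storm.yaml, in addition to TOPOLOGY_AUTO_CREDENTIALS in the topology code. A sketch only, with the keytab path and principal copied from the post above; verify the class name and values against your Storm version:

```yaml
# storm.yaml additions on the Nimbus host (sketch, not verified settings
# for this specific cluster; values are examples taken from this post)
nimbus.autocredential.plugins.classes: ["org.apache.storm.hdfs.common.security.AutoHDFS"]
nimbus.credential.renewers.classes: ["org.apache.storm.hdfs.common.security.AutoHDFS"]
hdfs.keytab.file: "/etc/security/keytabs/storm.headless.keytab"
hdfs.kerberos.principal: "storm-hdp26ks@XYZ.COM"
```

With this in place, Nimbus fetches HDFS delegation tokens on the topology's behalf and renews them, so the workers themselves do not need the keytab.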
04-18-2017
05:58 AM
Thanks @Raghav Kumar Gautam. Using collector.reportError() solved this.
04-13-2017
12:29 PM
1 Kudo
Hi, I am running a Storm sample on an HDP 2.5 cluster with Storm 1.0.1 installed through Ambari. The topology is as follows:

    Input messages on Kafka topic --> ( KafkaSpout --> ProcessBolt --> HDFSBolt ) --> Processed data in HDFS

The positive scenario works fine: messages published to a Kafka topic are ingested using the built-in KafkaSpout, processed by ProcessBolt, and then stored in HDFS using the built-in HdfsBolt.

But in the negative scenario I am facing an issue. ProcessBolt's execute() function has a try-catch block:

    execute() {
        try {
            // process the tuple using Algorithm
            // call emit() and ack()
        } catch {
            // throw RuntimeException
        }
    }

If I force tuple processing to fail by purposely setting a wrong property in the Algorithm (all tuples fail in this case, even if the data is correct), emit() and ack() are never called. fail() is automatically called on timeout and propagated to KafkaSpout, which keeps retrying the failed tuple indefinitely, as per the documentation. That much is fine.

But when I run this topology, it creates multiple 0-byte files in HDFS, and their number keeps growing over time. Ideally this should not happen, since no emit() is called on ProcessBolt. So why is HdfsBolt creating these files? My aim is to force HdfsBolt to stop creating these 0-byte files in HDFS when no data is being processed by ProcessBolt. How can this be achieved? Please let me know.

I have also tried sending an ack explicitly in the catch block just before throwing the RuntimeException, so that KafkaSpout does not retry the same tuple again. But even after this change, the topology still keeps generating 0-byte HDFS files. Is there any configuration for HdfsBolt to avoid this?

My HdfsBolt snippet:

    SyncPolicy syncPolicy = new CountSyncPolicy(100);
    FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(127, Units.MB);
    RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter(",");
    FileNameFormat fileNameFormat = new DefaultFileNameFormat()
        .withExtension(".csv")
        .withPath(hdfsOutputDir);
    HdfsBolt hdfsbolt = new HdfsBolt()
        .withFsUrl(hdfsUrl)
        .withFileNameFormat(fileNameFormat)
        .withRecordFormat(format)
        .withRotationPolicy(rotationPolicy)
        .withSyncPolicy(syncPolicy);

My topology:

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka-spout", kafkaSpout, 1).setNumTasks(1);
    builder.setBolt("process-bolt", processbolt, 1).setNumTasks(1)
        .shuffleGrouping("kafka-spout");
    builder.setBolt("hdfs-bolt", hdfsbolt, 1).setNumTasks(1)
        .shuffleGrouping("process-bolt");

Please let me know if any other information is needed.
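[Editor's note] Elsewhere in this thread, collector.reportError() is credited with resolving this. One plausible explanation for the 0-byte files: throwing a RuntimeException from execute() kills the worker JVM, the supervisor restarts it, and each restart re-runs HdfsBolt.prepare(), which opens a fresh (initially empty) HDFS file. A minimal self-contained sketch of the report-and-fail pattern follows; MiniCollector is a hypothetical stand-in for Storm's OutputCollector, since the real classes need a running topology:

```java
import java.util.ArrayList;
import java.util.List;

public class ProcessBoltSketch {
    // Hypothetical stand-in for org.apache.storm.task.OutputCollector,
    // just enough to show the ack/fail/reportError flow.
    static class MiniCollector {
        List<String> emitted = new ArrayList<>();
        int acked = 0, failed = 0, errors = 0;
        void emit(String v) { emitted.add(v); }
        void ack() { acked++; }
        void fail() { failed++; }
        void reportError(Throwable t) { errors++; }
    }

    // Sketch of an execute() that reports failures instead of throwing,
    // so the worker is never killed and HdfsBolt never re-runs prepare().
    static void execute(String tuple, MiniCollector collector) {
        try {
            if (tuple == null) throw new IllegalArgumentException("bad tuple");
            collector.emit(tuple.toUpperCase()); // stand-in for the Algorithm
            collector.ack();
        } catch (Exception e) {
            collector.reportError(e); // surfaces the error in the Storm UI/logs
            collector.fail();         // lets the spout decide about replay
        }
    }

    public static void main(String[] args) {
        MiniCollector c = new MiniCollector();
        execute("csv,row", c); // good tuple: emitted and acked
        execute(null, c);      // bad tuple: reported and failed, worker survives
        System.out.println(c.emitted + " acked=" + c.acked
                + " failed=" + c.failed + " errors=" + c.errors);
    }
}
```

The key difference from the try/throw version in the question is that the catch block returns normally, keeping the worker (and its already-open HDFS file) alive.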