Member since: 05-23-2016
Posts: 30
Kudos Received: 5
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 940 | 07-29-2016 01:37 AM |
| | 2037 | 07-13-2016 01:59 AM |
08-19-2016 12:12 PM
Hi @Tamas Bihari, I think the Control Plane must be given access to the cluster by default when the instances are being created. Otherwise, the Remote Access CIDR IP field is pointless as it stands. Alternatively, there needs to be an option to input multiple CIDR IPs. Thanks,
KC
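To sketch what I mean by multiple CIDR IPs: on plain EC2 this would simply be one ingress rule per CIDR on the cluster's security group. A rough illustration with the AWS SDK for Java v1; the group id and both CIDRs are placeholders, and whether the security group HDCloud creates can safely be edited this way is my assumption, not documented behavior.

```java
// Sketch only: one SSH ingress rule per CIDR.
// The group id and CIDRs are placeholders, not real values.
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest;

public class OpenRemoteAccess {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        // My own IP plus, hypothetically, the control plane's CIDR.
        for (String cidr : new String[]{"203.0.113.7/32", "198.51.100.0/24"}) {
            ec2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest()
                    .withGroupId("sg-0123456789abcdef0") // placeholder group id
                    .withIpProtocol("tcp")
                    .withFromPort(22)
                    .withToPort(22)
                    .withCidrIp(cidr));
        }
    }
}
```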
08-19-2016 02:10 AM
Hi @Tamas Bihari, thanks for your assistance! It seems the error was caused by my limiting the Remote Access CIDR IP during setup to my own IP, which may have prevented Cloudbreak on the Control Plane from accessing the instances. That said, this looks like a design flaw to me; please correct me if I am mistaken. Also, the error should be surfaced much earlier, rather than after a four-hour wait for the job to fail. Let me know if this is the right place to raise these issues or if there is another channel I should post them to. Best regards,
KC
08-17-2016 10:20 AM
Hi @Ashnee Sharma, the whole installation process is supposed to be automated by Hortonworks Cloud, so I'm confused as to why there would be any firewall or SSH issues. What specifically should I check for, and is there a more detailed log for this error?
08-17-2016 01:40 AM
Great catch! Thanks a lot for your assistance; it's working now.
08-16-2016 11:00 PM
Hi all, I'm trying out the new Hortonworks Data Cloud but am encountering the error "Infrastructure creation failed. Reason: Operation timed out. Could not reach ssh connection in time". Any advice on what is causing this? Also, is there a way to check or track the progress of the install? The information in the UI is quite limited, and it took nearly four hours before the job failed. Thanks!
08-16-2016 01:32 PM
Hi @Artem Ervits, I came across and tried the suggestion on that page but still encountered the same errors after commenting out the kill and shutdown statements... the error appears to occur during the submission/startup stage?
08-16-2016 06:19 AM
I'm running a sample topology from storm-starter in local mode but am encountering an error:

14:16:24.038 [Thread-16] ERROR o.a.s.e.EventManagerImp - {} Error when processing event
java.lang.RuntimeException: java.io.IOException: Unable to delete file: C:\Users\<UserId>\AppData\Local\Temp\ef9408c7-6ad6-432f-928c-e01ded7f4c33\supervisor\tmp\b44d290f-ff15-46cf-b39e-ab02cf93a451\stormconf.ser
at org.apache.storm.daemon.supervisor.SyncSupervisorEvent.run(SyncSupervisorEvent.java:173) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at org.apache.storm.event.EventManagerImp$1.run(EventManagerImp.java:54) [storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
Caused by: java.io.IOException: Unable to delete file: C:\Users\<UserId>\AppData\Local\Temp\ef9408c7-6ad6-432f-928c-e01ded7f4c33\supervisor\tmp\b44d290f-ff15-46cf-b39e-ab02cf93a451\stormconf.ser
at org.apache.storm.shade.org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at org.apache.storm.shade.org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at org.apache.storm.shade.org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at org.apache.storm.shade.org.apache.commons.io.FileUtils.moveDirectory(FileUtils.java:2916) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at org.apache.storm.daemon.supervisor.SyncSupervisorEvent.downloadLocalStormCode(SyncSupervisorEvent.java:354) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at org.apache.storm.daemon.supervisor.SyncSupervisorEvent.downloadStormCode(SyncSupervisorEvent.java:326) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at org.apache.storm.daemon.supervisor.SyncSupervisorEvent.run(SyncSupervisorEvent.java:122) ~[storm-core-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
... 1 more
I would appreciate any assistance in solving this. Thanks!
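For context, this is roughly how I'm running it; a minimal local-mode sketch rather than the exact storm-starter code (the spout choice and the storm.local.dir override are my own additions):

```java
// Minimal local-mode run, approximating the storm-starter setup.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.starter.spout.RandomSentenceSpout;
import org.apache.storm.topology.TopologyBuilder;

public class StarterLocalRun {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new RandomSentenceSpout(), 1);

        Config conf = new Config();
        conf.setDebug(true);
        // Workaround I'm experimenting with: keep supervisor state out of
        // %TEMP%, on the assumption that Windows file locking there breaks
        // the cleanup (this may belong in storm.yaml rather than here).
        conf.put(Config.STORM_LOCAL_DIR, "C:\\storm-local");

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("starter-local", conf, builder.createTopology());
        Thread.sleep(30_000);
        cluster.killTopology("starter-local");
        cluster.shutdown();
    }
}
```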
Labels:
- Apache Storm
07-29-2016 01:37 AM
OK, it turns out that core-site.xml, hbase-site.xml and hdfs-site.xml are embedded in the Metron jars at /usr/metron/0.2.0BETA/lib, so I needed to update the hostname references in those files to get it working again.
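For anyone hitting the same issue: the embedded files can be rewritten in place without unpacking the whole jar. A rough sketch using the JDK zip filesystem; the jar name and both hostnames are placeholders, and I'd back up each jar first.

```java
// Sketch: rewrite hostname references inside config files embedded in a jar.
// Jar name and hostnames are placeholders; back up the jar before running.
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class PatchEmbeddedConfigs {
    public static void main(String[] args) throws Exception {
        Path jar = Paths.get("/usr/metron/0.2.0BETA/lib/metron-example.jar");
        String stale = "ec2-54-213-184-142.us-west-2.compute.amazonaws.com";
        String fresh = "ip-10-0-0-21.us-west-2.compute.internal";
        // Open the jar as a writable zip filesystem.
        try (FileSystem zipfs = FileSystems.newFileSystem(jar, (ClassLoader) null)) {
            for (String name : new String[]{"core-site.xml", "hbase-site.xml", "hdfs-site.xml"}) {
                Path entry = zipfs.getPath("/" + name);
                if (!Files.exists(entry)) continue; // not every jar embeds every file
                String xml = new String(Files.readAllBytes(entry), StandardCharsets.UTF_8);
                Files.write(entry, xml.replace(stale, fresh).getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}
```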
07-27-2016 11:50 PM
I have Metron deployed on a single node on AWS. Recently I had to update the hostname to use the AWS private DNS instead of the public DNS (which changes with each reboot). I think I have got most of the services working after the update, but I still have some issues with Storm.
A sample of the Storm worker logs is copied below. In particular, o.a.h.i.Client still seems to be referring to the old public EC2 domain name, but I have been unable to figure out where that config is specified. Could someone point me to where that particular variable is stored?

2016-07-27 06:41:21.625 s.k.ZkCoordinator [INFO] Task [1/1] Deleted partition managers: []
2016-07-27 06:41:21.625 s.k.ZkCoordinator [INFO] Task [1/1] New partition managers: []
2016-07-27 06:41:21.625 s.k.ZkCoordinator [INFO] Task [1/1] Finished refreshing
2016-07-27 06:41:22.253 b.s.m.n.Server [INFO] Getting metrics for server on port 6704
2016-07-27 06:41:24.037 o.a.h.i.Client [INFO] Retrying connect to server: ec2-54-213-184-142.us-west-2.compute.amazonaws.com/54.213.184.142:8020. Already tried 32 time(s); maxRetries=45
2016-07-27 06:41:44.058 o.a.h.i.Client [INFO] Retrying connect to server: ec2-54-213-184-142.us-west-2.compute.amazonaws.com/54.213.184.142:8020. Already tried 33 time(s); maxRetries=45
2016-07-27 06:42:04.078 o.a.h.i.Client [INFO] Retrying connect to server: ec2-54-213-184-142.us-west-2.compute.amazonaws.com/54.213.184.142:8020. Already tried 34 time(s); maxRetries=45
2016-07-27 06:42:21.626 s.k.ZkCoordinator [INFO] Task [1/1] Refreshing partition manager connections
2016-07-27 06:42:21.627 s.k.DynamicBrokersReader [INFO] Read partition info from zookeeper: GlobalPartitionInformation{partitionMap={0=ip-10-0-0-21.us-west-2.compute.internal:6667}}
2016-07-27 06:42:21.627 s.k.KafkaUtils [INFO] Task [1/1] assigned [Partition{host=ip-10-0-0-21.us-west-2.compute.internal:6667, partition=0}]
2016-07-27 06:42:21.628 s.k.ZkCoordinator [INFO] Task [1/1] Deleted partition managers: []
2016-07-27 06:42:21.628 s.k.ZkCoordinator [INFO] Task [1/1] New partition managers: []
2016-07-27 06:42:21.628 s.k.ZkCoordinator [INFO] Task [1/1] Finished refreshing
2016-07-27 06:42:22.254 b.s.m.n.Server [INFO] Getting metrics for server on port 6704
2016-07-27 06:42:24.104 o.a.h.i.Client [INFO] Retrying connect to server: ec2-54-213-184-142.us-west-2.compute.amazonaws.com/54.213.184.142:8020. Already tried 35 time(s); maxRetries=45
2016-07-27 06:42:44.121 o.a.h.i.Client [INFO] Retrying connect to server: ec2-54-213-184-142.us-west-2.compute.amazonaws.com/54.213.184.142:8020. Already tried 36 time(s); maxRetries=45
2016-07-27 06:43:04.139 o.a.h.i.Client [INFO] Retrying connect to server: ec2-54-213-184-142.us-west-2.compute.amazonaws.com/54.213.184.142:8020. Already tried 37 time(s); maxRetries=45
2016-07-27 06:43:21.629 s.k.ZkCoordinator [INFO] Task [1/1] Refreshing partition manager connections
2016-07-27 06:43:21.630 s.k.DynamicBrokersReader [INFO] Read partition info from zookeeper: GlobalPartitionInformation{partitionMap={0=ip-10-0-0-21.us-west-2.compute.internal:6667}}
2016-07-27 06:43:21.631 s.k.KafkaUtils [INFO] Task [1/1] assigned [Partition{host=ip-10-0-0-21.us-west-2.compute.internal:6667, partition=0}]
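For what it's worth, I've been trying to locate the stale reference by scanning the deployed jars for the old hostname; a quick sketch (the lib path and the guess that the value sits in an embedded file are my assumptions):

```java
// Quick scan: which jar entries still contain the old public hostname?
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.zip.*;

public class FindStaleHostname {
    public static void main(String[] args) throws Exception {
        String stale = "ec2-54-213-184-142.us-west-2.compute.amazonaws.com";
        File[] jars = new File("/usr/metron/0.2.0BETA/lib")
                .listFiles((dir, name) -> name.endsWith(".jar"));
        if (jars == null) return; // directory missing
        for (File f : jars) {
            try (ZipFile zip = new ZipFile(f)) {
                for (ZipEntry e : Collections.list(zip.entries())) {
                    if (e.isDirectory()) continue;
                    // Decode leniently; binary entries simply won't match.
                    try (BufferedReader r = new BufferedReader(new InputStreamReader(
                            zip.getInputStream(e), StandardCharsets.UTF_8))) {
                        if (r.lines().anyMatch(line -> line.contains(stale))) {
                            System.out.println(f.getName() + " -> " + e.getName());
                        }
                    }
                }
            }
        }
    }
}
```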
Labels:
- Apache Metron
- Apache Storm
07-13-2016 04:11 AM
3 Kudos
You can also add an additional Storm worker in Ambari -> Storm -> Configs -> supervisor.slot.ports by appending an extra port to the list.
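For example, the resulting property in storm.yaml would look something like this (6700-6703 are the usual defaults; the last entry is the addition, giving the supervisor one more worker slot):

```yaml
# supervisor.slot.ports: each listed port is one worker slot on the supervisor
supervisor.slot.ports:
    - 6700
    - 6701
    - 6702
    - 6703
    - 6704   # newly added port, i.e. one additional worker
```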