Member since: 10-19-2016
Posts: 151
Kudos Received: 59
Solutions: 17
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1788 | 03-22-2018 11:48 AM
| 2891 | 01-12-2018 06:25 PM
| 5132 | 01-12-2018 03:56 AM
| 7148 | 01-12-2018 03:38 AM
| 3701 | 01-02-2018 10:29 PM
03-29-2017
09:55 PM
2 Kudos
@Chris Mata @Namit Maheshwari @Dennis Connolly As someone has pointed out, we updated several tutorials with new links. If any other issues are found feel free to ping me. Thanks!
03-28-2017
01:47 AM
1 Kudo
Introduction
The Hortonworks Sandbox running on Azure requires opening ports a bit differently than when the sandbox runs locally on VirtualBox or Docker. We’ll walk through how to open a port in Azure so that outside connections reach the sandbox, which is a Docker container inside an Azure virtual machine.
Note: There are multiple ways to open ports to a Docker container (i.e. the sandbox). This tutorial covers the simplest method, which reinitializes the sandbox to its original state. In other words, you will lose all changes made to the sandbox.
Prerequisites
Deploying Hortonworks Sandbox on Microsoft Azure
Outline
SSH Into The Azure VM
Add Ports to the Docker Script
Remove the Current Sandbox Container
Restart the Azure VM
(Optional) Add New Ports to the SSH Config
SSH Into the Azure VM
If you followed the previous tutorial, Deploying Hortonworks Sandbox on Microsoft Azure, this step is as easy as running:
ssh azureSandbox
Otherwise, follow whichever method you prefer to SSH into the Azure VM that is running the sandbox.
Add Ports to the Docker Script
The script in the Azure VM responsible for creating the dockerized sandbox container is located at /root/start_scripts/start_sandbox.sh.
Note: You’re probably not logged in as root, so do not forget to sudo your commands.
Open /root/start_scripts/start_sandbox.sh to reveal the docker script, which looks something like the following:
docker run -v hadoop:/hadoop --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \
-p 6080:6080 \
-p 9090:9090 \
-p 9000:9000 \
-p 8000:8000 \
-p 8020:8020 \
-p 2181:2181 \
-p 42111:42111 \
...
Edit this file and add your desired port forward. In this example, we’re going to forward host port 15000 to sandbox port 15000. The file should now look something like the following:
docker run -v hadoop:/hadoop --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \
-p 15000:15000 \
-p 6080:6080 \
-p 9090:9090 \
-p 9000:9000 \
-p 8000:8000 \
-p 8020:8020 \
-p 2181:2181 \
-p 42111:42111 \
...
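If you’d rather script the edit than open the file by hand, a sed append can splice the new -p flag in right after the docker run line. The sketch below demonstrates the command on a stand-in copy of the script it creates itself; on the real VM you would run the same sed (with sudo) against /root/start_scripts/start_sandbox.sh, and port 15000 is just this tutorial’s example.

```shell
# Build a stand-in copy of the start script to demonstrate the edit.
# On the real VM, run the sed below (with sudo) against
# /root/start_scripts/start_sandbox.sh instead.
printf 'docker run -v hadoop:/hadoop --name sandbox -d \\\n-p 6080:6080 \\\n' \
  > /tmp/start_sandbox_demo.sh

# Insert the new port forward right after the 'docker run' line:
sed -i '/^docker run/a -p 15000:15000 \\' /tmp/start_sandbox_demo.sh
cat /tmp/start_sandbox_demo.sh
```

This requires GNU sed (for -i and the one-line append form); BSD/macOS sed needs a slightly different invocation.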
Remove the Current Sandbox Container
Terminate the existing sandbox container, then remove it.
Warning: This deletes the sandbox; any changes you made will be lost.
sudo docker stop sandbox
sudo docker rm sandbox
Restart the Azure VM
Now restart the Azure VM. On restart, the script we modified above runs to start the sandbox container. Since we removed the container in the previous step, it is recreated with your newly specified port forwards.
You may restart the Azure VM by stopping and starting it via the Azure Portal, or by executing the following while SSH'd in.
sudo init 6
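Once the VM is back up, you can verify from your local machine that the new port answers. Here is a quick sketch using bash's /dev/tcp redirection and the coreutils timeout command; the hostname is a placeholder, so substitute your VM's public address.

```shell
# Probe a forwarded port from outside the VM.
# Requires bash (/dev/tcp) and coreutils 'timeout'.
check_port() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
    && echo "port $2 on $1 is open" \
    || echo "port $2 on $1 is not reachable"
}

# Placeholder hostname -- replace with your VM's public address.
check_port my-azure-vm.westus.cloudapp.azure.com 15000
```

If the port reports closed even after the restart, double-check both the docker script edit and Azure's network security group rules.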
(Optional) Add New Ports to the SSH Config
If you’re connecting to Azure via SSH tunneling, be sure to add new forwarding directives to your SSH config. See the Deploying Hortonworks Sandbox on Microsoft Azure tutorial for more information.
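For the running example, that means one more LocalForward directive. A sketch follows; the azureSandbox host alias comes from the linked tutorial and port 15000 is this tutorial's example. Note that ssh applies LocalForward directives from every matching Host block, so appending a second block for the same alias works.

```shell
# Append a forwarding rule for the new port to ~/.ssh/config.
# 'azureSandbox' is the host alias from the linked tutorial;
# 15000 is this tutorial's example port.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host azureSandbox
  LocalForward 15000 127.0.0.1:15000
EOF
```

After this, reconnecting with `ssh azureSandbox` makes the sandbox port available at localhost:15000 on your machine.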
03-22-2017
11:41 PM
@Satish Duggana @Sriharsha Chintalapani The configs were the issue, thanks! I had tracked this down before and made the appropriate change, yet kept running into the same problem. Debugging showed that the Kafka serializer was being initialized/configured correctly, but the issue persisted. The registry webservice was accessible from the Storm supervisor nodes, and the serializers showed the correct configs. After reading these comments, I booted up a fresh cluster, deployed the same topology with the same configs ... and voilà. I'll have to keep an eye open for what changes could have caused this, or could cause it again in the future. Thanks for prompting me to double-check all of this!
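For anyone else debugging this Connection refused error: a quick script run on each Storm supervisor node confirms whether the Schema Registry webservice is reachable at all. A sketch, where the hostname and port are placeholders; take the real value from the schema.registry.url entry in your serializer config.

```shell
# Check that the Schema Registry webservice answers from this node.
# Hostname and port are placeholders -- use the value from your
# schema.registry.url serializer config.
REGISTRY_URL="http://registry-host.example.com:9090"
if curl -sf --max-time 5 "$REGISTRY_URL" > /dev/null; then
  echo "registry reachable"
else
  echo "registry NOT reachable"
fi
```

Running this on every supervisor host rules out the network path before you start digging into serializer internals.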
03-20-2017
07:40 PM
In Storm, my KafkaBolt is set up with the following value serializer: com.hortonworks.registries.schemaregistry.serdes.avro.kafka.KafkaAvroSerializer. I receive the following stack trace when data is ingested into the bolt. The same topology works fine when using the default StringSerializer from Kafka. The same environment is also able to successfully serialize/deserialize using the default Avro serdes via Schema Registry. In other words, this seems to be specific to the Kafka serdes. 2017-03-20 19:02:19.984 o.a.s.d.executor [ERROR]
javax.ws.rs.ProcessingException: java.net.ConnectException: Connection refused (Connection refused)
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:287) ~[stormjar.jar:?]
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:255) ~[stormjar.jar:?]
at org.glassfish.jersey.client.JerseyInvocation$2.call(JerseyInvocation.java:700) ~[stormjar.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:315) ~[stormjar.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:297) ~[stormjar.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:228) ~[stormjar.jar:?]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:444) ~[stormjar.jar:?]
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:696) ~[stormjar.jar:?]
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:448) ~[stormjar.jar:?]
at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:349) ~[stormjar.jar:?]
at com.hortonworks.registries.schemaregistry.client.SchemaRegistryClient.postEntity(SchemaRegistryClient.java:541) ~[stormjar.jar:?]
at com.hortonworks.registries.schemaregistry.client.SchemaRegistryClient.doRegisterSchemaMetadata(SchemaRegistryClient.java:228) ~[stormjar.jar:?]
at com.hortonworks.registries.schemaregistry.client.SchemaRegistryClient.registerSchemaMetadata(SchemaRegistryClient.java:221) ~[stormjar.jar:?]
at com.hortonworks.registries.schemaregistry.client.SchemaRegistryClient.addSchemaVersion(SchemaRegistryClient.java:250) ~[stormjar.jar:?]
at com.hortonworks.registries.schemaregistry.serde.AbstractSnapshotSerializer.serialize(AbstractSnapshotSerializer.java:50) ~[stormjar.jar:?]
at com.hortonworks.registries.schemaregistry.serdes.avro.kafka.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53) ~[stormjar.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:463) ~[stormjar.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:440) ~[stormjar.jar:?]
at org.apache.storm.kafka.bolt.KafkaBolt.execute(KafkaBolt.java:143) [stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__5211$tuple_action_fn__5213.invoke(executor.clj:728) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__5132.invoke(executor.clj:460) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at org.apache.storm.disruptor$clojure_handler$reify__4647.onEvent(disruptor.clj:40) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:453) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:432) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at org.apache.storm.daemon.executor$fn__5211$fn__5224$fn__5277.invoke(executor.clj:847) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_111]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_111]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_111]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_111]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_111]
at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_111]
at sun.net.NetworkClient.doConnect(NetworkClient.java:175) ~[?:1.8.0_111]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) ~[?:1.8.0_111]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) ~[?:1.8.0_111]
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) ~[?:1.8.0_111]
at sun.net.www.http.HttpClient.New(HttpClient.java:308) ~[?:1.8.0_111]
at sun.net.www.http.HttpClient.New(HttpClient.java:326) ~[?:1.8.0_111]
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202) ~[?:1.8.0_111]
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138) ~[?:1.8.0_111]
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032) ~[?:1.8.0_111]
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966) ~[?:1.8.0_111]
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316) ~[?:1.8.0_111]
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291) ~[?:1.8.0_111]
at org.glassfish.jersey.client.internal.HttpUrlConnector$4.getOutputStream(HttpUrlConnector.java:385) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.CommittingOutputStream.commitStream(CommittingOutputStream.java:200) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.CommittingOutputStream.commitStream(CommittingOutputStream.java:194) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:228) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:299) ~[stormjar.jar:?]
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:1982) ~[stormjar.jar:?]
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:995) ~[stormjar.jar:?]
at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:932) ~[stormjar.jar:?]
at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:635) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[stormjar.jar:?]
at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130) ~[stormjar.jar:?]
at org.glassfish.jersey.client.ClientRequest.writeEntity(ClientRequest.java:502) ~[stormjar.jar:?]
at org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:388) ~[stormjar.jar:?]
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:285) ~[stormjar.jar:?]
... 28 more
Labels:
- Apache Kafka
- Apache Spark
- Schema Registry
03-18-2017
02:29 AM
@Suzanne Dimant To add: There are added security measures to HDP 2.5 on Azure, so you can SSH in just fine, but by default all other ports on Azure are closed. You can either open up the ports, or SSH tunnel (recommended) by following the tutorial above.
03-18-2017
02:27 AM
@Suzanne Dimant In one of your posts you linked to: https://hortonworks.com/hadoop-tutorial/deploying-hortonworks-sandbox-on-microsoft-azure and brought up an issue with the default password. The password is the same one that you set up when originally deploying the sandbox on Azure. Additionally, the user ("azure" in the tutorial) is whatever user you specified during setup. Hope that helps! Edgar
03-09-2017
07:20 PM
@Sravani Yajamanam Is it possible that you're selecting the configuration for the sandbox you *deleted*, rather than the one you recently brought up? Also, when did you last try deploying HDP 2.5? A new version was published on Azure a week or two ago with some revamped tutorials on how to get going with Azure securely. Hope that helps!
03-03-2017
02:14 AM
@Adedayo Adekeye Not sure about the issue with HDFS, I've been focusing on getting you connected to the VM on Azure via SSH. In the topic, you mentioned having issues SSH'ing into localhost, but it sounds like you were able to open ports and SSH into the VM? That's a viable alternative to the linked tutorial, so that's great. If that's the case, are you able to navigate the sandbox freely and now just running into HDFS issues? Unless you open up several other ports or adjust Azure's security groups, you may run into issues later with other closed ports when connecting from the outside (i.e. your local machine). The tutorial linked above deals with that specifically.
03-03-2017
01:46 AM
@Adedayo Adekeye I'd be sure that the file isn't currently open or used anywhere. You mentioned before that you were able to get this file created and run the suggested SSH command, so something may have happened between then and now that's stopping you from making changes to this file.
03-03-2017
01:27 AM
@Adedayo Adekeye Have you tried writing to that file as superuser? E.g. "sudo vi ~/.ssh/config" ?