Member since
02-09-2016
559
Posts
422
Kudos Received
98
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2139 | 03-02-2018 01:19 AM
 | 3531 | 03-02-2018 01:04 AM
 | 2374 | 08-02-2017 05:40 PM
 | 2347 | 07-17-2017 05:35 PM
 | 1721 | 07-10-2017 02:49 PM
12-19-2016
01:50 PM
@Asma Dhaouadi Can you share a link to the tutorial you were working on?
12-16-2016
06:43 PM
What about the output of ifconfig -a? Do you see more than just the docker0 interface?
12-16-2016
06:29 PM
@Cecil New It seems like you are missing a network interface or it did not come up properly. Can you confirm that your port forwarding rules are in place within VirtualBox? Here is what I see when I log into the VM and run ifconfig: [root@sandbox ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:dcff:fe14:ae0a prefixlen 64 scopeid 0x20<link>
ether 02:42:dc:14:ae:0a txqueuelen 0 (Ethernet)
RX packets 337 bytes 44534 (43.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 346 bytes 489900 (478.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fea5:537e prefixlen 64 scopeid 0x20<link>
ether 08:00:27:a5:53:7e txqueuelen 1000 (Ethernet)
RX packets 697 bytes 516069 (503.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 402 bytes 59902 (58.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
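If you want to double-check the VirtualBox side, the NAT port-forwarding rules show up in the VM info. A rough sketch, run on your host machine (the VM name "Hortonworks Sandbox" is an assumption; use whatever VBoxManage list vms reports for your install):
# List your VMs to confirm the exact name
VBoxManage list vms
# Show the NAT port-forwarding rules for that VM; look for the "Rule" lines
VBoxManage showvminfo "Hortonworks Sandbox" | grep -i rule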
12-16-2016
06:20 PM
@Cecil New In the second example, the only interface listed is the docker0 interface. That is a virtual interface for talking to the Docker container. Can you try ifconfig -a to see what other interfaces are present?
12-16-2016
05:43 PM
@Cecil New Can you provide a link to the tutorial you are following? The Ambari default port is 8080. If your VirtualBox environment is working properly (with port forwarding), you should be able to access it via http://localhost:8080. The reason you are getting a timeout is that the HDP 2.5 Sandbox is a Docker container within a CentOS VM, and the IP address you see (172.17.0.2) is the IP of the Docker container, which is only visible from inside the CentOS VM. Try logging into the VM using ssh -p 2122 root@localhost. This will log you into the CentOS VM instead of the Docker container. Now when you run ifconfig you will get the IP address of the CentOS VM, which is what you need to enter into your browser.
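For reference, the sequence described above looks roughly like this (a sketch; the 2122 port and root login are the default Sandbox setup mentioned here):
# From your host machine, log into the CentOS VM (not the Docker container)
ssh -p 2122 root@localhost
# Inside the VM, find its IP address
ifconfig
# Then open http://<that-IP>:8080 (or http://localhost:8080 if port forwarding is working) in your browser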
12-16-2016
04:51 PM
2 Kudos
@Vishwanath Voruganti The grading of the exams usually takes 1-2 weeks. You are at the 10-day mark, but that is only 8 business days. I would give it a few more business days (middle of next week) before you worry too much. I think you've already done what you need to do by emailing certification@hortonworks.com
12-16-2016
03:52 PM
2 Kudos
@Rajesh AJ If I understand the problem, I think your issue is related to Docker. The HDP 2.5 TP Sandbox didn't have this issue. The HDP 2.5 Sandbox uses a Docker container within the VM, so you need to forward port 9200 via the VM and again via the Docker container inside that VM. When you run the curl command, I'm guessing you are doing it from within the Sandbox itself, which is why it works. Check out this article on how to add ports to the Sandbox. Elasticsearch uses port 9200, so you need to add that to the Docker container. See this link: https://community.hortonworks.com/articles/65914/how-to-add-ports-to-the-hdp-25-virtualbox-sandbox.html Don't forget to read the NOTE on using commit to save the current state of your Sandbox before deleting the container. Otherwise any changes you have made to the Sandbox will be lost.
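The general shape of the procedure in that article is something like the following. This is only a rough sketch: the container and image names are placeholders, the real Sandbox start command maps many more ports and options, so follow the linked article for the exact commands.
# Inside the CentOS VM: save the current state of the running container first
docker commit sandbox sandbox-es        # "sandbox" and "sandbox-es" are placeholder names
docker stop sandbox && docker rm sandbox
# Re-create the container from the committed image, adding 9200 to the port mappings
docker run -d --name sandbox -p 8080:8080 -p 9200:9200 sandbox-es
# On the host (with the VM powered off), add a matching VirtualBox NAT rule so 9200 reaches the VM
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "es,tcp,,9200,,9200"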
12-15-2016
08:53 PM
@nbalaji-elangovan Have you looked into Type Mapping to override the data types? https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_controlling_type_mapping
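For example, Sqoop lets you override individual column mappings on the command line. A sketch only, with a made-up connection string, table, and column names:
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username myuser --password-file /user/myuser/.pw \
  --table orders \
  --map-column-java order_id=Long,amount=String \
  --map-column-hive order_ts=STRING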
12-13-2016
11:53 PM
3 Kudos
@Andi Sonde Rack awareness for Kafka works similarly in principle to HDFS rack awareness. If you are able to define which rack each of your nodes belongs to, then Kafka can intelligently allocate replicas on nodes that do not share the same rack. This gives you better fault tolerance: if a rack goes down due to maintenance or power loss, there is a reduced chance that the leader and all of the replicas are located in that single rack. Obviously, this feature is only beneficial if you are able to spread your Kafka brokers across racks. Without rack awareness, Kafka has no way to know which nodes are in a common rack, which means it is possible for all of the brokers holding replicas for a topic to be taken offline when a single rack goes down.
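The rack is declared per broker in server.properties (a minimal sketch; the rack IDs are arbitrary labels you choose, and every broker should have one set):
# server.properties on a broker in the first rack
broker.id=1
broker.rack=rack-1
# server.properties on a broker in the second rack
broker.id=2
broker.rack=rack-2
Once the racks are declared, topics created with a replication factor greater than 1 will have their replicas spread across the declared racks where possible.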
12-13-2016
11:49 PM
@Tony Bolt I'm glad it worked! That is fantastic news. I'm sorry it took so long to ferret out the cause. It seems like a small detail, but it sure does matter. To your point, having separate instance directories will keep your logs nice and clean as well. While your solrconfig.xml has "solr.hdfs.home" defined, that is /not/ the same thing as "solr.home". You could specify solr.home when you start Solr; that's probably what I would do if I were using an init.d script to run 4 different instances on the same host. Please accept my answer if you feel it was useful. 🙂
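For example, each instance can be started with its own Solr home (a sketch; the ports and directories below are hypothetical):
# One Solr home per instance keeps configs, cores, and logs separate
bin/solr start -p 8983 -s /opt/solr/home1
bin/solr start -p 8984 -s /opt/solr/home2
# Equivalent system property form, e.g. from an init.d script: -Dsolr.solr.home=/opt/solr/home1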