Member since 03-14-2016 | 4721 Posts | 1111 Kudos Received | 874 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2527 | 04-27-2020 03:48 AM |
| | 4993 | 04-26-2020 06:18 PM |
| | 4088 | 04-26-2020 06:05 PM |
| | 3299 | 04-13-2020 08:53 PM |
| | 5038 | 03-31-2020 02:10 AM |
03-23-2017
03:59 PM
2 Kudos
@Kent Brodie I can reproduce an issue similar to yours by putting an Apache httpd web server in front of Ambari. I wrote an article on Pig View some time back, and you may be hitting the same issue with Hive View:
https://community.hortonworks.com/content/kbentry/68951/when-webserver-is-installed-in-front-to-the-ambari.html
Your URL uses port 80 (the default HTTP port), which suggests you are accessing the Ambari server through a front-end web server such as Apache httpd. Check your web server configuration to see whether it allows "AllowEncodedSlashes". For reference, see: http://httpd.apache.org/docs/current/mod/core.html#allowencodedslashes
The AllowEncodedSlashes directive allows URLs that contain encoded path separators (%2F for / and, on applicable systems, %5C for \) to be used in the path info.
- With the default value, "Off", such URLs are refused with a 404 (Not Found) error.
- With "On", such URLs are accepted, and encoded slashes are decoded like all other encoded characters.
- With "NoDecode", such URLs are accepted, but encoded slashes are not decoded and are left in their encoded state.
For quick testing, it is best to access Ambari directly on port 8080 instead of going through the web server/front end. The following fixed the issue for me:
# cat /etc/httpd/conf/httpd.conf | grep -A 2 -B 3 AllowEncodedSlashes
<VirtualHost erie1.example.com:80>
ServerAdmin erie1.example.com
ProxyPass / http://erie1.example.com:8080/ nocanon
AllowEncodedSlashes NoDecode
</VirtualHost>
[root@erie1 ~]#
[root@erie1 ~]# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
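To see what an encoded path separator actually looks like on the wire, here is a minimal Python sketch using only the standard library. The path segment used below is a made-up example, not one taken from Ambari:

```python
from urllib.parse import quote, unquote

# A resource identifier that itself contains a slash; a client must encode
# that inner slash as %2F so it stays a single path segment in the URL.
segment = "AUTO_HIVE_INSTANCE/resource"  # hypothetical example value

encoded = quote(segment, safe="")  # encode every reserved character, including '/'
print(encoded)           # AUTO_HIVE_INSTANCE%2Fresource
print(unquote(encoded))  # AUTO_HIVE_INSTANCE/resource
```

With AllowEncodedSlashes Off, httpd rejects a request whose path contains that %2F with a 404 before it ever reaches the proxied Ambari server, which is why the directive matters here.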
03-23-2017
11:30 AM
@Ye Jun
I have uploaded a sample with the "pom.xml" for the version check to the GitHub repo: https://github.com/jaysensharma/MiddlewareMagicDemos/tree/master/HDP_Ambari/Hive/HiveJavaClient
03-23-2017
11:08 AM
@Ye Jun
Try using a different version of the "hive-jdbc" driver jar. For example, I am using "http://repo.hortonworks.com/content/repositories/releases/org/apache/hive/hive-jdbc/1.2.1000.2.5.3.0-37/"
03-23-2017
11:01 AM
@Ye Jun
I see that some issues have been reported with certain versions of the "hive-jdbc" driver jar, so can you please clarify which exact version of this jar you are using? I am able to run my test successfully at my end (for example, I am using "http://repo.hortonworks.com/content/repositories/releases/org/apache/hive/hive-jdbc/1.2.1000.2.5.3.0-37/").
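Pinning down the exact version string matters because these jars use multi-part HDP versions such as 1.2.1000.2.5.3.0-37, which don't compare correctly as plain strings. A small Python helper (my own sketch, not part of any Hive tooling) to compare them numerically:

```python
def parse_version(v):
    """Split a version like '1.2.1000.2.5.3.0-37' into a comparable integer tuple."""
    main, _, build = v.partition("-")
    parts = tuple(int(p) for p in main.split("."))
    return parts + ((int(build),) if build else ())

# Tuples compare element by element, so newer HDP builds sort after older ones.
print(parse_version("1.2.1000.2.5.3.0-37"))
print(parse_version("1.2.1000.2.5.3.0-37") > parse_version("1.2.1000.2.4.0.0-1"))
```

This is just for sanity-checking which jar you actually have on the classpath against the one you intended to use.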
03-23-2017
10:48 AM
@Ye Jun Are you passing the driverName as "org.apache.hive.jdbc.HiveDriver"? Also, when you say it cannot connect with the mentioned URL, does that mean you are getting an error, or is it hanging for a long time? Are the host names resolvable from the host where you are running the code? Example (from the client machine):
# telnet huge-server 2181
# telnet huge-agent02 2181
# telnet huge-agent01 2181
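If telnet is not installed on the client machine, the same reachability check can be sketched in Python with the standard library. The host names below are the ones from the question; the helper function itself is hypothetical:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

for host in ("huge-server", "huge-agent01", "huge-agent02"):
    print(host, "2181 reachable:", port_open(host, 2181))
```

A False for any host points at name resolution or firewall problems rather than the JDBC driver itself.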
03-23-2017
07:42 AM
@Aditya Sharma
I have developed a very simple HBase client completely using Maven, so it should be easier for you to test and run. Please refer to the demo: https://github.com/jaysensharma/MiddlewareMagicDemos/tree/master/HDP_Ambari/HBase_Client
A few changes you will need to make in the code:
config.set("hbase.zookeeper.quorum", "erie3.example.com,erie1.example.com,erie4.example.com,erie2.example.com");
config.set("hbase.zookeeper.property.clientPort", "2181");
config.set("zookeeper.znode.parent", "/hbase-unsecure");
config.addResource(new Path("/PATH/TO/HBase_Client/src/main/resources/hbase-site.xml"));
config.addResource(new Path("/PATH/TO/HBase_Client/src/main/resources/core-site.xml"));
config.addResource(new Path("/PATH/TO/HBase_Client/src/main/resources/hdfs-site.xml"));
Please use your own "hbase-site.xml", "core-site.xml", and "hdfs-site.xml" files. Then build and run it as follows:
# cd /PATH/TO/HBase_Client
# mvn clean install exec:java
03-23-2017
02:21 AM
@Sanaz Janbakhsh
For HDF, I just responded to another thread: https://community.hortonworks.com/questions/90400/minimum-hardware-and-clustering-requirements-for-h.html#answer-90414
03-23-2017
02:16 AM
@Sanaz Janbakhsh Regarding your other queries, on VMs vs. physical servers:
VM-based pros:
1. 'Easier' management of nodes. Some IT infrastructure teams insist on VMs, even if you want to map 1 physical node to 1 virtual node, because all their other infrastructure is based on VMs.
2. Taking advantage of NUMA and memory locality. There are some articles on this from virtual infrastructure providers that you can take a look at.
VM-based disadvantages (examples may vary based on your usage and cluster):
1. Overhead. For example, if you are running 4 VMs per physical node, you are running 4 OSes, 4 DataNode services, 4 NodeManagers, 4 ambari-agents, 4 metrics collectors, and 4 of any other worker service, instead of one of each. These services carry overhead compared to running a single instance of each.
2. Data locality and redundancy. There is now support for physical-node awareness, so no two replicas land on the same physical node, but that requires extra configuration. You might also run into virtual disk performance problems if the disks are not configured properly.
Given a choice, I prefer physical servers. However, it is not always your choice. In those cases, make sure you get the following:
1. Explicit virtual-disk-to-physical-disk mapping. Say you have 2 VMs per physical node and each physical node has 16 data drives: assign 8 drives to one VM and 8 to the other, so physical disks are not shared between VMs.
2. Don't go for more than 2 VMs per physical node, to minimize the overhead from the services running.
For a very basic cluster setup, you can have a simple two-node, non-secure, unicast cluster comprised of three instances of NiFi: the NCM, Node 1, and Node 2. Please see: https://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.2/bk_administration/content/clustering.html
03-23-2017
02:06 AM
@Sanaz Janbakhsh Some of the software and hardware requirements can be found at the following link: https://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.2/bk_ambari-installation/content/system-requirements.html
03-23-2017
01:34 AM
@Eun-seop Yu
If you have a 3-node cluster, you need three VMs (computers/host machines). Every host should have a unique hostname, and the output of the following command on every host should return the correct FQDN:
# hostname -f
- Every host should have an "/etc/hosts" file with the same content in it, so that the hosts can resolve each other. Example:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.101 master
10.10.10.102 slave1
10.10.10.103 slave2
The same entries should be present in the "/etc/hosts" file on master, slave1, and slave2. Do not delete the first two entries (127.0.0.1 and ::1) from the /etc/hosts file, as those are defaults.
- Then you need to set up passwordless SSH from the master host to the slave hosts.
- Then install Ambari and configure the cluster. Please see the Hortonworks video on the same topic, HDP 2.4 Multinode Hadoop Installation using Ambari: https://www.youtube.com/watch?v=zutXwUxmaT4
and https://www.youtube.com/watch?v=bC6Tpfdyy5k
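The "same /etc/hosts content everywhere" requirement above can be sanity-checked with a short Python sketch. The file content and hostnames below are just the example values from this post:

```python
def hosts_entries(text):
    """Parse /etc/hosts-style text into {hostname: ip}, ignoring comments."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name] = ip
    return mapping

sample = """\
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.101 master
10.10.10.102 slave1
10.10.10.103 slave2
"""

entries = hosts_entries(sample)
cluster = ["master", "slave1", "slave2"]
# Every cluster host must resolve, and each must map to a distinct IP.
assert all(h in entries for h in cluster)
assert len({entries[h] for h in cluster}) == len(cluster)
print("hosts file OK for:", cluster)
```

Running the same check against the real /etc/hosts on each node quickly catches copy/paste mistakes such as two hostnames sharing one IP.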