Member since: 09-29-2015
Posts: 286
Kudos Received: 601
Solutions: 60

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 11481 | 03-21-2017 07:34 PM
 | 2897 | 11-16-2016 04:18 AM
 | 1620 | 10-18-2016 03:57 PM
 | 4277 | 09-12-2016 03:36 PM
 | 6242 | 08-25-2016 09:01 PM
01-19-2016
10:40 PM
2 Kudos
Since Hadoop is not typical enterprise software, we are having trouble getting the QA team to understand how it fits into our application landscape. They would like us to have three separate environments for Dev, QA, and Production. Do you typically see this, or do you have any best-practice documentation that we could provide to them?
Labels:
- Apache Hadoop
01-19-2016
07:25 PM
1 Kudo
I have a Hadoop cluster in which each node hangs off a 2 x 8 GB fabric interconnect (48 ports on one rack), and each server has a dedicated 10 GB NIC. To save space on each node, I would like to put the OS on a SAN backed by this Cisco UCS interconnect. All Hadoop data would be stored locally on DAS on the Data Nodes (JBOD), and all Master node (and Edge node) disks would be RAID and contain the master components. Only the OS would be on the SAN instead of local disk. Are there any issues with this?
Labels:
- Apache Hadoop
01-18-2016
11:51 PM
See also the answers to ODBC access via Knox to HiveServer2 with Hive ODBC Driver v2.0.5
01-16-2016
06:38 PM
@Vidya SK Double-check that you have all the Hive JDBC jar files for HDP 2.3.x. From /usr/hdp/current/hive-client/lib/, sftp or scp to your local desktop:
- hive-jdbc.jar

From /usr/hdp/current/hadoop-client:
- hadoop-common.jar
- hadoop-auth.jar

User name: hive, Password: hive. I cannot remember whether the Sandbox is set up with Hive run-as-user = false or not; that needs to be double-checked. As was indicated earlier, use the URL jdbc:hive2://127.0.0.1:10000, as sketched below. See also the answers to a related question.
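If it helps, here is a rough sketch of pulling the jars down and sanity-checking the connection with Beeline first (assuming the usual Sandbox SSH forward on port 2222 and the hive/hive credentials above; adjust host, ports, and paths for your environment):

# Copy the driver jars from the Sandbox to your local desktop
scp -P 2222 root@127.0.0.1:/usr/hdp/current/hive-client/lib/hive-jdbc.jar .
scp -P 2222 root@127.0.0.1:/usr/hdp/current/hadoop-client/hadoop-common.jar .
scp -P 2222 root@127.0.0.1:/usr/hdp/current/hadoop-client/hadoop-auth.jar .

# Verify the HiveServer2 endpoint before wiring up your own JDBC client
beeline -u "jdbc:hive2://127.0.0.1:10000" -n hive -p hive -e "show databases;"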
01-16-2016
06:12 PM
4 Kudos
Question: I am about to initiate the cluster install wizard on a new Ambari install. I reviewed the information on service users at http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_reference_guide/content/_defining_service_users_and_groups_for_a_hd and I am wondering whether I should take the "Skip Group Modifications" option. The doc states: "Choosing this option is typically required if your environment manages groups using LDAP and not on the local Linux machines." In our environment, users and groups are managed via Active Directory (via Centrify). We are planning to enable security on the cluster after it is installed, which will include a host of new users being created, after which many of the initial users and groups will be orphaned. What does the "Skip Group Modifications" option actually do? Should it be used in this case?

Answer: I believe the answer lies in the fact that Ambari runs a groupmod hadoop statement, and either there is no group called hadoop or the operation is not allowed in your environment. Since you will be integrating with LDAP or AD, you should use "Skip Group Modifications": because your Linux nodes reference groups from LDAP, the groupmod hadoop statement would fail.

See http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_Installing_HDP_AMB/content/_customize_services.html:

"Service Account Users and Groups: The service account users and groups are available under the Misc tab. These are the operating system accounts the service components will run as. If these users do not exist on your hosts, Ambari will automatically create the users and groups locally on the hosts. If these users already exist, Ambari will use those accounts. Depending on how your environment is configured, you might not allow groupmod or usermod operations. If this is the case, you must be sure all users and groups are already created and be sure to select the 'Skip group modifications' option on the Misc tab. This tells Ambari to not modify group membership for the service users."

Also, from http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_troubleshooting/content/_resolving_cluster_install_and_configuration_problems.html:

"3.7. Problem: Cluster Install Fails with Groupmod Error. The cluster fails to install with an error related to running groupmod. This can occur in environments where groups are managed in LDAP, and not on local Linux machines. You may see an error message similar to the following one:

Fail: Execution of 'groupmod hadoop' returned 10. groupmod: group 'hadoop' does not exist in /etc/group

3.7.1. Solution: When installing the cluster using the Cluster Installer Wizard, at the Customize Services step, select the Misc tab and choose the Skip group modifications during install option."
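As a rough illustration of why the option matters (not Ambari's exact commands), you can check whether the group resolves from AD/Centrify rather than from /etc/group, and, only if your environment permits local accounts, pre-create the service users and groups yourself:

# 'hadoop' resolving via getent but missing from /etc/group means it comes from AD/LDAP,
# which is exactly the case where groupmod fails and "Skip Group Modifications" applies
getent group hadoop
grep hadoop /etc/group

# Only if local account creation is allowed and you prefer to pre-create accounts
# (hypothetical example users; match them to the service users shown on the Misc tab)
groupadd hadoop
useradd -g hadoop hdfs
useradd -g hadoop yarn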
01-15-2016
04:19 PM
1 Kudo
# Switch to the hdfs superuser, create an HDFS home directory for root,
# hand ownership to root, then return to your original shell
sudo su - hdfs
hdfs dfs -mkdir /user/root
hdfs dfs -chown -R root:hdfs /user/root
exit
01-15-2016
06:14 AM
@Lance Chen You can also access the tutorials from this page: http://hortonworks.com/products/hortonworks-sandbox/#tutorial_gallery — these are the same tutorials found on the welcome page. To solve the port 8888 problem, once you have the VM up and running try the following:
- set the networking mode to host-only
- get the VBox IP after logging in (ssh root@127.0.0.1 -p 2222), then change the root password and run ifconfig
- use that IP in the URL to access the Sandbox, even if the cover page says to use 127.0.0.1
- ensure that port forwarding is enabled for 8888 (as well as the other ports); see the sketch after this list

Also see this article: Sandbox 127001:8080-not-accessible
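If you stay on the default NAT networking instead of host-only, a port-forwarding rule for 8888 can be added from the host. This is a sketch only; the VM name may differ in your VirtualBox setup, and the VM should be powered off when you change it:

# Forward host port 8888 to guest port 8888 on the NAT adapter
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "tutorials,tcp,,8888,,8888"

# List the VM's settings (including existing forwarding rules) to verify
VBoxManage showvminfo "Hortonworks Sandbox" | grep -i "rule"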
01-14-2016
05:08 PM
@Peter Young Is the email on your profile accurate? We want to reach out to you.
01-14-2016
02:48 PM
3 Kudos
@Gerd Koenig Here is your answer: this is how I connected via JXplorer (if you are using the Sandbox, you need to expose port 33389).
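If you want a quick command-line check before pointing JXplorer at it, something like the following works against the Sandbox's demo LDAP once 33389 is forwarded (the bind DN and password here assume the Knox demo LDAP defaults; adjust them for your setup):

# Simple bind and search against the demo LDAP exposed on 33389
ldapsearch -H ldap://127.0.0.1:33389 -x \
  -D "uid=admin,ou=people,dc=hadoop,dc=apache,dc=org" -w admin-password \
  -b "dc=hadoop,dc=apache,dc=org" "(objectclass=person)"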
01-13-2016
03:27 PM
Also, here is an external resource that you may find helpful: Preparing for Hadoop Certification