Member since: 12-10-2015
Posts: 10
Kudos Received: 4
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1022 | 10-13-2016 03:16 PM |
07-23-2018
07:22 PM
When building a production HA Atlas cluster, is it recommended to manage all the required services (Solr, HBase, Kafka, Atlas, etc.) under one Ambari instance, or to break these components out separately? For example, one Ambari instance could manage a standalone HBase HA cluster, another a standalone Kafka HA cluster, and a third the Atlas HA cluster, with the three clusters integrated together. I'm using this architecture diagram as a reference: http://atlas.apache.org/Architecture.html
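Either way the clusters are carved up, Atlas itself is wired to HBase, Solr, and Kafka through atlas-application.properties, so the choice is largely operational. A rough sketch of the HA-related settings, assuming external HBase, SolrCloud, and Kafka endpoints (hostnames are placeholders, and the property names should be double-checked against the Atlas version in use):

# Atlas server HA
atlas.server.ha.enabled=true
atlas.server.ids=id1,id2
atlas.server.address.id1=atlas1.example.com:21000
atlas.server.address.id2=atlas2.example.com:21000
atlas.server.ha.zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# External HBase as the graph store (ZooKeeper quorum of the HBase cluster)
atlas.graph.storage.backend=hbase
atlas.graph.storage.hostname=zk1.example.com,zk2.example.com,zk3.example.com

# External SolrCloud for the search index
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=zk1.example.com:2181/solr

# External Kafka for notifications
atlas.notification.embedded=false
atlas.kafka.bootstrap.servers=kafka1.example.com:9092,kafka2.example.com:9092
atlas.kafka.zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181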
10-18-2016
07:33 PM
This worked for me: https://community.hortonworks.com/articles/52844/hdp-clients-with-multi-version-and-multi-os-suppor.html As stated, it is a manual workaround, but that is fine for my non-production use case. Thanks!
10-17-2016
06:24 PM
1 Kudo
I see the answer below on another post, but how does one actually disable the OS check during the deployment phase? I have HDP 2.5 running on RHEL7, and I'm just trying to register a RHEL6 node to install client tools only, no services. https://community.hortonworks.com/questions/18479/how-to-register-host-with-different-os-to-ambari.html#answer-form
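If the RHEL6 box never needs to be managed by Ambari, one way to sidestep the OS check entirely is to skip agent registration and pull the client packages straight from the HDP repo built for that OS. A rough sketch, assuming a reachable HDP 2.5 centos6 repo (the repo URL, version, and package list below are placeholders to adjust):

# On the RHEL6 node: drop in the HDP repo file for that OS, then install just the clients
wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.0.0/hdp.repo -O /etc/yum.repos.d/hdp.repo
yum install hadoop-client hive hbase
# Finally, copy the cluster's *-site.xml configuration from an existing client node into /etc/hadoop/conf, /etc/hive/conf, etc.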
10-13-2016
03:16 PM
From what I can see, HDP_UTILS does not include MySQL packages. It only provides the Yum repository configuration for the external MySQL repository. If someone is setting up a local repository, it implies no internet access. So the pre-installation instructions under "Setting Up Local Repositories" should include instructions on how to reposync the MySQL packages as well as Ambari, HDP, & HDP_UTILS. This was my workaround...
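A rough sketch of what that extra reposync might look like (the repo id comes from the mysql-community.repo file that mysql-community-release installs, and the paths and web root here are assumptions):

# On a host that does have internet access
yum install yum-utils createrepo
reposync --repoid=mysql56-community --download_path=/var/www/html/repos
createrepo /var/www/html/repos/mysql56-community

Then point a repo file on the cluster nodes at the local mirror, e.g. /etc/yum.repos.d/mysql-local.repo:
[mysql-local]
name=Local MySQL mirror
baseurl=http://repo-host.example.com/repos/mysql56-community
enabled=1
gpgcheck=0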
10-12-2016
04:13 PM
1 Kudo
I've set up local repositories to install a single-node sandbox cluster on an internal VM (no internet access) running RHEL7. While attempting to install HDP 2.5.0 via the Ambari console (2.4.1), the MySQL Server Install step fails because the MySQL Server bits are not included in the Ambari or HDP repos. The HDP-UTILS repo includes the mysql-community-release package, which sets up the mysql-community.repo & mysql-community-source.repo Yum repositories. However, these point to external repositories, so they are not accessible. In my opinion, if the install requires bits from other 3rd party repositories, then those bits need to be included in the Ambari, HDP, and HDP-UTILS repositories. Otherwise, folks need to chase those bits down and include them in their local repositories for installations to work.
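For anyone verifying the same behavior, the external pointing is easy to confirm on the node once mysql-community-release is installed (the exact baseurl values will vary by OS and MySQL version):

# List the files the release package drops, then inspect the repo definitions it installed
rpm -ql mysql-community-release
cat /etc/yum.repos.d/mysql-community.repo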
03-17-2016
04:31 PM
Peter, can you add the code snippet you changed here? I'm seeing the same issue when trying to install HDP 2.4.0.0-169.
02-17-2016
02:59 PM
1 Kudo
Looks to be set properly.
[root@localhost data]# grep agent.package.install.task.timeout /etc/ambari-server/conf/ambari.properties
agent.package.install.task.timeout=1800
[root@localhost data]#
02-08-2016
02:01 PM
1 Kudo
I'm receiving the following error when running the HDFS service check. It seems to be timing out on a WebHDFS PUT statement.
Ambari version: 2.1.1
Python script has been killed due to timeout after waiting 300 secs
2016-02-08 08:37:18,268 - ExecuteHadoop['dfsadmin -fs hdfs://vxkid-phdpdv05.lmig.com:8020 -safemode get | grep OFF'] {'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'logoutput': True, 'try_sleep': 3, 'tries': 20, 'user': 'hdfs'}
2016-02-08 08:37:18,270 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf dfsadmin -fs hdfs://vxkid-phdpdv05.lmig.com:8020 -safemode get | grep OFF'] {'logoutput': True, 'try_sleep': 3, 'environment': {}, 'tries': 20, 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Safe mode is OFF
2016-02-08 08:37:21,067 - HdfsResource['/tmp'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://vxkid-phdpdv05.lmig.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': None, 'user': 'hdfs', 'action': ['create_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'mode': 0777}
2016-02-08 08:37:21,071 - checked_call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://vxkid-phdpdv05.lmig.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp5cvrI_ 2>/tmp/tmpcKO2SB''] {'logoutput': None, 'quiet': False}
2016-02-08 08:37:21,259 - checked_call returned (0, '')
2016-02-08 08:37:21,260 - HdfsResource['/tmp/idb60a9aca_date370816'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://vxkid-phdpdv05.lmig.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': None, 'user': 'hdfs', 'action': ['delete_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file'}
2016-02-08 08:37:21,261 - checked_call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://vxkid-phdpdv05.lmig.com:50070/webhdfs/v1/tmp/idb60a9aca_date370816?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpggYpl7 2>/tmp/tmpW_lYIG''] {'logoutput': None, 'quiet': False}
2016-02-08 08:37:21,426 - checked_call returned (0, '')
2016-02-08 08:37:21,427 - HdfsResource['/tmp/idb60a9aca_date370816'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/etc/passwd', 'default_fs': 'hdfs://vxkid-phdpdv05.lmig.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': None, 'user': 'hdfs', 'action': ['create_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file'}
2016-02-08 08:37:21,428 - checked_call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://vxkid-phdpdv05.lmig.com:50070/webhdfs/v1/tmp/idb60a9aca_date370816?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp1n4fuh 2>/tmp/tmpwXsc1P''] {'logoutput': None, 'quiet': False}
2016-02-08 08:37:21,593 - checked_call returned (0, '')
2016-02-08 08:37:21,594 - Creating new file /tmp/idb60a9aca_date370816 in DFS
2016-02-08 08:37:21,595 - checked_call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT -T /etc/passwd '"'"'http://vxkid-phdpdv05.lmig.com:50070/webhdfs/v1/tmp/idb60a9aca_date370816?op=CREATE&user.name=hdfs&overwrite=True'"'"' 1>/tmp/tmp9ChlXL 2>/tmp/tmpCmlsMf''] {'logoutput': None, 'quiet': False}
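One way to narrow this down is to rerun the failing step by hand with the same curl call the service check issues (the host comes from the log above; the test file path is arbitrary). WebHDFS CREATE is a two-step operation: the NameNode replies with a 307 redirect to a DataNode and the file contents are then PUT to that DataNode, so a hang at exactly this point often means the DataNode host in the redirect is not resolvable or reachable from the node running the check.

curl -sS -i -L -X PUT -T /etc/passwd "http://vxkid-phdpdv05.lmig.com:50070/webhdfs/v1/tmp/webhdfs_check_test?op=CREATE&user.name=hdfs&overwrite=true"
# -i shows the 307 redirect and which DataNode host:port the data is being sent to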