<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Ambari custom install failure in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102493#M65428</link>
    <description>&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/1348-failed-cluster-install.pdf" target="_blank"&gt;failed-cluster-install.pdf&lt;/A&gt; Hi all,&lt;/P&gt;&lt;P&gt;I have test-driven the sandbox for a while, and I decided to take my knowledge to the next level. I got four refurbished PowerEdge servers: two Dell 2850s and two Dell 2950s. The Ambari server preparation and host discovery were successful (see the attached PDF). In the Assign Masters step I realised the Ambari server was overloaded, so I reassigned some components to other servers. I was lost when it came to Assign Slaves and Clients; only one of the servers had been checked, so I decided to go with the defaults. The install, start and smoke test failed (see the attached PDF).&lt;/P&gt;&lt;P&gt;I don't intend to create multiple users across the cluster. How do I achieve this, and which file should I edit prior to the launch? Below is an extract of the user-creation log:&lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - Group['hadoop'] {'ignore_failures': False} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - Group['users'] {'ignore_failures': False} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - Group['knox'] {'ignore_failures': False} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,321 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,322 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}&lt;/P&gt;&lt;P&gt;Any advice is welcome.&lt;/P&gt;</description>
    <pubDate>Fri, 16 Sep 2022 09:57:18 GMT</pubDate>
    <dc:creator>Shelton</dc:creator>
    <dc:date>2022-09-16T09:57:18Z</dc:date>
    <item>
      <title>Ambari custom install failure</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102493#M65428</link>
      <description>&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/1348-failed-cluster-install.pdf" target="_blank"&gt;failed-cluster-install.pdf&lt;/A&gt; Hi all,&lt;/P&gt;&lt;P&gt;I have test-driven the sandbox for a while, and I decided to take my knowledge to the next level. I got four refurbished PowerEdge servers: two Dell 2850s and two Dell 2950s. The Ambari server preparation and host discovery were successful (see the attached PDF). In the Assign Masters step I realised the Ambari server was overloaded, so I reassigned some components to other servers. I was lost when it came to Assign Slaves and Clients; only one of the servers had been checked, so I decided to go with the defaults. The install, start and smoke test failed (see the attached PDF).&lt;/P&gt;&lt;P&gt;I don't intend to create multiple users across the cluster. How do I achieve this, and which file should I edit prior to the launch? Below is an extract of the user-creation log:&lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - Group['hadoop'] {'ignore_failures': False} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - Group['users'] {'ignore_failures': False} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - Group['knox'] {'ignore_failures': False} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,320 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,321 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']} &lt;/P&gt;&lt;P&gt;2016-01-14 00:25:23,322 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}&lt;/P&gt;&lt;P&gt;Any advice is welcome.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:57:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102493#M65428</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2022-09-16T09:57:18Z</dc:date>
    </item>
    <item>
      <title>Re: Ambari custom install failure</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102494#M65429</link>
      <description>&lt;P&gt;It's easier to do a reinstall, as you may have multiple issues. Otherwise, go through each node and install the clients, etc. &lt;A rel="user" href="https://community.cloudera.com/users/1271/sheltong.html" nodeid="1271"&gt;@Geoffrey Shelton Okot&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 14 Jan 2016 21:10:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102494#M65429</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2016-01-14T21:10:41Z</dc:date>
    </item>
    <item>
      <title>Re: Ambari custom install failure</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102495#M65430</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/1271/sheltong.html" nodeid="1271"&gt;@Geoffrey Shelton Okot&lt;/A&gt;&lt;P&gt;resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadooppolicy.xml']
failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist&lt;/P&gt;&lt;P&gt;If there have been multiple attempts to install the cluster, then I recommend looking into the option to clean up the install completely and reinstall.&lt;/P&gt;</description>
      <pubDate>Thu, 14 Jan 2016 21:16:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102495#M65430</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2016-01-14T21:16:43Z</dc:date>
    </item>
    <item>
      <title>Re: Ambari custom install failure</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102496#M65431</link>
      <description>&lt;P&gt;@Artem @neeraj Thanks, guys, for your responses. As you realise, HDP creates a number of users, which are a bit difficult to manage across the cluster.&lt;/P&gt;&lt;P&gt;1. I want to have only one user, e.g. tom, own all of hive, hdfs, pig, etc., since it is easy to ssh to any server and quickly be effective while avoiding su or sudo. Which file should I edit to achieve this?&lt;/P&gt;&lt;P&gt;2. I have done a lot of Linux installs, and the default HDP filesystem layout doesn't please me at all. I want to install HDP outside the /var, /usr, and /etc directories, so that if anything goes wrong I can just delete all the files in that partition and relaunch after some minor cleanup. My reasoning is that I would like to allocate /u01 with about 300 GB of HDD on each of the 4 servers in the cluster, so I end up with 1.2 TB for data after the HDFS format.&lt;/P&gt;</description>
      <pubDate>Thu, 14 Jan 2016 23:31:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102496#M65431</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2016-01-14T23:31:30Z</dc:date>
    </item>
    <item>
      <title>Re: Ambari custom install failure</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102497#M65432</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1271/sheltong.html" nodeid="1271"&gt;@Geoffrey Shelton Okot&lt;/A&gt;&lt;/P&gt;&lt;P&gt;During the installation, you can (and should) specify where the data for HDFS resides by editing the HDFS configuration parameters before deploying the cluster. By default, Ambari picks up any filesystems besides / and puts them in the list. If you don't edit the namenode and datanode directories, you will be using /tmp, /var, /usr, etc., to store data and metadata if those are separate filesystems on your system.&lt;/P&gt;&lt;P&gt;The bits get installed under /usr/hdp. That cannot be modified. The packages (RPMs) are built to put things in this standard location, and any third-party applications that expect the binaries and configs to be in the standard locations will not be able to function otherwise.&lt;/P&gt;&lt;P&gt;Likewise, during the installation, you can specify the service user accounts if you don't wish to use the default usernames. Since, in an unsecured cluster, any user can access the data stored in HDFS, you don't need to consolidate the service accounts in order to be productive right off the bat. If you are securing your cluster, then you won't want to run these services as the same username anyway, because that can open a security hole if you want to separate the users who can access certain functions on the cluster.&lt;/P&gt;&lt;P&gt;There is a Python script that can be used to clean up a failed installation. If Ambari detects users that already exist, or a few other conditions that suggest a failed install, it will recommend that you run this script to clean up the systems before proceeding with the installation. Here is where the script lives, along with the help output for running it:&lt;/P&gt;&lt;PRE&gt;[root@sandbox ~]# python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --help
Usage: HostCleanup.py [options]
Options:
  -h, --help            show this help message and exit
  -v, --verbose         output verbosity.
  -f FILE, --file=FILE  host check result file to read.
  -o FILE, --out=FILE   log file to store results.
  -k SKIP, --skip=SKIP  (packages|users|directories|repositories|processes|alt
                        ernatives). Use , as separator.
  -s, --silent          Silently accepts default prompt values
&lt;/PRE&gt;</description>
      <pubDate>Fri, 15 Jan 2016 21:43:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102497#M65432</guid>
      <dc:creator>emaxwell</dc:creator>
      <dc:date>2016-01-15T21:43:12Z</dc:date>
    </item>
    <item>
      <title>Re: Ambari custom install failure</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102498#M65433</link>
      <description>&lt;P&gt;@&lt;A href="https://community.hortonworks.com/users/98/emaxwell.html"&gt;emaxwell&lt;/A&gt; @Artem @neeraj&lt;/P&gt;&lt;P&gt;Gentlemen, thanks for all your responses. It's unfortunate that the bits can't be installed anywhere other than /usr/hdp, and furthermore the administration of the various named users could have been simplified. I am from an Oracle Applications background, where there are at most two users for the EBS application and database. I will reformat the 4 servers. &lt;A rel="user" href="https://community.cloudera.com/users/98/emaxwell.html" nodeid="98"&gt;@emaxwell&lt;/A&gt; you have a very valid argument on the segregation of duties; I will try to incorporate that "security concern", as I don't want some dark angel poking holes in my production cluster.&lt;/P&gt;</description>
      <pubDate>Mon, 18 Jan 2016 17:01:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Ambari-custom-install-failure/m-p/102498#M65433</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2016-01-18T17:01:16Z</dc:date>
    </item>
  </channel>
</rss>

