
Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Hi Lav,

I downloaded and installed the 'Hortonworks HDP 2.5 Sandbox on VM' from the Hortonworks download site.

This VM image comes with the recommended version of Ambari (2.4?) already installed, as a single-host Docker sandbox implementation suitable for a machine like a laptop.

The tarball is the one on the Hortonworks download site, listed there with MD5 d42a9bd11f29775cc5b804ce82a72efd.
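(A quick way to confirm that the download is intact is to compare its MD5 checksum against the one published on the download page. The filename below is only a placeholder for whatever archive was actually downloaded:)

# Verify the sandbox download against the checksum from the download page.
# "HDP_2.5_sandbox.tar.gz" is a placeholder name, not the real file name.
md5sum HDP_2.5_sandbox.tar.gz
# expect: d42a9bd11f29775cc5b804ce82a72efd  HDP_2.5_sandbox.tar.gz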

I also downloaded the latest available version of Pivotal HDB (2.1.0) from the Pivotal site, to set up the HDB/HAWQ services via Ambari once the HDP 2.5 VM was installed on my laptop.

Many Thanks.

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor
Hi Lav, I cannot update the existing version of Ambari (2.4.0) that is already part of the HDP 2.5 VM.
I tried to do so, and the whole HDP 2.5 platform failed to start! I therefore re-installed the HDP 2.5 VM and the latest Pivotal HDB 2.1.0 HAWQ services for HDP 2.5 on it.

Please see the HAWQ script output below, from re-installing the Pivotal HDB 2.1.0.0 files on the HDP 2.5 VM:
[root@sandbox /]# cd /var/lib/hawq
[root@sandbox hawq]# ./add-hawq.py --user admin --password admin --stack HDP-2.5               
INFO: Repository hdb-2.1.0.0 with baseurl http://sandbox.hortonworks.com/hdb-2.1.0.0 added to /var/lib/ambari-server/resources/stacks/HDP/2.5/repos/repoinfo.xml
INFO: Repository hdb-add-ons-2.1.0.0 with baseurl http://sandbox.hortonworks.com/hdb-add-ons-2.1.0.0 added to /var/lib/ambari-server/resources/stacks/HDP/2.5/repos/repoinfo.xml
INFO: HAWQ directory was successfully created under directory /var/lib/ambari-server/resources/stacks/HDP/2.5/services
INFO: PXF directory was successfully created under directory /var/lib/ambari-server/resources/stacks/HDP/2.5/services
INFO: Please restart ambari-server for changes to take effect
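After that restart, it is worth sanity-checking that the repositories and stack entries landed where the INFO lines above say. A minimal check, using only the paths from that output:

[root@sandbox /]# ambari-server restart
[root@sandbox /]# grep hdb /var/lib/ambari-server/resources/stacks/HDP/2.5/repos/repoinfo.xml
[root@sandbox /]# ls /var/lib/ambari-server/resources/stacks/HDP/2.5/services/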


Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Hi Lav, I attempted to upgrade the Ambari server to v2.4.2 as recommended. The upgrade reported success, but when I then issued the 'ambari-server start' command I got the following error.

Please review [ ERROR: Exiting with exit code -1. REASON: Ambari Server java process died with exitcode 255. ] and let me know how to fix it.

Output from the SSH session, logged in as root, is below:

Running Transaction                                                                             
mv: cannot move `/etc/ambari-agent/conf/ambari-agent.ini' to a subdirectory of itself, `/etc/ambari-agent/conf/ambari-agent.ini.old'
mv: cannot move `/var/lib/ambari-agent/cache/stacks' to a subdirectory of itself, `/var/lib/ambari-agent/cache/stacks_13_12_16_16_41.old'
mv: cannot move `/var/lib/ambari-agent/cache/common-services' to a subdirectory of itself, `/var/lib/ambari-agent/cache/common-services_13_12_16_16_41.old'
  Updating   : ambari-agent-2.4.2.0-136.x86_64                 1/2
Verifying  : ambari-agent-2.4.2.0-136.x86_64                 1/2 
Verifying  : ambari-agent-2.4.0.0-1225.x86_64                2/2 
Updated: ambari-agent.x86_64 0:2.4.2.0-136                                                                         
[root@sandbox /]# rpm -qa | grep ambari-agent                                                               
ambari-agent-2.4.2.0-136.x86_64                                                                             
[root@sandbox /]# ambari-server upgrade
Using python  /usr/bin/python
Upgrading ambari-server                                                                        
Updating properties in ambari.properties ...                                                   
WARNING: Original file ambari-env.sh kept                                                                   
ERROR: Unexpected OSError: [Errno 22] Invalid argument: '/var/lib/ambari-server/data/tmp/solr-service/custom-services/SOLR/5.5.2.2.5'                                                         
(Also ran ambari-server with the -v / --verbose option.)
-------------------
[root@sandbox /]# ambari-server start
Starting ambari-server/python
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...                                                                
No errors were found.                                                                                       
Ambari database consistency check finished                                                 
Server out at: /var/log/ambari-server/ambari-server.out                                        
Server log at: /var/log/ambari-server/ambari-server.log                                       
ERROR: Exiting with exit code -1.                                                                           
REASON: Ambari Server java process died with exitcode 255. 
Check /var/log/ambari-server/ambari-server.out for more information.
(ambari-server.out contains only: OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0)

From ambari-server.log:

13 Dec 2016 17:00:10,329  INFO [main] HostRoleCommandDAO:258 - Host role command status summary cache enabled !
13 Dec 2016 17:00:10,330  INFO [main] TransactionalLock$LockArea:121 - LockArea HRC_STATUS_CACHE is enabled
13 Dec 2016 17:00:10,581  INFO [main] AmbariServer:914 - Getting the controller
13 Dec 2016 17:00:11,769 ERROR [main] AmbariServer:929 - Failed to run the Ambari Server
org.apache.ambari.server.AmbariException: Current database store version is not compatible with current server version, serverVersion=2.4.2.0, schemaVersion=2.4.0
        at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.checkDBVersionCompatible(DatabaseConsistencyCheckHelper.java:147)
        at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:919)
[root@sandbox /]#
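The log explains the exit code 255: 'ambari-server upgrade' died on the SOLR OSError before it migrated the database schema, so the 2.4.2.0 server now refuses to start against the 2.4.0 schema. A minimal recovery sketch, assuming the stale solr-service tmp directory is what aborts the upgrade:

[root@sandbox /]# rm -rf /var/lib/ambari-server/data/tmp/solr-service   # clear the path the OSError names
[root@sandbox /]# ambari-server upgrade    # must run to completion; this migrates the schema to 2.4.2
[root@sandbox /]# ambari-server start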
                                                

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Hi Lav, I tried to upgrade the Ambari server again, closely following the 'Ambari Upgrade' documentation on Hortonworks, and got the following error when restarting Ambari v2.4.2.

Please review it and let me know how I can fix this so I can continue with the HAWQ install, thank you:

Dependencies Resolved                                                                                       
============================================================================================================

 Package                 Arch              Version                  Repository                         Size 
============================================================================================================
Updating:
 ambari-agent            x86_64            2.4.2.0-136              Updates-ambari-2.4.2.0             22 M 

Transaction Summary                                                                                         
============================================================================================================

Upgrade       1 Package(s)                                                                                  
Total download size: 22 M                                                                                   
Is this ok [y/N]: y                                                                                         

ambari-agent-2.4.2.0-136.x86_64.rpm                                                  |  22 MB     01:05     
Running rpm_check_debug                                                                                     

Transaction Test Succeeded                                                                                  
Running Transaction                                                                                         

mv: cannot move `/etc/ambari-agent/conf/ambari-agent.ini' to a subdirectory of itself, `/etc/ambari-agent/conf/ambari-agent.ini.old'
mv: cannot move `/var/lib/ambari-agent/cache/stacks' to a subdirectory of itself, `/var/lib/ambari-agent/cache/stacks_13_12_16_16_41.old'
mv: cannot move `/var/lib/ambari-agent/cache/common-services' to a subdirectory of itself, `/var/lib/ambari-agent/cache/common-services_13_12_16_16_41.old'
  Updating   : ambari-agent-2.4.2.0-136.x86_64                                                          1/2 

  Verifying  : ambari-agent-2.4.2.0-136.x86_64                                                          1/2 
  Verifying  : ambari-agent-2.4.0.0-1225.x86_64                                                         2/2 

Updated:                                                                                                    
  ambari-agent.x86_64 0:2.4.2.0-136                                                                         
[root@sandbox /]# rpm -qa | grep ambari-agent                                                               
ambari-agent-2.4.2.0-136.x86_64                                                                             

[root@sandbox /]# ambari-server upgrade
Using python  /usr/bin/python
Upgrading ambari-server                                                                                     
Updating properties in ambari.properties ...                                                                
WARNING: Original file ambari-env.sh kept                                                                   
ERROR: Unexpected OSError: [Errno 22] Invalid argument: '/var/lib/ambari-server/data/tmp/solr-service/custom-services/SOLR/5.5.2.2.5'

(Also ran ambari-server with the -v / --verbose option.)
[root@sandbox /]# ambari-server start                                                                       

Starting ambari-server/python                                                                               
Ambari Server running with administrator privileges.                                                        
Organizing resource files at /var/lib/ambari-server/resources...                                            
Ambari database consistency check started...                                                                
No errors were found.                                                                                       
Ambari database consistency check finished                                                                  

Server out at: /var/log/ambari-server/ambari-server.out                                                     
Server log at: /var/log/ambari-server/ambari-server.log                                                     

ERROR: Exiting with exit code -1.                                                                           
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Hi Leonard,

I checked both the Docker and the VirtualBox sandboxes for HDP 2.5, and they both contain hawqmaster.py.

I have not had time to try the HDB 2.1 install yet. The upgrade to Ambari 2.4.2 is independent of the HDP install (it looks like the upgrade of the SOLR component has a problem).
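For anyone repeating that check, one way to confirm from a sandbox shell (a sketch; it just searches the two Ambari directories involved):

[root@sandbox /]# find /var/lib/ambari-server /var/lib/ambari-agent -name hawqmaster.py 2>/dev/null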

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Lav, that means I cannot upgrade Ambari from 2.4.0 to 2.4.2 on HDP 2.5 as you advised, due to the SOLR error encountered during the Ambari upgrade process.

It looks like Pivotal HDB 2.1 (not HDP 2.1 as you mentioned) may be deleting the existing hawqmaster.py script in Hortonworks HDP 2.5 when deploying the HAWQ services via Ambari.

This seems to be a big problem to fix. I'm truly stuck here, as I wanted to present the HAWQ + MADlib capabilities integrated with HDP to potential clients. Please provide a solution!

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Pivotal HDB 2.1 does not delete the hawqmaster.py script. It includes a plugin that copies metainfo.xml onto the HDP 2.5 stack directory inside Ambari so that Ambari recognizes HDB 2.1 as part of the stack. I can take a look at your cluster if you deploy it to an AWS instance using the HDP 2.5 docker sandbox. One other option would be to uninstall SOLR before upgrading.
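If you try the second option, one way to remove SOLR before upgrading is through the Ambari REST API (a sketch only; the cluster name 'Sandbox' is an assumption, so confirm it with the first call, and the service has to be stopped before it can be deleted):

# List clusters to confirm the cluster name (assumed to be "Sandbox" below):
curl -u admin:admin http://sandbox.hortonworks.com:8080/api/v1/clusters
# Stop SOLR:
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop SOLR"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/SOLR
# Delete the stopped service:
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
  http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/SOLR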

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Lav, can you determine why this error occurs when deploying the HAWQ/PXF services with Ambari on HDP 2.5?

I downloaded the HDP 2.5 sandbox and Pivotal HDB 2.1 again (in case there had been later code patches), but the error still occurs on service deployment, on the ambari-agent side only. It refers to HAWQ version '2.0.0' in the agent's common-services cache. Why HAWQ 2.0.0, when I downloaded and installed HDB/HAWQ 2.1.0?

Please see the script messages and full error details below:

(1) 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist';

(2) 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist';

(3) 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist';

---------------------------------------

See the ERROR output details below:

HAWQ MASTER INSTALL

stderr: Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'

stdout: Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist' Command failed after 1 tries

-----------------------------------------------------------

HAWQ SEGMENT INSTALL

stderr: Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist'

stdout: Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist' Command failed after 1 tries

---------------------------------------------------------------------

PXF INSTALL:

stderr: Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist'

stdout: Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist' Command failed after 1 tries

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Hi Leonard,

Just to make sure you are dealing with the correct versions:

HDB - 2.1

HDP - 2.5

AMBARI - 2.4.1

HAWQ-AMBARI PLUGIN - 2.1.0

OS - CentOS 6.x (CentOS 7 is not supported)

Also, can you confirm the hawq scripts are present on the ****AMBARI SERVER**** in

/var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/package/scripts/

When the Ambari server installs hawq, it pushes these script files to the hawq hosts with the help of the ambari agents. You should see the files pushed by the AMBARI SERVER to the hawq hosts in

/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/ (this is the cache directory)

Having said that, you can either start the installation from scratch, or restart all the Ambari servers and agents in the cluster and retry the installation (a minimal sketch of that check follows below). The idea is that all these scripts must exist in both the Ambari server and Ambari agent cache folders. If you are not seeing this, it could be an issue specific to your environment. If you need further/detailed investigation of this issue, please engage Hortonworks or Pivotal.
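A minimal version of that check and restart, using only the paths quoted above:

# On the Ambari server host: confirm the service scripts exist.
ls /var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/package/scripts/
ls /var/lib/ambari-server/resources/common-services/PXF/3.0.0/package/scripts/

# Restart server and agent so the agent re-syncs its cache, then re-check it.
ambari-server restart
ambari-agent restart
ls /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/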

Thanks

Pratheesh Nair

Re: Pivotal HDB/HAWQ latest version 2.1.0 install fails on HDP v2.5, with the following ERROR

New Contributor

Hi Pratheesh and Lav,

I have gone back to the previous HDB 2.0.1 HAWQ installation on the HDP 2.4 VM (instead of HDP 2.5 with HDB 2.1).

I have successfully installed both the HAWQ and PXF services, and started the PXF service.

However, the HAWQ MASTER and SEGMENT services fail to start, with the ERROR details below. Please let me know how to configure the HAWQ MASTER service so it starts via Ambari:

------------ERROR Details on starting HAWQ MASTER -------

stderr: /var/lib/ambari-agent/data/errors-247.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py", line 58, in <module>
    HawqMaster().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py", line 45, in start
    master_helper.start_master()
  File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/master_helper.py", line 223, in start_master
    __init_active()
  File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/master_helper.py", line 114, in __init_active
    utils.exec_hawq_operation(hawq_constants.INIT, "{0} -a -v".format(hawq_constants.MASTER))
  File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/utils.py", line 52, in exec_hawq_operation
    logoutput=logoutput)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'source /usr/local/hawq/greenplum_path.sh && hawq init master -a -v' returned 1.
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Prepare to do 'hawq init'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-You can find log in:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_init_20161219.log
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-GPHOME is set to:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/usr/local/hawq/.
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Current user is 'gpadmin'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Parsing config file:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-/usr/local/hawq/./etc/hawq-site.xml
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Init hawq with args: ['init', 'master']
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_host is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_dfs_url is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-No standby host configured, skip it
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check if hdfs path is available
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Check hdfs: /usr/local/hawq/./bin/gpcheckhdfs hdfs sandbox.hortonworks.com:8020/hawq_default off postgres 
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[WARNING]:-2016-12-19 23:10:43.830140, p46099, th140659352119456, WARNING the number of nodes in pipeline is 1 [sandbox.hortonworks.com(10.0.2.15)], is less than the expected number of replica 3 for block [block pool ID: BP-706476385-10.0.2.15-1457965111091 block ID 1073742443_1631] file /hawq_default/testFile
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-1 segment hosts defined
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Set default_hash_table_bucket_number as: 6
20161219:23:10:45:045986 hawq_init:sandbox:gpadmin-[INFO]:-Start to init master
20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[INFO]:-Master postgres initdb failed
20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[ERROR]:-Master init failed, exit
stdout: /var/lib/ambari-agent/data/output-247.txt
2016-12-19 23:10:42,085 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-12-19 23:10:42,085 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-12-19 23:10:42,085 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 23:10:42,109 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-12-19 23:10:42,109 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-19 23:10:42,129 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-12-19 23:10:42,129 - Ensuring that hadoop has the correct symlink structure
2016-12-19 23:10:42,130 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 23:10:42,236 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-12-19 23:10:42,236 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-12-19 23:10:42,236 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 23:10:42,257 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-12-19 23:10:42,257 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-19 23:10:42,279 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-12-19 23:10:42,280 - Ensuring that hadoop has the correct symlink structure
2016-12-19 23:10:42,280 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 23:10:42,281 - Group['hadoop'] {}
2016-12-19 23:10:42,282 - Group['users'] {}
2016-12-19 23:10:42,283 - Group['zeppelin'] {}
2016-12-19 23:10:42,283 - Group['knox'] {}
2016-12-19 23:10:42,283 - Group['ranger'] {}
2016-12-19 23:10:42,283 - Group['spark'] {}
2016-12-19 23:10:42,283 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,284 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,285 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,285 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,286 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,286 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,287 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,287 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-12-19 23:10:42,288 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,288 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,289 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,289 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,290 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,291 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,291 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,292 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,292 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,293 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,293 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,293 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,294 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,294 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-12-19 23:10:42,297 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-12-19 23:10:42,306 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-12-19 23:10:42,306 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-12-19 23:10:42,307 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-12-19 23:10:42,308 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-12-19 23:10:42,314 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-12-19 23:10:42,314 - Group['hdfs'] {}
2016-12-19 23:10:42,314 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-12-19 23:10:42,315 - Directory['/etc/hadoop'] {'mode': 0755}
2016-12-19 23:10:42,327 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-12-19 23:10:42,329 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-12-19 23:10:42,344 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-12-19 23:10:42,351 - Skipping Execute[('setenforce', '0')] due to not_if
2016-12-19 23:10:42,352 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-12-19 23:10:42,354 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-12-19 23:10:42,354 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-12-19 23:10:42,358 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-12-19 23:10:42,361 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-12-19 23:10:42,361 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-12-19 23:10:42,368 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-12-19 23:10:42,369 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-12-19 23:10:42,371 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-12-19 23:10:42,376 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-12-19 23:10:42,381 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-12-19 23:10:42,574 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-12-19 23:10:42,574 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-12-19 23:10:42,574 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 23:10:42,597 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-12-19 23:10:42,597 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-19 23:10:42,620 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-12-19 23:10:42,620 - Ensuring that hadoop has the correct symlink structure
2016-12-19 23:10:42,620 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 23:10:42,622 - Group['gpadmin'] {'ignore_failures': True}
2016-12-19 23:10:42,623 - User['gpadmin'] {'gid': 'gpadmin', 'password': 'saNIJ3hOyqasU', 'ignore_failures': True, 'groups': ['gpadmin', 'hadoop']}
2016-12-19 23:10:42,623 - Skipping failure of User['gpadmin'] due to ignore_failures. Failure reason: 'pwd.struct_passwd' object has no attribute 'pw_password'
2016-12-19 23:10:42,624 - Execute['chown -R gpadmin:gpadmin /usr/local/hawq/'] {'timeout': 600}
2016-12-19 23:10:42,708 - XmlConfig['hdfs-client.xml'] {'group': 'gpadmin', 'conf_dir': '/usr/local/hawq/etc/', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'gpadmin', 'configurations': ...}
2016-12-19 23:10:42,718 - Generating config: /usr/local/hawq/etc/hdfs-client.xml
2016-12-19 23:10:42,719 - File['/usr/local/hawq/etc/hdfs-client.xml'] {'owner': 'gpadmin', 'content': InlineTemplate(...), 'group': 'gpadmin', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-19 23:10:42,741 - XmlConfig['yarn-client.xml'] {'group': 'gpadmin', 'conf_dir': '/usr/local/hawq/etc/', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'gpadmin', 'configurations': ...}
2016-12-19 23:10:42,749 - Generating config: /usr/local/hawq/etc/yarn-client.xml
2016-12-19 23:10:42,749 - File['/usr/local/hawq/etc/yarn-client.xml'] {'owner': 'gpadmin', 'content': InlineTemplate(...), 'group': 'gpadmin', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-19 23:10:42,760 - XmlConfig['hawq-site.xml'] {'group': 'gpadmin', 'conf_dir': '/usr/local/hawq/etc/', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'gpadmin', 'configurations': ...}
2016-12-19 23:10:42,767 - Generating config: /usr/local/hawq/etc/hawq-site.xml
2016-12-19 23:10:42,767 - File['/usr/local/hawq/etc/hawq-site.xml'] {'owner': 'gpadmin', 'content': InlineTemplate(...), 'group': 'gpadmin', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-19 23:10:42,777 - Directory['/tmp/hawq/'] {'owner': 'gpadmin', 'group': 'gpadmin', 'recursive': True}
2016-12-19 23:10:42,777 - Directory['/etc/sysctl.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-12-19 23:10:42,777 - File['/tmp/hawq/hawq_sysctl.conf'] {'owner': 'gpadmin', 'content': ..., 'group': 'gpadmin'}
2016-12-19 23:10:42,778 - Writing File['/tmp/hawq/hawq_sysctl.conf'] because it doesn't exist
2016-12-19 23:10:42,778 - Changing owner for /tmp/hawq/hawq_sysctl.conf from 0 to gpadmin
2016-12-19 23:10:42,778 - Changing group for /tmp/hawq/hawq_sysctl.conf from 0 to gpadmin
2016-12-19 23:10:42,778 - File['/tmp/hawq/hawq_sysctl.conf'] {'action': ['delete']}
2016-12-19 23:10:42,779 - Deleting File['/tmp/hawq/hawq_sysctl.conf']
2016-12-19 23:10:42,779 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-12-19 23:10:42,779 - File['/etc/security/limits.d/gpadmin.conf'] {'owner': 'gpadmin', 'content': '#### HAWQ Limits Parameters  ###########\ngpadmin hard nofile 2900000\ngpadmin soft nproc 131072\ngpadmin hard nproc 131072\ngpadmin soft nofile 2900000\n', 'group': 'gpadmin'}
2016-12-19 23:10:42,779 - File['/usr/local/hawq/etc/gpcheck.cnf'] {'owner': 'gpadmin', 'content': ..., 'group': 'gpadmin', 'mode': 0644}
2016-12-19 23:10:42,782 - File['/usr/local/hawq/etc/slaves'] {'owner': 'gpadmin', 'content': Template('slaves.j2'), 'group': 'gpadmin', 'mode': 0644}
2016-12-19 23:10:42,785 - File['/tmp/hawq_hosts'] {'owner': 'gpadmin', 'content': Template('hawq-hosts.j2'), 'group': 'gpadmin', 'mode': 0644}
2016-12-19 23:10:42,785 - Writing File['/tmp/hawq_hosts'] because it doesn't exist
2016-12-19 23:10:42,785 - Changing owner for /tmp/hawq_hosts from 0 to gpadmin
2016-12-19 23:10:42,785 - Changing group for /tmp/hawq_hosts from 0 to gpadmin
2016-12-19 23:10:42,787 - File['/home/gpadmin/.hawq-profile.sh'] {'owner': 'gpadmin', 'content': Template('hawq-profile.sh.j2'), 'group': 'gpadmin'}
2016-12-19 23:10:42,787 - Execute['echo 'source /home/gpadmin/.hawq-profile.sh' >> /home/gpadmin/.bashrc'] {'not_if': "grep 'source /home/gpadmin/.hawq-profile.sh' /home/gpadmin/.bashrc", 'user': 'gpadmin', 'timeout': 600}
2016-12-19 23:10:42,792 - Skipping Execute['echo 'source /home/gpadmin/.hawq-profile.sh' >> /home/gpadmin/.bashrc'] due to not_if
2016-12-19 23:10:42,792 - Directory['/data/hawq/master'] {'owner': 'gpadmin', 'group': 'gpadmin', 'recursive': True}
2016-12-19 23:10:42,793 - Directory['/tmp'] {'owner': 'gpadmin', 'group': 'gpadmin', 'recursive': True}
2016-12-19 23:10:42,793 - Execute['chmod 700 /data/hawq/master'] {'user': 'root', 'timeout': 600}
2016-12-19 23:10:42,854 - Execute['source /usr/local/hawq/greenplum_path.sh && hawq ssh-exkeys -f /tmp/hawq_hosts -p [PROTECTED]'] {'logoutput': True, 'not_if': None, 'only_if': None, 'user': 'gpadmin', 'timeout': 900}
[STEP 1 of 5] create local ID and authorize on local host
  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts

[INFO] completed successfully
2016-12-19 23:10:43,425 - File['/tmp/hawq_hosts'] {'action': ['delete']}
2016-12-19 23:10:43,425 - Deleting File['/tmp/hawq_hosts']
2016-12-19 23:10:43,426 - HdfsResource['/hawq_default'] {'security_enabled': False, 'keytab': [EMPTY], 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'recursive_chown': True, 'owner': 'gpadmin', 'group': 'gpadmin', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0755}
2016-12-19 23:10:43,430 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://sandbox.hortonworks.com:50070/webhdfs/v1/hawq_default?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpGAH55T 2>/tmp/tmpVbzoqz''] {'logoutput': None, 'quiet': False}
2016-12-19 23:10:43,467 - call returned (0, '')
2016-12-19 23:10:43,468 - HdfsResource[None] {'security_enabled': False, 'keytab': [EMPTY], 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute']}
2016-12-19 23:10:43,469 - Execute['source /usr/local/hawq/greenplum_path.sh && hawq init master -a -v'] {'logoutput': True, 'not_if': None, 'only_if': None, 'user': 'gpadmin', 'timeout': 900}
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Prepare to do 'hawq init'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-You can find log in:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_init_20161219.log
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-GPHOME is set to:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/usr/local/hawq/.
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Current user is 'gpadmin'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Parsing config file:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-/usr/local/hawq/./etc/hawq-site.xml
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Init hawq with args: ['init', 'master']
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_host is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_dfs_url is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-No standby host configured, skip it
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check if hdfs path is available
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Check hdfs: /usr/local/hawq/./bin/gpcheckhdfs hdfs sandbox.hortonworks.com:8020/hawq_default off postgres 
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[WARNING]:-2016-12-19 23:10:43.830140, p46099, th140659352119456, WARNING the number of nodes in pipeline is 1 [sandbox.hortonworks.com(10.0.2.15)], is less than the expected number of replica 3 for block [block pool ID: BP-706476385-10.0.2.15-1457965111091 block ID 1073742443_1631] file /hawq_default/testFile
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-1 segment hosts defined
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Set default_hash_table_bucket_number as: 6
20161219:23:10:45:045986 hawq_init:sandbox:gpadmin-[INFO]:-Start to init master
20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[INFO]:-Master postgres initdb failed 

20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[ERROR]:-Master init failed, exit
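The actionable line is 'Master postgres initdb failed'; the init log named in the output usually carries the underlying reason. A sketch of where one might look next, using only paths that appear above (the ownership check is a common initdb culprit, not a confirmed diagnosis):

# Read the hawq init log the output points at:
tail -n 100 /home/gpadmin/hawqAdminLogs/hawq_init_20161219.log

# Check the master data directory: it should be empty and owned by gpadmin.
ls -la /data/hawq/master

# Re-run the failing step by hand as gpadmin to see the full initdb output:
su - gpadmin -c "source /usr/local/hawq/greenplum_path.sh && hawq init master -a -v"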
