Member since: 02-15-2019
Posts: 27
Kudos Received: 1
Solutions: 0
07-19-2017
10:27 PM
Configuring an application in the Streaming Analytics Manager (SAM) in HDF 3: when I dragged a KAFKA source tile onto the Application Builder canvas and double-clicked it to configure the mandatory 'KAFKA TOPIC*' field, selecting the correct topic from the dropdown list produced the error 'SCHEMA NOT FOUND' from the Application Builder service. I have attached a screenshot of the error encountered below: screen-shot-2017-07-18-at-12057-pm.png
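For anyone hitting the same thing: in SAM the Kafka source looks up an Avro schema in the Schema Registry whose name matches the selected topic, so 'SCHEMA NOT FOUND' usually means no schema has been registered under that topic name yet. A minimal sketch of registering one over the registry's REST API - the port (7788) and endpoint paths are assumed defaults for HDF's Schema Registry, and 'my-topic' plus the Avro schema text are placeholders, so verify against your registry's Swagger UI:

# Create the schema metadata, named after the Kafka topic (placeholder: my-topic).
curl -s -X POST "http://sandbox.hortonworks.com:7788/api/v1/schemaregistry/schemas" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-topic", "type": "avro", "schemaGroup": "Kafka", "description": "schema for my-topic", "compatibility": "BACKWARD", "evolve": true}'

# Add a first schema version (minimal placeholder Avro record).
curl -s -X POST "http://sandbox.hortonworks.com:7788/api/v1/schemaregistry/schemas/my-topic/versions" \
  -H "Content-Type: application/json" \
  -d '{"description": "v1", "schemaText": "{\"type\":\"record\",\"name\":\"MyRecord\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}"}'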
07-18-2017
09:23 PM
Hi Shriharsha, here is the screenshot (attached) for the second error, posted earlier: screen-shot-2017-07-18-at-12057-pm.png
07-17-2017
10:13 PM
Hi Shriharsha, thanks for the advice to run the 'bootstrap.sh' script. It resolved the issue and I was able to create the application. However, when I dragged a KAFKA source tile onto the Application Builder canvas and double-clicked it to configure the mandatory 'KAFKA TOPIC*' field, selecting the correct topic from the dropdown list produced the error 'SCHEMA NOT FOUND' from the Application Builder service. How can I resolve this second error? Thank you 🙂
07-14-2017
06:58 PM
When I click the NEW APPLICATION option under "My Applications" in SAM to create a new application, SAM displays the error "Cannot read property 'TopologyComponentUISpecification' of undefined". PLEASE NOTE: (a) I have previously created a valid SERVICE POOL and ENVIRONMENT in SAM, successfully, for use by the 'New Application' builder. (b) How can I resolve the above error, so I can continue to add a new application (using the 'New Application' function) in SAM?
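For the record, the fix that worked here (per the 07-17 follow-up above) was running SAM's bootstrap script, which loads the component definitions and UI specifications this error complains about. A minimal sketch, assuming a default HDF layout where SAM (Streamline) lives under /usr/hdf/current/streamline - the path is an assumption, so adjust it for your install:

# Load SAM's component definitions (UI specs) into its store.
cd /usr/hdf/current/streamline/bootstrap   # assumed default HDF path
./bootstrap.sh
# Then restart the SAM service so the Application Builder reads the fresh definitions.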
05-02-2017
10:39 PM
Hello, when trying to install the NIFI service in HDP v2.6 via Ambari, I get the following error - and the service is NOT installed:
stderr: /var/lib/ambari-agent/data/errors-279.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/NIFI/package/scripts/master.py", line 131, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/NIFI/package/scripts/master.py", line 50, in install
Execute('tar -xf '+params.temp_file+' -C '+ params.nifi_dir +' >> ' + params.nifi_log_file, user=params.nifi_user)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'tar -xf /tmp/nifi-1.1.0.2.1.2.0-10-bin.tar.gz -C /opt/nifi-1.1.0.2.1.2.0-10-bin >> /var/log/nifi/nifi-setup.log' returned 2.
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
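The gzip 'unexpected end of file' means the NiFi tarball in /tmp is truncated or corrupt - typically an interrupted download. A minimal recovery sketch, using the paths from the error message above; verify them on your host first:

# Test whether the downloaded NiFi tarball is intact.
gzip -t /tmp/nifi-1.1.0.2.1.2.0-10-bin.tar.gz && echo "archive OK" || echo "archive corrupt"

# If corrupt, remove it so the install step fetches a fresh copy,
# then retry the NiFi service install from Ambari.
rm -f /tmp/nifi-1.1.0.2.1.2.0-10-bin.tar.gz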
03-01-2017
11:49 PM
Hi Vineet, I have been evaluating your implementation of the HDB 2.1.1 sandbox, as requested, to validate that the MADlib installation works as described in the 'Apache MADlib Confluence' tutorials. However, I get the following error when trying to train the model as specified in the tutorial:
18:30:25 [SELECT - 0 rows, 839.195 secs] [Code: 0, SQL State: XX000] ERROR: plpy.SPIError: failed to acquire resource from resource manager, queued resource request is timed out due to no resource (plpython.c:4663)
Where: Traceback (most recent call last):
PL/Python function "logregr_train", line 23, in <module>
return logistic.logregr_train(**globals())
PL/Python function "logregr_train", line 133, in logregr_train
PL/Python function "logregr_train", line 260, in __logregr_train_compute
PL/Python function "logregr_train", line 75, in __compute_logregr
PL/Python function "logregr_train", line 114, in __enter__
PL/Python function "logregr_train", line 197, in runSQL
PL/Python function "logregr_train"
... 1 statement(s) executed, 0 rows affected, exec/fetch time: 839.195/0.000 sec [0 successful, 1 errors]
QUESTION: what can I do to resolve the error and continue my evaluation of MADlib using HAWQ?
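The 'queued resource request is timed out due to no resource' failure generally means HAWQ's resource manager has no live segments (or no memory) to grant. A first diagnostic pass, sketched with standard HAWQ 2.x tooling (hawq state, the gp_segment_configuration catalog); GPHOME and the 65432 master port are taken from the sandbox posts below, so adjust if yours differ:

# As the HAWQ admin user, check overall cluster health.
su - gpadmin
source /usr/local/hawq/greenplum_path.sh   # adjust GPHOME for your install
hawq state

# A segment with status 'd' (down) leaves the resource manager nothing to
# allocate, which surfaces as exactly this timeout.
psql -p 65432 -d template1 -c "SELECT * FROM gp_segment_configuration;"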
01-26-2017
06:53 PM
Hi Vineet, thanks for uploading the revised version of HDB 2.1.1 on CentOS 6. However, there are two things that must be addressed to allow me to evaluate HDB 2.1.1 effectively: 1) HAWQ can interface with both HIVE and HBASE, but HBASE is currently NOT installed in the sandbox. Additionally, when I try to install the embedded service, the install (HBase Master) FAILS!
.......
2017-01-25 20:25:17,360 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-01-25 20:25:17,360 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-01-25 20:25:17,381 - checked_call returned (0, '')
2017-01-25 20:25:17,382 - Ensuring that hadoop has the correct symlink structure
2017-01-25 20:25:17,382 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-25 20:25:17,385 - checked_call['hostid'] {}
2017-01-25 20:25:17,388 - checked_call returned (0, '000a0f02')
2017-01-25 20:25:17,392 - Version 2.5.3.0-37 was provided as effective cluster version. Using package version 2_5_3_0_37
2017-01-25 20:25:17,393 - Package['hbase_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-01-25 20:25:17,548 - Installing package hbase_2_5_3_0_37 ('/usr/bin/yum -d 0 -e 0 -y install hbase_2_5_3_0_37')
Command failed after 1 tries
2) Please confirm whether MADlib (for machine learning using SQL) is fully installed - if NOT, please direct me to instructions on installing MADlib (v1.9?) within HAWQ 2.1.1. Many thanks!
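On the HBase Master failure: 'Command failed after 1 tries' right after the yum line usually means the HDP repo is unreachable or mis-registered inside the sandbox. Re-running the same install by hand, without the silencing flags Ambari uses, makes yum's real error visible (package name taken from the log above):

# Are the HDP repos visible and enabled?
yum repolist enabled

# Re-run the failing install verbosely so the underlying repo/dependency error prints.
yum -y install hbase_2_5_3_0_37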
01-25-2017
03:27 PM
Hi Janssens, thanks for the link to the HDB HAWQ download for VMware. I already have an Oracle VirtualBox setup (.ova VM files) and would like to continue with this VM stack. I would therefore appreciate it if you could send me the link to the VirtualBox (.ova) download files. Many thanks,
01-20-2017
07:30 PM
Hi Pratheesh, I ran 'remove-compression.sh' on madpack to remove the compression, successfully. However, at the checking/creating PL/Python stage, the installation fails with the following error:
[root@sandbox ~]# su - gpadmin
[gpadmin@sandbox ~]$ /usr/local/hawq_2_1_1_0/madlib/bin/madpack install -s madlib -p hawq -c gpadmin@sandbox.hortonworks.com:65432/template1
madpack.py : INFO : Detected HAWQ version 2.1.
madpack.py : INFO : *** Installing MADlib ***
madpack.py : INFO : MADlib tools version = 1.9.1 (/usr/local/hawq_2_1_1_0/madlib/Versions/1.9.1/bin/../madpack/madpack.py)
madpack.py : INFO : MADlib database version = None (host=sandbox.hortonworks.com:65432, db=template1, schema=madlib)
madpack.py : INFO : Testing PL/Python environment...
madpack.py : ERROR : SQL command failed:
SQL: SELECT plpy_version_for_madlib() AS ver;
ERROR: failed to acquire resource from resource manager, queued resource request is timed out due to no resource (pquery.c:804)
madpack.py : ERROR : MADlib installation failed.
[gpadmin@sandbox ~]$
---- Why does it fail with a timeout from the resource manager ... is there any configuration I should change to allow the installation to complete successfully?
Many Thanks!
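Since this is the same resource-manager timeout seen in the MADlib training run above, it is worth checking how the resource manager is configured before re-running madpack. A sketch using the hawq config CLI; the parameter names are the standard HAWQ 2.x resource-manager GUCs, but verify them against the docs for your build:

su - gpadmin
source /usr/local/hawq/greenplum_path.sh   # adjust for your install

# Standalone ('none') vs YARN resource management?
hawq config -s hawq_global_rm_type

# Per-segment memory available to the resource manager; on a small sandbox VM
# this can be too low for queued requests ever to be served.
hawq config -s hawq_rm_memory_limit_perseg

# After changing a value with 'hawq config -c <name> -v <value>', restart so
# the resource manager picks it up:
# hawq restart cluster -a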
01-18-2017
09:40 PM
Trying to install the MADlib 1.9.1 package (downloaded from the Pivotal site) on HDP 2.4 running HAWQ 2.1.1 gives the following error:
[gpadmin@sandbox hawq_2_1_1_0]$ /usr/local/hawq_2_1_1_0/madlib/bin/madpack install-check -p hawq -c gpadmin/gpadmin@sandbox.hortonworks.com:65432/template1
madpack.py : INFO : Detected HAWQ version 2.1.
madpack.py : ERROR : This version is not among the HAWQ versions for which MADlib support files have been installed (2.0).
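To confirm the mismatch the error describes, the support files MADlib shipped can be listed directly; the Versions/<ver>/ports/hawq/<hawq-ver> layout below is the standard MADlib package layout, so adjust if your package differs:

# Which MADlib versions are installed under this HAWQ home?
ls /usr/local/hawq_2_1_1_0/madlib/Versions/

# Which HAWQ versions does this MADlib build carry support files for?
# (Per the error above, only a 2.0 directory would be present here.)
ls /usr/local/hawq_2_1_1_0/madlib/Versions/1.9.1/ports/hawq/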
01-03-2017
08:02 PM
Hi Lav, I downloaded the HDP 2.5 VM version (NOT 2.4!) from the Hortonworks download site. It is already integrated with Docker and Linux v7.
12-22-2016
05:55 PM
I looked in the HAWQ init log file and extracted details of the error relating to the PostgreSQL initdb failure, below. Please let me know which parameters (or other settings) to reconfigure to allow PostgreSQL initdb to succeed:
----------------------------------------
16-12-22 16:29:34.155702 UTC,,,p178170,th1083267360,,,,0,,,seg-10000,,,,,"FATAL","XX000","could not create shared memory segment: Invalid argument (pg_shmem.c:183)","Failed system call was shmget(key=1, size=506213024, 03600).","This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX.
To reduce the request size (currently 506213024 bytes), reduce PostgreSQL's shared_buffers parameter (currently 4000) and/or its max_connections parameter (currently 3000).
If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
The PostgreSQL documentation contains more information about shared memory configuration.",,,,,,"InternalIpcMemoryCreate","pg_shmem.c",183,1 0x8c7098 postgres errstart + 0x288
2 0x7849fe postgres PGSharedMemoryCreate + 0x22e
3 0x7cf176 postgres CreateSharedMemoryAndSemaphores + 0x336
4 0x8d8509 postgres BaseInit + 0x19
5 0x7e67a2 postgres PostgresMain + 0x482
6 0x4a41ec postgres main + 0x4fc
7 0x7f803c6eed5d libc.so.6 __libc_start_main + 0xfd
8 0x4a4289 postgres <symbol not found> + 0x4a4289
child process exited with exit code 1
initdb: removing contents of data directory "/data/hawq/master"
Master postgres initdb failed !!!!
20161222:16:29:34:177771 hawq_init:sandbox:gpadmin-[INFO]:-Master postgres initdb failed !!!
20161222:16:29:34:177771 hawq_init:sandbox:gpadmin-[ERROR]:-Master init failed, exit
20161222:16:37:28:180216 hawq_init:sandbox:gpadmin-[INFO]:-Prepare to do 'hawq init'
20161222:16:37:28:180216 hawq_init:sandbox:gpadmin-[INFO]:-You can find log in:
20161222:16:37:28:180216 hawq_init:sandbox:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_init_20161222.log
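The FATAL above is explicit: initdb's shmget request (506213024 bytes) exceeds the kernel's SHMMAX. Raising SHMMAX (and SHMALL to match) is the usual fix on a sandbox VM; a sketch to run as root - the values are examples sized above the ~506 MB request, not tuned recommendations:

# Raise shared-memory limits above what initdb requested.
# SHMMAX is in bytes; SHMALL is in pages (usually 4096 bytes each).
cat >> /etc/sysctl.conf <<'EOF'
kernel.shmmax = 1073741824
kernel.shmall = 4194304
EOF
sysctl -p   # apply without rebooting

# Then re-run the HAWQ master init (e.g. restart the HAWQ install from Ambari).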
12-20-2016
12:05 AM
Hi Pratheesh and Lav, I have gone back to the previous HDB 2.0.1 HAWQ installation on the HDP 2.4 VM (instead of HDP 2.5 and HDB 2.1). I have successfully installed both the HAWQ and PXF services, and started the PXF service. However, the HAWQ MASTER and SEGMENT services fail to start and output the following error details, below. Please let me know how to configure the HAWQ MASTER service so it starts via Ambari:
------------ ERROR details on starting HAWQ MASTER -------
stderr: /var/lib/ambari-agent/data/errors-247.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py", line 58, in <module>
HawqMaster().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py", line 45, in start
master_helper.start_master()
File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/master_helper.py", line 223, in start_master
__init_active()
File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/master_helper.py", line 114, in __init_active
utils.exec_hawq_operation(hawq_constants.INIT, "{0} -a -v".format(hawq_constants.MASTER))
File "/var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/utils.py", line 52, in exec_hawq_operation
logoutput=logoutput)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'source /usr/local/hawq/greenplum_path.sh && hawq init master -a -v' returned 1.
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Prepare to do 'hawq init'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-You can find log in:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_init_20161219.log
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-GPHOME is set to:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/usr/local/hawq/.
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Current user is 'gpadmin'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Parsing config file:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-/usr/local/hawq/./etc/hawq-site.xml
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Init hawq with args: ['init', 'master']
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_host is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_dfs_url is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-No standby host configured, skip it
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check if hdfs path is available
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Check hdfs: /usr/local/hawq/./bin/gpcheckhdfs hdfs sandbox.hortonworks.com:8020/hawq_default off postgres
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[WARNING]:-2016-12-19 23:10:43.830140, p46099, th140659352119456, WARNING the number of nodes in pipeline is 1 [sandbox.hortonworks.com(10.0.2.15)], is less than the expected number of replica 3 for block [block pool ID: BP-706476385-10.0.2.15-1457965111091 block ID 1073742443_1631] file /hawq_default/testFile
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-1 segment hosts defined
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Set default_hash_table_bucket_number as: 6
20161219:23:10:45:045986 hawq_init:sandbox:gpadmin-[INFO]:-Start to init master
20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[INFO]:-Master postgres initdb failed
20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[ERROR]:-Master init failed, exit
stdout: /var/lib/ambari-agent/data/output-247.txt
2016-12-19 23:10:42,085 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-12-19 23:10:42,085 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-12-19 23:10:42,085 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 23:10:42,109 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-12-19 23:10:42,109 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-19 23:10:42,129 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-12-19 23:10:42,129 - Ensuring that hadoop has the correct symlink structure
2016-12-19 23:10:42,130 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 23:10:42,236 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-12-19 23:10:42,236 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-12-19 23:10:42,236 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 23:10:42,257 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-12-19 23:10:42,257 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-19 23:10:42,279 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-12-19 23:10:42,280 - Ensuring that hadoop has the correct symlink structure
2016-12-19 23:10:42,280 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 23:10:42,281 - Group['hadoop'] {}
2016-12-19 23:10:42,282 - Group['users'] {}
2016-12-19 23:10:42,283 - Group['zeppelin'] {}
2016-12-19 23:10:42,283 - Group['knox'] {}
2016-12-19 23:10:42,283 - Group['ranger'] {}
2016-12-19 23:10:42,283 - Group['spark'] {}
2016-12-19 23:10:42,283 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,284 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,285 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,285 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,286 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,286 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,287 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,287 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-12-19 23:10:42,288 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,288 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,289 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,289 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,290 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,291 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,291 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,292 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-19 23:10:42,292 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,293 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,293 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,293 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,294 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-19 23:10:42,294 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-12-19 23:10:42,297 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-12-19 23:10:42,306 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-12-19 23:10:42,306 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-12-19 23:10:42,307 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-12-19 23:10:42,308 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-12-19 23:10:42,314 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-12-19 23:10:42,314 - Group['hdfs'] {}
2016-12-19 23:10:42,314 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-12-19 23:10:42,315 - Directory['/etc/hadoop'] {'mode': 0755}
2016-12-19 23:10:42,327 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-12-19 23:10:42,329 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-12-19 23:10:42,344 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-12-19 23:10:42,351 - Skipping Execute[('setenforce', '0')] due to not_if
2016-12-19 23:10:42,352 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-12-19 23:10:42,354 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-12-19 23:10:42,354 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-12-19 23:10:42,358 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-12-19 23:10:42,361 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-12-19 23:10:42,361 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-12-19 23:10:42,368 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-12-19 23:10:42,369 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-12-19 23:10:42,371 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-12-19 23:10:42,376 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-12-19 23:10:42,381 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-12-19 23:10:42,574 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-12-19 23:10:42,574 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-12-19 23:10:42,574 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 23:10:42,597 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-12-19 23:10:42,597 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-19 23:10:42,620 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-12-19 23:10:42,620 - Ensuring that hadoop has the correct symlink structure
2016-12-19 23:10:42,620 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 23:10:42,622 - Group['gpadmin'] {'ignore_failures': True}
2016-12-19 23:10:42,623 - User['gpadmin'] {'gid': 'gpadmin', 'password': 'saNIJ3hOyqasU', 'ignore_failures': True, 'groups': ['gpadmin', 'hadoop']}
2016-12-19 23:10:42,623 - Skipping failure of User['gpadmin'] due to ignore_failures. Failure reason: 'pwd.struct_passwd' object has no attribute 'pw_password'
2016-12-19 23:10:42,624 - Execute['chown -R gpadmin:gpadmin /usr/local/hawq/'] {'timeout': 600}
2016-12-19 23:10:42,708 - XmlConfig['hdfs-client.xml'] {'group': 'gpadmin', 'conf_dir': '/usr/local/hawq/etc/', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'gpadmin', 'configurations': ...}
2016-12-19 23:10:42,718 - Generating config: /usr/local/hawq/etc/hdfs-client.xml
2016-12-19 23:10:42,719 - File['/usr/local/hawq/etc/hdfs-client.xml'] {'owner': 'gpadmin', 'content': InlineTemplate(...), 'group': 'gpadmin', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-19 23:10:42,741 - XmlConfig['yarn-client.xml'] {'group': 'gpadmin', 'conf_dir': '/usr/local/hawq/etc/', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'gpadmin', 'configurations': ...}
2016-12-19 23:10:42,749 - Generating config: /usr/local/hawq/etc/yarn-client.xml
2016-12-19 23:10:42,749 - File['/usr/local/hawq/etc/yarn-client.xml'] {'owner': 'gpadmin', 'content': InlineTemplate(...), 'group': 'gpadmin', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-19 23:10:42,760 - XmlConfig['hawq-site.xml'] {'group': 'gpadmin', 'conf_dir': '/usr/local/hawq/etc/', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'gpadmin', 'configurations': ...}
2016-12-19 23:10:42,767 - Generating config: /usr/local/hawq/etc/hawq-site.xml
2016-12-19 23:10:42,767 - File['/usr/local/hawq/etc/hawq-site.xml'] {'owner': 'gpadmin', 'content': InlineTemplate(...), 'group': 'gpadmin', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-19 23:10:42,777 - Directory['/tmp/hawq/'] {'owner': 'gpadmin', 'group': 'gpadmin', 'recursive': True}
2016-12-19 23:10:42,777 - Directory['/etc/sysctl.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-12-19 23:10:42,777 - File['/tmp/hawq/hawq_sysctl.conf'] {'owner': 'gpadmin', 'content': ..., 'group': 'gpadmin'}
2016-12-19 23:10:42,778 - Writing File['/tmp/hawq/hawq_sysctl.conf'] because it doesn't exist
2016-12-19 23:10:42,778 - Changing owner for /tmp/hawq/hawq_sysctl.conf from 0 to gpadmin
2016-12-19 23:10:42,778 - Changing group for /tmp/hawq/hawq_sysctl.conf from 0 to gpadmin
2016-12-19 23:10:42,778 - File['/tmp/hawq/hawq_sysctl.conf'] {'action': ['delete']}
2016-12-19 23:10:42,779 - Deleting File['/tmp/hawq/hawq_sysctl.conf']
2016-12-19 23:10:42,779 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-12-19 23:10:42,779 - File['/etc/security/limits.d/gpadmin.conf'] {'owner': 'gpadmin', 'content': '#### HAWQ Limits Parameters ###########\ngpadmin hard nofile 2900000\ngpadmin soft nproc 131072\ngpadmin hard nproc 131072\ngpadmin soft nofile 2900000\n', 'group': 'gpadmin'}
2016-12-19 23:10:42,779 - File['/usr/local/hawq/etc/gpcheck.cnf'] {'owner': 'gpadmin', 'content': ..., 'group': 'gpadmin', 'mode': 0644}
2016-12-19 23:10:42,782 - File['/usr/local/hawq/etc/slaves'] {'owner': 'gpadmin', 'content': Template('slaves.j2'), 'group': 'gpadmin', 'mode': 0644}
2016-12-19 23:10:42,785 - File['/tmp/hawq_hosts'] {'owner': 'gpadmin', 'content': Template('hawq-hosts.j2'), 'group': 'gpadmin', 'mode': 0644}
2016-12-19 23:10:42,785 - Writing File['/tmp/hawq_hosts'] because it doesn't exist
2016-12-19 23:10:42,785 - Changing owner for /tmp/hawq_hosts from 0 to gpadmin
2016-12-19 23:10:42,785 - Changing group for /tmp/hawq_hosts from 0 to gpadmin
2016-12-19 23:10:42,787 - File['/home/gpadmin/.hawq-profile.sh'] {'owner': 'gpadmin', 'content': Template('hawq-profile.sh.j2'), 'group': 'gpadmin'}
2016-12-19 23:10:42,787 - Execute['echo 'source /home/gpadmin/.hawq-profile.sh' >> /home/gpadmin/.bashrc'] {'not_if': "grep 'source /home/gpadmin/.hawq-profile.sh' /home/gpadmin/.bashrc", 'user': 'gpadmin', 'timeout': 600}
2016-12-19 23:10:42,792 - Skipping Execute['echo 'source /home/gpadmin/.hawq-profile.sh' >> /home/gpadmin/.bashrc'] due to not_if
2016-12-19 23:10:42,792 - Directory['/data/hawq/master'] {'owner': 'gpadmin', 'group': 'gpadmin', 'recursive': True}
2016-12-19 23:10:42,793 - Directory['/tmp'] {'owner': 'gpadmin', 'group': 'gpadmin', 'recursive': True}
2016-12-19 23:10:42,793 - Execute['chmod 700 /data/hawq/master'] {'user': 'root', 'timeout': 600}
2016-12-19 23:10:42,854 - Execute['source /usr/local/hawq/greenplum_path.sh && hawq ssh-exkeys -f /tmp/hawq_hosts -p [PROTECTED]'] {'logoutput': True, 'not_if': None, 'only_if': None, 'user': 'gpadmin', 'timeout': 900}
[STEP 1 of 5] create local ID and authorize on local host
... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped
[STEP 2 of 5] keyscan all hosts and update known_hosts file
[STEP 3 of 5] authorize current user on remote hosts
[STEP 4 of 5] determine common authentication file content
[STEP 5 of 5] copy authentication files to all remote hosts
[INFO] completed successfully
2016-12-19 23:10:43,425 - File['/tmp/hawq_hosts'] {'action': ['delete']}
2016-12-19 23:10:43,425 - Deleting File['/tmp/hawq_hosts']
2016-12-19 23:10:43,426 - HdfsResource['/hawq_default'] {'security_enabled': False, 'keytab': [EMPTY], 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'recursive_chown': True, 'owner': 'gpadmin', 'group': 'gpadmin', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0755}
2016-12-19 23:10:43,430 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://sandbox.hortonworks.com:50070/webhdfs/v1/hawq_default?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpGAH55T 2>/tmp/tmpVbzoqz''] {'logoutput': None, 'quiet': False}
2016-12-19 23:10:43,467 - call returned (0, '')
2016-12-19 23:10:43,468 - HdfsResource[None] {'security_enabled': False, 'keytab': [EMPTY], 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute']}
2016-12-19 23:10:43,469 - Execute['source /usr/local/hawq/greenplum_path.sh && hawq init master -a -v'] {'logoutput': True, 'not_if': None, 'only_if': None, 'user': 'gpadmin', 'timeout': 900}
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Prepare to do 'hawq init'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-You can find log in:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_init_20161219.log
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-GPHOME is set to:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-/usr/local/hawq/.
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Current user is 'gpadmin'
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Parsing config file:
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-/usr/local/hawq/./etc/hawq-site.xml
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Init hawq with args: ['init', 'master']
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_host is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_address_port is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_dfs_url is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_master_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check: hawq_segment_temp_directory is set
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-No standby host configured, skip it
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Check if hdfs path is available
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[DEBUG]:-Check hdfs: /usr/local/hawq/./bin/gpcheckhdfs hdfs sandbox.hortonworks.com:8020/hawq_default off postgres
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[WARNING]:-2016-12-19 23:10:43.830140, p46099, th140659352119456, WARNING the number of nodes in pipeline is 1 [sandbox.hortonworks.com(10.0.2.15)], is less than the expected number of replica 3 for block [block pool ID: BP-706476385-10.0.2.15-1457965111091 block ID 1073742443_1631] file /hawq_default/testFile
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-1 segment hosts defined
20161219:23:10:43:045986 hawq_init:sandbox:gpadmin-[INFO]:-Set default_hash_table_bucket_number as: 6
20161219:23:10:45:045986 hawq_init:sandbox:gpadmin-[INFO]:-Start to init master
20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[INFO]:-Master postgres initdb failed
20161219:23:10:47:045986 hawq_init:sandbox:gpadmin-[ERROR]:-Master init failed, exit
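A note on diagnosis: 'Master postgres initdb failed' in this output is only the summary; the underlying initdb error goes to the hawq init log whose path is printed above. Reading it directly (as the 12-22 post higher up did, turning up a kernel SHMMAX problem) is the quickest route to the root cause:

# The init log path is printed by hawq_init in the output above.
tail -n 200 /home/gpadmin/hawqAdminLogs/hawq_init_20161219.log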
12-16-2016
05:00 PM
Lav, can you determine why this error occurs when deploying the HAWQ/PXF services with Ambari on HDP 2.5? I downloaded the HDP 2.5 sandbox and Pivotal HDB 2.1 again (in case there had been later code patches), but the error still occurs on service deployment - for the AMBARI-AGENT only - and it refers to HAWQ version '2.0.0' in the cached common-services. Why HAWQ 2.0.0, when I downloaded and installed HDB/HAWQ 2.1.0? Please see the script messages and full error details below:
(1) 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist';
(2) 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist';
(3) 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist';
---------------------------------------
See the ERROR output details below:
HAWQ MASTER INSTALL stderr:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'
stdout:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'
Command failed after 1 tries
-----------------------------------------------------------
HAWQ SEGMENT INSTALL stderr:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist'
stdout:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py does not exist'
Command failed after 1 tries
---------------------------------------------------------------------
PXF INSTALL stderr:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist'
stdout:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py does not exist'
Command failed after 1 tries
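These agent errors point at a stale /var/lib/ambari-agent/cache: the server has the HDB 2.1 service definitions, but the agent's cached copy under HAWQ/2.0.0 was never populated. A hedged remedy - clearing this cache is safe in the sense that the agent rebuilds it from the server, but treat it as a workaround, not an official procedure:

# As root on the sandbox node:
ambari-server restart
ambari-agent stop
rm -rf /var/lib/ambari-agent/cache/common-services/HAWQ /var/lib/ambari-agent/cache/common-services/PXF
ambari-agent start
# Then retry the HAWQ/PXF service install from the Ambari UI.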
12-15-2016
06:45 PM
Lav, that means I cannot upgrade the Ambari version from 2.4.0 to 2.4.2 in HDP 2.5 as you advised - due to the SOLR error encountered during the Ambari upgrade process. It looks like the Pivotal HDB 2.1 (not HDP 2.1 as you mentioned) may be deleting the existing hawqmaster.py script in Hortonworks HDP 2.5, when deploying the HAWQ services via Ambari in HDP. This seems to be a big problem to fix ... I'm truly stuck here, as I wanted to present the HAWQ + MADLIB capabilities integrated with HDP to my potential clients ... please provide a solution !!!
12-13-2016
06:19 PM
Hi Lav, I tried to upgrade the Ambari server again, closely following the 'Ambari Upgrade' documentation on Hortonworks, and got the following error when RESTARTING Ambari v2.4.2. Please review and let me know how I may fix this, to continue with the HAWQ install, thank you:
Dependencies Resolved
============================================================================================================
Package Arch Version Repository Size
============================================================================================================
Updating:
ambari-agent x86_64 2.4.2.0-136 Updates-ambari-2.4.2.0 22 M
Transaction Summary
============================================================================================================
Upgrade 1 Package(s)
Total download size: 22 M
Is this ok [y/N]: y
ambari-agent-2.4.2.0-136.x86_64.rpm | 22 MB 01:05
Running rpm_check_debug
Transaction Test Succeeded
Running Transaction
mv: cannot move `/etc/ambari-agent/conf/ambari-agent.ini' to a subdirectory of itself, `/etc/ambari-agent/conf/ambari-agent.ini.old'
mv: cannot move `/var/lib/ambari-agent/cache/stacks' to a subdirectory of itself, `/var/lib/ambari-agent/cache/stacks_13_12_16_16_41.old'
mv: cannot move `/var/lib/ambari-agent/cache/common-services' to a subdirectory of itself, `/var/lib/ambari-agent/cache/common-services_13_12_16_16_41.old'
Updating : ambari-agent-2.4.2.0-136.x86_64 1/2
Verifying : ambari-agent-2.4.2.0-136.x86_64 1/2
Verifying : ambari-agent-2.4.0.0-1225.x86_64 2/2
Updated:
ambari-agent.x86_64 0:2.4.2.0-136
[root@sandbox /]# rpm -qa | grep ambari-agent
ambari-agent-2.4.2.0-136.x86_64
[root@sandbox /]# ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
Updating properties in ambari.properties ...
WARNING: Original file ambari-env.sh kept
ERROR: Unexpected OSError: [Errno 22] Invalid argument: '/var/lib/ambari-server/data/tmp/solr-service/custom-services/SOLR/5.5.2.2.5'
[root@sandbox /]# ambari-server with -v or --verbose option
[root@sandbox /]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
No errors were found.
Ambari database consistency check finished
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.
12-13-2016
06:09 PM
Hi Lav, I attempted to upgrade the Ambari Server to v2.4.2 as recommended, and got the following error this time, after the upgrade to ambari-server 2.4.2 completed and I issued the 'ambari-server start' command. Please review the [ ERROR: Exiting with exit code -1. REASON: Ambari Server java process died with exitcode 255. ] and let me know what to do to fix the error. Output from the SSH session, logged in as 'root', is below:
Running Transaction
mv: cannot move `/etc/ambari-agent/conf/ambari-agent.ini' to a subdirectory of itself, `/etc/ambari-agent/conf/ambari-agent.ini.old'
mv: cannot move `/var/lib/ambari-agent/cache/stacks' to a subdirectory of itself, `/var/lib/ambari-agent/cache/stacks_13_12_16_16_41.old'
mv: cannot move `/var/lib/ambari-agent/cache/common-services' to a subdirectory of itself, `/var/lib/ambari-agent/cache/common-services_13_12_16_16_41.old'
Updating : ambari-agent-2.4.2.0-136.x86_64 1/2
Verifying : ambari-agent-2.4.2.0-136.x86_64 1/2
Verifying : ambari-agent-2.4.0.0-1225.x86_64 2/2
Updated: ambari-agent.x86_64 0:2.4.2.0-136
[root@sandbox /]# rpm -qa | grep ambari-agent
ambari-agent-2.4.2.0-136.x86_64
[root@sandbox /]# ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
Updating properties in ambari.properties ...
WARNING: Original file ambari-env.sh kept
ERROR: Unexpected OSError: [Errno 22] Invalid argument: '/var/lib/ambari-server/data/tmp/solr-service/custom-services/SOLR/5.5.2.2.5'
[root@sandbox /]# ambari-server with -v or --verbose option
-------------------
[root@sandbox /]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
No errors were found.
Ambari database consistency check finished
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255.
Check /var/log/ambari-server/ambari-server.out for more information.
(vi ambari-server.out shows: [ OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0 ])
(vi ambari-server.log shows: [ ...
13 Dec 2016 17:00:10,329 INFO [main] HostRoleCommandDAO:258 - Host role command status summary cache enabled!
13 Dec 2016 17:00:10,330 INFO [main] TransactionalLock$LockArea:121 - LockArea HRC_STATUS_CACHE is enabled
13 Dec 2016 17:00:10,581 INFO [main] AmbariServer:914 - Getting the controller
13 Dec 2016 17:00:11,769 ERROR [main] AmbariServer:929 - Failed to run the Ambari Server
org.apache.ambari.server.AmbariException: Current database store version is not compatible with current server version, serverVersion=2.4.2.0, schemaVersion=2.4.0
at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.checkDBVersionCompatible(DatabaseConsistencyCheckHelper.java:147)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:919) ])
[root@sandbox /]#
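The log makes the root cause explicit: 'ambari-server upgrade' aborted on the SOLR custom-service tmp directory, so the database schema was never migrated from 2.4.0, and the 2.4.2 server refuses to start against it. A hedged recovery sketch - moving the offending directory aside is an inference from the OSError above, not a documented procedure:

# As root: move the directory that made the upgrade abort out of the way.
mv /var/lib/ambari-server/data/tmp/solr-service /root/solr-service.bak

# Re-run the upgrade so the DB schema migrates from 2.4.0 to 2.4.2, then start.
ambari-server upgrade
ambari-server start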
12-12-2016
05:48 PM
Hi Lav, I cannot update the existing version of Ambari (2.4.0) that comes pre-installed with the HDP 2.5 VM.
I tried to do so, and the whole HDP 2.5 platform failed to start! I therefore re-installed the HDP 2.5 VM, along with the latest Pivotal HDB 2.1.0 HAWQ services for HDP 2.5.
Please see the following HAWQ script-related details from when I re-installed the Pivotal HDB 2.1.0.0 files on the HDP 2.5 VM:
[root@sandbox /]# cd /var/lib/hawq
[root@sandbox hawq]# ./add-hawq.py --user admin --password admin --stack HDP-2.5
INFO: Repository hdb-2.1.0.0 with baseurl http://sandbox.hortonworks.com/hdb-2.1.0.0 added to /var/lib/ambari-server...
INFO: Repository hdb-add-ons-2.1.0.0 with baseurl http://sandbox.hortonworks.com/hdb-add-ons-2.1.0.0 added to /var/lib/ambari-server/resources/stacks/HDP/2.5/repos/repoinfo.xml
INFO: HAWQ directory was successfully created under directory /var/lib/ambari-server/resources/stacks/HDP/2.5/services
INFO: Please restart ambari-server for changes to take effect
12-10-2016
09:41 PM
Hi Lav, I downloaded and installed the 'Hortonworks HDP 2.5 SANDBOX on VM' version from the Hortonworks download site. This VM version comes with the recommended version of Ambari (2.4?) already installed, for a sandbox 'Docker' implementation on a single host machine like a laptop. The tarball is listed on the Hortonworks download site with MD5: d42a9bd11f29775cc5b804ce82a72efd. The latest available version of Pivotal HDB 2.1.0 was also downloaded from the Pivotal site, to set up the HDB/HAWQ services via Ambari once the Hortonworks HDP 2.5 VM was installed on my laptop. Many thanks.
12-09-2016
11:17 PM
Hi Lav, I am running HDP 2.5 with Ambari v2.4 already installed as a component of the stack. Ambari comes installed with HDP; there is no separate version to install and run ...
12-09-2016
09:23 PM
YES, the directory (../common-services/HAWQ/2.0.0/package/scripts) EXISTS - but there is NO 'hawqmaster.py' (the missing file reported in the error message) in the scripts directory. Please try installing it on a single-node host machine (mine is a laptop) to verify whether a 'bug' exists in this latest version of HDB 2.1. Thanks!
12-09-2016
04:32 PM
Pratheesh, how do I get hold of the entire installation log in HDFS - what is the directory?
12-08-2016
10:36 PM
Hi Pratheesh, nice of you to respond so quickly...yes, this is a fresh installation of Hawq v2.1 and NOT an upgrade from 2.0 to 2.1. Regards.
12-08-2016
08:22 PM
1 Kudo
The latest Pivotal HDB version 2.1.0 install on Hortonworks HDP 2.5 fails during the service DEPLOY stage with the following ERROR:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py does not exist'
11-16-2016
04:33 PM
Hi Ancil and Vineet, I am already evaluating Apache NiFi/HDF and MongoDB connectivity using HDP 2.5. Since I only want to set up one HDP VM, I need to find a way to add the additional HAWQ + PXF services to the same platform using Ambari. NOTE: I have just tried to install another unrelated service, 'SOLR v5.5.2.2.5', via Ambari, which gave the SAME '500' error at 'select MASTERS' - indicating a more widespread problem with adding new services to HDP 2.5 (logged in as admin/admin). I could re-install the HDP 2.5 VM but would lose my newly configured NiFi/HDF workflow. Therefore, I'd appreciate it if you could send me the command to delete the directory via SSH, logged in as root. Many thanks! (See the sketch below.)
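Since the 500 error names a directory Ambari cannot delete, the commonly used workaround is to clear the stack-advisor scratch directory by hand and retry the wizard. A sketch, run as root over SSH; Ambari regenerates this directory, but treat it as a workaround rather than an official fix:

# Remove the stale stack-advisor scratch data the wizard failed to delete.
rm -rf /var/run/ambari-server/stack-recommendations/*

# Restart Ambari and retry the 'Select Masters' step for HAWQ/PXF.
ambari-server restart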
11-15-2016
09:59 PM
When trying to install the Pivotal HAWQ and PXF services on HDP 2.5, I get the following error when I click the NEXT button at 'SELECT MASTERS':
500 status code received on POST method for API: /api/v1/stacks/HDP/versions/2.5/recommendations
Error message: Error occured during stack advisor command invocation: Unable to delete directory /var/run/ambari-server/stack-recommendations/1
Ambari on HDP 2.5 therefore does NOT let me complete the installation of the HAWQ and PXF services, so I cannot test the Pivotal HDB implementation on the HDP 2.5 platform. Please HELP!