Member since: 01-19-2016
Posts: 15
Kudos Received: 7
Solutions: 0
03-09-2016 12:35 AM
2016-03-08 15:33:35,182 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-03-08 15:33:35,183 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-03-08 15:33:35,183 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-03-08 15:33:35,200 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-03-08 15:33:35,200 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-03-08 15:33:35,216 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-03-08 15:33:35,216 - Ensuring that hadoop has the correct symlink structure
2016-03-08 15:33:35,217 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-08 15:33:35,301 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-03-08 15:33:35,301 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-03-08 15:33:35,301 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-03-08 15:33:35,318 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-03-08 15:33:35,318 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-03-08 15:33:35,334 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-03-08 15:33:35,334 - Ensuring that hadoop has the correct symlink structure
2016-03-08 15:33:35,334 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-08 15:33:35,335 - Group['spark'] {}
2016-03-08 15:33:35,336 - Group['hadoop'] {}
2016-03-08 15:33:35,337 - Group['users'] {}
2016-03-08 15:33:35,337 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,337 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,338 - User['oozie'] {'gid': 'hadoop', 'groups': ['users']}
2016-03-08 15:33:35,338 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,338 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2016-03-08 15:33:35,339 - User['mahout'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,339 - User['spark'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,340 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2016-03-08 15:33:35,340 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,341 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,341 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,341 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,342 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,342 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-03-08 15:33:35,343 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-03-08 15:33:35,344 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-03-08 15:33:35,347 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-03-08 15:33:35,347 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-03-08 15:33:35,348 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-03-08 15:33:35,349 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-03-08 15:33:35,352 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-03-08 15:33:35,352 - Group['hdfs'] {'ignore_failures': False}
2016-03-08 15:33:35,352 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2016-03-08 15:33:35,353 - Directory['/etc/hadoop'] {'mode': 0755}
2016-03-08 15:33:35,362 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-03-08 15:33:35,362 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-03-08 15:33:35,371 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-03-08 15:33:35,383 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-03-08 15:33:35,384 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-03-08 15:33:35,385 - Changing owner for /var/run/hadoop from 509 to root
2016-03-08 15:33:35,385 - Changing group for /var/run/hadoop from 501 to root
2016-03-08 15:33:35,385 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-03-08 15:33:35,388 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-03-08 15:33:35,389 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-03-08 15:33:35,389 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-03-08 15:33:35,395 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-03-08 15:33:35,396 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-03-08 15:33:35,396 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-03-08 15:33:35,399 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-03-08 15:33:35,402 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-03-08 15:33:35,495 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-03-08 15:33:35,495 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-03-08 15:33:35,495 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-03-08 15:33:35,511 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-03-08 15:33:35,511 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-03-08 15:33:35,526 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-03-08 15:33:35,526 - Ensuring that hadoop has the correct symlink structure
2016-03-08 15:33:35,526 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-08 15:33:35,527 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-03-08 15:33:35,527 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-03-08 15:33:35,527 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-03-08 15:33:35,542 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-03-08 15:33:35,542 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-03-08 15:33:35,557 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-03-08 15:33:35,557 - Ensuring that hadoop has the correct symlink structure
2016-03-08 15:33:35,557 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-08 15:33:35,561 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-03-08 15:33:35,565 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-03-08 15:33:35,566 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-03-08 15:33:35,572 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2016-03-08 15:33:35,572 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-03-08 15:33:35,578 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-03-08 15:33:35,583 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-03-08 15:33:35,583 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-03-08 15:33:35,587 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-03-08 15:33:35,588 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2016-03-08 15:33:35,593 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-03-08 15:33:35,593 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-03-08 15:33:35,597 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-03-08 15:33:35,602 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-03-08 15:33:35,602 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-03-08 15:33:35,607 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-03-08 15:33:35,612 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-03-08 15:33:35,612 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-03-08 15:33:35,642 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-03-08 15:33:35,648 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-03-08 15:33:35,648 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-03-08 15:33:35,661 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2016-03-08 15:33:35,662 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0751, 'recursive': True}
2016-03-08 15:33:35,674 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/boot', '/boot/efi', '/dstage', '/home', '/opt', '/tmp', '/usr', '/var', '/data1', '/data2', '/data3', '/data4', '/data5', '/data6', '/data7', '/data8', '/data9', '/data10', '/proc/sys/fs/binfmt_misc', '/bdata1', '/bdata2', '/bdata3', '/data4/ramdisk'].
2016-03-08 15:33:35,675 - Mount point for directory /bdata1/hadoop/hdfs/data is /
2016-03-08 15:33:35,675 - Mount point for directory /bdata2/hadoop/hdfs/data is /
2016-03-08 15:33:35,675 - Mount point for directory /bdata3/hadoop/hdfs/data is /
2016-03-08 15:33:35,675 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-03-08 15:33:35,676 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2016-03-08 15:33:35,676 - Changing owner for /var/run/hadoop from 0 to hdfs
2016-03-08 15:33:35,676 - Changing group for /var/run/hadoop from 0 to hadoop
2016-03-08 15:33:35,677 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-03-08 15:33:35,677 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-03-08 15:33:35,677 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2016-03-08 15:33:35,692 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2016-03-08 15:33:35,692 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
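For reference, the tail of this log only shows Ambari deleting a stale PID file and issuing the start command; the actual failure reason usually lands in the DataNode's own log under /var/log/hadoop/hdfs (the log directory shown earlier in this output). A minimal sketch for checking by hand — the start command is copied verbatim from the log above, while the log file name pattern (which includes the hostname) is an assumption:

  # Re-run the exact start command Ambari issues (copied from the log above)
  ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'
  # Then inspect the DataNode's own log for the real error; the file name
  # pattern with the hostname wildcard is assumed here
  tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log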
03-09-2016 12:31 AM
During testing the datanodes went down, and now they are not coming up. I am not concerned about the data at this point; I just need to restart the system. The following is the log.
Labels: Apache Hadoop
02-20-2016 11:44 PM
1 Kudo
@Neeraj, no, I am not a supported customer, but I remember there was some sort of a script that could be run on the cluster to get the recommendations.
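For reference, the script in question may be the hdp-configuration-utils.py companion script from the HDP documentation, which prints suggested YARN/MapReduce memory settings for a node's hardware. A sketch of its usage, with the script name and flag meanings assumed from the HDP 2.x docs (verify against your version before relying on it):

  # Hypothetical invocation; flags per the HDP 2.x companion-files docs:
  #   -c cores per host, -m memory per host in GB,
  #   -d data disks per host, -k True if HBase is installed
  python hdp-configuration-utils.py -c 16 -m 64 -d 8 -k True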
02-20-2016 08:40 PM
1 Kudo
Is there any script that I can run to get the best possible parameters for my environment?
Labels: Apache Phoenix
02-11-2016 06:35 PM
1 Kudo
I am getting the same error on the Ambari Metrics Collector install, "nothing to do". Can you please be more specific about the steps to solve this issue? I have checked the repos and all looks good; I was able to deploy all the other components of HDP, but now, trying to install the Metrics Collector, I am stuck. Help is highly appreciated.
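For anyone hitting this, a hedged sketch of the usual checks when yum answers "nothing to do" — these are standard yum commands, and the package name ambari-metrics-collector is an assumption based on the component being installed:

  # "Nothing to do" usually means yum thinks the package is already installed
  yum list installed | grep ambari-metrics
  # Refresh repo metadata in case of stale caches
  yum clean all && yum repolist
  # If a partial install is present, remove it and reinstall
  yum remove -y ambari-metrics-collector
  yum install -y ambari-metrics-collector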
02-11-2016 06:33 PM
I am also getting the same issue. When you say drop PostgreSQL, will it affect any other services? What would be the exact steps to install Ambari Metrics? I have installed the rest of the HDP components, but the Metrics Collector is giving the "nothing to do" error.
02-04-2016 08:26 AM
If I have internet access, do I still need to create the mirror server, or can I jump directly to step 2 (installing the server) after making sure the other requirements, like SSH, are met? Or do I still have to create the mirror server and download all the repos individually?
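For context, with direct internet access the public Hortonworks repos can typically be used instead of a local mirror. A sketch, assuming the Ambari 2.2 / CentOS 6 repo URL from the install guides of that era — substitute the exact URL for your OS and version from your documentation:

  # Fetch the public ambari.repo instead of building a local mirror
  # (URL is an assumption; take the exact one from your install guide)
  wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.0.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
  yum repolist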
02-01-2016 09:12 PM
1 Kudo
Hi, I need to run some benchmarking that will measure write I/O, read I/O, CPU utilization, and memory utilization. Is there any benchmarking suite available for Hortonworks that I can use? Or, if someone has a matrix of what and how to test, that would be a great help.
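For reference, the stock Hadoop benchmarks cover exactly these dimensions: TestDFSIO for write/read I/O, and TeraGen/TeraSort for CPU- and memory-heavy load. A sketch, with the HDP jar paths assumed (they vary by version; adjust to what is on your cluster):

  # Write and read I/O throughput; results append to TestDFSIO_results.log
  hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
  hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
  # CPU/memory under a full MapReduce load: generate 100 GB (10^9 rows of
  # 100 bytes), then sort it
  hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen 1000000000 /benchmark/teragen
  hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar terasort /benchmark/teragen /benchmark/terasort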
01-21-2016 05:57 PM
Another question, if you can answer: does Hortonworks have anything like Drill?
01-20-2016 09:56 PM
Thanks much; your help is appreciated.