Member since: 08-09-2016
Posts: 77
Kudos Received: 4
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2186 | 10-17-2016 04:14 PM |
| | 1581 | 10-04-2016 01:35 PM |
| | 8301 | 09-27-2016 11:50 AM |
| | 1460 | 08-24-2016 04:10 PM |
| | 2572 | 08-16-2016 10:12 AM |
09-07-2017
02:06 PM
Hi, I need help please. The query fails with:
Vertex failed, vertexName=Map 1, vertexId=vertex_1504783053935_0010_1_00, diagnostics=[Task failed, taskId=task_1504783053935_0010_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1504783053935_0010_1_00_000000_0:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:319)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:184)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:329)
at org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:364)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:274)
... 15 more
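The root cause in the trace above is that the Tez task cannot load org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe, which ships in the hive-contrib jar. A minimal sketch of the usual workaround, assuming an HDP-style layout (the jar path/version and the use of the hive CLI are assumptions, not from this thread):
# locate the contrib jar on the Hive client node (name/version differs per release)
ls /usr/hdp/current/hive-client/lib/hive-contrib*.jar
# then either add it per session before running the query:
#   hive> ADD JAR /usr/hdp/current/hive-client/lib/hive-contrib-<version>.jar;
# or make it permanent by pointing hive.aux.jars.path at that jar
# (Ambari: Hive > Configs > Advanced hive-site) and restarting HiveServer2.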
Labels:
- Apache Hadoop
- Apache Hive
- Apache Pig
02-28-2017
03:55 PM
@SBandaru
[root@node3 .ssh]# yum clean all
Loaded plugins: fastestmirror, nvidia, priorities, security
Cleaning repos: HDF-2.0 HDP-UTILS-1.1.0.21 adobe-linux-x86_64 base elrepo epel extras google-chrome google64 updates virtualbox
Cleaning up Everything
Cleaning up list of fastest mirrors
[root@node3 .ssh]# yum -d 0 -e 0 -v -y install ambari-metrics-collector
Loading "fastestmirror" plugin
Loading "nvidia" plugin
Loading "priorities" plugin
Not loading "refresh-packagekit" plugin, as it is disabled
Loading "security" plugin
Config time: 0.022
[nvidia]: device found: pci:v000010DEd00000FF3sv000010DEsd00001162bc03sc00i00
Yum Version: 3.2.29
rpmdb time: 0.000
Setting up Install Process
Setting up Package Sacks
Loading mirror speeds from cached hostfile
* base: mirrors.coreix.net
* elrepo: ftp.icm.edu.pl
* epel: ftp.icm.edu.pl
* extras: mirrors.coreix.net
* updates: mirrors.coreix.net
Searching 12 packages
searching package kmod-nvidia-375.26-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-375.39-1.el6.elrepo.x86_64
searching in provides entries
searching package kmod-nvidia-375.20-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-367.57-1.el6.elrepo.x86_64
searching in provides entries
searching package kmod-nvidia-375.39-1.el6.elrepo.x86_64
searching in provides entries
searching package kmod-nvidia-367.57-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-32bit-367.57-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-32bit-375.39-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-32bit-375.20-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-32bit-375.26-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-375.26-1.el6.elrepo.x86_64
searching in provides entries
searching package nvidia-x11-drv-375.20-1.el6.elrepo.x86_64
searching in provides entries
pkgsack time: 2.462
Checking for virtual provide or file-provide for ambari-metrics-collector
No package ambari-metrics-collector available.
Error: Nothing to do
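Note that the repo list above (HDF-2.0, HDP-UTILS, CentOS/EPEL mirrors) contains no Ambari repository, which is why yum reports "No package ambari-metrics-collector available". A minimal sketch of adding it, assuming CentOS 6; the Ambari version in the URL is an assumption and must match your ambari-server version:
# confirm no Ambari repo is enabled on node3
yum repolist enabled | grep -i ambari
# fetch the matching ambari.repo onto the node, then retry the install
wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.4.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum clean all
yum install -y ambari-metrics-collector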
02-28-2017
02:52 PM
stderr: /var/lib/ambari-agent/data/errors-13.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 148, in <module>
AmsCollector().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 36, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-collector' returned 1. Error: Nothing to do
stdout: /var/lib/ambari-agent/data/output-13.txt
2017-02-28 14:48:42,696 - Group['hadoop'] {}
2017-02-28 14:48:42,697 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-28 14:48:42,698 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-28 14:48:42,698 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-02-28 14:48:42,699 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-02-28 14:48:42,700 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-02-28 14:48:42,702 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-02-28 14:48:42,715 - Initializing 2 repositories
2017-02-28 14:48:42,715 - Repository['HDF-2.0'] {'base_url': 'http://public-repo-1.hortonworks.com/HDF/centos6/2.x/updates/2.0.0.0', 'action': ['create'], 'components': ['HDF', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDF', 'mirror_list': None}
2017-02-28 14:48:42,720 - File['/etc/yum.repos.d/HDF.repo'] {'content': '[HDF-2.0]\nname=HDF-2.0\nbaseurl=http://public-repo-1.hortonworks.com/HDF/centos6/2.x/updates/2.0.0.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-02-28 14:48:42,720 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-02-28 14:48:42,722 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-02-28 14:48:42,722 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-02-28 14:48:42,918 - Skipping installation of existing package unzip
2017-02-28 14:48:42,918 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-02-28 14:48:42,932 - Skipping installation of existing package curl
2017-02-28 14:48:42,932 - Package['hdf-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-02-28 14:48:42,946 - Skipping installation of existing package hdf-select
2017-02-28 14:48:43,068 - Using hadoop conf dir: /usr/hdf/current/hadoop-client/conf
2017-02-28 14:48:43,070 - checked_call['hostid'] {}
2017-02-28 14:48:43,073 - checked_call returned (0, 'a8c04701')
2017-02-28 14:48:43,075 - Package['ambari-metrics-collector'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-02-28 14:48:43,129 - Installing package ambari-metrics-collector ('/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-collector')
2017-02-28 14:48:45,584 - Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-collector' returned 1. Error: Nothing to do
2017-02-28 14:48:45,584 - Failed to install package ambari-metrics-collector. Executing '/usr/bin/yum clean metadata'
2017-02-28 14:48:45,720 - Retrying to install package ambari-metrics-collector after 30 seconds
Command failed after 1 tries
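Since yum exits with "Error: Nothing to do", a quick sanity check (a sketch to run on the failing host) is whether any enabled repository actually provides the package:
yum repolist enabled                            # is an Ambari repo listed at all?
yum list available ambari-metrics-collector    # does any repo offer the package?
ls /etc/yum.repos.d/ambari.repo                 # the repo file the Ambari agent was installed from (usually named ambari.repo)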
Labels:
- Apache Ambari
- Cloudera DataFlow (CDF)
02-24-2017
02:36 PM
@Matt Clarke So what do you propose for using Hortonworks with IoT? Have two clusters, one for HDP and a second for HDF, or use just HDF and other components?
02-24-2017
01:51 PM
@Matt Clarke Because I will also need HBase, so I don't think I can add it in HDP.
02-24-2017
12:35 PM
@Michael Young Can I use the same cluster, but install the Ambari server for HDP on one server and the Ambari server for HDF on another server, while using the same nodes?
02-24-2017
12:18 PM
@Michael Young How do I install HDF on the same cluster? I want to use both HDF and HDP.
02-24-2017
12:08 PM
Please, how do I install Apache NiFi with Ambari on an existing HDP cluster?
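For reference, one common approach (a sketch only; whether it is supported alongside HDP depends on the HDF and Ambari versions) is to install the HDF management pack on the existing Ambari server so that NiFi appears under Add Service. The mpack file name below is a placeholder:
# on the Ambari server host
ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-<version>.tar.gz --verbose
ambari-server restart
# then in the Ambari UI: Actions > Add Service > NiFi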
10-19-2016
02:40 PM
Yes, thank you, I found the solution using the same steps.
10-17-2016
04:14 PM
I found the solution:
Create a new LVM physical volume on the new disk/partition:
pvcreate /dev/sdb
Add the new physical volume to your volume group:
vgextend VolGroup /dev/sdb
Extend the logical volume containing the filesystem you want to grow, then resize the filesystem:
lvextend -l +100%FREE /dev/VolGroup/name_of_logical_volume
resize2fs /dev/VolGroup/name_of_logical_volume
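As a follow-up sketch, lvextend can also resize the filesystem in the same step with -r, and the result can be verified afterwards (assuming an ext filesystem on the logical volume):
# grow the LV and resize the filesystem in one command (alternative to the two steps above)
lvextend -r -l +100%FREE /dev/VolGroup/name_of_logical_volume
# confirm the new sizes
lvs
df -h /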
10-17-2016
01:31 PM
@Neeraj Sabharwal
10-17-2016
01:30 PM
@Sagar Shimpi
10-17-2016
11:57 AM
@Sagar Shimpi fdisk -l
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00009ad3
Device Boot Start End Blocks Id System
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00037ccf
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 121602 976248832 8e Linux LVM
Disk /dev/mapper/vg_manager-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_manager-lv_swap: 8271 MB, 8271167488 bytes
255 heads, 63 sectors/track, 1005 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_manager-lv_home: 937.7 GB, 937716350976 bytes
255 heads, 63 sectors/track, 114004 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
10-17-2016
11:56 AM
[root@Manager ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_manager-lv_root
50G 47G 536M 99% /
tmpfs 3.8G 72K 3.8G 1% /dev/shm
/dev/sda1 477M 41M 411M 9% /boot
/dev/mapper/vg_manager-lv_home
860G 9.6G 807G 2% /home
10-17-2016
10:57 AM
@Sagar Shimpi
10-17-2016
09:53 AM
1 Kudo
Hi, I have a problem: my root filesystem is 99% full and I want to increase it. How do I do that, please?
[root@Manager ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_manager-lv_root
50G 47G 501M 99% /
tmpfs 3.8G 80K 3.8G 1% /dev/shm
/dev/mapper/vg_manager-lv_home
860G 9.6G 807G 2% /home
/dev/sdb1 917G 72M 871G 1% /home2
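Before resizing anything, it can help to see what is actually filling the root filesystem; a quick sketch (-x keeps du on the / filesystem and ignores /home, /home2, etc.):
du -xh --max-depth=1 / | sort -h | tail -n 15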
Labels:
- Apache Ambari
10-05-2016
11:19 AM
I have mounted a new disk and added it to HDFS, but the ambari-agent disk is 50 GB, so how do I increase it?
10-05-2016
11:14 AM
How do I increase the ambari-agent disk space?
Labels:
- Apache Ambari
- Apache Hadoop
10-04-2016
01:35 PM
I found the solution: I uninstalled ambari-server without resetting it, reinstalled it, and now it works.
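For reference, a rough sketch of that sequence on CentOS 6 (the exact commands are assumed; the key point is to skip ambari-server reset so the existing PostgreSQL database and cluster state are kept):
ambari-server stop
yum remove -y ambari-server
yum install -y ambari-server
ambari-server setup    # re-run setup against the existing database; do NOT run ambari-server reset
ambari-server start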
10-04-2016
11:41 AM
@Artem Ervits That's the error: Internal Exception: org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
10-04-2016
11:14 AM
Yes: Internal Exception: org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
10-04-2016
09:35 AM
Yes, the whole cluster was fine. Today I tried to start ambari-server and got this error. I tried service postgresql start:
Starting postgresql service: [FAILED]
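When service postgresql start fails like this, the PostgreSQL server log usually says why; a sketch assuming the default CentOS 6 data directory (adjust the path if initdb used another location):
ls -lt /var/lib/pgsql/data/pg_log/ | head    # newest PostgreSQL log files
tail -n 50 /var/lib/pgsql/data/pg_log/postgresql-*.log
df -h /var/lib/pgsql                         # a full filesystem is a common cause
ls -l /var/lib/pgsql/data/postmaster.pid     # a stale pid file can also block startup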
10-04-2016
09:14 AM
Ambari Server running with administrator privileges.
Running initdb: This may take upto a minute.
About to start PostgreSQL
ERROR: Exiting with exit code 3.
REASON: Unable to start PostgreSQL server. Status stopped. . Exiting
Labels:
- Apache Ambari
09-27-2016
11:51 AM
Thank you @Ian Roberts, but I found the solution.
09-27-2016
11:50 AM
I found the solution: go to yarn.nodemanager.disk-health-checker.min-healthy-disks, change the value to 0, restart YARN, and it will work.
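For context, a NodeManager is usually flagged unhealthy because its local/log dirs exceed yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage (default 90%); setting min-healthy-disks to 0 tells YARN to ignore the check rather than fixing the disk. A quick sketch of confirming that on the unhealthy node (the directories are the ones shown in the NodeManager logs in this thread):
df -h /hadoop/yarn/local /hadoop/yarn/log                      # how full are the NodeManager dirs?
grep -A1 disk-health-checker /etc/hadoop/conf/yarn-site.xml    # current settings on this node
# after changing the property in Ambari (YARN > Configs), restart YARN as described above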
09-27-2016
10:07 AM
2016-09-27 09:44:38,711 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,712 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,713 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,715 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,717 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,717 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,718 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,719 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:38,720 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-09-27 09:44:38,752 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-09-27 09:44:38,752 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-27 09:44:38,779 - Writing File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] because contents don't match
2016-09-27 09:44:38,780 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'dfs.datanode.data.dir': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2016-09-27 09:44:38,793 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-09-27 09:44:38,793 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-27 09:44:38,860 - Writing File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] because contents don't match
2016-09-27 09:44:38,861 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2016-09-27 09:44:38,874 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2016-09-27 09:44:38,874 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-27 09:44:38,923 - Writing File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] because contents don't match
2016-09-27 09:44:38,924 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 501 to yarn
2016-09-27 09:44:38,924 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2016-09-27 09:44:38,937 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
2016-09-27 09:44:38,937 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-27 09:44:39,050 - Writing File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] because contents don't match
2016-09-27 09:44:39,050 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2016-09-27 09:44:39,063 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2016-09-27 09:44:39,064 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-27 09:44:39,100 - Writing File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] because contents don't match
2016-09-27 09:44:39,101 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 506 to yarn
2016-09-27 09:44:39,101 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group': 'hadoop'}
2016-09-27 09:44:39,123 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644}
2016-09-27 09:44:39,127 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644}
2016-09-27 09:44:39,133 - File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
2016-09-27 09:44:39,134 - Writing File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] because contents don't match
2016-09-27 09:44:39,135 - File['/usr/hdp/current/hadoop-yarn-nodemanager/bin/container-executor'] {'group': 'hadoop', 'mode': 02050}
2016-09-27 09:44:39,143 - File['/usr/hdp/current/hadoop-client/conf/container-executor.cfg'] {'content': Template('container-executor.cfg.j2'), 'group': 'hadoop', 'mode': 0644}
2016-09-27 09:44:39,148 - Directory['/cgroups_test/cpu'] {'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
09-27-2016
09:49 AM
2016-09-27 09:44:39,168 - File['/usr/hdp/current/hadoop-client/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
2016-09-27 09:44:39,172 - File['/usr/hdp/current/hadoop-client/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'}
2016-09-27 09:44:39,179 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-09-27 09:44:39,191 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2016-09-27 09:44:39,192 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-09-27 09:44:39,239 - Writing File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] because contents don't match
2016-09-27 09:44:39,239 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 508 to mapred
2016-09-27 09:44:39,240 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-09-27 09:44:39,253 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2016-09-27 09:44:39,253 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-09-27 09:44:39,269 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 508 to hdfs
2016-09-27 09:44:39,269 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-09-27 09:44:39,282 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-09-27 09:44:39,282 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-09-27 09:44:39,290 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] because contents don't match
2016-09-27 09:44:39,290 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:39,312 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2016-09-27 09:44:39,325 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-09-27 09:44:39,325 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-09-27 09:44:39,340 - Writing File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] because contents don't match
2016-09-27 09:44:39,341 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-09-27 09:44:39,354 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-09-27 09:44:39,354 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-09-27 09:44:39,363 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] because contents don't match
2016-09-27 09:44:39,364 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2016-09-27 09:44:39,364 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2016-09-27 09:44:39,366 - File['/var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid'] {'action': ['delete'], 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid` >/dev/null 2>&1'}
2016-09-27 09:44:39,373 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh --config /usr/hdp/current/hadoop-client/conf start nodemanager'] {'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid` >/dev/null 2>&1', 'user': 'yarn'}
2016-09-27 09:44:40,596 - Execute['ls /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid` >/dev/null 2>&1'] {'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid` >/dev/null 2>&1', 'tries': 5, 'user': 'yarn', 'try_sleep': 1}
2016-09-27 09:44:40,798 - Skipping Execute['ls /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid` >/dev/null 2>&1'] due to not_if
09-27-2016
09:48 AM
Yes, I can restart the unhealthy NodeManager. I have this in the log:
2016-09-27 09:44:32,687 - Group['hadoop'] {'ignore_failures': False}
2016-09-27 09:44:32,690 - Group['users'] {'ignore_failures': False}
2016-09-27 09:44:32,691 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,692 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,693 - User['accumulo'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,694 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,695 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
2016-09-27 09:44:32,696 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,697 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
2016-09-27 09:44:32,698 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,699 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,700 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,701 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,702 - User['ams'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-09-27 09:44:32,703 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-09-27 09:44:32,734 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-09-27 09:44:32,741 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-09-27 09:44:32,742 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-09-27 09:44:32,757 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-09-27 09:44:32,759 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-09-27 09:44:32,766 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-09-27 09:44:32,767 - Group['hdfs'] {'ignore_failures': False}
2016-09-27 09:44:32,768 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2016-09-27 09:44:32,769 - Directory['/etc/hadoop'] {'mode': 0755}
2016-09-27 09:44:32,789 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-09-27 09:44:32,807 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-09-27 09:44:32,857 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:32,879 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:32,880 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-09-27 09:44:32,888 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-09-27 09:44:32,891 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-09-27 09:44:32,896 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-09-27 09:44:32,909 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-09-27 09:44:32,919 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-09-27 09:44:32,921 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-09-27 09:44:32,929 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-09-27 09:44:32,941 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-09-27 09:44:33,397 - Execute['export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh --config /usr/hdp/current/hadoop-client/conf stop nodemanager'] {'user': 'yarn'}
2016-09-27 09:44:38,656 - Directory['/hadoop/yarn/local'] {'group': 'hadoop', 'recursive': True, 'cd_access': 'a', 'ignore_failures': True, 'mode': 0775, 'owner': 'yarn'}
2016-09-27 09:44:38,659 - Directory['/hadoop/yarn/log'] {'group': 'hadoop', 'recursive': True, 'cd_access': 'a', 'ignore_failures': True, 'mode': 0775, 'owner': 'yarn'}
2016-09-27 09:44:38,659 - Execute[('chown', '-R', 'yarn', '/hadoop/yarn/local/usercache/ambari-qa')] {'sudo': True, 'only_if': 'test -d /hadoop/yarn/local/usercache/ambari-qa'}
09-27-2016
09:40 AM
@Sindhu All I can see is that 1 NodeManager is unhealthy.