Member since 09-17-2016

- 31 Posts
- 2 Kudos Received
- 4 Solutions
        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
|  | 2318 | 03-23-2020 11:38 PM |
|  | 15437 | 07-27-2018 08:45 AM |
|  | 4288 | 05-09-2018 08:28 AM |
|  | 1279 | 10-21-2016 06:29 AM |
			
    
	
		
		
03-23-2020 11:38 PM
Solved this issue by running the commands below on the affected node. You need root/sudo access for this.

1) yum list installed | grep spark2
2) yum-complete-transaction
3) yum remove spark2*
4) Go to Ambari and install the Spark2 client again.

If it still fails, refresh the Tez config and then retry step 4 once more.

This issue can happen for almost any component when a yum transaction is broken or killed.

The output of yum remove spark2* looks like the following:

Removed:
  spark2.noarch 0:2.3.2.3.1.0.0-78.el7
  spark2_3_0_0_0_1634-yarn-shuffle.noarch 0:2.3.1.3.0.0.0-1634
  spark2_3_1_0_0_78.noarch 0:2.3.2.3.1.0.0-78
  spark2_3_1_0_0_78-master.noarch 0:2.3.2.3.1.0.0-78
  spark2_3_1_0_0_78-python.noarch 0:2.3.2.3.1.0.0-78
  spark2_3_1_0_0_78-worker.noarch 0:2.3.2.3.1.0.0-78
  spark2_3_1_0_0_78-yarn-shuffle.noarch 0:2.3.2.3.1.0.0-78
  spark2_3_1_4_0_315.noarch 0:2.3.2.3.1.4.0-315
  spark2_3_1_4_0_315-python.noarch 0:2.3.2.3.1.4.0-315
  spark2_3_1_4_0_315-yarn-shuffle.noarch 0:2.3.2.3.1.4.0-315

Dependency Removed:
  datafu_3_0_0_0_1634.noarch 0:1.3.0.3.0.0.0-1634
  hadoop_3_0_0_0_1634.x86_64 0:3.1.0.3.0.0.0-1634
  hadoop_3_0_0_0_1634-client.x86_64 0:3.1.0.3.0.0.0-1634
  hadoop_3_0_0_0_1634-hdfs.x86_64 0:3.1.0.3.0.0.0-1634
  hadoop_3_0_0_0_1634-libhdfs.x86_64 0:3.1.0.3.0.0.0-1634
  hadoop_3_0_0_0_1634-mapreduce.x86_64 0:3.1.0.3.0.0.0-1634
  hadoop_3_0_0_0_1634-yarn.x86_64 0:3.1.0.3.0.0.0-1634
  hadoop_3_1_0_0_78.x86_64 0:3.1.1.3.1.0.0-78
  hadoop_3_1_0_0_78-client.x86_64 0:3.1.1.3.1.0.0-78
  hadoop_3_1_0_0_78-hdfs.x86_64 0:3.1.1.3.1.0.0-78
  hadoop_3_1_0_0_78-libhdfs.x86_64 0:3.1.1.3.1.0.0-78
  hadoop_3_1_0_0_78-mapreduce.x86_64 0:3.1.1.3.1.0.0-78
  hadoop_3_1_0_0_78-yarn.x86_64 0:3.1.1.3.1.0.0-78
  hadoop_3_1_4_0_315.x86_64 0:3.1.1.3.1.4.0-315
  hadoop_3_1_4_0_315-client.x86_64 0:3.1.1.3.1.4.0-315
  hadoop_3_1_4_0_315-hdfs.x86_64 0:3.1.1.3.1.4.0-315
  hadoop_3_1_4_0_315-libhdfs.x86_64 0:3.1.1.3.1.4.0-315
  hadoop_3_1_4_0_315-mapreduce.x86_64 0:3.1.1.3.1.4.0-315
  hadoop_3_1_4_0_315-yarn.x86_64 0:3.1.1.3.1.4.0-315
  hbase_3_0_0_0_1634.noarch 0:2.0.0.3.0.0.0-1634
  hbase_3_1_0_0_78.noarch 0:2.0.2.3.1.0.0-78
  hbase_3_1_4_0_315.noarch 0:2.0.2.3.1.4.0-315
  hive_3_0_0_0_1634.noarch 0:3.1.0.3.0.0.0-1634
  hive_3_0_0_0_1634-hcatalog.noarch 0:3.1.0.3.0.0.0-1634
  hive_3_0_0_0_1634-jdbc.noarch 0:3.1.0.3.0.0.0-1634
  hive_3_0_0_0_1634-webhcat.noarch 0:3.1.0.3.0.0.0-1634
  hive_3_1_0_0_78.noarch 0:3.1.0.3.1.0.0-78
  hive_3_1_0_0_78-hcatalog.noarch 0:3.1.0.3.1.0.0-78
  hive_3_1_0_0_78-jdbc.noarch 0:3.1.0.3.1.0.0-78
  hive_3_1_4_0_315.noarch 0:3.1.0.3.1.4.0-315
  hive_3_1_4_0_315-hcatalog.noarch 0:3.1.0.3.1.4.0-315
  hive_3_1_4_0_315-jdbc.noarch 0:3.1.0.3.1.4.0-315
  livy2_3_1_0_0_78.noarch 0:0.5.0.3.1.0.0-78
  livy2_3_1_4_0_315.noarch 0:0.5.0.3.1.4.0-315
  pig_3_0_0_0_1634.noarch 0:0.16.0.3.0.0.0-1634
  tez_3_0_0_0_1634.noarch 0:0.9.1.3.0.0.0-1634
  tez_3_1_0_0_78.noarch 0:0.9.1.3.1.0.0-78
  tez_3_1_4_0_315.noarch 0:0.9.1.3.1.4.0-315

Installing package spark2_3_1_0_0_78 ('/usr/bin/yum -y install spark2_3_1_0_0_78')
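For reference, here is the same recovery sequence as a single shell sketch, assuming root access on the affected node and that the Ambari-managed repos are still configured; the final reinstall step is still done from the Ambari UI:

```bash
# Run as root on the node whose Spark2 client install is broken.
yum list installed | grep spark2    # see which spark2 packages are (partially) installed
yum-complete-transaction            # finish or roll back the interrupted yum transaction
yum remove "spark2*"                # quote the glob so yum, not the shell, expands it
# Then, in Ambari: reinstall the Spark2 client on this host.
# If it still fails, refresh the Tez client configs and retry the reinstall once.
```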
						
					
03-23-2020 07:52 AM
Restart Spark2 Client Task Log

stderr: /var/lib/ambari-agent/data/errors-8648.txt

2020-03-20 16:05:33,015 - The 'spark2-client' component did not advertise a version. This may indicate a problem with the component packaging.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 55, in <module>
    SparkClient().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 966, in restart
    self.install(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 35, in install
    self.configure(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 41, in configure
    setup_spark(env, 'client', upgrade_type=upgrade_type, action = 'config')
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/setup_spark.py", line 107, in setup_spark
    mode=0644
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/properties_file.py", line 55, in action_create
    encoding = self.resource.encoding,
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 120, in action_create
    raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] failed, parent directory /usr/hdp/current/spark2-client/conf doesn't exist
   stdout: /var/lib/ambari-agent/data/output-8648.txt 
 2020-03-20 16:05:32,298 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78  2020-03-20 16:05:32,313 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf  2020-03-20 16:05:32,483 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78  2020-03-20 16:05:32,488 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf  2020-03-20 16:05:32,489 - Group['livy'] {}  2020-03-20 16:05:32,491 - Group['spark'] {}  2020-03-20 16:05:32,491 - Group['ranger'] {}  2020-03-20 16:05:32,491 - Group['hdfs'] {}  2020-03-20 16:05:32,491 - Group['hadoop'] {}  2020-03-20 16:05:32,491 - Group['users'] {}  2020-03-20 16:05:32,492 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}  2020-03-20 16:05:32,493 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}  2020-03-20 16:05:32,494 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}  2020-03-20 16:05:32,495 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}  2020-03-20 16:05:32,496 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}  2020-03-20 16:05:32,497 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}  2020-03-20 16:05:32,497 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}  2020-03-20 16:05:32,498 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}  2020-03-20 16:05:32,499 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}  2020-03-20 16:05:32,500 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}  2020-03-20 16:05:32,501 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}  2020-03-20 16:05:32,502 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}  2020-03-20 16:05:32,502 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}  2020-03-20 16:05:32,503 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}  2020-03-20 16:05:32,505 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}  2020-03-20 16:05:32,511 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if  2020-03-20 16:05:32,511 - Group['hdfs'] {}  2020-03-20 16:05:32,512 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}  2020-03-20 16:05:32,512 - FS Type: HDFS  2020-03-20 16:05:32,513 - Directory['/etc/hadoop'] {'mode': 0755}  2020-03-20 16:05:32,525 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}  2020-03-20 16:05:32,526 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}  2020-03-20 
16:05:32,541 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}  2020-03-20 16:05:32,549 - Skipping Execute[('setenforce', '0')] due to not_if  2020-03-20 16:05:32,550 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}  2020-03-20 16:05:32,552 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}  2020-03-20 16:05:32,552 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}  2020-03-20 16:05:32,553 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}  2020-03-20 16:05:32,556 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}  2020-03-20 16:05:32,557 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}  2020-03-20 16:05:32,563 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}  2020-03-20 16:05:32,571 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}  2020-03-20 16:05:32,572 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}  2020-03-20 16:05:32,573 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}  2020-03-20 16:05:32,576 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}  2020-03-20 16:05:32,579 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}  2020-03-20 16:05:32,583 - Skipping unlimited key JCE policy check and setup since it is not required  2020-03-20 16:05:32,631 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}  2020-03-20 16:05:32,650 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')  2020-03-20 16:05:32,715 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}  2020-03-20 16:05:32,735 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')  2020-03-20 16:05:32,928 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf  2020-03-20 16:05:32,938 - Directory['/var/run/spark2'] {'owner': 'spark', 'create_parents': True, 'group': 'hadoop', 'mode': 0775}  2020-03-20 16:05:32,940 - Directory['/var/log/spark2'] {'owner': 'spark', 'group': 'hadoop', 'create_parents': True, 'mode': 0775}  2020-03-20 16:05:32,940 - PropertiesFile['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] {'owner': 'spark', 'key_value_delimiter': ' ', 'group': 'spark', 'mode': 0644, 'properties': ...}  2020-03-20 16:05:32,944 - Generating properties file: /usr/hdp/current/spark2-client/conf/spark-defaults.conf  2020-03-20 16:05:32,944 - File['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] {'owner': 'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644, 'encoding': 'UTF-8'}  2020-03-20 16:05:32,995 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}  
2020-03-20 16:05:33,014 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')  2020-03-20 16:05:33,015 - The 'spark2-client' component did not advertise a version. This may indicate a problem with the component packaging. 
 Command failed after 1 tries 
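The root cause in the traceback is that /usr/hdp/current/spark2-client/conf does not exist, which usually means the spark2 client package on this host is missing or half-installed (matching the broken-yum fix in the accepted answer above). A quick, hedged check on the affected node, assuming a standard HDP layout:

```bash
# Where does the spark2-client symlink point, and does its conf directory exist?
ls -ld /usr/hdp/current/spark2-client
ls -ld /usr/hdp/current/spark2-client/conf
# What does hdp-select think the spark2-client version is on this host?
hdp-select status spark2-client
```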
						
					
Labels:
- Apache Ambari
- Apache Spark
    
	
		
		
11-04-2019 11:36 PM
You need to stop the Ambari Metrics service via Ambari and then remove all of its temporary files. Go to the Ambari Metrics Collector host and execute the command below:

mv /var/lib/ambari-metrics-collector /tmp/ambari-metrics-collector_OLD

Now restart the AMS service, and Ambari Metrics should be working again.
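A minimal end-to-end sketch of the same procedure: the stop/start can be done from the Ambari UI, or via the Ambari REST API as shown below, where AMBARI_HOST, CLUSTER_NAME, and the admin:admin credentials are placeholders for your environment.

```bash
# Stop Ambari Metrics (service state INSTALLED = stopped)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Ambari Metrics"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS"

# On the Metrics Collector host: move the collector data out of the way (path from the answer above)
mv /var/lib/ambari-metrics-collector /tmp/ambari-metrics-collector_OLD

# Start Ambari Metrics again
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Ambari Metrics"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS"
```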
						
					
			
    
	
		
		
10-03-2018 10:58 AM
Hi, please find below the steps for moving the ZooKeeper data directory (a consolidated shell sketch follows this list).

1. Change the dataDir configuration in Ambari (Ambari -> ZooKeeper -> Configs -> ZooKeeper Server -> ZooKeeper directory: /mnt/scratch/zookeeper).
2. Stop all ZooKeeper servers (ZooKeeper -> Service Actions -> Stop).
3. Copy the contents (myid and version-2/) to the new directory and fix the ownership of the folder. Log in to the zookeeper1 node:
   $ cp -r /mnt/sda/zookeeper/* /mnt/scratch/zookeeper/
   $ chown -R zookeeper:hadoop /mnt/scratch/zookeeper/
4. Start only the zookeeper1 node's ZooKeeper server from the Ambari UI.
5. Repeat steps 2-4 for the other two ZooKeeper servers (zookeeper2 and zookeeper3).
6. Restart all dependent services if required.
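A per-node sketch of steps 3-4, assuming the old and new paths from the steps above (/mnt/sda/zookeeper and /mnt/scratch/zookeeper), the default ZooKeeper client port 2181, and that the node's ZooKeeper server has been stopped in Ambari first:

```bash
# Run on each ZooKeeper node in turn, after stopping its ZooKeeper server in Ambari.
OLD_DIR=/mnt/sda/zookeeper        # current dataDir
NEW_DIR=/mnt/scratch/zookeeper    # new dataDir configured in Ambari

mkdir -p "$NEW_DIR"
cp -r "$OLD_DIR"/* "$NEW_DIR"/    # copies myid and version-2/
chown -R zookeeper:hadoop "$NEW_DIR"

# Start this node's ZooKeeper server from the Ambari UI, then verify it is serving
echo ruok | nc localhost 2181     # should print "imok"
```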
						
					
			
    
	
		
		
08-23-2018 03:20 PM
							 Nice Ambari Hardware Matrix... Thanks a lot... 
						
					
			
    
	
		
		
08-20-2018 05:10 AM
I got the Ambari Metrics error "Connection failed: [Errno 111] Connection refused to metrics02-node:6188" in the Ambari UI: the metrics collector connection was refused and metrics data was not available on the Ambari dashboard.

Alert: Metrics Collector Process. Connection failed: [Errno 111] Connection refused to metrics02-node:6188

As you mentioned, I removed the data from hbase.tmp.dir and hbase.rootdir:

rm -rf /var/lib/ambari-metrics-collector/hbase/*
rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/*

Then I restarted Ambari Metrics from Ambari and it worked fine. Thanks a lot.
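As a quick sanity check after such a restart (a minimal sketch; the host name and port are taken from the alert above), you can confirm the Metrics Collector is listening again:

```bash
# On metrics02-node: is the collector listening on the port from the alert?
ss -ltn | grep 6188
# From any other node: any HTTP response means the port is reachable (no more Errno 111)
curl -sv http://metrics02-node:6188/ 2>&1 | head -n 5
```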
						
					
			
    
	
		
		
07-27-2018 09:41 AM
Here is my solution: https://community.hortonworks.com/questions/208928/increase-open-file-limit-of-the-user-to-scale-for.html
						
					
			
    
	
		
		
07-27-2018 08:45 AM

1 Kudo
Here is the solution.

1. Services: Hive, HBase, HDFS, Oozie, YARN, MapReduce, Ambari Metrics

For these services you can change the open file limit directly from the Ambari UI, under Service Configs, via the <username of the service>_user_nofile_limit property.

Examples:
1. Ambari UI -> HIVE -> Configs -> Advanced -> Advanced hive-env -> hive_user_nofile_limit = 64000
2. Ambari UI -> Ambari Metrics -> Configs -> Advanced ams-hbase-env -> max_open_files_limit = 64000
3. Ambari UI -> YARN -> Configs -> Advanced yarn-env -> yarn_user_nofile_limit = 64000
4. Ambari UI -> MAPREDUCE2 -> Configs -> Advanced mapred-env -> mapred_user_nofile_limit = 64000

2. Services: ZooKeeper, Spark, WebHCat, Ranger (users: zookeeper, spark, hcat, ranger)

For the spark, hcat, zookeeper, and ranger users, add the lines below to /etc/security/limits.conf on their respective nodes:

zookeeper  -    nofile    64000
spark      -    nofile    64000
hcat       -    nofile    64000
ranger     -    nofile    64000

After saving the changes, log in as the spark/hcat/zookeeper user and run the ulimit -a command, then check the output. It should contain the value open files (-n) 64000.

Please find a sample ulimit -a output below:

[spark@node01]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 513179
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 64000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

If the ulimit -a values are still not updated, add the line below to the file /etc/pam.d/su:

vim /etc/pam.d/su
session         required        pam_limits.so

Repeat the above process and it will succeed.
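A small verification sketch, assuming the service user names listed above exist on the node and that the script is run as root; it simply reports the effective nofile limit for each service user so you can confirm the 64000 value took effect.

```bash
#!/bin/bash
# Report the effective open-file (nofile) limit for each service user listed in the answer.
# Run as root on each node; users that do not exist on the node are skipped.
for u in hive hbase hdfs oozie yarn mapred ams zookeeper spark hcat ranger; do
  if id "$u" >/dev/null 2>&1; then
    limit=$(su -s /bin/bash -c 'ulimit -n' "$u")
    echo "$u: open files = $limit"   # expect 64000 after the changes above
  fi
done
```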
						
					
			
    
	
		
		
07-27-2018 08:32 AM
Increase open file limit of the user to scale for large data processing: hive, hbase, hdfs, oozie, yarn, mapred, ZooKeeper, Spark, HCat
						
					
    
	
		
		
07-25-2018 05:00 AM
@Pedro Andrade thanks for your reply. I checked the permissions and they were fine (the file was owned by "hdfs:hdfs" with permissions set to 644). The node was out of service for an extended time, so I followed the steps below (a short verification sketch follows).

1. Delete all data and directories inside dfs.datanode.data.dir (but keep that directory itself), or move the data aside, for example:
   $ mv /mnt/dn/sdl/datanode/current /mnt/dn/sdl/datanode/current.24072018
2. Restart the DataNode daemon or service.
3. Later, once the DataNode is healthy, the backup data can be deleted:
   $ rm -rf /mnt/dn/sdl/datanode/current.24072018

Now the DataNode is up and live. Thanks for the Hortonworks help and contribution.

Reference: https://community.hortonworks.com/questions/192751/databode-uuid-unassigned.html
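A minimal verification sketch, assuming an HDP-style setup where the DataNode runs as the hdfs user and an HDFS client is available:

```bash
# On the repaired node: confirm a DataNode process is running
ps -ef | grep -i '[d]atanode'

# From any node with an HDFS client, as the hdfs user: the node should count as live again
sudo -u hdfs hdfs dfsadmin -report | grep -i 'live datanodes'
```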
						
					