Member since 09-29-2014

224 Posts
11 Kudos Received
10 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1467 | 01-24-2024 10:45 PM |
| | 5121 | 03-30-2022 08:56 PM |
| | 4084 | 08-12-2021 10:40 AM |
| | 9244 | 04-28-2021 01:30 AM |
| | 4172 | 09-27-2016 08:16 PM |
			
    
	
		
		
01-24-2024 10:45 PM
Sentry issue: "Relative URI paths not supported by Sentry"

I was reading the Sentry documentation and found this issue: relative URI paths are not supported. I am not sure, but I think viewfs://cluster6 is treated as a relative URI, while the real paths are hdfs://nameservice and hdfs://nameservice2.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-24-2024 10:17 PM

1 Kudo
Recently I started investigating this issue through debug logging. The root cause is that Sentry did not add the path to the Sentry table when using viewfs://clusterX, but it did add the path when using hdfs://nameservice or hdfs://nameservice2. The debug output is below:

1. DEBUG org.apache.sentry.service.thrift.NotificationProcessor: HMS Path Update [OP : addPath, authzObj : jlwang18, path : hdfs://nameservice/user/hive/warehouse/jlwang18.db, notification event ID: 103361576]
2. DEBUG org.apache.sentry.service.thrift.NotificationProcessor: HMS Path Update [OP : addPath, authzObj : jlwang17, path : viewfs://cluster6/user/hive/warehouse/jlwang17.db] - nothing to add, notification event ID: 103361563]

But I still don't know why there is "nothing to add" when using viewfs://cluster6.
						
					
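The "nothing to add" outcome looks like a scheme filter on incoming HMS path updates: if the URI scheme is not one the sync service treats as HDFS, the path is silently dropped. The sketch below is only an illustration of that idea, not Sentry's actual code; the function name and the accepted-scheme set are assumptions:

```python
from urllib.parse import urlparse

# Schemes the sync service accepts; "viewfs" is assumed absent,
# which would explain viewfs:// paths being silently dropped.
ACCEPTED_SCHEMES = {"hdfs"}

def accept_path(uri: str) -> bool:
    """Return True if this path would be added to the auth table."""
    scheme = urlparse(uri).scheme
    return scheme in ACCEPTED_SCHEMES

accept_path("hdfs://nameservice/user/hive/warehouse/jlwang18.db")   # added
accept_path("viewfs://cluster6/user/hive/warehouse/jlwang17.db")    # dropped
```

If something like this happens inside NotificationProcessor, either accepting viewfs or resolving viewfs paths to their backing hdfs URIs before the update would change the behavior.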
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-17-2024 01:35 AM
The platform is CDH 5.14.4. After setting up federation, the HDFS entry point is viewfs://cluster6/ and the underlying NameNode entry points are hdfs://nameservice/ and hdfs://nameservice2/. I added a new warehouse directory, /user2/hive/warehouse, managed by nameservice2.

The strange thing I found is: if I create a database using hdfs://nameservice2, Sentry can sync the ACLs with HDFS, but if I use viewfs://cluster6/, Sentry cannot sync the ACLs. For example:

create database test location '/user2/hive/warehouse/test.db' -- ACLs are not synced

create database test location 'hdfs://nameservice2/user2/hive/warehouse/test.db' -- this works

Does anyone know why?
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
Labels:
- Apache Sentry
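One plausible reading of the difference (an assumption, not something confirmed by the logs): a location without a scheme is qualified against fs.defaultFS, which after federation is viewfs://cluster6/, so the two CREATE DATABASE statements hand Sentry paths with different schemes. A minimal sketch of that qualification step; `DEFAULT_FS` and `qualify` are illustrative names, not a real Hadoop API:

```python
from urllib.parse import urlparse

# fs.defaultFS after federation (taken from the post above).
DEFAULT_FS = "viewfs://cluster6"

def qualify(location: str) -> str:
    """Prefix scheme-less locations with fs.defaultFS, the way a bare
    path gets fully qualified before downstream services see it."""
    if urlparse(location).scheme:
        return location  # already fully qualified, left untouched
    return DEFAULT_FS + location

qualify("/user2/hive/warehouse/test.db")
# -> "viewfs://cluster6/user2/hive/warehouse/test.db"  (the failing case)
qualify("hdfs://nameservice2/user2/hive/warehouse/test.db")
# -> unchanged  (the working case)
```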
    
	
		
		
04-05-2022 07:43 AM
@araujo do you have any suggestions for this case?
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
04-01-2022 04:14 AM
Here is a complete Sqoop job log so you can check more details (this is just an example; a Hive query behaves the same):

[root@host243 ~]# sqoop export --connect jdbc:mysql://10.37.144.6:3306/xaxsuatdb?characterEncoding=utf-8 --username root --password xaxs2016 --table customer_feature --export-dir "/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h" --input-fields-terminated-by '\001' --input-null-string '\\N' --input-null-non-string '\\N' --update-key CUSTOMER_BP,ORG_CODE,TAG_ID,VERSION --columns CUSTOMER_BP,ORG_CODE,CPMO_COP,TAG_ID,TAG_NAME,TAG_VALUE,VERSION,UPDATE_TIME --update-mode allowinsert -m 1;
Warning: /data/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
22/04/01 19:10:03 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-cdh6.2.0
22/04/01 19:10:04 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
22/04/01 19:10:04 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
22/04/01 19:10:04 INFO tool.CodeGenTool: Beginning code generation
22/04/01 19:10:04 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customer_feature` AS t LIMIT 1
22/04/01 19:10:04 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customer_feature` AS t LIMIT 1
22/04/01 19:10:04 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /data/cloudera/parcels/CDH/lib/hadoop-mapreduce
22/04/01 19:10:05 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-root/compile/7255ac988b70c7d9b5eb963a6f4946f5/customer_feature.java to /root/./customer_feature.java. Error: Destination '/root/./customer_feature.java' already exists
22/04/01 19:10:05 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/7255ac988b70c7d9b5eb963a6f4946f5/customer_feature.jar
22/04/01 19:10:05 WARN manager.MySQLManager: MySQL Connector upsert functionality is using INSERT ON
22/04/01 19:10:05 WARN manager.MySQLManager: DUPLICATE KEY UPDATE clause that relies on table's unique key.
22/04/01 19:10:05 WARN manager.MySQLManager: Insert/update distinction is therefore independent on column
22/04/01 19:10:05 WARN manager.MySQLManager: names specified in --update-key parameter. Please see MySQL
22/04/01 19:10:05 WARN manager.MySQLManager: documentation for additional limitations.
22/04/01 19:10:05 INFO mapreduce.ExportJobBase: Beginning export of customer_feature
22/04/01 19:10:06 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
22/04/01 19:10:06 WARN mapreduce.ExportJobBase: IOException checking input file header: java.io.EOFException
22/04/01 19:10:06 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
22/04/01 19:10:06 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
22/04/01 19:10:06 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
22/04/01 19:10:07 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm127
22/04/01 19:10:07 INFO hdfs.DFSClient: Created token for hive: HDFS_DELEGATION_TOKEN owner=hive@DEV.ENN.CN, renewer=yarn, realUser=, issueDate=1648811407106, maxDate=1649416207106, sequenceNumber=176449, masterKeyId=2259 on ha-hdfs:nameservice1
22/04/01 19:10:07 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for hive: HDFS_DELEGATION_TOKEN owner=hive@DEV.ENN.CN, renewer=yarn, realUser=, issueDate=1648811407106, maxDate=1649416207106, sequenceNumber=176449, masterKeyId=2259)
22/04/01 19:10:07 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/hive/.staging/job_1648759620123_0052
22/04/01 19:10:09 INFO input.FileInputFormat: Total input files to process : 37
22/04/01 19:10:09 INFO input.FileInputFormat: Total input files to process : 37
22/04/01 19:10:09 INFO mapreduce.JobSubmitter: number of splits:2
22/04/01 19:10:09 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
22/04/01 19:10:09 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
22/04/01 19:10:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1648759620123_0052
22/04/01 19:10:09 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for hive: HDFS_DELEGATION_TOKEN owner=hive@DEV.ENN.CN, renewer=yarn, realUser=, issueDate=1648811407106, maxDate=1649416207106, sequenceNumber=176449, masterKeyId=2259)]
22/04/01 19:10:09 INFO conf.Configuration: resource-types.xml not found
22/04/01 19:10:09 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
22/04/01 19:10:09 INFO impl.YarnClientImpl: Submitted application application_1648759620123_0052
22/04/01 19:10:09 INFO mapreduce.Job: The url to track the job: http://host243.master.dev.cluster.enn.cn:8088/proxy/application_1648759620123_0052/
22/04/01 19:10:09 INFO mapreduce.Job: Running job: job_1648759620123_0052
22/04/01 19:10:17 INFO mapreduce.Job: Job job_1648759620123_0052 running in uber mode : false
22/04/01 19:10:17 INFO mapreduce.Job:  map 0% reduce 0%
22/04/01 19:10:26 INFO mapreduce.Job:  map 50% reduce 0%
22/04/01 19:10:38 INFO mapreduce.Job:  map 59% reduce 0%
22/04/01 19:10:44 INFO mapreduce.Job:  map 67% reduce 0%
22/04/01 19:10:50 INFO mapreduce.Job:  map 75% reduce 0%
22/04/01 19:10:56 INFO mapreduce.Job:  map 84% reduce 0%
22/04/01 19:11:02 INFO mapreduce.Job:  map 92% reduce 0%
22/04/01 19:11:07 INFO mapreduce.Job:  map 100% reduce 0%
22/04/01 19:11:07 INFO mapreduce.Job: Job job_1648759620123_0052 completed successfully
22/04/01 19:11:07 INFO mapreduce.Job: Counters: 34
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=504986
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=135301211
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=113
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
                HDFS: Number of bytes read erasure-coded=0
        Job Counters 
                Launched map tasks=2
                Other local map tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=106796
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=53398
                Total vcore-milliseconds taken by all map tasks=53398
                Total megabyte-milliseconds taken by all map tasks=109359104
        Map-Reduce Framework
                Map input records=1500414
                Map output records=1500414
                Input split bytes=3902
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=296
                CPU time spent (ms)=35690
                Physical memory (bytes) snapshot=930320384
                Virtual memory (bytes) snapshot=5727088640
                Total committed heap usage (bytes)=1557135360
                Peak Map Physical memory (bytes)=534118400
                Peak Map Virtual memory (bytes)=2866765824
        File Input Format Counters 
                Bytes Read=0
        File Output Format Counters 
                Bytes Written=0
22/04/01 19:11:07 INFO mapreduce.ExportJobBase: Transferred 129.0333 MB in 60.3414 seconds (2.1384 MB/sec)
22/04/01 19:11:07 INFO mapreduce.ExportJobBase: Exported 1500414 records.

We can see that the Sqoop job finished successfully, but we can find this error in the container logs:

Log Type: container-localizer-syslog
Log Upload Time: Fri Apr 01 19:11:14 +0800 2022
Log Length: 3398
2022-04-01 19:10:18,487 WARN [main] org.apache.hadoop.security.LdapGroupsMapping: Exception while trying to get password for alias hadoop.security.group.mapping.ldap.bind.password: 
java.io.IOException: Configuration problem with provider path.
	at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2272)
	at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2191)
	at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:719)
	at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:616)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
	at org.apache.hadoop.security.Groups.<init>(Groups.java:106)
	at org.apache.hadoop.security.Groups.<init>(Groups.java:102)
	at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:451)
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:352)
	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:314)
	at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1973)
	at org.apache.hadoop.security.UserGroupInformation.createLoginUser(UserGroupInformation.java:743)
	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:693)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:604)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:461)
Caused by: java.io.FileNotFoundException: /var/run/cloudera-scm-agent/process/26878-yarn-NODEMANAGER/creds.localjceks (Permission denied)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(FileInputStream.java:195)
	at java.io.FileInputStream.<init>(FileInputStream.java:138)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.getInputStreamForFile(LocalJavaKeyStoreProvider.java:83)
	at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.locateKeystore(AbstractJavaKeyStoreProvider.java:321)
	at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:86)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.<init>(LocalJavaKeyStoreProvider.java:58)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.<init>(LocalJavaKeyStoreProvider.java:50)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider$Factory.createProvider(LocalJavaKeyStoreProvider.java:177)
	at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73)
	at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2253)
	... 15 more
2022-04-01 19:10:18,723 INFO [main] org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2022-04-01 19:10:19,741 WARN [ContainerLocalizer Downloader] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
Log Type: prelaunch.err
Log Upload Time: Fri Apr 01 19:11:14 +0800 2022
Log Length: 0
Log Type: prelaunch.out
Log Upload Time: Fri Apr 01 19:11:14 +0800 2022
Log Length: 70
Setting up env variables
Setting up job resources
Launching container
Log Type: stderr
Log Upload Time: Fri Apr 01 19:11:14 +0800 2022
Log Length: 0
Log Type: stdout
Log Upload Time: Fri Apr 01 19:11:14 +0800 2022
Log Length: 0
Log Type: syslog
Log Upload Time: Fri Apr 01 19:11:14 +0800 2022
Log Length: 45172
Showing 4096 bytes of 45172 total.
ainer_e483_1648759620123_0052_01_000002/transaction-api-1.1.jar:/data/yarn/nm/usercache/hive/appcache/application_1648759620123_0052/container_e483_1648759620123_0052_01_000002/commons-jexl-2.1.1.jar
java.io.tmpdir: /data/yarn/nm/usercache/hive/appcache/application_1648759620123_0052/container_e483_1648759620123_0052_01_000002/tmp
user.dir: /data/yarn/nm/usercache/hive/appcache/application_1648759620123_0052/container_e483_1648759620123_0052_01_000002
user.name: hive
************************************************************/
2022-04-01 19:10:22,636 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2022-04-01 19:10:23,151 INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2022-04-01 19:10:23,248 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
2022-04-01 19:10:23,427 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h/000005_0:0+1582218,/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h/000008_0:0+22415866,/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h/000014_0:0+23029717,/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h/000015_0:0+10901528,/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h/000017_0:0+17525005,/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h/000018_0:0+16289569,/user/hive/warehouse/penglin.db/label_cus_kpi_hightable_h/000036_0:0+43553385
2022-04-01 19:10:23,432 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
2022-04-01 19:10:23,432 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
2022-04-01 19:10:23,432 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
2022-04-01 19:11:04,627 INFO [Thread-14] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
2022-04-01 19:11:04,671 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1648759620123_0052_m_000000_0 is done. And is in the process of committing
2022-04-01 19:11:04,715 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1648759620123_0052_m_000000_0' done.
2022-04-01 19:11:04,728 INFO [main] org.apache.hadoop.mapred.Task: Final Counters for attempt_1648759620123_0052_m_000000_0: Counters: 26
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=252493
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=135298087
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=22
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=0
		HDFS: Number of bytes read erasure-coded=0
	Map-Reduce Framework
		Map input records=1500414
		Map output records=1500414
		Input split bytes=778
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=231
		CPU time spent (ms)=34070
		Physical memory (bytes) snapshot=534118400
		Virtual memory (bytes) snapshot=2866765824
		Total committed heap usage (bytes)=788529152
		Peak Map Physical memory (bytes)=534118400
		Peak Map Virtual memory (bytes)=2866765824
	File Input Format Counters 
		Bytes Read=0
	File Output Format Counters 
		Bytes Written=0
2022-04-01 19:11:04,829 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2022-04-01 19:11:04,829 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2022-04-01 19:11:04,829 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete       
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
04-01-2022 04:07 AM
Log Type: container-localizer-syslog
Log Upload Time: Thu Mar 31 02:24:54 +0800 2022
Log Length: 3720
2022-03-31 02:24:16,275 WARN [main] org.apache.hadoop.security.LdapGroupsMapping: Exception while trying to get password for alias hadoop.security.group.mapping.ldap.bind.password: 
java.io.IOException: Configuration problem with provider path.
	at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2272)
	at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2191)
	at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:719)
	at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:616)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
	at org.apache.hadoop.security.Groups.<init>(Groups.java:106)
	at org.apache.hadoop.security.Groups.<init>(Groups.java:102)
	at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:451)
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:352)
	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:314)
	at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1973)
	at org.apache.hadoop.security.UserGroupInformation.createLoginUser(UserGroupInformation.java:743)
	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:693)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:604)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:461)
Caused by: java.io.FileNotFoundException: /var/run/cloudera-scm-agent/process/26618-yarn-NODEMANAGER/creds.localjceks (Permission denied)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(FileInputStream.java:195)
	at java.io.FileInputStream.<init>(FileInputStream.java:138)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.getInputStreamForFile(LocalJavaKeyStoreProvider.java:83)
	at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.locateKeystore(AbstractJavaKeyStoreProvider.java:321)
	at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:86)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.<init>(LocalJavaKeyStoreProvider.java:58)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.<init>(LocalJavaKeyStoreProvider.java:50)
	at org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider$Factory.createProvider(LocalJavaKeyStoreProvider.java:177)
	at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73)
	at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2253)
	... 15 more
2022-03-31 02:24:16,438 INFO [main] org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2022-03-31 02:24:17,272 WARN [ContainerLocalizer Downloader] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
2022-03-31 02:24:19,294 WARN [ContainerLocalizer Downloader] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error

The warning shows up in container-localizer-syslog. As you know, every map/reduce task has several logs: if we open the YARN web UI, pick any job, and click a task to check its details, we can see logs like the following:

container-localizer-syslog : Total file length is 3398 bytes.
prelaunch.err : Total file length is 0 bytes.
prelaunch.out : Total file length is 70 bytes.
stderr : Total file length is 1643 bytes.
stdout : Total file length is 0 bytes.
syslog : Total file length is 141307 bytes.    
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-31-2022 07:10 PM
Actually, I don't know which user is supposed to access this file while running map/reduce tasks (Hive queries or Sqoop; there may be other programs as well).
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-31-2022 07:02 PM
Hi @araujo, please refer to the information below:

[appadmin@host21 ~]$ namei -l /run/cloudera-scm-agent/process/9295-yarn-RESOURCEMANAGER/creds.localjceks
f: /run/cloudera-scm-agent/process/9295-yarn-RESOURCEMANAGER/creds.localjceks
dr-xr-xr-x root         root         /
drwxr-xr-x root         root         run
drwxr-xr-x cloudera-scm cloudera-scm cloudera-scm-agent
drwxr-x--x root         root         process
drwxr-x--x yarn         hadoop       9295-yarn-RESOURCEMANAGER
-rw-r----- yarn         hadoop       creds.localjceks

[appadmin@host21 ~]$ ls -ln /run/cloudera-scm-agent/process/9295-yarn-RESOURCEMANAGER/creds.localjceks
-rw-r----- 1 981 984 533 Mar 25 03:10 /run/cloudera-scm-agent/process/9295-yarn-RESOURCEMANAGER/creds.localjceks

The creds.localjceks owner is 981:984; the output below shows the yarn user ID and the hadoop group ID:

[root@host21 ~]# cat /etc/passwd | grep 981
solr:x:987:981:Solr:/var/lib/solr:/sbin/nologin
yarn:x:981:975:Hadoop Yarn:/var/lib/hadoop-yarn:/bin/bash
[root@host21 ~]# 
[root@host21 ~]# cat /etc/group | grep hadoop
hadoop:x:984:hdfs,mapred,yarn       
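The namei output above is enough to explain the Permission denied in the container logs: creds.localjceks is mode 0640 and owned by yarn:hadoop, so any task user who is neither yarn nor a member of the hadoop group cannot read it. Below is a sketch of the standard Unix owner/group/other read check, with the yarn uid (981) and hadoop gid (984) taken from the listing above; the non-yarn uid/gid values in the examples are made up for illustration:

```python
def can_read(mode: int, owner_uid: int, owner_gid: int,
             uid: int, gids: set) -> bool:
    """Classic Unix read check: owner bits, else group bits, else other."""
    if uid == owner_uid:
        return bool(mode & 0o400)
    if owner_gid in gids:
        return bool(mode & 0o040)
    return bool(mode & 0o004)

# creds.localjceks: mode 0640, owner yarn (981), group hadoop (984)
can_read(0o640, 981, 984, uid=981, gids={975})    # yarn itself: True
can_read(0o640, 981, 984, uid=976, gids={976})    # a user outside hadoop: False
can_read(0o640, 981, 984, uid=1000, gids={984})   # any member of hadoop: True
```

By this check, a task user such as hive would need membership in the hadoop group (or a wider file mode) to read the file.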
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-31-2022 04:05 PM
							 /run/cloudera-scm-agent/process/9506-IMPALA-impala-CATALOGSERVER-45e2ae1dbc69e00f769182717dd71aa8-ImpalaRoleDiagnosticsCollection/creds.localjceks
/run/cloudera-scm-agent/process/9478-hue-KT_RENEWER/creds.localjceks
/run/cloudera-scm-agent/process/9476-hue-HUE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9471-impala-CATALOGSERVER/creds.localjceks
/run/cloudera-scm-agent/process/9462-impala-CATALOGSERVER/creds.localjceks
/run/cloudera-scm-agent/process/9456-sentry-SENTRY_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9455-oozie-OOZIE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9454-hue-KT_RENEWER/creds.localjceks
/run/cloudera-scm-agent/process/9452-hue-HUE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9448-hive-HIVEMETASTORE/creds.localjceks
/run/cloudera-scm-agent/process/9446-sentry-SENTRY_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9445-oozie-OOZIE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9444-hue-KT_RENEWER/creds.localjceks
/run/cloudera-scm-agent/process/9442-hue-HUE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9438-hive-HIVEMETASTORE/creds.localjceks
/run/cloudera-scm-agent/process/9437-hue-KT_RENEWER/creds.localjceks
/run/cloudera-scm-agent/process/9435-hue-HUE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9429-impala-CATALOGSERVER/creds.localjceks
/run/cloudera-scm-agent/process/9424-oozie-OOZIE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9420-hive-HIVEMETASTORE/creds.localjceks
/run/cloudera-scm-agent/process/9400-sentry-SENTRY_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9399-yarn-RESOURCEMANAGER/creds.localjceks
/run/cloudera-scm-agent/process/9388-yarn-JOBHISTORY/creds.localjceks
/run/cloudera-scm-agent/process/9413-hbase-REGIONSERVER/creds.localjceks
/run/cloudera-scm-agent/process/9411-hbase-MASTER/creds.localjceks
/run/cloudera-scm-agent/process/9377-hdfs-NAMENODE-nnRpcWait/creds.localjceks
/run/cloudera-scm-agent/process/9361-hdfs-NAMENODE/creds.localjceks
/run/cloudera-scm-agent/process/9351-HBaseShutdown/creds.localjceks
/run/cloudera-scm-agent/process/9343-hue-HUE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9345-hue-KT_RENEWER/creds.localjceks
/run/cloudera-scm-agent/process/9339-hive-HIVEMETASTORE/creds.localjceks
/run/cloudera-scm-agent/process/9338-oozie-OOZIE_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9337-sentry-SENTRY_SERVER/creds.localjceks
/run/cloudera-scm-agent/process/9333-hue-KT_RENEWER/creds.localjceks

Every role has its own creds.localjceks, and the default permission is 640. I picked a few roles' creds.localjceks for you to check:

[root@host21 ~]# ls -l /run/cloudera-scm-agent/process/9478-hue-KT_RENEWER/creds.localjceks
-rw-r----- 1 hue hue 1501 Mar 25 04:11 /run/cloudera-scm-agent/process/9478-hue-KT_RENEWER/creds.localjceks
[root@host21 ~]# ls -l /run/cloudera-scm-agent/process/9471-impala-CATALOGSERVER/creds.localjceks
-rw-r----- 1 impala impala 533 Mar 25 04:01 /run/cloudera-scm-agent/process/9471-impala-CATALOGSERVER/creds.localjceks
[root@host21 ~]# 
[root@host21 ~]# ls -l /run/cloudera-scm-agent/process/8788-hive-HIVEMETASTORE/creds.localjceks
-rw-r----- 1 hive hive 528 Mar  4 09:34 /run/cloudera-scm-agent/process/8788-hive-HIVEMETASTORE/creds.localjceks
[root@host21 ~]# ls -l /run/cloudera-scm-agent/process/9295-yarn-RESOURCEMANAGER/creds.localjceks
-rw-r----- 1 yarn hadoop 533 Mar 25 03:10 /run/cloudera-scm-agent/process/9295-yarn-RESOURCEMANAGER/creds.localjceks

When I run a Hive SQL query or a Sqoop job, the Permission denied on creds.localjceks happens.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-31-2022 02:02 PM
							 [root@host243 ~]# id yarn
uid=979(yarn) gid=973(yarn) groups=973(yarn),982(hadoop),979(solr)
[root@host243 ~]# 
[root@host243 ~]# 
[root@host243 ~]# hdfs groups yarn
yarn : hadoop yarn

The OpenLDAP users were imported from the OS users, so I think the OpenLDAP users and groups are the same as the OS users/groups. There is just one thing I'd like to share with you: after integrating with OpenLDAP, I have not deleted the OS users.
						
					