Member since 03-11-2020

197 Posts · 30 Kudos Received · 40 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2292 | 11-07-2024 08:47 AM |
| | 1608 | 11-07-2024 08:36 AM |
| | 1102 | 06-18-2024 01:34 AM |
| | 760 | 06-18-2024 01:25 AM |
| | 924 | 06-18-2024 01:16 AM |
02-25-2025 10:55 PM · 2 Kudos

NiFi automatically encrypts sensitive properties (e.g., passwords) when the application starts. You can provide the plain-text values in the nifi.properties file, and NiFi will replace them with encrypted values on startup. You can verify this as follows:

```
cd /var/run/cloudera-scm-agent/process/
grep -Er nifi.security.keyPasswd
```

If you have a property like nifi.security.keyPasswd=myPassword, NiFi will encrypt it and store it in the format:

```
nifi.security.keyPasswd={aes-256-gcm}encryptedValue
```

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
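As a quick follow-up check, you can also search for the protection-scheme marker itself. A minimal sketch, assuming a Cloudera-managed NiFi; the exact process directory name under /var/run/cloudera-scm-agent/process/ differs per host:

```bash
# List properties that have already been protected: encrypted values carry a
# protection-scheme prefix such as {aes-256-gcm}.
cd /var/run/cloudera-scm-agent/process/
grep -Er --include='nifi.properties' '\{aes-256-gcm\}' . | head
```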
						
					
02-25-2025 04:41 AM

I would recommend reaching out to the Cloudera accounts team for assistance. They can investigate the issue and help you regain access to the support portal; they have the necessary expertise to handle account-related issues and to ensure that you can use the support services you purchased.
						
					
02-25-2025 04:16 AM

Summary

This article details the resolution of a startup failure for Knox Gateway roles during a Data Hub upgrade, involving the reconfiguration of a missing knox_pac4j_password.

Symptoms

- While upgrading the Data Hub cluster, the Knox Gateway services failed to start.
- Error message: `java.lang.IllegalArgumentException: The variable [${knox_pac4j_password}] does not have a corresponding value.`
- The upgrade process was halted by this failure.

Cause

- The probable cause of the startup failure was an improperly generated or missing knox_pac4j_password during the upgrade process.
- The built-in upgrade handler should have generated the Knox secret automatically, but it did not.

Instructions

To resolve the issue, the following steps were taken (a sketch of steps 2-5 follows this list):

1. Verify whether an entry for knox_pac4j_password exists in the Cloudera Manager database: `SELECT * FROM configs WHERE attr LIKE '%knox_pac4j_password%';`
2. Generate a random 16-byte password with OpenSSL: `openssl rand -base64 16`
3. Create a JSON file (test.json) containing the generated password: `{ "items": [ { "name": "knox_pac4j_password", "value": "[Generated_Password]" } ] }`
4. Confirm the Knox service name via a Cloudera API call (use curl to fetch the service details).
5. Update the Cloudera Manager configuration with the new password via the Cloudera API (use curl with the PUT method).
6. Confirm that the new value is persisted in the Cloudera Manager database by re-running the query from step 1.
7. Restart or start the Knox service from the Cloudera Manager UI or API.

Following these steps resolved the issue: the Knox service started without errors, and the upgrade process was able to continue.
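A minimal shell sketch of steps 2-5. The Cloudera Manager host (cm-host:7180), admin credentials, API version (v41), and the cluster/service names (MyCluster, knox) are all placeholder assumptions, and the exact config endpoint may differ in your deployment:

```bash
# All hosts, credentials, and names below are placeholders.
PASS=$(openssl rand -base64 16)    # random 16-byte secret (step 2)

# test.json carries the new knox_pac4j_password value (step 3)
cat > test.json <<EOF
{ "items": [ { "name": "knox_pac4j_password", "value": "${PASS}" } ] }
EOF

# Confirm the Knox service name (step 4)
curl -u admin:admin "http://cm-host:7180/api/v41/clusters/MyCluster/services"

# Push the new value into the Knox service configuration (step 5)
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
     -d @test.json \
     "http://cm-host:7180/api/v41/clusters/MyCluster/services/knox/config"
```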
						
					
02-25-2025 04:08 AM

Summary

Resolved service errors on Datanode#09 by identifying and addressing NFS mount issues. The Cloudera agent was reconfigured to bypass NFS checks temporarily, allowing services to return to a healthy state.

Symptoms

- Multiple service errors reported on Datanode#09.
- The Cloudera agent was not in contact with Cloudera Manager.
- Filesystem usage for certain nodev filesystems was unknown due to an inactive worker process.

Cause

- The NFS partition was not properly mounted on the host, which led to problems with the Cloudera agent's health checks.
- The Cloudera agent configuration was set to monitor NFS mounts (monitored_nodev_filesystem_types=nfs,nfs4,tmpfs), and this check failed because of the unmounted NFS partition.

Instructions

1. Temporarily comment out the NFS monitoring line in the Cloudera agent configuration file to bypass the check and restore agent communication (a sketch follows this list):
   - Open /etc/cloudera-scm-agent/config.ini in a text editor.
   - Comment out the line monitored_nodev_filesystem_types=nfs,nfs4,tmpfs.
   - Restart the Cloudera agent service: service cloudera-scm-agent restart.
2. Once the NFS mount point is recovered with the help of the OS team, restore the original configuration:
   - Uncomment the previously commented line in /etc/cloudera-scm-agent/config.ini.
   - Restart the Cloudera agent service to apply the change.
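A minimal sketch of step 1 as a one-liner; sed -i.bak keeps a backup copy of config.ini, and on systemd hosts the restart would be `systemctl restart cloudera-scm-agent` instead:

```bash
# Comment out the NFS monitoring line (a backup is written to config.ini.bak)
sed -i.bak 's/^monitored_nodev_filesystem_types=nfs,nfs4,tmpfs/#&/' \
    /etc/cloudera-scm-agent/config.ini
service cloudera-scm-agent restart
```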
						
					
11-11-2024 02:21 AM · 1 Kudo

@ecampos Yes, try the same with localhost or the IP address.
						
					
11-07-2024 08:47 AM

@ecampos To resolve this issue and allow the CM server to start, modify the db.properties file as below.

Note: Back up the current file at /etc/cloudera-scm-server/db.properties first.

```
com.cloudera.cmf.db.type=mysql
com.cloudera.cmf.orm.hibernate.connection.url=jdbc:mysql://<db_host>/scm?allowPublicKeyRetrieval=true&useSSL=false
com.cloudera.cmf.orm.hibernate.connection.username=scm
com.cloudera.cmf.orm.hibernate.connection.password=cloudera
com.cloudera.cmf.db.setupType=EXTERNAL
```
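As an optional sanity check before restarting (assuming the mysql client is installed and the credentials above), you can confirm that the scm account can actually reach the database:

```bash
# Verify DB connectivity with the same credentials db.properties will use;
# <db_host> is the placeholder from the snippet above.
mysql -h <db_host> -u scm -pcloudera -e 'USE scm; SHOW TABLES;' | head

# Then restart Cloudera Manager (systemd host assumed)
systemctl restart cloudera-scm-server
```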
						
					
11-07-2024 08:36 AM

To disable Kerberos in Cloudera, it is important to know that there is no direct option to turn it off entirely once it has been enabled, as Kerberos is typically applied cluster-wide. Disabling Kerberos after it has been enabled is generally not recommended because of the security implications.
						
					
06-18-2024 01:34 AM

First, it is essential to determine the cause of the permissions change. Possible factors include a misconfigured script or process running on the server, a security setting, or an interaction with other components of your system.

To troubleshoot, start by checking your system logs (/var/log/syslog or similar) and any relevant application logs for hints about what is changing the permissions. Also review any configuration files or scripts that might manage the permissions of the /var/log directory.

Since you mentioned that SELinux is disabled and the firewall is also disabled, it is less likely that these are causing the problem. However, I would recommend double-checking any remaining security settings or access-control policies that could affect the /var/log directory.

As a temporary workaround, you mentioned using a crontab that resets the permissions of the directory to 755 every 5 minutes. While this may alleviate the issue temporarily, it is not a sustainable long-term solution; it would be better to identify and address the root cause of the permissions change.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
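If the logs turn up nothing, a Linux audit watch on the directory's attributes can identify the offending process. A minimal sketch, assuming auditd is installed and running:

```bash
# Watch /var/log for attribute (permission/ownership) changes; the key
# "varlog-perms" is an arbitrary label used to look the events up later.
auditctl -w /var/log -p a -k varlog-perms

# After the permissions flip again, show which process and user did it:
ausearch -k varlog-perms -i | tail -n 40
```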
						
					
06-18-2024 01:25 AM · 1 Kudo

Based on the information provided, you are able to log in to NiFi successfully but encounter an error when accessing the NiFi UI. The error message indicates an SSLPeerUnverifiedException related to the hostname and subjectAltNames.

This error usually occurs when there is a certificate validation issue: the certificate NiFi is using may not be recognized or trusted by your system, causing the SSL connection to fail.

To resolve this issue, try the following steps (a quick way to inspect the certificate is sketched after this list):

1. Check the certificate: Verify the certificate NiFi is using and ensure it is correctly configured. Make sure it has subjectAltName entries for the hostname you use to access the NiFi UI.
2. Trust the certificate: If the certificate is valid but not trusted by your system, add it to the trust store of your Java installation or the certificate store of your operating system so that your system can establish a secure SSL connection.
3. Check the network configuration: Ensure no network or firewall issue is blocking the SSL connection between your client and the NiFi server, and confirm that the correct ports (usually 8080 or 8443) are open and reachable.
4. Verify the NiFi configuration: Double-check the NiFi configuration files, especially nifi.properties, to confirm the SSL configuration, paying attention to the keystore, truststore, and SSL/TLS protocol properties.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
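A minimal sketch for step 1, assuming the NiFi UI is served at nifi-host:8443 (substitute your own hostname and port):

```bash
# Print the Subject Alternative Names the server's certificate presents;
# the hostname in your NiFi URL must appear among these entries.
openssl s_client -connect nifi-host:8443 -servername nifi-host </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```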
						
					
06-18-2024 01:22 AM · 1 Kudo

To troubleshoot Ranger policies not getting synced, you can check the following log files:

1. ranger_admin.log: Contains the logs for the Ranger Admin service. It can be found on the Ranger Admin node at `/var/log/ranger/ranger-admin`.
2. ranger_admin_audit.log: Contains the logs for auditing actions performed by Ranger Admin. It can be found at the same location as ranger_admin.log.
3. hdfs.log: Contains the logs for HDFS operations. It can be found in the Hadoop log folder, usually located at `/var/log/hadoop/hdfs` or `/var/log/hadoop-hdfs`.
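A quick, illustrative way to scan those locations for recent sync errors; the paths follow the post, while the search terms and exact filenames are assumptions that vary by version:

```bash
# Look for policy-sync related errors in the Ranger Admin log
grep -iE 'policy|sync|error' /var/log/ranger/ranger-admin/ranger_admin.log | tail -n 50

# And for Ranger plugin messages in the HDFS logs
grep -i 'ranger' /var/log/hadoop-hdfs/*.log 2>/dev/null | tail -n 50
```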
						
					