Member since: 07-14-2020

Posts: 22
Kudos Received: 0
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2576 | 05-28-2021 03:14 AM |
			
    
	
		
		
06-25-2021 01:14 AM

McAfee also flags launch_container.sh as a vulnerability. This can be ignored, as launching containers is part of the YARN architecture. Please walk through the points below to understand YARN a little better:

1. The client/user submits a job request.
2. The Resource Manager checks the input splits and throws an error if they cannot be computed.
3. On successful computation, the Resource Manager invokes the job submit procedure.
4. It then finds a Node Manager where it can launch the Application Master.
5. The Application Master process checks the input splits and creates the mapper and reducer tasks (task IDs are assigned at this point).
6. Based on its computation, the Application Master requests resources in the form of containers, each specifying the memory and CPU it needs.
7. On receiving the request, the Node Manager uses "launch_container.sh", along with a few other job-related inputs, to launch a container for task execution.

This is expected behavior and should be treated as a false positive.
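The submission flow above can be sketched as a toy model. To be clear, nothing here is a real Hadoop/YARN API; the function and field names are invented purely to mirror the numbered steps:

```python
# Toy model of the YARN job-submission flow -- illustrative only,
# these names are NOT real Hadoop/YARN APIs.

def submit_job(input_splits):
    # Steps 2-3: the Resource Manager validates the input splits and
    # rejects the job if they cannot be computed.
    if not input_splits:
        raise ValueError("cannot compute input splits")

    # Steps 4-5: an Application Master is launched on some Node Manager
    # and creates one task per split; task IDs are assigned at this point.
    tasks = [f"task_{i:04d}" for i, _ in enumerate(input_splits)]

    # Steps 6-7: the AM requests one container (memory + CPU) per task;
    # the Node Manager then runs launch_container.sh to start each
    # container -- the expected, benign behavior the scanner flags.
    return [{"task": t, "memory_mb": 1024, "vcores": 1} for t in tasks]

containers = submit_job(["split_a", "split_b"])
print(len(containers))  # one container per input split
```

The point of the sketch is step 7: every successful task launch goes through launch_container.sh, so its presence is routine rather than suspicious.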
			
    
	
		
		
06-03-2021 02:51 AM

@venkatadach,

Please share the details below:

1. Screenshots of the issue
2. Whether you are able to get any results while running the query
3. Zeppelin logs and Livy interpreter logs

Regards,
Aditya
			
    
	
		
		
06-03-2021 02:30 AM

@fj5000,

Please provide the details below so we can proceed:

1. CM/CDH version
2. The document you are following to implement the FreeIPA KDC setup
3. Also refer to https://www.freeipa.org/page/Troubleshooting/Kerberos and see whether you can follow the suggestions given in that doc

Regards,
Aditya
			
    
	
		
		
06-03-2021 02:23 AM

@swathi_k,

As the error suggests, Cloudera Manager is unable to download the parcel files. Do you have valid credentials? The parcels are behind a paywall now.

Also, please upload the CM logs, the CM agent log from the CM server node, and the operational logs for further troubleshooting.

Regards,
Aditya
			
    
	
		
		
05-28-2021 05:43 PM

@ryu, could you please upload the HS2 and HMS logs from the time you hit the exception below while trying to connect to Hive using SQuirreL:

++++++
Error: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
SQLState: 08S01
ErrorCode: 0
++++++

Did you find anything suspicious on the SQuirreL log end? Also, let us know whether the issue is intermittent.

Regards,
Aditya
			
    
	
		
		
05-28-2021 05:28 PM

@ryu, as a general rule of thumb, Cloudera recommends determining the total number of HS2 servers on a cluster by dividing the expected maximum number of concurrent users by 40. For example, if 400 concurrent users are expected, 10 HS2 instances should be available to support them.

Also, here are some HS2 tuning best practices you can review: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/admin_hive_tuning.html#hs2_perf_best_practices

Regards,
Aditya
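The sizing rule above is just a ceiling division. A quick sketch (the function name is mine for illustration, not any Cloudera API):

```python
import math

def hs2_instance_count(max_concurrent_users, users_per_instance=40):
    """Cloudera's rule of thumb: one HiveServer2 instance per 40 concurrent users."""
    return math.ceil(max_concurrent_users / users_per_instance)

print(hs2_instance_count(400))  # 10, matching the example above
```

Ceiling division (rather than plain integer division) matters at the boundary: 401 expected users would call for 11 instances, not 10.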
			
    
	
		
		
05-28-2021 03:32 AM

@sweeny_here, could you please provide the details below:

1. Complete cloudera-scm-server logs
2. The steps/documentation followed to enable DB SSL
3. After enabling SSL on the DB, did you import the DB SSL certificate into the CM truststore?

Regards,
Aditya
			
    
	
		
		
05-28-2021 03:14 AM

@kkhambadkone1, you are unable to find it under Add Service because Kafka Connect is included in CDH 6.3.x but is not supported. Flume and Sqoop are proven solutions for batch and real-time data loading that complement Kafka's message broker capability.

To use SMM, you need to download the parcels from the official Cloudera download portal, and you must be a CSM customer to access those downloads.

Having said that, please find the steps below.

Assumption: you already have CDH 6.x and Kerberos enabled.

1. Install a database. In this case, we are using MySQL:
https://docs.cloudera.com/csp/2.0.1/deployment/topics/csp-installing_mysql.html
2. Configure the database for Schema Registry and SMM:
https://docs.cloudera.com/csp/2.0.1/deployment/topics/csp-configuring-schema-registry-metadata-store...
3. Download the Schema Registry and SMM parcels:
SMM: https://www.cloudera.com/downloads/cdf/csm.html
CSR: https://www.cloudera.com/downloads/cdf/csp.html
4. Install the parcels, adding the services in this order:
   1. Schema Registry
   2. SRM (skip this step if there is no SRM installation)
   3. SMM
https://docs.cloudera.com/csp/2.0.1/deployment/topics/csp-get-parcel-csd.html
5. Distribute and activate the parcels. In Schema Registry, point "Schema Registry storage connector url" to the MySQL hostname, check "Enable Kerberos Authentication", and use the database registry password for "Schema Registry storage connector password".
5.1. For SMM, use:
cm.metrics.host = Cloudera Manager host
cm.metrics.password = Cloudera Manager UI password
cm.metrics.service.name = kafka (default)
Streams Messaging Manager storage connector url = jdbc:mysql://FQDN_MYHSQL:3306/streamsmsgmgr
Streams Messaging Manager storage connector password = the user database password specified
Check "Enable Kerberos Authentication".
6. Add the Kafka service and check "Enable Kerberos Authentication".
7. Configure and access the SMM UI. The property "cm.metrics.service.name" must match the Kafka service name, which is "kafka" by default. Create the streamsmsgmgr principal in the KDC; for example, when using an MIT KDC:
kadmin.local
add_principal streamsmsgmgr
Finally, copy /etc/krb5.conf to your local machine and get a valid Kerberos ticket for the streamsmsgmgr user by running "kinit streamsmsgmgr", using the same password chosen at principal creation time.

Please hit "Accept as Solution" if your queries have been answered.

Regards,
Aditya
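For reference, the SMM settings from step 5.1 can be summarized as a single config sketch. The left-hand names are the Cloudera Manager configuration labels from the steps above; every value here is a placeholder to substitute for your own environment:

```properties
# Placeholder values -- substitute your own hosts and passwords.
cm.metrics.host = cm-host.example.com            # Cloudera Manager host (placeholder)
cm.metrics.password = <CM UI admin password>
cm.metrics.service.name = kafka                  # must match the Kafka service name
Streams Messaging Manager storage connector url = jdbc:mysql://FQDN_MYHSQL:3306/streamsmsgmgr
Streams Messaging Manager storage connector password = <streamsmsgmgr DB password>
```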
			
    
	
		
		
05-28-2021 03:04 AM

@VBo, could you please let us know whether you were able to start the Atlas Metadata Server?

If not, please provide the application.log and the operational logs so we can check further.

Regards,
Aditya
			
    
	
		
		
05-28-2021 02:59 AM

@z_j22, could you please reproduce the issue and provide the NameNode (NN) logs covering that timeline?

Regards,
Aditya