Member since: 08-01-2020
Posts: 14
Kudos Received: 3
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2311 | 02-28-2021 06:18 AM |
| | 2886 | 02-18-2021 01:06 AM |
03-17-2021
10:31 AM
2 Kudos
Hi @pauljoshiva To resolve your problem, you can create a new config group under the HDFS service, add the new DataNodes with 3 partitions to that group, and keep only the three partitions /hdp/hdfs01, /hdp/hdfs02, /hdp/hdfs03 under dfs.datanode.data.dir. By doing so, you will have two config groups with different configurations, such as the DataNode partitions, and you can make further changes as per your requirement. Make sure you add those 3 partitions only under the new config group; see the sketch below. Please accept this answer if it helped you resolve your issue.
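As a minimal sketch (not an exact Ambari walkthrough), the new config group's property value and a quick mount check would look like this; the mount points are the ones from your question:

```bash
# In the new config group (Ambari > HDFS > Configs, with that group selected),
# dfs.datanode.data.dir would list only the three partitions, comma-separated:
#   dfs.datanode.data.dir=/hdp/hdfs01,/hdp/hdfs02,/hdp/hdfs03
# Before adding the hosts to the group, confirm the mounts exist on the new DataNodes:
df -h /hdp/hdfs01 /hdp/hdfs02 /hdp/hdfs03
```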
02-28-2021
06:18 AM
Hello @ryu The command that you are using is incorrect, or maybe it is a typo: there is an 's' missing after 'log'. Use the command below to fetch the logs: yarn logs -applicationId <application id of the job> Please accept this answer if you find it useful in resolving your issue. Regards, Amir Mirza
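For example (the application ID below is just a placeholder, and this assumes YARN log aggregation is enabled):

```bash
# Fetch the aggregated logs for one application and save them to a file.
# Replace the placeholder ID with the real one from the ResourceManager UI
# or from `yarn application -list`.
yarn logs -applicationId application_1613500000000_0001 > application_logs.txt
```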
02-18-2021
01:06 AM
I was hasty in drawing a conclusion from the portal before getting the result. Contact certification@cloudera.com for any queries related to certifications. I cleared the exam and received my certificate and badge.
02-14-2021
08:31 PM
@alam_shakeen Right. I have sent an email to certifications@cloudera.com as well, but there has been no reply yet. Please update this thread if you get any update from Cloudera on the exam eligibility reset.
02-12-2021
04:21 AM
Hi @GangWar, thank you for the response. @Dgati could you please help me with this?
02-11-2021
11:14 AM
1 Kudo
Hello, I had my CCA122 exam, during which I was not able to solve two questions because the repo was not accessible. I tried manually and even from the browser, and I was getting "not found" and 403 errors for the HDP and Ambari repos. The proctor was informed of this and noted it. As per the Cloudera exam policy, if any technical issue is encountered my exam should be reset, but they have marked me as failed. What can I do in such a situation? As per my understanding, the repo should be managed by Cloudera during the exam.
Labels:
- Certification
02-11-2021
04:50 AM
Hi @Magudeswaran The Ambari server uses the appropriate database connector jar, which is located under /usr/share/java/*<databasename>*.jar. The same connector jar is used to connect to any external or internal database, and it needs to be registered using the ambari-server setup command with the JDBC options; refer to the documentation for the exact syntax. The issue could be due to the connector jar that Ambari is configured to use. For example, if Ambari is configured to use Postgres but you are trying to connect to MariaDB, and the proper jar is not configured, it won't connect and you will face this issue. Let me know if this resolves your issue; otherwise please share a screenshot and the error logs so we can check further.
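A hedged sketch of registering the connector, assuming a MySQL/MariaDB connector jar under /usr/share/java (adjust the path and --jdbc-db value to your database):

```bash
# Register the JDBC driver with Ambari; the jar path below is an assumed location.
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
# Restart Ambari so it picks up the driver:
ambari-server restart
```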
02-11-2021
04:43 AM
Hi @Aco Yes, you can create it manually. Check the ZooKeeper documentation on how to create those directories (znodes). This usually happens due to insufficient permissions. You need to create the znodes from the ZooKeeper CLI and set the appropriate permissions; the rest of the data and file creation will be taken care of by ZooKeeper. A rough sketch is below, and I will see if I can find the exact steps to share with you.
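As a rough sketch, assuming an HDP-style install path and a placeholder znode name (adjust the path, znode, and ACL to whatever your service actually expects):

```bash
# Open the ZooKeeper CLI (binary path is the usual HDP location):
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zk-host>:2181

# Inside the CLI: create the znode and set permissions on it.
#   create /example-znode ""
#   setAcl /example-znode world:anyone:cdrwa   # open ACL; tighten this on secure clusters
#   getAcl /example-znode                      # verify the ACL took effect
```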
02-09-2021
01:38 AM
Hi, If /atsv2-hbase-secure/meta-region-server is not getting created on its own, you can create it manually and set the appropriate permissions/ACLs on it as per your configuration. Destroy ats-hbase and then recreate it manually, then restart the Timeline Service V2 Reader and the ResourceManager (a rough outline is below). ats-hbase in system mode tends to give issues; can you try running it in embedded mode or using an external HBase? Let me know if that works. I can share the steps if you want to create ats-hbase manually or use an external HBase.
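A hedged outline of that sequence, assuming HDP 3.x defaults (run the destroy as the yarn-ats user; the znode name and paths may differ in your cluster):

```bash
# 1. Destroy the existing ats-hbase YARN service:
yarn app -destroy ats-hbase

# 2. Remove the stale znode if it is still present (inside zkCli.sh):
#      rmr /atsv2-hbase-secure

# 3. Restart Timeline Service V2 Reader and ResourceManager from Ambari so
#    ats-hbase is recreated with fresh znodes and ACLs.
```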
02-09-2021
01:15 AM
Hi @Koffi Can you share gateway.log and the audit log from the timeframe when you are accessing the NameNode UI? If the NameNode UI was accessible earlier, can you check whether a failover happened around the time it became inaccessible? Please also share the advanced topology file so we can check your configuration.
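For reference, a hedged sketch of where those files usually live on an HDP Knox node (your directories and file names may differ):

```bash
# Gateway and audit logs, typical HDP default locations:
tail -n 200 /var/log/knox/gateway.log
tail -n 200 /var/log/knox/gateway-audit.log
# Deployed topologies, including the advanced topology:
ls /usr/hdp/current/knox-server/conf/topologies/
```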