Member since: 08-01-2020
Posts: 14
Kudos Received: 3
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1294 | 02-28-2021 06:18 AM |
| | 1308 | 02-18-2021 01:06 AM |
03-17-2021
10:31 AM
2 Kudos
Hi @pauljoshiva To resolve your problem, you can create a new config group under the HDFS service, add the new DataNodes with 3 partitions to that group, and keep only those 3 partitions (/hdp/hdfs01, /hdp/hdfs02, /hdp/hdfs03) under dfs.datanode.data.dir. That way you will have two config groups with different configurations, such as the DataNode partitions, and you can make further changes as per your requirements. Make sure you add those 3 partitions only under the new config group. Please accept this answer if it helped you resolve your issue.
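For illustration, a minimal sketch of how this looks on the new hosts; the mount points are the ones from your question and the config group name is just an example:

```bash
# In Ambari -> HDFS -> Configs, create a config group (e.g. "new-datanodes"), move the new
# hosts into it, and override dfs.datanode.data.dir for that group only, for example:
#   /hdp/hdfs01,/hdp/hdfs02,/hdp/hdfs03
# After restarting the affected DataNodes, confirm the effective value on one of the new hosts:
hdfs getconf -confKey dfs.datanode.data.dir
```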
02-28-2021
06:18 AM
Hello @ryu The command you are using is incorrect, or maybe a typo: an 's' is missing after "log". Use the command below to fetch the logs: yarn logs -applicationId <application id of the job> Please accept this answer if you find it useful for resolving your issue. Regards, Amir Mirza
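For reference, a quick usage sketch (the application ID below is just a placeholder):

```bash
# Fetch the aggregated logs of a finished YARN application
yarn logs -applicationId application_1613467200000_0042
# Optionally redirect them to a file for easier searching
yarn logs -applicationId application_1613467200000_0042 > app_0042.log
```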
02-18-2021
01:06 AM
I was hasty and jumped to a conclusion from the portal before the result was out. Contact certification@cloudera.com for any queries related to certifications. I cleared the exam and received my certificate and badge.
02-14-2021
08:31 PM
@alam_shakeen Right. I have sent an email to certifications@cloudera.com as well but have not received a reply yet. Please update this thread if you hear anything from Cloudera about the exam eligibility reset.
02-12-2021
04:21 AM
Hi @GangWar, thank you for the response. @Dgati could you please help me with this?
02-11-2021
11:14 AM
1 Kudo
Hello, I took my CCA122 exam, during which I was not able to solve two questions because the repo was not accessible. I tried manually and even from the browser, and I was getting "not found" and 403 errors for the HDP and Ambari repos. This was reported to and noted by the proctor. As per Cloudera's exam policy, if any technical issue is encountered my exam should be reset, but they have marked me as failed. What can I do in this situation? As per my understanding, the repo should be managed by Cloudera during the exam.
Labels:
- Certification
02-11-2021
04:50 AM
Hi @Magudeswaran Ambari server uses the appropriate database connector (JDBC driver) jar, which is located under /usr/share/java/. The same connector jar is used to connect to any external or internal database, and it needs to be registered using the ambari-server setup --jdbc-* options; refer to the documentation for the exact command for your version. The issue could be with the connector jar that Ambari is configured to use. For example, if Ambari is configured for Postgres but you are trying to connect to MariaDB, it does not have the proper jar configured, so it won't connect and you will face this issue. Let me know if this resolves your issue; otherwise please share a screenshot and error logs to check further.
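As a rough sketch, assuming a MySQL/MariaDB backend and a connector jar already present on the Ambari host (the jar path and database type are examples, adjust them to your environment):

```bash
# Register the JDBC driver with Ambari, then restart the server
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
ambari-server restart
```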
02-11-2021
04:43 AM
Hi @Aco Yes, you can create it manually. Check the ZooKeeper documentation on how to create those directories (znodes). This usually happens due to insufficient permissions. You need to create the znodes from the ZooKeeper CLI and set the appropriate permissions; the rest of the data and file creation will be taken care of by ZooKeeper. I will see if I can find the exact steps to share with you.
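A minimal sketch of doing this from the ZooKeeper CLI; the znode path and ACL are placeholders, and the client path assumes an HDP layout:

```bash
# Open the ZooKeeper shell (adjust host/port and install path to your cluster)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk_host:2181
# Inside the shell: create the missing znode, set an appropriate ACL, then verify
create /example-znode ""
setAcl /example-znode world:anyone:cdrwa
getAcl /example-znode
```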
02-09-2021
01:38 AM
Hi, If /atsv2-hbase-secure/meta-region-server is not getting created on its own, you can create it manually and set the appropriate permissions/ACL on it as per your configuration. Destroy ats-hbase and then recreate it manually, then restart Timeline Reader V2 and the ResourceManager. ats-hbase in system-service mode often gives issues; can you try running it in embedded mode or using an external HBase? Let me know if that works. I can share steps if you want to create ats-hbase manually or use an external HBase.
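A rough outline of the destroy/recreate cycle, assuming HDP 3.x with ats-hbase running as a YARN system service; treat it as a sketch and verify against the documentation for your version:

```bash
# As the yarn-ats user (kinit first on a Kerberized cluster), destroy the ats-hbase service
su - yarn-ats
yarn app -destroy ats-hbase
# Remove the stale znode so it can be recreated cleanly (path taken from this thread)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk_host:2181 rmr /atsv2-hbase-secure
# Then restart the ResourceManager and Timeline Service V2.0 Reader from Ambari;
# ats-hbase is launched again on the next start
```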
02-09-2021
01:15 AM
Hi @Koffi Can you share gateway.log and audit.log for the timeframe when you were accessing the NameNode UI? If the NameNode UI was accessible earlier, can you check whether a failover happened around the time it stopped being accessible? Please also share the advanced topology file so we can check your configuration.
02-09-2021
01:10 AM
Can you check the user ID of the Ranger usersync user on your Linux server? It seems the user ID is less than 500. If so, ask your sysadmin to change the user IDs of all Hadoop users to values above 500. If you are still facing the issue, do share the Ranger usersync and admin logs. Have you enabled plugins for the services under Ambari -> Ranger -> Ranger Plugins?
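A quick check, as a sketch; the user names are examples, and the 500 threshold corresponds to the usersync minimum-UID property (ranger.usersync.unix.minUserId):

```bash
# Check the numeric UID of the usersync user and of a Hadoop service user on the usersync host
id -u ranger
id -u hdfs
# Users whose UID is below the configured minimum are skipped by Ranger usersync
```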
02-05-2021
02:47 AM
Try increasing the heap size for the NodeManager from YARN -> Configs and see if that resolves your issue. If not, you will probably have to do performance or query tuning. Please accept this answer if it helps you resolve your issue.
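The setting itself lives in Ambari under YARN -> Configs (NodeManager Java heap size, typically in yarn-env); as a small sketch for confirming what a running NodeManager was actually started with (the grep pattern is just an example):

```bash
# Run on a worker node: show the -Xmx value the NodeManager JVM is using
ps -ef | grep -i '[n]odemanager' | grep -o 'Xmx[^ ]*'
```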
02-05-2021
02:29 AM
HTTP Error 403 suggests an authorization error. When you run ambari-server setup-ldap, everything gets stored in the database your Ambari server was configured with during installation (Postgres or whichever one you used). I would suggest checking the LDAP configuration you passed when integrating Ambari with LDAP. As per the error above, there is an authorization issue with the LDAP integration; it is not a database problem but an LDAP one. The LDAP configuration or the bind user are the most likely causes. Please accept this answer if it helps you resolve your issue, and share it with the community.
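As a sketch of how to re-check this from the command line (the bind DN, password, and search bases you enter are the values to verify):

```bash
# Re-run the LDAP wizard and re-enter the bind DN / password / search bases carefully
ambari-server setup-ldap
ambari-server restart
# Then try a user/group sync; authorization problems with the bind user usually surface here
ambari-server sync-ldap --all
```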
02-05-2021
02:13 AM
You can't access the NameNode or any other service UI from the Knox portal by default. However, a reverse proxy can be configured to reach the NameNode UI via Knox. You will have to edit the advanced topology file as per your cluster configuration in Ambari -> Knox -> Configs -> Advanced topology. Refer to the article below for the correct configuration; your HDP and Ambari versions might differ but the steps remain the same, just make sure you use the appropriate version in the file paths according to your cluster configuration. https://community.cloudera.com/t5/Community-Articles/Configure-Knox-to-access-HDFS-UI/ta-p/249388 Please accept this answer if it helps you resolve your query.
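As a rough check once the topology change is in place (the topology name, port, credentials, and proxied path below are assumptions; confirm the exact URL against the linked article):

```bash
# After restarting Knox, the NameNode UI should be reachable through the gateway, roughly:
curl -k -u myuser:mypassword "https://knox_host:8443/gateway/default/hdfs/"
```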