Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2535 | 04-27-2020 03:48 AM |
| | 5003 | 04-26-2020 06:18 PM |
| | 4098 | 04-26-2020 06:05 PM |
| | 3304 | 04-13-2020 08:53 PM |
| | 5046 | 03-31-2020 02:10 AM |
03-31-2017
02:01 PM
@Rishabh Oberoi Which specific property are you trying to set when you see this error? You will not be able to set some properties at runtime unless they are included in the whitelist; until that is done and the Hive service (including HiveServer2) has been restarted, the change will not take effect. Have you set either of the following properties?
hive.security.authorization.sqlstd.confwhitelist
OR
hive.security.authorization.sqlstd.confwhitelist.append
Similar: https://community.hortonworks.com/content/supportkb/48746/changing-hive-properties-in-beeline-gives-error-it.html Article: https://community.hortonworks.com/articles/60309/working-with-variables-in-hive-hive-shell-and-beel.html [Search for "(state=42000,code=1)" in this article]
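For example, the append property can be added under Custom hiveserver2-site in Ambari. The value is a pipe-separated list of regexes; the pattern below is only an illustration, not a recommended whitelist:

```xml
<!-- Custom hiveserver2-site (example pattern only) -->
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <!-- Pipe-separated regexes; literal dots must be escaped. -->
  <value>hive\.exec\.parallel|mapred\.job\.name</value>
</property>
```

After saving, restart HiveServer2 for the whitelist change to take effect.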
03-31-2017
01:51 PM
1 Kudo
@Kumar Veerappan Ambari provides some built-in alerts to find out the weekly/daily growth in HDFS usage.
Ambari UI --> Alerts (Tab) --> "Alert Definition Filter" --> Search for "HDFS Storage Capacity Usage"
This service-level alert is triggered if the storage capacity usage deviation has grown beyond the specified threshold within a given period. The alert monitors daily and weekly periods. Please see: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-user-guide/content/hdfs_service_alerts.html However, if you want this data for 6 months then you might have to write your own custom alert script. Some time back I wrote a basic example of how to create a custom Ambari alert:
https://community.hortonworks.com/articles/38149/how-to-create-and-register-custom-ambari-alerts.html
Grafana basically fetches data from AMS (Ambari Metrics Collector) using APIs, so the data needs to be available in AMS first.
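A custom Ambari alert script, as described in the article above, is a Python module that exposes an execute() function returning a state and a message list. A minimal sketch follows; the 80/90 percent thresholds and the placeholder metric are made-up example values, not Ambari defaults:

```python
# Minimal custom Ambari alert script (sketch).
# Ambari calls execute() periodically and expects a (STATE, [message]) tuple.
# The thresholds and the fake metric below are illustrative only.

OK = 'OK'
WARNING = 'WARNING'
CRITICAL = 'CRITICAL'

def get_tokens():
    # Configuration properties this script wants from Ambari (none here).
    return ()

def execute(configurations={}, parameters=[], host_name=None):
    # In a real script you would fetch a live metric here, e.g. HDFS
    # usage from the NameNode JMX endpoint. We use a placeholder value.
    used_percent = 75.0  # placeholder metric

    if used_percent >= 90.0:
        return (CRITICAL, ['HDFS usage at {0}%'.format(used_percent)])
    if used_percent >= 80.0:
        return (WARNING, ['HDFS usage at {0}%'.format(used_percent)])
    return (OK, ['HDFS usage at {0}%'.format(used_percent)])
```

The script is then registered with Ambari via an alert definition JSON, as shown in the linked article.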
03-31-2017
04:45 AM
@Michael Dennis "MD" Uanang
It's working now; the issue is fixed. Please try again.
03-31-2017
04:28 AM
@Michael Dennis "MD" Uanang
I see there is some issue. Let me check with the team to get it fixed ASAP.
03-30-2017
07:43 PM
@khadeer mhmd Additional references: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0-Win/bk_HDP_Install_Win/content/ref-8896fe3b-8788-4a88-b7cc-d77d6f2481fc.1.html
log4j.properties: Use the log4j.properties file to modify the log purging intervals of the HDFS logs. This file defines logging for all the Hadoop services. It includes information about the appenders used for logging and their layouts. For more details, see the log4j documentation.
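For example, the size-based purging knobs in log4j.properties look like this (the file size and backup count shown are illustrative values, not HDP defaults):

```properties
# Roll the log when it reaches 256MB and keep at most 10 old files
# (illustrative values - tune MaxFileSize/MaxBackupIndex to your needs).
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```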
03-30-2017
07:39 PM
@khadeer mhmd References: https://community.hortonworks.com/articles/8882/how-to-control-size-of-log-files-for-various-hdp-c.html https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html You can get the logging config from the Ambari UI:
Log in to Ambari, go to the Configs tab of the HDFS component, and filter for "Advanced hdfs-log4j". In an Ambari-managed cluster the logging config should be changed via Ambari only; otherwise Ambari will overwrite manual changes made to /etc/hadoop/conf/log4j.properties on component restart.
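The second article above uses the log4j-extras companion jar to rotate and gzip logs by date; the gist is a TimeBasedRollingPolicy. A sketch (the appender name and pattern are examples, adjust for your component):

```properties
# Requires the apache-log4j-extras jar on the classpath.
# Daily rotation; the .gz suffix in FileNamePattern makes log4j
# compress each rolled file (example pattern - adjust per component).
log4j.appender.DRFA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFA.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hadoop.log.dir}/${hadoop.log.file}.%d{yyyy-MM-dd}.gz
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```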
03-30-2017
10:01 AM
@Zhao Chaofeng
Please log in to the Ranger UI http://RANGER_HOST:6080/index.html#!/policymanager/resource and check the policies defined for Kafka (especially for topics) to make sure you are not restricting access. Purely at the Kafka level you can check the permissions using the following utility: # bin/kafka-acls.sh --list --topic <TOPIC_NAME>
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.3/bk_secure-kafka-ambari/content/kafka-acl-examples.html
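If Ranger is not the blocker, ACLs can also be inspected and granted with the Kafka CLI. A sketch, following the linked HDP examples; the topic name, principal, and ZooKeeper address are placeholders and these commands need a live cluster:

```shell
# List ACLs for one topic (placeholder names - adjust to your cluster).
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk-host:2181 \
    --list --topic my-topic

# Grant a user read access on the same topic.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk-host:2181 \
    --add --allow-principal User:alice --operation Read --topic my-topic
```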
03-30-2017
09:30 AM
1 Kudo
@Michael Dennis "MD" Uanang The following error indicates that some of your DB tables are corrupted: org.postgresql.util.PSQLException: ERROR: could not read block 0 of relation base/16384/16567: read only 0 of 8192 bytes
The easiest way to verify this is to run a SELECT query on the same table from the DB. Example: # psql -U ambari ambari
Password for user ambari: bigdata
ambari=> SELECT * FROM clusterstate;
A similar discussion happened in this thread: https://community.hortonworks.com/questions/84010/ambari-server-fails-to-start-after-database-consis.html Alternatively, if you have an earlier DB dump backup, you can restore from it.
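If the table does turn out to be corrupted and you have a dump, the restore is roughly as follows (the database and user names follow the defaults in this thread; the dump path is a placeholder and the commands need the real DB host):

```shell
# Stop ambari-server first so nothing writes to the DB during restore.
ambari-server stop

# Restore the earlier dump into the ambari database
# (assumes a plain-SQL dump taken with pg_dump; path is a placeholder).
psql -U ambari -d ambari -f /path/to/ambari_backup.sql

ambari-server start
```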
03-30-2017
07:11 AM
@Aditya Kumar Roy Looks like you have not applied the permissions recursively. Have you used the "-R" option with "chmod"? Example: # chmod -R 755 /root/java/
For a quick verification you can list the file to see the current permissions: # ls -l /root/java/jdk1.8.0_121/bin/java
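To illustrate the recursive behaviour end to end, here is a self-contained sketch using a throwaway directory (the paths below are demo paths, not your actual JDK location):

```shell
# Create a throwaway directory tree that mimics a JDK layout.
mkdir -p /tmp/java_demo/jdk/bin
touch /tmp/java_demo/jdk/bin/java

# Recursive chmod: every directory and file under /tmp/java_demo gets 755.
chmod -R 755 /tmp/java_demo

# Verify the leaf file now carries rwxr-xr-x.
ls -l /tmp/java_demo/jdk/bin/java
```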
03-29-2017
08:42 AM
1 Kudo
@Rodion Gork
- When you deleted the large data directory from HDFS, after how much time did you run the "hdfs dfs -du /user/root" command? (Immediately, or a few seconds/minutes later?)
- Also, what does the following command show? # su - hdfs -c "hdfs dfsadmin -report"
- Is it still showing the same 95% usage (even after a long time)?
- Although you are using "-skipTrash", by any chance have you altered either of the following parameter values?
---> Deletion interval specifies how long (in minutes) a checkpoint is kept before it is deleted. It is the value of fs.trash.interval. The NameNode runs a thread to periodically remove expired checkpoints from the file system.
---> Emptier interval specifies how long (in minutes) the NameNode waits before running a thread to manage checkpoints. The NameNode deletes checkpoints that are older than fs.trash.interval and creates a new checkpoint from /user/${username}/.Trash/Current. This frequency is determined by the value of fs.trash.checkpoint.interval, and it must not be greater than the deletion interval. This ensures that in an emptier window, there are one or more checkpoints in the trash.
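Both intervals live in core-site.xml; a typical pairing looks like this (the values are examples, in minutes, not your cluster's settings):

```xml
<!-- core-site.xml (example values, in minutes) -->
<property>
  <name>fs.trash.interval</name>
  <value>360</value> <!-- checkpoints older than 6 hours are deleted -->
</property>
<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>60</value> <!-- emptier runs hourly; must not exceed fs.trash.interval -->
</property>
```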