Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Views | Posted |
|---|---|
| 929 | 06-04-2025 11:36 PM |
| 1535 | 03-23-2025 05:23 AM |
| 761 | 03-17-2025 10:18 AM |
| 2747 | 03-05-2025 01:34 PM |
| 1811 | 03-03-2025 01:09 PM |
02-07-2019
06:36 PM
@Richard Wheeler If you left the sandbox idling, then for sure it must be the logs generated in /var/log/{component}/. The HDP components continually write logs with the component statuses, and on the sandbox that directory is mounted on /. You can find the biggest consumers with:

# du -a /var/log/ | sort -n -r | head -n 20

Sample output:

3363560 /var/log/
1966344 /var/log/kafka
494300 /var/log/ambari-metrics-collector
267092 /var/log/hadoop
265560 /var/log/hadoop/hdfs
171528 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-test.tarta.se.log
159432 /var/log/ambari-agent
98756 /var/log/ambari-infra-solr
81932 /var/log/ambari-metrics-collector/ambari-metrics-collector.log.3
81932 /var/log/ambari-metrics-collector/ambari-metrics-collector.log.2
81932 /var/log/ambari-metrics-collector/ambari-metrics-collector.log.1
81928 /var/log/ambari-metrics-collector/ambari-metrics-collector.log.4
81924 /var/log/ambari-metrics-collector/ambari-metrics-collector.log.5
69956 /var/log/oozie
49116 /var/log/hbase
40056 /var/log/ranger
39136 /var/log/ranger/admin
36420 /var/log/hive
36232 /var/log/hadoop-yarn
36176 /var/log/hbase/hbase-hbase-regionserver-test.tarta.se.log

So you will need to delete the old log files to regain some space. You can also schedule a cleanup script in cron!
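If you want to automate that, here is a minimal sketch of a cron-driven cleanup; the script name and the 7-day retention are assumptions, and the pattern only targets rotated files (*.log.N, *.out.N) so the logs currently being written to are left alone. Adjust the path, age and pattern to your needs.

#!/bin/bash
# Hypothetical cleanup script, e.g. saved as /etc/cron.daily/hdp-log-cleanup
# Delete rotated HDP component logs under /var/log that are older than 7 days
find /var/log -type f \( -name "*.log.*" -o -name "*.out.*" \) -mtime +7 -delete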
02-07-2019
06:06 PM
1 Kudo
@Michael Bronson Apache Kafka uses Zookeeper to elect a controller. Zookeeper tracks the status of the Kafka cluster nodes and also serves many other purposes, such as leader detection, configuration management, synchronization, and detecting when a node joins or leaves the cluster; it maintains cluster membership by storing configuration, including the list of topics in the cluster.

To remain part of the Kafka cluster, each broker has to send a keep-alive to Zookeeper at regular intervals, which every Zookeeper client does by default. If a broker does not heartbeat Zookeeper within zookeeper.session.timeout.ms milliseconds (6000 by default), Zookeeper will assume the broker is dead. This triggers leader election for all partitions that had a leader on that broker. If that broker happened to be the controller, you will also see a new controller elected.

In a Kafka cluster, service discovery helps the brokers find each other and know who is in the cluster, and consensus helps the brokers elect a cluster controller, know what partitions exist, where they are, and whether they are the leader of a partition or a follower that needs to replicate.

The controller is not too complex: it is a normal broker that simply has an additional responsibility. That means it still leads partitions, serves reads and writes, and replicates data. The most important part of that additional responsibility is keeping track of nodes in the cluster and appropriately handling nodes that leave, join or fail. This includes rebalancing partitions and assigning new partition leaders. There is always exactly one controller broker in a Kafka cluster. HTH
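If you want to see which broker currently holds the controller role, one quick check (a sketch, assuming the Kafka client scripts live under /usr/hdp/current/kafka-broker and Zookeeper is listening on localhost:2181) is to read the /controller znode, whose JSON payload contains the broker id of the active controller:

# Show the current Kafka controller as recorded in Zookeeper
/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh localhost:2181 get /controller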
02-07-2019
03:42 PM
@Shraddha Singh Where machine is the FQDN and {rangerkms_password} is the rangerkms user password. The FQDN is the output of:

$ hostname -f

Re-run the below commands:

grant all privileges on rangerkms.* to 'rangerkms'@'machine' identified by '{rangerkms_password}';
grant all privileges on rangerkms.* to 'rangerkms'@'machine' with grant option;

And let me know.
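To confirm the grants actually took effect, one quick check (a sketch, assuming you can log in to MySQL as root; substitute your real FQDN for <machine>) is:

# List the privileges now held by the rangerkms user
mysql -u root -p -e "SHOW GRANTS FOR 'rangerkms'@'<machine>';"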
02-07-2019
10:17 AM
@christophe VALMIR Any updates?
02-07-2019
08:05 AM
1 Kudo
@ram sriram The below command sets the replication factor to 1 for everything already in HDFS (recursively from /), with a potential for data loss unless you are running HDP 3.x, which has the new HDFS erasure coding (EC) feature:

$ hdfs dfs -setrep -w 1 -R /

As responded above, changing the replication default only affects new files you will create. After changing the replication factor you won't see any HDFS size change right away; the trash interval, fs.trash.interval (set to 360 minutes in your case), has to be reached first. Once the NameNode metadata has been updated, it is the DataNodes which actually carry out the operation. There can be some delay, but the space is definitely reclaimed. HTH
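To watch the space actually being reclaimed, a couple of read-only checks are shown below (a sketch, assuming you run them as the hdfs user):

# Summarized size of everything under the HDFS root
hdfs dfs -du -s -h /

# Capacity and DFS-used figures per DataNode, as reported by the NameNode
hdfs dfsadmin -report | head -n 30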
02-06-2019
01:29 PM
@Shraddha Singh The /usr/hdp/current directory holds links to /usr/hdp/3.0.x/{hdp_component} in your case; the listing below is from my HDP 2.6.5 cluster. So you should have copied those directories to /usr/hdp/3.0.x/ and then done the tedious work of recreating the symlinks under /usr/hdp/current as seen below, quite a good exercise.

# tree /usr/hdp/current/
/usr/hdp/current/
├── atlas-client -> /usr/hdp/2.6.5.0-292/atlas
├── atlas-server -> /usr/hdp/2.6.5.0-292/atlas
├── falcon-client -> /usr/hdp/2.6.5.0-292/falcon
├── falcon-server -> /usr/hdp/2.6.5.0-292/falcon
├── hadoop-client -> /usr/hdp/2.6.5.0-292/hadoop
├── hadoop-hdfs-client -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-datanode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-journalnode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-namenode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-nfs3 -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-portmap -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-secondarynamenode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-zkfc -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-httpfs -> /usr/hdp/2.6.5.0-292/hadoop-httpfs
├── hadoop-mapreduce-client -> /usr/hdp/2.6.5.0-292/hadoop-mapreduce
├── hadoop-mapreduce-historyserver -> /usr/hdp/2.6.5.0-292/hadoop-mapreduce
├── hadoop-yarn-client -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-nodemanager -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-resourcemanager -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-timelineserver -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hbase-client -> /usr/hdp/2.6.5.0-292/hbase
├── hbase-master -> /usr/hdp/2.6.5.0-292/hbase
├── hbase-regionserver -> /usr/hdp/2.6.5.0-292/hbase
├── hive-client -> /usr/hdp/2.6.5.0-292/hive
├── hive-metastore -> /usr/hdp/2.6.5.0-292/hive
├── hive-server2 -> /usr/hdp/2.6.5.0-292/hive
├── hive-server2-hive2 -> /usr/hdp/2.6.5.0-292/hive2
├── hive-webhcat -> /usr/hdp/2.6.5.0-292/hive-hcatalog
├── kafka-broker -> /usr/hdp/2.6.5.0-292/kafka
├── knox-server -> /usr/hdp/2.6.5.0-292/knox
├── livy2-client -> /usr/hdp/2.6.5.0-292/livy2
├── livy2-server -> /usr/hdp/2.6.5.0-292/livy2
├── livy-client -> /usr/hdp/2.6.5.0-292/livy
├── oozie-client -> /usr/hdp/2.6.5.0-292/oozie
├── oozie-server -> /usr/hdp/2.6.5.0-292/oozie
├── phoenix-client -> /usr/hdp/2.6.5.0-292/phoenix
├── phoenix-server -> /usr/hdp/2.6.5.0-292/phoenix
├── pig-client -> /usr/hdp/2.6.5.0-292/pig
├── ranger-admin -> /usr/hdp/2.6.5.0-292/ranger-admin
├── ranger-tagsync -> /usr/hdp/2.6.5.0-292/ranger-tagsync
├── ranger-usersync -> /usr/hdp/2.6.5.0-292/ranger-usersync
├── shc -> /usr/hdp/2.6.5.0-292/shc
├── slider-client -> /usr/hdp/2.6.5.0-292/slider
├── spark2-client -> /usr/hdp/2.6.5.0-292/spark2
├── spark2-historyserver -> /usr/hdp/2.6.5.0-292/spark2
├── spark2-thriftserver -> /usr/hdp/2.6.5.0-292/spark2
├── spark-client -> /usr/hdp/2.6.5.0-292/spark
├── spark-historyserver -> /usr/hdp/2.6.5.0-292/spark
├── spark_llap -> /usr/hdp/2.6.5.0-292/spark_llap
├── spark-thriftserver -> /usr/hdp/2.6.5.0-292/spark
├── sqoop-client -> /usr/hdp/2.6.5.0-292/sqoop
├── sqoop-server -> /usr/hdp/2.6.5.0-292/sqoop
├── storm-slider-client -> /usr/hdp/2.6.5.0-292/storm-slider-client
├── tez-client -> /usr/hdp/2.6.5.0-292/tez
├── zeppelin-server -> /usr/hdp/2.6.5.0-292/zeppelin
├── zookeeper-client -> /usr/hdp/2.6.5.0-292/zookeeper
└── zookeeper-server -> /usr/hdp/2.6.5.0-292/zookeeper
55 directories, 2 files

If this is a test cluster, and most probably a single node, I would advise you, if possible, to re-install it completely and have a clean environment. HTH
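If you do go down the symlink route instead of reinstalling, here is a minimal sketch of recreating a few of the links by hand; the version directory 3.0.1.0-187 is just an assumption, so substitute whatever ls /usr/hdp/ actually shows on your node, and repeat the pattern for the remaining components in the tree above.

# Hypothetical HDP 3 build directory; replace with the real one under /usr/hdp/
HDP_DIR=/usr/hdp/3.0.1.0-187

# Recreate a handful of the /usr/hdp/current links (same pattern as the listing above)
ln -sfn ${HDP_DIR}/hadoop       /usr/hdp/current/hadoop-client
ln -sfn ${HDP_DIR}/hadoop-hdfs  /usr/hdp/current/hadoop-hdfs-namenode
ln -sfn ${HDP_DIR}/hadoop-hdfs  /usr/hdp/current/hadoop-hdfs-datanode
ln -sfn ${HDP_DIR}/zookeeper    /usr/hdp/current/zookeeper-server

If the hdp-select utility is still intact on the node, running hdp-select set all <version> may rebuild all of these links for you in one go, which is far less error-prone than doing it by hand.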
02-06-2019
01:03 PM
@Chris Jenkins My pleasure, glad I made your day, and welcome to the Big Data space. Having gone through all of this will make you better technically; you have now seen the different facets of resolving a problem. Happy Hadooping!
02-05-2019
08:14 PM
@Ruslan Fialkovsky There is a patch attached; did you update your code?
02-05-2019
08:11 PM
@christophe VALMIR After running the --mpack install you usually need to restart the Ambari server. In the document you quoted, at which step are you stuck?
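For reference, the two steps usually look like this (a sketch, run as root on the Ambari host; the mpack path is a placeholder for your actual management pack archive):

# Install the management pack, then restart Ambari so it is picked up
ambari-server install-mpack --mpack=<path-to-mpack.tar.gz> --verbose
ambari-server restart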
02-05-2019
04:48 PM
@Al Kirmer Unfortunately it's a hard and fast rule when it comes to paid support: HWX won't take responsibility for, or guarantee the good functioning of, your cluster or its components otherwise. HWX does extensive testing before adding a third-party software/tool to its support list. I am not an HWX employee, but I bet you will get the same response. HWX could be in the process of certifying those versions, but after the merger with Cloudera I am pretty sure there won't be any major releases until the new Cloudera Data Platform (CDP), the new name for the next product, comes out sometime in or after 2020. Currently there are teams working on the integration and on choosing the best products from both worlds, HWX & CDH. HTH