Member since: 10-01-2016
Posts: 156
Kudos Received: 8
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 8240 | 04-04-2019 09:41 PM |
| | 3157 | 06-04-2018 08:34 AM |
| | 1475 | 05-23-2018 01:03 PM |
| | 2988 | 05-21-2018 07:12 AM |
| | 1834 | 05-08-2018 10:48 AM |
03-25-2020
06:28 AM
Thank you very much. This is the answer that satisfies me. Documentation is supposed to make things clear and simple, not complicated.
03-23-2020
01:39 AM
I understand, thank you, but I still don't see why we add the same property (dfs.datanode.balance.max.concurrent.moves) in two different sections, DataNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml and Balancer Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml, when the same property (dfs.datanode.balance.max.concurrent.moves) already exists in Cloudera Manager. Isn't CM supposed to refuse this addition?
03-22-2020
12:15 PM
Yes, you are right. I had not realized that. But if dfs.datanode.ec.reconstruction.xmits.weight is already in hdfs-site.xml, why does the Cloudera document make us add the same property for the Balancer and the DataNode again? What is the point?
03-22-2020
02:49 AM
I am trying to rebalance HDFS with Cloudera Manager 6.3, following the HDFS Balancer document.
It says to add the same property, dfs.datanode.balance.max.concurrent.moves, into two different sections:
- DataNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml
- Balancer Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml
But before adding the property I searched and saw that dfs.datanode.balance.max.concurrent.moves was already there. Nevertheless, I did what the document says. After adding the properties, Cloudera Manager asked me to restart/redeploy stale configurations. Before restarting, I saw that entirely different properties had been added.
I don't understand: although we seem to be adding the same property, why are different properties added to hdfs-site.xml?
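For reference, the safety-valve entries described above are plain hdfs-site.xml property definitions; a minimal sketch of one such entry, assuming a value of 50 (the value here is only an example, not taken from the document):

```xml
<!-- Example safety-valve snippet for hdfs-site.xml; the value 50 is illustrative -->
<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>50</value>
</property>
```

The same XML fragment would be pasted into both the DataNode and Balancer safety-valve fields, which is why the property appears twice even though the name is identical.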
Labels:
- Cloudera Manager
- HDFS
02-28-2020
05:06 AM
In CDH 6.x I can't find Advanced spark2-metrics-properties in the Spark configuration. Should I create it manually?
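In case it helps anyone with the same question: Spark's metrics system is driven by a metrics.properties file, so a minimal sketch of such a file might look like the following (the console sink chosen here is illustrative, not taken from any CDH document):

```properties
# Illustrative metrics.properties sketch: enable the console sink for all instances
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
*.sink.console.period=10
*.sink.console.unit=seconds
```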
10-28-2019
06:45 PM
Here is a newly imported HDP 2.6.5 Sandbox running spark-shell --master yarn:
[root@sandbox-hdp ~]# spark-shell --master yarn
SPARK_MAJOR_VERSION is set to 2, using Spark2
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://sandbox-hdp.hortonworks.com:4040
Spark context available as 'sc' (master = yarn, app id = application_1572283124735_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0.2.6.5.0-292
      /_/
Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
10-27-2019
11:01 AM
Where did you paste log4j2.xml? I solved the problem by adding the following:
-Dlog4j.configurationFile=/usr/hdp/current/flume-server/conf/log4j2.xml
My whole command is:
/usr/hdp/current/flume-server/bin/flume-ng agent --conf conf --conf-file /home/maria_dev/flume_uygulama_01/flumelogs.conf --name a1 -Dflume.root.logger=INFO,console -Dlog4j.configurationFile=/usr/hdp/current/flume-server/conf/log4j2.xml
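For anyone who needs it, the log4j2.xml referenced by -Dlog4j.configurationFile is a standard Log4j 2 configuration file; a minimal sketch, assuming console-only logging (the level and pattern below are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal illustrative Log4j 2 config: log INFO and above to the console -->
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```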
10-27-2019
05:21 AM
Hi @Shelton No specific steps, actually. I just open the HDP 2.6.5 Sandbox, connect to it via ssh, then run spark-shell --master yarn. Alternatively, I tried to start Spark in Zeppelin and examined the logs; the ERROR was the same. spark-shell opens in local mode with no problem, but I can't start it in yarn mode. However, I managed to start it on a newly imported Sandbox.
10-26-2019
10:42 PM
Unfortunately, after a while, the same problem occurred again.
10-19-2019
04:18 AM
I tried adding kafka-clients-1.0.0.jar into the /usr/hdp/2.6.5.0-292/kafka/libs folder, but it didn't help.