Member since: 08-08-2017
Posts: 9
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1861 | 08-14-2017 12:32 AM
 | 3501 | 08-11-2017 02:03 AM
09-26-2017
02:30 AM
Hi @Bin Ye, you should change this property to fit your setup:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
</property>
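As a side note (my sketch, not required for the fix): dfs.namenode.name.dir accepts a comma-separated list, and the NameNode keeps a full copy of its metadata in each listed directory, so a second mount such as the hypothetical /mnt/backup/hdfs/namenode below adds redundancy:
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- the second path is a hypothetical example mount -->
  <value>/hadoop/hdfs/namenode,/mnt/backup/hdfs/namenode</value>
</property>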
09-05-2017
03:05 AM
HDFS 2.7.3, HDP 2.6.1.0. Hi all, as the title says: I want to enable the HDFS nameservice feature without using the NameNode HA wizard (there is only one node in my test-bed cluster). What should I configure in HDFS, or which guide should I follow?
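For illustration, the kind of non-HA nameservice setup I have in mind would be declared in hdfs-site.xml roughly like this (the nameservice ID mycluster and host nn-host are placeholders, not values from my cluster):
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <!-- non-HA form: keyed by the nameservice ID only, no NameNode ID suffix -->
  <name>dfs.namenode.rpc-address.mycluster</name>
  <value>nn-host:8020</value>
</property>
with fs.defaultFS in core-site.xml then pointed at hdfs://mycluster.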
Labels:
- Apache Hadoop
08-24-2017
08:02 AM
Have you checked your properties in Ambari -> Ranger -> Advanced ranger-ugsync-site? If not, please refer to this page.
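For reference, the properties in that section typically cover the sync source and its connection details; the sketch below uses placeholder values (not taken from the original thread):
# placeholder values -- adjust to your LDAP/AD environment
ranger.usersync.source.impl.class=org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder
ranger.usersync.ldap.url=ldap://ldap-host:389
ranger.usersync.ldap.binddn=cn=admin,dc=example,dc=com
ranger.usersync.ldap.user.searchbase=ou=users,dc=example,dc=com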
08-14-2017
12:32 AM
Is the ambari-agent service still running? If not, try restarting it. [root@hdptest ~]# /etc/init.d/ambari-agent status
Found ambari-agent PID: 63290
ambari-agent running.
Agent PID at: /run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log
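If the agent turns out not to be running (or the PID looks stale), restarting it with the same init script is the usual fix; the commands below are standard, though the output varies by version:
# restart the agent, then recheck its status
[root@hdptest ~]# /etc/init.d/ambari-agent restart
[root@hdptest ~]# /etc/init.d/ambari-agent status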
08-11-2017
02:03 AM
I think you've got the point: dfs.datanode.data.dir in Ambari is a global setting, so it assumes every host has those dirs (/grid/data1, /grid/data2, /grid/data3). In your case you need to create a config group to suit your environment.

There are two ways to handle the existing data under your directories. Before either one, increase the dfs.datanode.balance.bandwidthPerSec value (bytes/sec) in the HDFS settings via the Ambari UI to match your network speed; this will help speed things up.

The safe way is to decommission the DataNodes, reconfigure your group settings, then recommission the nodes one by one: https://community.hortonworks.com/articles/69364/decommission-and-reconfigure-data-node-disks.html The unsafe way is to reconfigure the settings and remove the directories directly, relying on your replication factor, then wait for re-replication to complete by checking that the "Under replicated blocks" value in the output of hdfs dfsadmin -report drops to 0 (see the command below).
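To make that last check concrete (a small sketch, not part of the original reply), the under-replicated count can be pulled straight out of the report:
# re-run until the count reaches 0 before touching the next node
$ hdfs dfsadmin -report | grep -i "under replicated"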
08-11-2017
01:12 AM
In my case, I navigated to the folder /data/user/flamingo/.ivy2/jars:
...
Ivy Default Cache set to: /data/user/flamingo/.ivy2/cache
The jars for the packages stored in: /data/user/flamingo/.ivy2/jars
...
Then copy all those jars to the directory where you want to keep them, and run the Spark command like:
SPARK_MAJOR_VERSION=2 bin/spark-shell --jars="/path/to/jars"
It worked for me!
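One caveat: --jars takes a comma-separated list of jar files rather than a bare directory, so a directory of jars has to be expanded first; a shell sketch using the same placeholder path:
# join every jar in the directory into one comma-separated list
$ SPARK_MAJOR_VERSION=2 bin/spark-shell \
    --jars="$(ls /path/to/jars/*.jar | paste -sd, -)"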
08-08-2017
02:09 AM
Hi all, we are trying to enable Kafka's SASL/PLAIN in HDP-2.6.1.0 without Kerberos, and we installed only one host for the test to rule out network issues. Before enabling SASL/PLAIN, both the Kafka console producer and consumer worked perfectly; after enabling it, the broker log seems okay:
[2017-08-08 09:41:22,101] INFO [ExpirationReaper-1004], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-08 09:41:22,102] INFO [ExpirationReaper-1004], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-08 09:41:22,108] INFO [ExpirationReaper-1004], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-08 09:41:22,133] INFO [GroupCoordinator 1004]: Starting up. (kafka.coordinator.GroupCoordinator)
[2017-08-08 09:41:22,134] INFO [GroupCoordinator 1004]: Startup complete. (kafka.coordinator.GroupCoordinator)
[2017-08-08 09:41:22,142] INFO [Group Metadata Manager on Broker 1004]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-08-08 09:41:22,155] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2017-08-08 09:41:22,194] INFO Creating /brokers/ids/1004 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2017-08-08 09:41:22,205] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2017-08-08 09:41:22,206] INFO Registered broker 1004 at path /brokers/ids/1004 with addresses: SASL_PLAINTEXT -> EndPoint(0.0.0.0,6667,SASL_PLAINTEXT) (kafka.utils.ZkUtils)
[2017-08-08 09:41:22,219] INFO [Kafka Server 1004], started (kafka.server.KafkaServer)
But when we try to produce and consume via the kafka-console scripts, we get this error:
[2017-08-08 10:03:46,936] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,041] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,143] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,245] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,347] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,449] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,551] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,654] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,756] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,858] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:47,960] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:48,062] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:48,165] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-08 10:03:48,267] WARN Bootstrap broker localhost:6667 disconnected (org.apache.kafka.clients.NetworkClient)
Any help would be appreciated. Thanks in advance.
Config:
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka-secret"
user_kafka="kafka-secret"
user_test="test-secret";
};
kafka_client_jaas.conf:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="test"
password="test-secret";
};
Steps taken:
- Pass the kafka_server_jaas.conf location as a JVM parameter in the kafka-env template (see the sketch after these steps).
- Add the properties to the Custom kafka-broker section (also sketched below).
- Change listeners: PLAINTEXT://0.0.0.0:6667 -> SASL_PLAINTEXT://0.0.0.0:6667
- In both producer.properties and consumer.properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
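Since the concrete values for the first two steps are missing above, here is what I mean by them; these are standard Kafka SASL/PLAIN settings, but treat the exact lines as my sketch rather than a verbatim copy of the cluster config:
# kafka-env template addition (server-side JAAS file)
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka/kafka_server_jaas.conf"

# Custom kafka-broker properties
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN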
Terminal 1:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka/kafka_client_jaas.conf"
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:6667 --topic apple3 --from-beginning --consumer.config=/tmp/kafka/consumer.properties
Terminal 2:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka/kafka_client_jaas.conf"
$ bin/kafka-console-producer.sh --broker-list localhost:6667 --topic apple3 --producer.config=/tmp/kafka/producer.properties