Member since: 08-08-2017
Posts: 9
Kudos Received: 0
Solutions: 2

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1921 | 08-14-2017 12:32 AM
 | 3611 | 08-11-2017 02:03 AM
09-08-2017 05:28 PM
Since you said only one node, I am not sure if you mean there is only one physical machine in the cluster. If that is the case, you want to enable something called pseudo-distributed mode, where all processes (NameNode, DataNode, etc.) run on the same machine. You can find instructions here: https://hortonworks.com/hadoop-tutorial/introducing-apache-ambari-deploying-managing-apache-hadoop/

On the other hand, if you want to run the NameNode on a single machine but have a set of physical nodes, you can use Ambari normally. If you want to use Ambari, you can select HDFS, and the standard installation does not assume that you have HA. If you want to turn on HA, then use the HA Wizard.
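As a quick sanity check once a pseudo-distributed install is running (just a sketch, assuming the HDFS and YARN services have been started on that one machine): every daemon runs as its own JVM on the same host, so jps should list them all together.

# In pseudo-distributed mode all Hadoop daemons are co-located on one
# host; jps prints one line per running JVM, so expect entries such as
# NameNode, DataNode, ResourceManager and NodeManager in one listing.
jps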
12-14-2017 09:04 PM
1 Kudo
Hi @Mark Lee:
Have you attempted to call the consumer and producer with the following parameter appended to the end of the command line?
--security-protocol SASL_PLAINTEXT
As an example, your producer command line would look something like this:
bin/kafka-console-producer.sh --broker-list localhost:6667 --topic apple3 --producer.config=/tmp/kafka/producer.properties --security-protocol SASL_PLAINTEXT
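In case it helps, here is a minimal sketch of what that properties file and the full invocation might look like on a Kerberized cluster. The path, broker list, and topic are taken from the example above; the Kerberos service name is an assumption (commonly kafka on HDP).

# Sketch: write a minimal producer.properties for SASL_PLAINTEXT over
# Kerberos, then run the console producer against it. Adjust the
# broker list, topic, and service name to your environment.
cat > /tmp/kafka/producer.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
EOF
bin/kafka-console-producer.sh --broker-list localhost:6667 --topic apple3 --producer.config=/tmp/kafka/producer.properties --security-protocol SASL_PLAINTEXT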
08-11-2017 01:12 AM
In my case, I navigated to the folder /data/user/flamingo/.ivy2/jars:
...
Ivy Default Cache set to: /data/user/flamingo/.ivy2/cache
The jars for the packages stored in: /data/user/flamingo/.ivy2/jars
...
Copy all the jars from that folder to the directory where you want to keep them, then execute the Spark command with them on the classpath (note that --jars takes a comma-separated list of jar paths, not a bare directory):
SPARK_MAJOR_VERSION=2 bin/spark-shell --jars="/path/to/jar1.jar,/path/to/jar2.jar"
That worked for me.
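If the folder contains many jars, building that comma-separated list by hand is tedious. Here is a small shell sketch; the directory is the one from the Ivy log above, and the glob assumes everything in it is a jar.

# Sketch: join every jar in the Ivy cache's jars directory into a
# comma-separated list, since --jars expects comma-separated paths.
JARS=$(ls /data/user/flamingo/.ivy2/jars/*.jar | paste -sd, -)
SPARK_MAJOR_VERSION=2 bin/spark-shell --jars "$JARS"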