Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 860 | 06-04-2025 11:36 PM |
| | 1437 | 03-23-2025 05:23 AM |
| | 718 | 03-17-2025 10:18 AM |
| | 2584 | 03-05-2025 01:34 PM |
| | 1702 | 03-03-2025 01:09 PM |
03-23-2018
11:29 AM
Did you create a firewall rule in your EC2 console to allow port 5901 from the outside world, or did you limit it to your IP? Please revert.
03-22-2018
09:17 AM
1 Kudo
@Girish Mallula Below is the sequence to test Kafka. Please open a separate console for each command. I am assuming the host value in [ listeners : PLAINTEXT://xxx.domain:6667 ] is the output of $ hostname -f (the FQDN) or the host's IP.

Step 1: Start the ZooKeeper server that is packaged with Kafka:
bin/zookeeper-server-start.sh config/zookeeper.properties

Step 2: Start the Kafka broker:
bin/kafka-server-start.sh config/server.properties

Step 3: Create a topic:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic girishtp
List the topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
Output:
__consumer_offsets
girishtp

Step 4: Send some messages. By default, each line is sent as a separate message. Run the producer and type a few messages into the console:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic girishtp
{type some random message here}

Step 5: Start a consumer. The consumer dumps the messages to standard output (the console):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic girishtp --from-beginning
{you will see the messages typed in Step 4}

Step 6: Delete the topic:
bin/kafka-run-class.sh kafka.admin.TopicCommand --zookeeper localhost:2181 --delete --topic girishtp
Output:
Topic girishtp is marked for deletion.
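If the producer or consumer hangs, it is often a connectivity problem with the listener address rather than Kafka itself. As a quick check (this helper is my own sketch, not part of Kafka, and relies on bash's /dev/tcp device):

```shell
# Hedged helper (not part of Kafka): verify a broker or ZooKeeper port is
# reachable before running the console clients. Uses bash's /dev/tcp device.
port_open() {
  # usage: port_open <host> <port>; returns 0 if a TCP connection succeeds
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open localhost 9092; then
  echo "broker reachable on 9092"
else
  echo "broker not reachable on 9092 - check listeners/FQDN and firewall"
fi
```

Run it against the exact host name you put in the listeners property, not just localhost, since a mismatch there is a common cause of client timeouts.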
03-20-2018
07:51 PM
@Artur Bukowski Did you set this in the oozie user's .profile or .bashrc?

HADOOP_CLASSPATH=$(hcat -classpath)
export HADOOP_CLASSPATH

Can you attach the new error stack?
03-20-2018
06:56 PM
You are facing this snappy issue during HDP installation. I suspect you have more problems than the attached screenshot shows; can you scroll down that window and send another screenshot? Please run the steps below:

zypper rm snappy
zypper in snappy-devel

Hortonworks provides OS-specific installation instructions that you must strictly follow to avoid unpleasant surprises, because troubleshooting later becomes very frustrating. Don't skip any step; if you can automate it, that's even better.
03-20-2018
01:17 PM
@Michael Napoleon The problem seems to be around the default temporary table (at least, it is failing there before anything else). A query with a "VALUES" clause works like this:
- it creates a temporary Hive table holding the rows to be inserted
- it then reads that temporary table to insert the data into the target table
From the output I see, the temporary table is created [default/values__tmp__table__1], but the user does not have "SELECT" permission on it. A workaround would be to grant "SELECT" on the database "default", but this could raise security issues for you, since the user would then have read permission on all the tables inside "default".
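A sketch of that workaround, assuming SQL-standard based authorization is in use; the user name app_user and the HiveServer2 URL are placeholders, not details from this thread:

```shell
# Hypothetical sketch: write the GRANT statement to a file, then run it with
# beeline (the beeline call is commented out; it requires a live HiveServer2).
cat > grant_default_select.sql <<'SQL'
-- lets the user read the values__tmp__table__N temp tables in `default`
GRANT SELECT ON DATABASE default TO USER app_user;
SQL
# beeline -u "jdbc:hive2://hs2-host:10000/default" -f grant_default_select.sql
```

Keep in mind the trade-off described above: this grant applies to every table in the default database, not just the temporary ones.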
03-20-2018
11:19 AM
@Vinay K It is not important to me personally, but ethically it matters. I am fine with it all 🙂
03-20-2018
10:54 AM
@Vinay K @Sandeep Kumar It's great that your problem has been resolved. It isn't normal for someone to attribute the correct answer to himself when other HCC members, namely Sandeep and I, contributed to it. The solution, which I suggested, covered:
1. Quorum of ZooKeeper
2. Changes in the hdfs-site.xml/core-site.xml config
3. Journal nodes (Sandeep contributed here too)
Taking the above into account, I guess someone else merited the points 🙂
03-20-2018
10:33 AM
1 Kudo
@asubramanian As I reiterated, your installation didn't go smoothly. The workaround could be to copy all the missing folders under 1.4.0.0-38 to the 1.4.0.0-18 folder, then retry:

cd $METRON_HOME/bin
./zk_load_configs.sh -i ../config/zookeeper -m PUSH -z ${zookeeper}

If your setup is not on bare metal but on VMware, it is better to start from scratch. Where is your Ambari server? If it is co-located with a master node, then don't dedicate a node to only the database; that would be a waste of resources, so make it a master.

3 master nodes # the 3 ZooKeepers should be installed here
3 worker nodes # primarily DataNodes / NodeManagers / HBase RegionServers
1 edge node

Hope that helps.
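The copy workaround above can be sketched as follows; the /usr/metron base path is an assumption on my side, so adjust it to where your Metron versions actually live:

```shell
# Hedged sketch: copy everything under the source version directory that is
# absent from the destination, without overwriting anything already there.
copy_missing() {
  # $1 = source dir, $2 = destination dir
  # -r recurses; -n (no-clobber, GNU cp) never overwrites existing files
  cp -rn "$1"/. "$2"/
}

# Example invocation (paths assumed, not confirmed by the thread):
# copy_missing /usr/metron/1.4.0.0-38 /usr/metron/1.4.0.0-18
```

The -n flag matters here: only the missing files should be filled in, while anything the 1.4.0.0-18 install already wrote stays untouched.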
03-19-2018
03:59 PM
@Gaurang Shah Sqoop expects the password file to be located on HDFS. Can you move the file to a directory on HDFS, then specify that path and file name, e.g. /user/gaurang/sqlpwd.pass, with the correct permissions? A better option is --password-alias, which uses the Hadoop credential provider to store the password instead of keeping it in clear text. That option works with HDP 2.2 or later. Please see https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_connecting_to_a_database_server
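Preparing the password file can itself trip people up: Sqoop reads the entire file content as the password, so a trailing newline becomes part of it. A minimal sketch (the password string is a placeholder; the hdfs and sqoop lines are commented out since they need a live cluster):

```shell
# Hedged sketch: create the password file with NO trailing newline
# (printf '%s' writes the string verbatim), lock it down, stage it on HDFS.
printf '%s' 'S3cretPassw0rd' > sqlpwd.pass
chmod 400 sqlpwd.pass

# hdfs dfs -put sqlpwd.pass /user/gaurang/sqlpwd.pass
# sqoop import --connect "$JDBC_URL" --username gaurang \
#   --password-file /user/gaurang/sqlpwd.pass --table mytable
```

Using echo instead of printf '%s' would append a newline and make every login fail with a wrong-password error that is hard to spot.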
03-19-2018
03:37 PM
@Bramantya Anggriawan If your installation went fine, then you should be able to run the ZooKeeper config loader:

cd $METRON_HOME/bin
./zk_load_configs.sh -i ../config/zookeeper -m PUSH -z ${zookeeper}

And remember to pass the ZooKeeper quorum as the parameter!