Member since: 08-28-2015
Posts: 194
Kudos Received: 45
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2212 | 07-05-2017 11:58 PM |
10-25-2022
12:28 AM
I have around ten years of experience working with data management and Business Intelligence tools and technologies. I would like to explore Hadoop extensively, and I am interested in learning and working on new-age Hadoop services to master the data science domain. Tushar.
11-15-2019
05:17 AM
Can you elaborate more on "Hostname tied to the actual IP address?" and "Use 'ifconfig -a' to see a listing of your network interfaces and choose one that has an actual IP address."? How do I know which hostname and IP address to use? While installing a single-node Cloudera cluster, how would you know which hosts to specify? Thank you!
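For anyone stuck on the same step, a minimal sketch of how to find these values on the box (the interface and addresses below are placeholders, not from the original thread):
hostname -f    # the fully qualified hostname the installer should use
ifconfig -a    # or "ip addr"; pick the interface with a real routable address, not 127.0.0.1
Then make sure /etc/hosts maps that address to that hostname, for example:
192.168.1.50 node1.example.com node1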
01-14-2019
10:23 PM
Very well explained, thanks for the post~
06-05-2018
07:31 PM
Thank you so much Gerd. Well, this is one of 6 master nodes; it is not the NN or SNN. So what is your suggestion?
02-14-2018
12:37 AM
Try adding an allow rule for the port in firewalld on the host; this solved my problem:
$ firewall-cmd --zone=public --add-port=9000/tcp
(CentOS 7; add --permanent and run "firewall-cmd --reload" if the rule should survive a restart.) Or stop the firewall entirely:
$ systemctl stop firewalld && systemctl disable firewalld
Hope this may help
10-19-2017
02:24 AM
In /etc/hosts on all nodes, put: ip_address FQDN shortname
10.10.1.230 name.domain.com name
The FQDN must come before the short name.
10-13-2017
01:56 PM
@Robin Dong, these links may help you:
https://rklicksolutions.wordpress.com/2017/04/04/read-data-from-kafka-stream-and-store-it-in-to-mongodb/
https://github.com/alonsoir/hello-kafka-twitter-scala
09-28-2017
03:02 PM
Thank you so much.
08-03-2017
02:22 PM
Yes, thank you, all working. I missed the ';' at the end of the update statement. Thanks again.
07-07-2017
07:07 PM
Thank you for your time. This is the /etc/hosts file; I have 6 of them:
b1 ec2-54-216-249-113.us-west-1.compute.amazonaws.com 54.216.249.113 ip-172-31-4-232.us-west-1.compute.internal
b2 ec2-54-215-167-71.us-west-1.compute.amazonaws.com 54.216.167.71 ip-172-31-14-204.us-west-1.compute.internal
b3 ec2-54-192-74-19.us-west-1.compute.amazonaws.com 54.192.74.19 ip-172-31-0-58.us-west-1.compute.internal
b4 ec2-54-183-108-236.us-west-1.compute.amazonaws.com 54.183.108.236 ip-172-31-12-149.us-west-1.compute.internal
netstat -tulpn | grep -i 8083
The output is nothing.
07-07-2017
04:34 AM
Hi, can you reset the ambari-agent on all servers and try to register again? ambari-agent reset <Ambari_host_name>
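If it helps, the full sequence on each agent host might look like this sketch (run as root; <Ambari_host_name> is the Ambari server's FQDN):
ambari-agent stop
ambari-agent reset <Ambari_host_name>
ambari-agent start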
07-04-2017
04:15 AM
@Robin Dong
You can check the "/etc/ambari-server/conf/ambari.properties" file to find out exactly which database your Ambari is using; then you can log in to the DB host and run/check the requested commands, or the DB admin can assist with the queries. Example (in my case it is Postgres):
# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
custom.postgres.jdbc.name=postgresql-9.3-1101-jdbc4.jar
previous.custom.postgres.jdbc.name=postgresql-9.3-1101-jdbc4.jar
server.jdbc.connection-pool=internal
server.jdbc.database=postgres
server.jdbc.database_name=ambari
server.jdbc.postgres.schema=ambari
server.jdbc.user.name=ambari
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
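If Postgres is the backend as above, one hedged way to open the Ambari database from the server host, reusing the password file referenced in the properties, is:
PGPASSWORD=$(cat /etc/ambari-server/conf/password.dat) psql -h localhost -U ambari -d ambari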
04-23-2018
02:05 PM
I know this is an old post, but maybe it can still be useful to someone. To start the standalone connector you need to pass both the worker config (connect-standalone.properties) and the connector configurations (mongo_connector_config.properties in Robin's case). For anyone with the same problem, bear in mind that to start the standalone connector you need to pass ".properties" files; to update or add new connectors through the REST API you would need to pass ".json" files. You can see it in the Confluent documentation page here.
bin/connect-standalone worker.properties connector1.properties [connector2.properties connector3.properties ...]
Vicente
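For reference, a minimal sketch of the two files; every value below is a placeholder, not Robin's actual configuration:
# worker.properties - standalone worker settings
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
# connector1.properties - one file per connector
name=my-mongo-sink
connector.class=com.example.MongoSinkConnector   # hypothetical class name
tasks.max=1
topics=my_topic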
07-03-2017
09:21 PM
That's correct: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-troubleshooting/content/resolving_cluster_install_and_configuration_problems.html
06-16-2017
02:44 PM
Yes, this link provides exactly the same commands. However, don't you think Hortonworks should include these commands in the Ambari installation, instead of just giving a warning and making the installer check all over the place to find them?
06-09-2017
02:18 AM
Thank you so much. Even though I need more time to digest it, I can tell this is what I need to know. Thank you for spending the time to help. Robin
06-14-2017
03:47 AM
Thank you Cdraper. I finally used this combination: RHEL 7.3 (m4.large), Ambari 2.4.10 and HDP 2.5, installed on 7 nodes on AWS. The only issue I dealt with was changing the NameNode Java heap size, to 4 GB in my case; then everything worked. Your info is still very helpful. I am going to set up a 3-node Kafka cluster and MongoDB from here. Will keep you posted. Thanks again. Robin
06-12-2017
02:36 PM
@Robin Dong In Linux only iptables controls the kernel-based firewall. You might have firewalld in CentOS 7 or ufw in Ubuntu, but they're just an abstraction layer on top of iptables. So if 'iptables -L' doesn't show anything, then it's all good. The Ambari iptables check is rudimentary and doesn't know whether the rules that exist still allow all the traffic; it only checks 'service iptables status' or 'systemctl status firewalld', not the filter tables themselves. But please be aware of the cloud firewall as well. For example, in AWS even instances in the same Security Group are not allowed by default to communicate with one another, and this must be enabled explicitly: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html#sg-rules-other-instances
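As an illustration, opening all traffic between members of the same Security Group from the AWS CLI might look like this (sg-0123456789abcdef0 is a placeholder ID):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol all --source-group sg-0123456789abcdef0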
06-06-2017
05:39 AM
Both 'openssl s_client -connect ip-172-31-13-143.us-west-2.compute.internal:8440' and the same command on port 8441 have no output.
06-06-2017
05:23 AM
Sorry, I terminated all nodes. However, I know this alert info: connection failed to http://ip-172-31-15-246.us-west-2.compute.internal on every node. Some ports just don't open and some directories don't exist. My passwordless setup is correct for sure; still, the connection is refused. Is it an IP problem? If static IPs are needed, the limit is 5 and I have more than 10 nodes. Can you help?
06-05-2017
02:03 PM
Oh, my Ambari service is in external mode; I use http://ec2....amazon.com:8080, if that is the so-called external mode. After 'ambari-server reset' I got the same error. The repo is: http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.4.0.1/ambari.repo. 'rpm -qa | grep ambari-metrics' shows all packages are installed, so the HBase configuration info is the problem; it refuses the connection. I didn't need to edit the HBase configuration with Ambari 2.4 in the past at all. Any changes in HDP? Thanks, Robin
06-01-2017
04:14 AM
You are right. Since my setup was all working before, I never used a VPC and just thought this might be a cause. I will work on it and get back to you. Thanks again.
05-31-2017
06:27 PM
I would still like to know the best combination of RHEL version and ambari.repo version. The documentation says any will fit, but different errors come up.
05-31-2017
07:13 PM
Thanks, I found another of your posts and tried Ambari 2.2, so I don't need to deal with SmartSense.
05-22-2017
04:52 PM
According to the MongoDB doc here, https://www.mongodb.com/blog/post/in-the-loop-with-hadoop-take-advantage-of-data-locality-with-mongo-hadoop : "This means that we can accomplish local reads by putting a DataNode process (for Hadoop) or a Spark executor on the same node with a MongoDB shard and a mongos. In other words, we have an equal number of MongoDB shards and Hadoop or Spark worker nodes. We provide the addresses of all these mongos instances to the connector, so that the connector will automatically tag each InputSplit with the address of the mongos that lives together with the shard that holds the data for the split." The above says that we have an equal number of MongoDB shards and Hadoop or Spark worker nodes; I think it means the number of MongoDB shards = the number of data nodes with Spark. So my next step is to ensure Spark is installed on all data nodes and then put MongoDB shards on these worker nodes. Let me know if you think differently. Thanks, Robin
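For what it's worth, handing the connector the full list of mongos addresses is done through the input URI; a minimal sketch with placeholder hostnames and database/collection names (assuming the mongo-hadoop connector's mongo.input.uri setting) would be:
mongo.input.uri=mongodb://mongos1:27017,mongos2:27017,mongos3:27017/mydb.mycollection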
05-19-2017
10:16 PM
I will close this case; I got my answer here.
06-01-2017
05:50 AM
Thanks for the link @Eyad Garelnabi. I have HDP 2.4 installed on Ubuntu and followed the same steps provided, but I'm getting the following error during the Install, Start and Test wizard in Ambari. Any help would be appreciated. Fail: Applying File['/etc/yum.repos.d/mongodb.repo'] failed, parent directory /etc/yum.repos.d doesn't exist
04-24-2017
12:41 PM
Thank you so much Dave. After some searching, we may have to use the Kafka producer API for this complex data streaming. However, your answer gives me a new thought on NiFi with Kafka. Thank you very much for your time and help. Robin
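In case it is useful to anyone prototyping before writing against the producer API, the console producer that ships with Kafka is a quick way to push test messages (the broker address and topic name are placeholders; 6667 is the usual HDP broker port):
bin/kafka-console-producer.sh --broker-list localhost:6667 --topic test_topic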
03-24-2017
08:29 AM
How have you installed HDP/HDF? When you do a standard Ambari install of HDP, it will ask you where you want to put each service, and you can choose to put Kafka on separate nodes from the other services. You can also move services using the Ambari interface, generally by putting them in maintenance mode on a particular node, stopping them, and then selecting to move them to another node. I would suggest that if you need more Kafka brokers in the same environment, you should have them all controlled by the same Ambari, yes.
03-20-2017
11:06 AM
Q1. Change data capture in NiFi is the easiest way to capture incremental records; there are workarounds as well, depending on the use case.
Q2. I believe yes. But if your target is Hive, then it's better not to go with all three: capture just the incremental records into HDFS, do the comparison within HDFS, and update the target.
Q4. It depends. If you are looking for real-time processing, then don't think of choosing Sqoop; Sqoop is specifically designed for large batch data processing. So if real-time processing is needed, go with Kafka/NiFi to ingest data into Hadoop. Kafka/NiFi can handle incremental volume in a decent way.
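To make Q4 concrete, a batch-style Sqoop incremental import might look like this sketch; the connection string, table, and column names are placeholders:
sqoop import --connect jdbc:mysql://dbhost/sales --table orders --incremental append --check-column order_id --last-value 0 --target-dir /data/orders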