Member since
04-24-2019
20
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 4085 | 10-15-2018 06:26 AM |
12-06-2019
10:16 AM
@Muhammad_Waqas What is the configuration of the TailFile processor?
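For context, a typical single-file TailFile setup looks something like the sketch below; the path is taken from the earlier posts in this thread and is only an example, and the property names are from the processor's Properties tab:

| Property | Value |
| --- | --- |
| Tailing mode | Single file |
| File(s) to Tail | /var/log/snort/alerts.csv |
| Initial Start Position | Beginning of File |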
08-06-2019
08:44 AM
Check this playbook: https://github.com/cloudera/cloudera-playbook HTH, André
06-12-2019
10:32 PM
Hi Muhammad, The error says that you have only one DN and it is excluded from the operation, which means you have no live DNs. Have you checked what happened to the DN? Are there any indications in its logs? Cheers Eric
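A couple of quick checks might help here. This is a minimal sketch; the log path assumes a default CDH-style layout and may differ on your install:

```
# Confirm how many DataNodes the NameNode considers live vs. dead
hdfs dfsadmin -report

# Inspect the DataNode log for errors (path is an assumption; adjust for your install)
tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log
```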
05-29-2019
06:41 PM
The above was originally posted in the Community Help Track. On Wed May 29 18:31 UTC 2019, a member of the HCC moderation staff moved it to the Security track. The Community Help Track is intended for questions about using the HCC site itself.
05-17-2019
11:53 AM
@Muhammad waqas I saw some discrepancies in your krb5.conf. Please copy and paste this one, which I have updated with your entries:

[libdefaults]
default_realm = ABCDATA.ORG
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = true
udp_preference_limit = 1000000
default_tgs_enctypes = aes256-cts-hmac-sha1-96
default_tkt_enctypes = aes256-cts-hmac-sha1-96
permitted_enctypes = aes256-cts-hmac-sha1-96
kdc_timeout = 3000
[realms]
ABCDATA.ORG = {
kdc = cloudera.abcdata.org
admin_server = cloudera.abcdata.org
default_domain = ABCDATA.ORG
}
[domain_realm]
.abcdata.org = ABCDATA.ORG
abcdata.org = ABCDATA.ORG
[logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
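Once the file is in place, a quick way to confirm the KDC settings work is to request and inspect a ticket; the principal below is a placeholder:

```
# Request a TGT for a test principal (placeholder name) and list the ticket cache
kinit testuser@ABCDATA.ORG
klist
```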
The error

Problem [TOKEN, KERBEROS]; Host Details: local host is: "FQDN/X.X.X.X"; destination host is: "FQDN":PORT;

shows that your hostname is not configured correctly. Check the principals registered in the KDC:

# kadmin.local
Authenticating as principal root/admin@ABCDATA.ORG with password.
kadmin.local: listprincs
Sample output on Hortonworks:
nm/cloudera.abcdata.org@ABCDATA.ORG
nn/cloudera.abcdata.org@ABCDATA.ORG
oozie/cloudera.abcdata.org@ABCDATA.ORG
rangeradmin/cloudera.abcdata.org@ABCDATA.ORG
rangerlookup/cloudera.abcdata.org@ABCDATA.ORG
rangertagsync/cloudera.abcdata.org@ABCDATA.ORG
rangerusersync/cloudera.abcdata.org@ABCDATA.ORG
rm/cloudera.abcdata.org@ABCDATA.ORG

Can you share the output of

$ hostname -f

Does it match the entries in /etc/hosts? The format should be IP FQDN ALIAS, as in the sketch below. After validating and correcting the entries, please regenerate the keytabs using the Cloudera Manager Admin Console. HTH
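A minimal /etc/hosts sketch; the IP is a placeholder, and the FQDN matches the principals above:

```
# /etc/hosts — IP, then FQDN, then short alias (whitespace-separated)
10.0.0.10   cloudera.abcdata.org   cloudera
```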
04-26-2019
05:18 AM
@Geoffrey Shelton Okot Thanks for the reply. Actually I am trying to send data from the Hortonworks cluster to the Cloudera cluster, and the dfs.exclude file is empty. The distcp command you've written has an issue: we can't attach port 50070 to the hdfs:// prefix; to use that port, write webhdfs:// instead of hdfs://. My distcp command is

hadoop --config (path of directory containing the hdfs-site.xml or core-site.xml files of the target cluster) distcp hdfs://nn1/path hdfs://nn2/path

The aforementioned command can send data from the Cloudera cluster to the Hortonworks cluster, but I want to do it in the reverse direction.
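For reference, a hedged sketch of the reverse direction over WebHDFS, run from the Cloudera cluster; the hostname and paths are placeholders, and 50070 assumes the default NameNode HTTP port:

```
# Pull from the Hortonworks NameNode (via WebHDFS) into the local Cloudera HDFS
hadoop distcp webhdfs://hdp-nn.example.com:50070/source/path hdfs:///target/path
```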
04-25-2019
03:01 AM
Please guide me. I'm trying to send a simple file from Hortonworks to Cloudera using the distcp command but am getting the error "could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation". I am looking forward to hearing from you. Thanks.
11-07-2018
09:50 PM
@Muhammad waqas
- The "java.nio.filenoSuchFileException:/var/log/snort/alerts.csv" excpetion leads me to belive NiFi cannot even see this file. - Does the user that owns the NiFi java process have the ability to navigate to the /var/log/snort/ directory and read the "Alerts.csv" file? - Thank you, Matt
10-15-2018
09:33 AM
@Muhammad waqas In reality you don't need a communication channel between NiFi and Kafka. It's your PublishKafka and ConsumeKafka processors that connect behind the scenes!!! First, I see your processors are not started. Can you start them in this sequence:

GenerateFlowFile, PublishKafka, LogAttribute, ConsumeKafka

For the flow files to work you need to do the following:

1. Create a Kafka topic, e.g.

./bin/kafka-topics.sh --create --topic Mytest --zookeeper 127.0.0.1:2181 --partitions 3 --replication-factor 1

2. Configure the PublishKafka processor:

| Property | Value |
| --- | --- |
| Kafka Brokers | 127.0.0.1:9092 |
| Security Protocol | PLAINTEXT |
| Topic Name | Mytest |
| Delivery Guarantee | Guarantee Replicated Delivery |

3. Start the GenerateFlowFile processor, then start the PublishKafka processor.

4. Configure and start a Kafka console consumer:

./bin/kafka-console-consumer.sh --topic Mytest --bootstrap-server 127.0.0.1:9092

5. In the LogAttribute settings, check Automatically Terminate Relationships, click Apply, and start the processor.

6. Configure and start the ConsumeKafka processor. In the Properties tab:

| Property | Value |
| --- | --- |
| Kafka Brokers | 127.0.0.1:9092 |
| Security Protocol | PLAINTEXT |
| Topic Name | Mytest |
| Group ID | Test (can be anything) |
| Offset Reset | latest or earliest |

Save by clicking Apply. Now all your processors should show green, et voilà!!
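As an extra sanity check outside NiFi, you could also publish a test message by hand with the console producer; a sketch, assuming the same single local broker (--broker-list matches the older Kafka versions that still use --zookeeper above):

```
# Type a line and press Enter to publish it to the topic
./bin/kafka-console-producer.sh --topic Mytest --broker-list 127.0.0.1:9092
```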