Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 983 | 06-04-2025 11:36 PM |
| | 1564 | 03-23-2025 05:23 AM |
| | 779 | 03-17-2025 10:18 AM |
| | 2807 | 03-05-2025 01:34 PM |
| | 1852 | 03-03-2025 01:09 PM |
04-21-2019
09:44 AM
@Shilpa Gokul There is some information you should have provided to help members resolve your problem. Is your cluster kerberized? What command are you executing? Can you share your Ranger/Kafka policy configuration?
04-19-2019
01:22 PM
@siddharth pande Your error suggests that the NiFi cluster is trying to connect to a local ZooKeeper. Can you add an entry to /etc/hosts with the IP, FQDN, and alias of your remote ZK server, as below? I would suggest you comment out the localhost entry. This entry should be applied to all 3 NiFi nodes:
# 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.0.123 remote.zookeeper.com remote
Then restart your NiFi cluster.
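As a quick sanity check before restarting, here is a minimal sketch for verifying the fix from each NiFi node; remote.zookeeper.com, 192.168.0.123, and port 2181 are the example values from above, so substitute your own:

```bash
# Run on each NiFi node after editing /etc/hosts.
getent hosts remote.zookeeper.com        # should resolve to 192.168.0.123
# ZooKeeper's "ruok" four-letter command; a healthy server replies "imok"
# (on ZooKeeper 3.5+ this may need to be enabled via 4lw.commands.whitelist).
echo ruok | nc remote.zookeeper.com 2181
```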
04-19-2019
06:12 AM
@chetan gehlot The problem was indeed that Docker images are architecture-specific (at least the default ones). So an image built for Intel will only work on Intel, and an image built for Arm32 will only work on Arm32. There are ways to produce an Arm build on an Intel machine, but that would still leave you distributing two separate images, and if you have a physical Arm device it is much easier to build the Arm image directly on it. You also need to make sure your base image supports your architecture, but the official ones are now built as multi-arch images, so this is not usually a problem. Another related problem and solution: standard_init_linux.go:207. If your entry point is a bash script, check whether it contains the correct shebang, something like:
#!/usr/bin/env bash
make -f /app/makefile $@
Also have a look at standard_init_linux.go:190: exec user process caused "no such file or directory" - Docker; it might be related too.
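For completeness, here is a minimal sketch of building a single multi-arch image with Docker's buildx plugin, which avoids distributing two separate images; the image name myrepo/myapp is a placeholder, and this assumes buildx is available in your Docker install:

```bash
# One-time setup of a buildx builder instance.
docker buildx create --use
# Build for both architectures and push a single multi-arch manifest.
docker buildx build \
  --platform linux/amd64,linux/arm/v7 \
  -t myrepo/myapp:latest \
  --push .
```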
04-18-2019
06:12 PM
@Andy Sutan When your VM instance reboots it gets a new IP address; in fact, you even get a new hostname, as the private IP address is baked into it. Change the hostname of your Linux box so that it matches the output of $ hostname -f, e.g. myhost.com, and use the public IP in /etc/hosts:
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
[public IP] myhost.com myhost
Private IP addresses are not reachable over the Internet; they can be used for communication between the instances in your VPC or data center. Public IP addresses are reachable over the Internet and can be used for communication between your instances and the Internet, or with other AWS services that have public endpoints.
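A sketch of the same steps from the shell; myhost.com and 203.0.113.10 are placeholders for your own hostname and public IP:

```bash
# Persist the hostname across reboots and map the public IP to it.
sudo hostnamectl set-hostname myhost.com
echo "203.0.113.10 myhost.com myhost" | sudo tee -a /etc/hosts
hostname -f   # verify: should print myhost.com
```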
04-18-2019
10:50 AM
@sk Any update?
04-18-2019
10:44 AM
1 Kudo
@Afroz Baig Firstly, you really don't need to manually modify krb5.conf on each node, as it MUST be identical on all the cluster nodes. What you should do is run scp from the Ambari server, where you configured the passwordless connection. Assuming your Ambari server's hosts file has entries for all the cluster nodes and edgenode1 is your target:
# scp /etc/krb5.conf root@edgenode1:/etc/
This will copy and overwrite the incorrect krb5.conf on the edge node. Assuming you have a user named analyst01 on the edge node who intends to run a job after the update, and that his keytab is in his home directory, do the following:
# su - analyst01
To determine whether he has a valid ticket; in the example below he didn't have one:
$ klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
Grab a ticket:
$ kinit -kt /home/analyst01/analyst01.keytab
Now he should be able to grab a valid ticket, and klist should confirm it:
$ klist
Ticket cache: FILE:/tmp/krb5cc_1013
Default principal: analyst01-xxx@{REALM}

Valid starting       Expires              Service principal
04/13/2019 23:25:32  04/14/2019 23:25:32  krbtgt/_host@{REALM}
04/13/2019 23:25:32  04/14/2019 23:25:32  HTTP/_host@{REALM}

You don't need to restart any services on the edge node!
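If more than one node has a stale krb5.conf, a small loop saves typing. This is only a sketch; the node names are placeholders for your own hosts, and it assumes the same passwordless root SSH used by Ambari:

```bash
# Push the canonical krb5.conf from the Ambari server to each node.
for node in edgenode1 workernode1 workernode2; do
  scp /etc/krb5.conf root@${node}:/etc/krb5.conf
done
```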
04-17-2019
05:19 PM
@sk Can you remove these 2 entries in ambari.properties and restart Ambari:
kerberos.keytab.cache.dir=/var/lib/ambari-server/data/cache
kerberos.operation.verify.kdc.trust=true
Then proceed with starting the services.
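A sketch of the same change from the shell, assuming the default ambari.properties location; back up the file first:

```bash
# Back up, drop the two kerberos entries, then restart the Ambari server.
cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.bak
sed -i '/^kerberos\.keytab\.cache\.dir=/d; /^kerberos\.operation\.verify\.kdc\.trust=/d' \
  /etc/ambari-server/conf/ambari.properties
ambari-server restart
```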
04-17-2019
10:40 AM
@Dennis Suhari The command you are running is wrong; it's the wrong variation, you forgot a dash. Note the space before and after the dash! As the root user, run:
# su - yarn
That should work. HTH
04-16-2019
01:54 PM
@Naveenraj Devadoss Did you remember this part? "You'll also need to ensure that the machine where NiFi is running has network access to all of the machines in your Hadoop cluster." Please revert
04-16-2019
06:37 AM
@Naveenraj Devadoss You need to copy core-site.xml and hdfs-site.xml from your HDP cluster to the machine where NiFi is running, then configure PutHDFS so that the configuration resources are "/path/to/core-site.xml,/path/to/hdfs-site.xml". That is all that is required from the NiFi perspective; those files contain all of the information it needs to connect to the Hadoop cluster. You'll also need to ensure that the machine where NiFi is running has network access to all of the machines in your Hadoop cluster. You can look through those config files for hostnames and IP addresses and make sure they can be reached from the machine where NiFi is running. HTH
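A minimal sketch of those two steps; the hostnames and paths are placeholders, and 8020 is a common NameNode RPC port (yours may differ, check fs.defaultFS in core-site.xml):

```bash
# Copy the client configs from an HDP node to the NiFi host.
scp hdp-master:/etc/hadoop/conf/core-site.xml /opt/nifi/conf/hadoop/
scp hdp-master:/etc/hadoop/conf/hdfs-site.xml /opt/nifi/conf/hadoop/
# Verify the NameNode named by fs.defaultFS is reachable from the NiFi machine.
nc -zv namenode.example.com 8020
```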