Member since
06-05-2019
128
Posts
133
Kudos Received
11
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1798 | 12-17-2016 08:30 PM
 | 1344 | 08-08-2016 07:20 PM
 | 2382 | 08-08-2016 03:13 PM
 | 2489 | 08-04-2016 02:49 PM
 | 2298 | 08-03-2016 06:29 PM
04-12-2017
12:51 PM
@Ryan Cicak I followed the procedure, but I am getting a HandshakeException, and setting "nifi.remote.input.secure" to false still didn't help. Please help me figure out whether I missed anything.

2017-04-12 06:13:50,696 INFO [StandardProcessScheduler Thread-1] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled GetFile[id=616d3e3e-015b-1000-0000-000000000000] to run with 1 threads
2017-04-12 06:13:51,006 ERROR [Timer-Driven Process Thread-1] o.a.n.r.c.socket.EndpointConnectionPool EndpointConnectionPool[Cluster URL=http://hdf.hadoop.com:9090/nifi/] failed to communicate with Peer[url=nifi://namenode.hadoop.com:10000,CLOSED] due to org.apache.nifi.remote.exception.HandshakeException: org.apache.nifi.remote.exception.ProtocolException: Expected to receive ResponseCode, but the stream did not have a ResponseCode
2017-04-12 06:13:51,008 ERROR [Timer-Driven Process Thread-1] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=from minifi,target=http://hdf.hadoop.com:9090/nifi/] failed to communicate with http://hdf.hadoop.com:9090/nifi/ due to org.apache.nifi.remote.exception.HandshakeException: org.apache.nifi.remote.exception.ProtocolException: Expected to receive ResponseCode, but the stream did not have a ResponseCode
2017-04-12 06:13:51,020 INFO [NiFi Site-to-Site Connection Pool Maintenance] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@1656d5bc Successfully refreshed Peer Status; remote instance consists of 1 peers
2017-04-12 06:14:01,023 ERROR [Timer-Driven Process Thread-3] o.a.n.r.c.socket.EndpointConnectionPool EndpointConnectionPool[Cluster URL=http://hdf.hadoop.com:9090/nifi/] failed to communicate with Peer[url=nifi://namenode.hadoop.com:10000,CLOSED] due to org.apache.nifi.remote.exception.HandshakeException: org.apache.nifi.remote.exception.ProtocolException: Expected to receive ResponseCode, but the stream did not have a ResponseCode
2017-04-12 06:14:01,023 ERROR [Timer-Driven Process Thread-3] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=from minifi,target=http://hdf.hadoop.com:9090/nifi/] failed to communicate with http://hdf.hadoop.com:9090/nifi/ due to org.apache.nifi.remote.exception.HandshakeException: org.apache.nifi.remote.exception.ProtocolException: Expected to receive ResponseCode, but the stream did not have a ResponseCode
2017-04-12 06:14:11,036 ERROR [Timer-Driven Process Thread-2] o.a.n.r.c.socket.EndpointConnectionPool EndpointConnectionPool[Cluster URL=http://hdf.hadoop.com:9090/nifi/] failed to communicate with Peer[url=nifi://namenode.hadoop.com:10000,CLOSED] due to org.apache.nifi.remote.exception.HandshakeException: org.apache.nifi.remote.exception.ProtocolException: Expected to receive ResponseCode, but the stream did not have a ResponseCode
2017-04-12 06:14:11,036 ERROR [Timer-Driven Process Thread-2] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=from minifi,target=http://hdf.hadoop.com:9090/nifi/] failed to communicate with http://hdf.hadoop.com:9090/nifi/ due to org.apache.nifi.remote.exception.HandshakeException: org.apache.nifi.remote.exception.ProtocolException: Expected to receive ResponseCode, but the stream did not have a ResponseCode
2017-04-12 06:14:21,049 ERROR [Timer-Driven Process Thread-4] o.a.n.r.c.socket.EndpointConnectionPool EndpointConnectionPool[Cluster URL=http://hdf.hadoop.com:9090/nifi/] failed to communicate with Peer[url=nifi://namenode.hadoop.com:10000,CLOSED] due to org.apache.nifi.remote.exception.HandshakeException: org.apache.nifi.remote.exception.ProtocolException: Expected to receive ResponseCode, but the stream did not have a ResponseCode

Thanks, Chaitanya
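For reference, a sketch of the site-to-site settings in nifi.properties that this handshake depends on; the host and port below come from the log above, and the values are illustrative rather than a confirmed fix:

```
# Illustrative nifi.properties site-to-site settings (values assumed)
nifi.remote.input.host=namenode.hadoop.com
nifi.remote.input.socket.port=10000
nifi.remote.input.secure=false
nifi.remote.input.http.enabled=true
```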
10-03-2016
05:17 PM
2 Kudos
If you've received the error exitCode=7 after enabling Kerberos, you are hitting this Jira bug. Note that the bug report outlines the issue but not a solution. The good news is that the solution is simple, as documented below.

Problem: If you've enabled Kerberos through Ambari, you'll get through roughly 90-95% of the last step, "Start and Test Services", and then receive this error:

16/09/26 23:42:49 INFO mapreduce.Job: Running job: job_1474928865338_0022
16/09/26 23:42:55 INFO mapreduce.Job: Job job_1474928865338_0022 running in uber mode : false
16/09/26 23:42:55 INFO mapreduce.Job: map 0% reduce 0%
16/09/26 23:42:55 INFO mapreduce.Job: Job job_1474928865338_0022 failed with state FAILED due to: Application application_1474928865338_0022 failed 2 times due to AM Container for appattempt_1474928865338_0022_000002 exited with
exitCode: 7
For more detailed output, check application tracking page:
http://master2.fqdn.com:8088/cluster/app/application_1474928865338_0022
Then, click on links to logs of each attempt.Diagnostics: Exception from container-launch.
Container id: container_e05_1474928865338_0022_02_000001
Exit code: 7
Stack trace: ExitCodeException exitCode=7:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:371)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Shell output: main : command provided 1
main : run as user is ambari-qa
main : requested yarn user is ambari-qa
Container exited with a non-zero exit code 7
Failing this attempt. Failing the application.

You'll notice that running "Service Checks" for Tez, MapReduce2, YARN, Pig (any service that involves creating a YARN container) fails with exitCode=7. This happens because the mounts backing YARN's local-dirs likely carry the "noexec" flag, meaning the binaries placed in those directories cannot be executed.

Solution: Open /etc/fstab (with the proper permissions) and remove the noexec flag from every mounted drive listed under YARN's "local-dirs". Then either remount or reboot your machine - problem solved. A sketch of the check and remount follows.
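A minimal sketch of verifying and fixing the flag, assuming a hypothetical local-dir mount point of /grid/0 (substitute the mounts from your yarn.nodemanager.local-dirs):

```bash
# Check whether the mount backing a YARN local-dir carries noexec
# (/grid/0 is a hypothetical mount point - use your own)
mount | grep /grid/0
# e.g. "/dev/sdb1 on /grid/0 type ext4 (rw,noexec,relatime)"

# After deleting "noexec" from the matching line in /etc/fstab,
# remount in place instead of rebooting:
sudo mount -o remount,exec /grid/0
```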
09-24-2016
12:24 AM
1 Kudo
You may be in a bind if you need to install HDP on Azure with CentOS 6 or RHEL 6 and certain services (not everything). By following the steps below, you will be able to use ambari-server to install HDP on any of the supported Hortonworks/Azure VMs.

1) Configure your VMs - use the same VNet for all VMs.

Run the next steps as root, or sudo the commands:

2) Update /etc/hosts on all your machines:

vi /etc/hosts
172.1.1.0 master1.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net
172.1.1.0 master1.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net
172.1.1.1 master2.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net
172.1.1.2 master3.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net
172.1.1.3 worker1.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net
172.1.1.4 worker2.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net
172.1.1.5 worker3.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net

* Use the FQDN (find it by typing hostname -f). The IP addresses are internal and can be found on eth0 by typing ifconfig.

3) Edit /etc/sudoers.d/waagent so that you don't need to type a password when sudoing

a) Change permissions on /etc/sudoers.d/waagent: chmod 600 /etc/sudoers.d/waagent
b) Update the file, changing "username ALL = (ALL) ALL" to "username ALL = (ALL) NOPASSWD: ALL": vi /etc/sudoers.d/waagent

c) Change permissions back on /etc/sudoers.d/waagent: chmod 440 /etc/sudoers.d/waagent

* Change "username" to the user you sudo with (the user that will install Ambari).

4) Disable iptables

a) service iptables stop
b) chkconfig iptables off

* If you need iptables enabled, please make the necessary port configuration changes found here.

5) Disable transparent huge pages

a) Run the following in your shell:

cat > /usr/local/sbin/ambari-thp-disable.sh <<-'EOF'
#!/usr/bin/env bash
# disable transparent huge pages: for Hadoop
thp_disable=true
if [ "${thp_disable}" = true ]; then
for path in redhat_transparent_hugepage transparent_hugepage; do
for file in enabled defrag; do
if test -f /sys/kernel/mm/${path}/${file}; then
echo never > /sys/kernel/mm/${path}/${file}
fi
done
done
fi
exit 0
EOF
b) chmod 755 /usr/local/sbin/ambari-thp-disable.sh

c) sh /usr/local/sbin/ambari-thp-disable.sh

* Perform a-c on all hosts to disable transparent huge pages.

6) If you don't have a private key generated (the host running ambari-server needs a private key to log in to all the hosts), perform this step:

a) ssh-keygen -t rsa -b 2048 -C "username@master1.jd32j3j3kjdppojdf3349dsfeow0.dx.internal.cloudapp.net"

b) ssh-copy-id -i /locationofgeneratedinaabove/id_rsa.pub username@master1

* Run b above against all hosts, so that you can ssh from the ambari-server host into every host as "username" without a password (see the loop sketched after these steps).

7) Install the Ambari repo on the server where you'll install Ambari (documentation): wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo

8) Install ambari-server: yum install ambari-server

9) Set up ambari-server: ambari-server setup

* You can use the defaults by pressing ENTER.

10) Start ambari-server: ambari-server start

* This could take a few minutes depending on the speed of your machine.

11) Open your browser and go to the IP address where ambari-server is running: http://ambariipaddress:8080

* Continue with your HDP 2.4.3 installation.
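A minimal sketch of step 6b as a loop, assuming the six short hostnames from step 2 resolve and the key pair sits in the default ~/.ssh location:

```bash
# Copy the public key to every host so the ambari-server host
# can ssh in as "username" without a password
for h in master1 master2 master3 worker1 worker2 worker3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub username@$h
done
```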
09-14-2016
05:25 AM
1 Kudo
Hi @Ryan Cicak Several processors that call an API (if not all) have a Connection Timeout property. You can set this property to wait for a fixed duration depending on your data source, network conditions, and so on (look at GetHttp, for instance). You can combine this property with a max-retry strategy: the processor waits until the timeout expires and tries again until it reaches a maximum retry count. If the max retry count is reached, the flowfile goes to a processor that handles this special case (alert an admin, store the data in an error directory, etc.). A sketch of the counting pattern follows.
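One common way to implement the retry count, sketched with assumed names (the attribute retry.count and the threshold of 3 are illustrative):

```
UpdateAttribute (wired to the failure relationship):
  retry.count = ${retry.count:replaceNull(0):plus(1)}

RouteOnAttribute (decides when to give up):
  over.max.retries = ${retry.count:ge(3)}
```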
09-14-2016
11:43 AM
2 Kudos
What you described is the correct process: the NAR needs to be copied to the lib directory on each node of the cluster, and then the nodes need to be restarted. Nothing in 1.0.0 changes this approach. A deployment sketch follows.
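A minimal deployment sketch, with the NAR name, node hostnames, and NiFi install path all assumed:

```bash
# Copy a custom NAR to every node's lib directory, then restart NiFi
# (my-processors.nar, node1..node3, and /opt/nifi are hypothetical)
for host in node1 node2 node3; do
  scp my-processors.nar admin@$host:/opt/nifi/lib/
  ssh admin@$host "/opt/nifi/bin/nifi.sh restart"
done
```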
08-08-2016
03:13 PM
1 Kudo
Hi @Mayank Pandey If you have existing tables (not in ORC format), I'd recommend creating the ORC tables first and then running:

insert into yourorctable
select * from yourexistingtable;

Is this how you are currently inserting data?
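For reference, a compact alternative with the same hypothetical table names: a CTAS statement creates and populates the ORC table in one step:

```sql
-- Create the ORC table and copy the data in a single statement
CREATE TABLE yourorctable STORED AS ORC
AS SELECT * FROM yourexistingtable;
```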
08-08-2016
07:32 PM
1 Kudo
Hi @john doe I recently ran PutKafka and GetKafka in NiFi (connecting to a local VM). I found that adding the FQDN and IP to /etc/hosts made this work for me. For example, if the FQDN is host1.local and the IP is 192.168.4.162, then adding

192.168.4.162 host1.local

to /etc/hosts made it work.
08-04-2016
03:50 PM
Hi @sbhat - this is certainly helpful, thank you for the reference!
08-03-2016
03:45 AM
Thank you! I believe a Java upgrade is not required; an OpenSSL upgrade alone should fix this.
07-15-2016
11:28 PM
8 Kudos
Teradata's JDBC connector contains two jar files (tdgssconfig.jar and terajdbc4.jar) that must both be on the classpath. NiFi database processors like ExecuteSQL or PutSQL use a connection pool such as DBCPConnectionPool, which defines your JDBC connection to a database like Teradata. Follow the steps below to integrate the Teradata JDBC connector into your DBCPConnectionPool:

1) Download the Teradata connectors (tdgssconfig.jar and terajdbc4.jar) - you can download the Teradata v1.4.1 connector at http://hortonworks.com/downloads/

2) Extract the jar files (tdgssconfig.jar and terajdbc4.jar) from hdp-connector-for-teradata-1.4.1.2.3.2.0-2950-distro.tar.gz and move them to NIFI_DIRECTORY/lib/

3) Restart NiFi

4) Under Controller > Controller Services, edit your existing DBCPConnectionPool (if the pool is active, disable it before editing)

5) Under the Configuration Controller Service > Properties, define the following:

Database Connection URL: your Teradata JDBC connection URL
Database Driver Class Name: com.teradata.jdbc.TeraDriver
Database Driver Jar Url: do not define anything - since you added the two jars to the NiFi classpath (nifi/lib), the driver jars will be picked up automatically. This field only accepts one jar and you need two, which is why we added them to the nifi/lib directory.
Database User: provide the database user
Password: provide the password for the database user

You're all set - you'll now be able to connect to Teradata from NiFi! A filled-in sketch follows.
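A filled-in sketch of the properties, with the hostname, database name, and user all hypothetical:

```
Database Connection URL: jdbc:teradata://td.example.com/DATABASE=sales,CHARSET=UTF8
Database Driver Class Name: com.teradata.jdbc.TeraDriver
Database Driver Jar Url: (leave empty - jars are already on the NiFi classpath)
Database User: etl_user
Password: ********
```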