Member since: 06-03-2016
Posts: 66
Kudos Received: 21
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2366 | 12-03-2016 08:51 AM |
| | 939 | 09-15-2016 06:39 AM |
| | 1274 | 09-12-2016 01:20 PM |
| | 1165 | 09-11-2016 07:04 AM |
| | 1100 | 09-09-2016 12:19 PM |
12-23-2019
09:55 PM
@MattWho, thank you for the details. I understand your point that the same file cannot be accessed across cluster nodes. However, I am running NiFi as a single node (without a cluster) and was thinking this should work. Yes, I do use "Update Attribute" with the file name convention below. This generates a separate flow file for every message, but I am trying to end up with one single file per node. ${filename:replace(${filename},"FileName_"):append(${now():format("yyyy-MM-dd-HH-mm-ss")})}.Json Thank you
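For reference, a minimal sketch of what that expression evaluates to at write time (the timestamp below is only an illustrative value for a message processed on 2019-12-23 at 21:55:00):

FileName_2019-12-23-21-55-00.Json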
09-02-2019
01:44 AM
What are the impacts on other ports if I change from TCP6 to TCP? And will my Ambari server work on TCP?
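A quick way to see which protocol the Ambari ports are currently bound on (a minimal sketch, assuming a Linux host with net-tools installed; 8080, 8440 and 8441 are the default Ambari server and agent ports):

netstat -tlnp | grep -E '8080|8440|8441'

Entries shown as tcp6 are IPv6 listeners; tcp entries are IPv4.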
09-11-2018
01:58 PM
@Shu I'm still having the same issue. I checked the firewalls and they are off, and I have downloaded the JDBC driver for SQL Server and put it into the C:\Program Files\Java\jre1.8.0_171\lib\ext directory. In PutDatabaseRecord:
Url: jdbc:sqlserver://hjcorpsql-04:1433;databaseName=Test1;user=MyorganizationName\pbiuser;password=Secure@99;
ClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver
Driver Location: E:/Software/sqljdbc_6.0/enu/jre8/sqljdbc42.jar
untitled5.png I'm running NiFi on a production system. The SQL Server is SQL Server Enterprise Edition 2016 on Windows Server 2012 R2. Please help me; I have been stuck here for the last week.
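As a first sanity check from the NiFi host (a sketch, assuming the Windows telnet client is installed; host and port are taken from the JDBC URL above), confirming the SQL Server port is reachable helps rule out network and firewall problems:

telnet hjcorpsql-04 1433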
06-07-2017
12:38 PM
@Mohan V A few observations about your above flow:

You are trying to pass an absolute path and filename to the "Directory" property of the PutFile processor. PutFile is designed to write files to the target directory using the filename associated with the FlowFile it receives, so what you are doing will not work. Instead, you should add an UpdateAttribute processor between your MergeContent processor and your PutFile processor to set the desired new filename on your merged files.

How do you plan on handling multiple merged FlowFiles, since they will all then end up with the same filename? I suggest making them unique by adding the FlowFile UUID to the filename; below is an example of doing this using UpdateAttribute (see the property sketch after this post).

Out of your MergeContent processor you are routing both the original relationship (all your un-merged FlowFiles) and the merged relationship to the PutFile processor. Why? Typically the original relationship is auto-terminated, or routed elsewhere if needed.

I also see from your screenshot that the PutFile processor is producing a "bulletin" (the red square in the upper right corner). Floating your cursor over the red square will pop up the bulletin, which should explain why PutFile is failing.

It appears as though you are auto-terminating the failure relationship on PutFile. This is a dangerous practice, as it could easily result in data loss. A more typical approach is to loop the failure relationship back on the PutFile processor to trigger a retry in the event of failure.

Thanks, Matt
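A minimal sketch of such an UpdateAttribute property (assuming the standard uuid attribute that NiFi sets on every FlowFile; the separator is arbitrary):

filename = ${filename}-${uuid}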
03-21-2017
05:43 PM
Check /var/log/hive/hivemetastore.out as well.
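For example (a sketch; the line count is arbitrary):

tail -n 200 /var/log/hive/hivemetastore.out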
12-16-2016
07:45 AM
2 Kudos
Hello @Mohan V PutKafka is designed to work with the Kafka 0.8 series. If you're using Kafka 0.9, please use the PublishKafka processor instead. Thanks, Koji
12-03-2016
08:51 AM
Thanks for the suggestion, jss, but it couldn't solve the issue completely. I moved those files into a temp directory and tried to start the server again, but now it gave another error:

ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.

When I checked the logs, I found that the current database version is not compatible with the server. Then I tried these steps:

wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.0/ambari.repo
yum install ambari-server -y
ambari-server setup -y
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.1/ambari.repo
yum upgrade ambari-server -y
ambari-server upgrade
ambari-server start

After I ran these commands the Ambari server did start, but then an amazing thing happened. I had removed Ambari completely and was trying to reinstall it, yet when I completed all the above steps and entered the Ambari UI, it was again pointing to the same host I had removed previously. I was shocked to see it there with heartbeat lost. Then I realised that the Ambari agent was not installed, so I installed and started it:

yum -y install ambari-agent
ambari-agent start

Then, when I tried to start the services, it didn't work. I checked at the command prompt whether those services still existed by entering zookeeper, but the command was not found, because the service is not installed on my host. So I started removing the services present on the host in a dead state, using this command:

curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://localhost:8080/api/v1/clusters/hostname/services/servicename

But it didn't work; I got this error message:

"message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."

Then I edited the Ambari server properties file and added this line:

vi /etc/ambari-server/conf/ambari.properties
api.csrfPrevention.enabled=false
ambari-server restart

When I retried after that, it worked. But when I tried to remove Hive, it didn't, because MySQL was running on my machine. This command did work:

curl -u admin:admin -X DELETE -H 'X-Requested-By:admin' http://localhost:8080/api/v1/clusters/mycluster/hosts/host/host_components/MYSQL_SERVER

Then, when I tried to add the services back starting with ZooKeeper, it again gave me an error like:

resource_management.core.exceptions.Fail: Applying Directory['/usr/hdp/current/zookeeper-client/conf'] failed, looped symbolic links found while resolving /usr/hdp/current/zookeeper-client/con

I checked the directories and found that these links were pointing back to the same directories. So I tried these commands to solve the issue:

rm /usr/hdp/current/zookeeper-client/conf
ln -s /etc/zookeeper/2.3.2.0-2950/0 /usr/hdp/current/zookeeper-client/conf

And it worked. At last I successfully reinstalled Ambari as well as Hadoop on my machine. Thank you.
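A quick way to confirm the symlink loop is gone after recreating the link (a minimal check; the paths are the ones used above):

ls -l /usr/hdp/current/zookeeper-client/conf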
12-01-2016
08:28 AM
Thanks for the reply, jss. I have already tried everything you suggested, but I'm still getting the same issue. When I start the DataNode through the Ambari UI, the following error occurs:

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. /etc/profile: line 45: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
-bash: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-.out
/usr/hdp/2.3.4.7-4//hadoop-hdfs/bin/hdfs.distro: line 30: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 187: /dev/null: Permission denied
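The repeated "/dev/null: Permission denied" messages suggest the permissions on /dev/null itself have been changed, a common cause being a redirect that accidentally replaced the device with a regular file. A minimal check-and-repair sketch, assuming root access on the DataNode host:

ls -l /dev/null            # should show a character device: crw-rw-rw- 1 root root 1, 3
chmod 666 /dev/null        # restore world-writable permissions
# if /dev/null was replaced by a regular file, recreate the device node:
# rm -f /dev/null && mknod -m 666 /dev/null c 1 3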
11-26-2016
09:35 PM
@cmcbugg, please see the details of the issue in this link: https://community.hortonworks.com/questions/68497/kafka-error-while-fetching-metadata-topicmetadata.html Essentially, this is a fresh HDP 2.4 instance, and I've just enabled the Ranger Kafka plugin.

ZooKeeper node permissions:

[kafka1@sandbox ~]$ ls -lrt /hadoop/zookeeper/
total 8
-rw-r--r-- 1 root root 1 2016-03-14 14:17 myid
drwxr-xr-x 2 zookeeper hadoop 4096 2016-11-26 19:44 version-2

kafka1 user permissions (on HDFS):

[kafka1@sandbox ~]$ hadoop fs -ls /user/
Found 11 items
drwxrwx--- - ambari-qa hdfs 0 2016-03-14 14:18 /user/ambari-qa
drwxr-xr-x - hcat hdfs 0 2016-03-14 14:23 /user/hcat
drwxr-xr-x - hive hdfs 0 2016-03-14 14:23 /user/hive
drwxr-xr-x - kafka1 hdfs 0 2016-11-26 20:31 /user/kafka1
drwxr-xr-x - kafka2 hdfs 0 2016-11-26 20:32 /user/kafka2

Any ideas on what needs to be changed to enable this?
09-20-2016
07:43 AM
2 Kudos
It's working now. I had to change my ambari.properties file: I added

db.mysql.jdbc.name=/var/lib/ambari-server/resources/mysql-connector-java-5.1.28.jar

and modified these lines:

server.jdbc.rca.url=jdbc:mysql://localhost:3306/ambari
server.jdbc.url=jdbc:mysql://localhost:3306/ambari
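For reference, the same connector can also be registered through the Ambari CLI rather than by editing the properties file by hand (a sketch, using the same connector jar path as above):

ambari-server setup --jdbc-db=mysql --jdbc-driver=/var/lib/ambari-server/resources/mysql-connector-java-5.1.28.jar
ambari-server restart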