Member since
06-03-2016
66
Posts
21
Kudos Received
7
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3295 | 12-03-2016 08:51 AM |
| | 1762 | 09-15-2016 06:39 AM |
| | 1972 | 09-12-2016 01:20 PM |
| | 2278 | 09-11-2016 07:04 AM |
| | 1888 | 09-09-2016 12:19 PM |
12-23-2019
09:55 PM
@MattWho, thank you for the details. I understand your point that the same file cannot be accessed across cluster nodes, but I am running NiFi as a single node (without a cluster), so I expected this to work. Yes, I do use UpdateAttribute with the file-name convention below. This generates a separate flow file for every message; I am trying to get one single file per node instead. ${filename:replace($(unknown),"FileName_"):append(${now():format("yyyy-MM-dd-HH-mm-ss")})}.Json Thank you.
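The per-second granularity of that expression is easy to see outside NiFi. A shell analogue of the file name it builds (prefix and format string taken from the post, everything else hypothetical):

```shell
# Shell analogue of the NiFi Expression Language filename above: every
# flow file processed in a distinct second gets a distinct name, which is
# why a separate output file appears per message
fname="FileName_$(date +%Y-%m-%d-%H-%M-%S).json"
echo "$fname"
```

Merging into one file per node is usually done upstream of the naming step (for example with NiFi's MergeContent processor) rather than by the name expression itself.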
09-02-2019
01:44 AM
What would be the impact on other ports if I change from TCP6 to TCP? And will my Ambari server still work over TCP?
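Not an answer to the port question itself, but if the goal is simply to get the Ambari JVM listening on IPv4, the JVM can be told to prefer the IPv4 stack without disabling IPv6 system-wide. A sketch, assuming the standard Ambari layout:

```shell
# In /var/lib/ambari-server/ambari-env.sh, append the IPv4-preference flag
# to the Ambari JVM options, then restart the server to pick it up
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Djava.net.preferIPv4Stack=true"
```

After editing, `ambari-server restart` applies the flag; other services on the host are unaffected because nothing is changed system-wide.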
03-21-2017
05:43 PM
Check /var/log/hive/hivemetastore.out as well.
12-16-2016
07:45 AM
2 Kudos
Hello @Mohan V, PutKafka is designed to work with the Kafka 0.8 series. If you're using Kafka 0.9, please use the PublishKafka processor instead. Thanks, Koji
12-03-2016
08:51 AM
Thanks for the suggestion, jss, but it didn't solve the issue completely. I moved those files into a temp directory and tried to start the server again, but then it gave another error:

ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.

When I checked the logs, I found that the current database version is not compatible with the server, so I tried these steps:

wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.0/ambari.repo
yum install ambari-server -y
ambari-server setup -y
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.1/ambari.repo
yum upgrade ambari-server -y
ambari-server upgrade
ambari-server start

After running these commands the Ambari server did start, but then something surprising happened. I had removed Ambari completely and was trying to reinstall it, yet when I opened the Ambari UI it was again pointing to the same host I had removed previously, showing "heartbeat lost". I then realised the Ambari agent was not installed, so I installed and started it:

yum -y install ambari-agent
ambari-agent start

When I then tried to start the services, it didn't work. I checked at the command prompt whether those services still existed, e.g. by running zookeeper, but the command was not found because the service is not installed on my host. So I started removing the dead services from the host with:

curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://localhost:8080/api/v1/clusters/hostname/services/servicename

That didn't work either; I got this error message:

"message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."

So I edited the Ambari properties file and disabled CSRF prevention:

vi /etc/ambari-server/conf/ambari.properties
api.csrfPrevention.enabled=false
ambari-server restart

When I retried, it worked. But when I tried to remove Hive it failed, because MySQL was running on my machine. This command did the trick:

curl -u admin:admin -X DELETE -H 'X-Requested-By:admin' http://localhost:8080/api/v1/clusters/mycluster/hosts/host/host_components/MYSQL_SERVER

Then, when I tried to add the services back, starting with ZooKeeper, I got an error like:

resource_management.core.exceptions.Fail: Applying Directory['/usr/hdp/current/zookeeper-client/conf'] failed, looped symbolic links found while resolving /usr/hdp/current/zookeeper-client/con

I checked the directories and found that these links were pointing back to the same directories, so I ran these commands to fix it:

rm /usr/hdp/current/zookeeper-client/conf
ln -s /etc/zookeeper/2.3.2.0-2950/0 /usr/hdp/current/zookeeper-client/conf

That worked. In the end I successfully reinstalled Ambari as well as Hadoop on my machine. Thank you.
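The "looped symbolic links" failure above is easy to reproduce and detect. A minimal sketch (scratch directory, hypothetical link names):

```shell
# Two symlinks pointing at each other form the kind of loop the
# resource_management error complains about; readlink -f refuses to
# resolve such a chain, which makes it a quick detector
cd "$(mktemp -d)"
ln -s a b
ln -s b a
readlink -f a || echo "loop detected"   # prints "loop detected"
```

The fix described in the post (removing the conf link and re-pointing it at the real /etc/zookeeper/... directory) is exactly the act of breaking such a cycle.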
12-01-2016
08:28 AM
Thanks for the reply, jss. I have already tried everything you suggested, but I am still getting the same issue. When I start the DataNode through the Ambari UI, the following error occurs: File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. /etc/profile: line 45: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
-bash: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-.out
/usr/hdp/2.3.4.7-4//hadoop-hdfs/bin/hdfs.distro: line 30: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 187: /dev/null: Permission denied
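Every "Permission denied" line in the trace above names /dev/null itself, which points at a broken device node rather than at Hadoop. A quick check (the repair shown in the comment needs root and assumes Linux's standard 1,3 device numbers for the null device):

```shell
# /dev/null should be a character special file; it can get clobbered into
# a regular root-owned file by a stray redirection run as root, after
# which every "> /dev/null" by other users fails exactly as shown above
stat -c '%F %a' /dev/null
# if it is not a character device, as root:
#   rm -f /dev/null && mknod -m 666 /dev/null c 1 3
```

Once the node is a mode-666 character device again, restarting the DataNode through Ambari should no longer hit the /etc/profile and hadoop-env.sh failures.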
11-26-2016
09:35 PM
@cmcbugg, please see the details of the issue at this link: https://community.hortonworks.com/questions/68497/kafka-error-while-fetching-metadata-topicmetadata.html Essentially, this is a fresh HDP 2.4 instance, and I've just enabled the Ranger-Kafka plugin.

ZooKeeper node permissions:

[kafka1@sandbox ~]$ ls -lrt /hadoop/zookeeper/
total 8
-rw-r--r-- 1 root root 1 2016-03-14 14:17 myid
drwxr-xr-x 2 zookeeper hadoop 4096 2016-11-26 19:44 version-2

kafka1 user permissions (on HDFS):

[kafka1@sandbox ~]$ hadoop fs -ls /user/
Found 11 items
drwxrwx--- - ambari-qa hdfs 0 2016-03-14 14:18 /user/ambari-qa
drwxr-xr-x - hcat hdfs 0 2016-03-14 14:23 /user/hcat
drwxr-xr-x - hive hdfs 0 2016-03-14 14:23 /user/hive
drwxr-xr-x - kafka1 hdfs 0 2016-11-26 20:31 /user/kafka1
drwxr-xr-x - kafka2 hdfs 0 2016-11-26 20:32 /user/kafka2

Any ideas on what needs to be changed to enable this?
09-20-2016
07:43 AM
2 Kudos
It's working now. I had to change my ambari.properties file: I added

db.mysql.jdbc.name=/var/lib/ambari-server/resources/mysql-connector-java-5.1.28.jar

and modified these lines:

server.jdbc.rca.url=jdbc:mysql://localhost:3306/ambari
server.jdbc.url=jdbc:mysql://localhost:3306/ambari
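For later readers: Ambari can register the MySQL driver for you instead of hand-editing ambari.properties. A sketch using the documented setup flags (driver path as in the post):

```shell
# Register the MySQL JDBC driver with Ambari; ambari-server setup rewrites
# the same properties shown above
ambari-server setup --jdbc-db=mysql \
  --jdbc-driver=/var/lib/ambari-server/resources/mysql-connector-java-5.1.28.jar
```

This only records the driver location; the server.jdbc.*.url values still need to point at the right database host and name.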
09-12-2016
01:38 PM
@gkeys, please take a look at this and suggest where I am going wrong: https://community.hortonworks.com/questions/56017/pig-to-elasesticsearch-stringindexoutofboundsexcep.html
09-11-2016
09:30 PM
@Mohan V though there are efforts to make it work, there is no supported way to do it directly with Kafka and Pig. You can leverage something like Apache NiFi to read from Kafka, dump the messages to HDFS, and then consume them with Pig. Since Kafka produces messages continuously while a Pig job has a start and an end, it really isn't a good fit. All that said, here's an attempt to make it work: http://mail-archives.apache.org/mod_mbox/pig-user/201308.mbox/%3C-3358174115189989131@unknownmsgid%3E
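The core mismatch described above is that Kafka is unbounded while a Pig job needs a finite input, so something must cut the stream into files first. NiFi does this; a crude shell analogue (topic, paths, and host all hypothetical; consumer flags vary by Kafka version) would be `kafka-console-consumer.sh --zookeeper localhost:2181 --topic events --max-messages 1000 > batch.txt` followed by `hdfs dfs -put batch.txt /data/events/`. The bounding step itself, simulated with a plain pipe so it can be run anywhere:

```shell
# A continuous stream (here a printf) is cut into a bounded file; Pig can
# then load the file even though the stream itself never ends
printf 'msg1\nmsg2\nmsg3\nmsg4\nmsg5\n' | head -n 3 > /tmp/batch.txt
wc -l < /tmp/batch.txt   # prints 3
```

Each such batch file becomes one finite Pig input, which is exactly the role NiFi's Kafka-to-HDFS flow plays in the suggestion above.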