Member since
04-27-2016
218
Posts
133
Kudos Received
25
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2646 | 08-31-2017 03:34 PM |
| | 5528 | 02-08-2017 03:17 AM |
| | 2190 | 01-24-2017 03:37 AM |
| | 8554 | 01-19-2017 03:57 AM |
| | 4491 | 01-17-2017 09:51 PM |
12-21-2016
01:16 PM
1 Kudo
From the Solr admin UI, please confirm that the node is associated with all shards. Try the following sequence:
- Stop all Solr instances
- Stop all Zookeeper instances
- Start all Zookeeper instances
- Start Solr instances one at a time
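The restart sequence above can be sketched as a shell runbook. The `systemctl` service names and the Solr port (8983, the default) are assumptions; adjust them to how Solr and ZooKeeper are actually managed in your environment, and run each step on the relevant hosts.

```shell
# 1. Stop all Solr instances (run on every Solr node)
sudo systemctl stop solr

# 2. Stop all ZooKeeper instances (run on every ZooKeeper node)
sudo systemctl stop zookeeper

# 3. Start all ZooKeeper instances
sudo systemctl start zookeeper

# 4. Start Solr instances one at a time; on each node, wait until
#    Solr answers before moving on to the next node
sudo systemctl start solr
until curl -s "http://localhost:8983/solr/admin/info/system" > /dev/null; do
  sleep 5
done
```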
12-20-2016
10:51 PM
It seems to be a network connectivity issue: "Failed connect to ganne-test0.field.hortonworks.com:50070; Connection refused". Please make sure that in your Access & Security section (for OpenStack) you add an ingress rule for port 50070.
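As a sketch, the same ingress rule can be added with the OpenStack CLI instead of the dashboard. The security group name `default` is an assumption; substitute the group attached to your instance, and narrow the CIDR to your client network if possible.

```shell
# Allow inbound TCP on 50070 (the NameNode web UI) into the
# security group attached to the instance. "default" is assumed.
openstack security group rule create default \
  --protocol tcp \
  --dst-port 50070 \
  --ingress \
  --remote-ip 0.0.0.0/0
```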
12-20-2016
10:25 PM
1 Kudo
Looking at the exception: Caused by: java.nio.file.NoSuchFileException: /opt/lucidworks-hdpsearch/solr/server/solr/NifiCollection_shard2_replica1/data/index/segments_cm at sun.nio.fs.UnixException.translateToIOException It seems Solr is looking for an index file which is empty or not present. I would stop Solr, delete the index folder (take a backup just in case), and restart Solr. It will create the index folder again. See if that solves your problem.
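The stop/backup/restart step can be sketched as follows. The replica path comes from the exception above; the `systemctl` service name is an assumption, so adjust it to how Solr is managed on your nodes.

```shell
# Replica path taken from the NoSuchFileException above
REPLICA=/opt/lucidworks-hdpsearch/solr/server/solr/NifiCollection_shard2_replica1

sudo systemctl stop solr

# Move the broken index aside instead of deleting it outright,
# so it can be restored if needed
sudo mv "$REPLICA/data/index" "$REPLICA/data/index.bak.$(date +%Y%m%d%H%M%S)"

# Solr recreates an empty index directory on startup
sudo systemctl start solr
```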
12-19-2016
11:36 PM
It's working for me. I did the following: 1. Opened port 2181. 2. Stopped the local firewall (iptables) on all nodes.
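A rough sketch of those two steps on an iptables-based node (adjust for firewalld if that is what your distribution uses):

```shell
# Either open port 2181 (ZooKeeper) explicitly on every node...
sudo iptables -I INPUT -p tcp --dport 2181 -j ACCEPT

# ...or stop the local firewall entirely (only acceptable on a
# test cluster; prefer the explicit rule above in production)
sudo service iptables stop
```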
12-19-2016
11:20 PM
1 Kudo
@Sunile Manjee I am able to connect in one shot only after using the same host in the URL from which I am running Beeline. E.g. if I am running Beeline from host z, I was able to connect using beeline -u "jdbc:hive2://z:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" I am thinking I might have to open port 2181 explicitly for other hosts to access it.
12-19-2016
08:28 PM
I was able to resolve this by using XMLHttpRequest event handlers in the client-side application.
12-19-2016
02:54 PM
Can you please share your: 1. S3 bucket policy 2. CORS configuration 3. NiFi processor config. You can check https://community.hortonworks.com/articles/49467/integrating-apache-nifi-with-aws-s3-and-sqs.html for reference.
12-16-2016
10:25 PM
It seems the nifi user doesn't have permission to write into the /tmp directory. You have two options: 1. Change the permissions on the /tmp folder to allow everyone to write into it. 2. If you have configured Ranger, make sure that in the resource-based policy for HDFS the nifi user is allowed access to all paths, or to the specific paths you want to write to.
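Option 1 can be sketched as a single command, run as the HDFS superuser (assuming a standard HDP layout where `hdfs` is the superuser):

```shell
# Make /tmp in HDFS world-writable; the sticky bit (1777) keeps
# users from deleting each other's files, matching the usual
# semantics of a shared /tmp
sudo -u hdfs hdfs dfs -chmod 1777 /tmp
```

Option 2 has no CLI equivalent here; the policy is edited in the Ranger admin UI.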
12-16-2016
10:06 PM
1 Kudo
I was running into NIFI-2828. I ended up using the Hive NAR provided by @Matt Burgess, as mentioned here: https://community.hortonworks.com/questions/59681/puthivestreaming-nifi-processor-various-errors.html. It should already be fixed in the latest NiFi version.
12-16-2016
09:48 PM
1 Kudo
Make sure you set the correct Spark home, pointing to the spark2-client: export SPARK_HOME=/usr/hdp/current/spark2-client You are likely currently pointing it to export SPARK_HOME=/usr/hdp/current/spark-client