Member since
02-16-2016
89
Posts
24
Kudos Received
10
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 8314 | 05-14-2018 01:54 PM |
| | 913 | 05-08-2018 05:07 PM |
| | 465 | 05-08-2018 04:46 PM |
| | 1852 | 02-13-2018 08:53 PM |
| | 1896 | 11-09-2017 04:24 PM |
02-07-2020
03:08 AM
Hello, by following the link shared by Umair, I was able to install on my home laptop (Mac), because it installs two images (sandbox-hdp and sandbox-proxy). But on my office network only the sandbox-hdp-standalone image is available (we have no access to the globally available Docker images and would need to ask an admin to import them into our internal repo). Is it possible to make it work with the sandbox-hdp-standalone Docker image? With that image I am getting the same error mentioned in the original post (the container exits without an error). I also tried running the container with the debugging option, but no error shows up there either.
01-15-2019
11:24 AM
First of all, double-check all configuration (including the password), just to make sure we are moving in the right direction. Secondly, confirm that you do not need TLS enabled. If these don't help, the following might help with troubleshooting: 1. Become the nifi user on the node where NiFi is running. 2. Send the message via Python. 3. Share the Python command here. Note: please explicitly specify everything that you configure in NiFi when executing the Python command (even settings that are not strictly needed because they have good defaults).
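A minimal sketch of step 2, sending a test message over a plain TCP socket from Python. The hostname and port below are placeholders; point them at whatever listener you configured in NiFi (e.g. a ListenTCP processor).

```python
import socket

def send_test_message(host, port, message):
    """Send a newline-terminated message over a plain TCP socket.

    host/port are placeholders; use the address and port of the
    NiFi listener you configured.
    """
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(message.encode("utf-8") + b"\n")

# Example invocation (hypothetical host/port):
# send_test_message("nifi-node.example.com", 9999, "hello from python")
```

Running this from the NiFi node as the nifi user rules out network ACLs and file permissions as variables, so any remaining failure points at the configuration itself.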
05-28-2018
04:42 PM
For Hive: https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference For HBase: https://hbase.apache.org/book.html#_rest
05-28-2018
05:10 PM
Have a look at this if you need to bypass Avro for Hive DDL: https://github.com/tspannhw/nifi-convertjsontoddl-processor If you need to convert JSON to ORC (for Hive), Avro will be required. You will need to write and manage Avro schemas (using Schema Registry for that is recommended). Alternatively, you can use InferAvroSchema to detect the incoming schema from JSON, but it may not be 100% correct all the time.
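To illustrate what writing a schema by hand looks like, here is a minimal, hypothetical Avro record schema for an incoming JSON event, built as a Python dict and serialized to the JSON text you would register (field names and types are illustrative only):

```python
import json

# Hypothetical Avro record schema for an incoming JSON event.
# All names and types here are illustrative; adjust to your data.
EVENT_SCHEMA = {
    "type": "record",
    "name": "Event",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "source", "type": "string"},
        # Nullable field: schema inference can miss optionality like
        # this, which is one reason hand-managed schemas are safer.
        {"name": "payload", "type": ["null", "string"], "default": None},
    ],
}

# This JSON text is what you would register in Schema Registry
# (or paste into a schema-aware controller service).
schema_json = json.dumps(EVENT_SCHEMA, indent=2)
```

The nullable union with an explicit default is exactly the kind of detail inference tends to get wrong when the sample JSON happens to always contain the field.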
06-12-2018
11:35 AM
@Shailesh Bhaskar If the answer addressed your question, please take a moment to log in and click the Accept button below to accept it. That would be a great help to community users looking for a quick solution to this kind of issue, and it closes the thread.
05-24-2018
01:44 AM
@Umair Will do tomorrow morning. Thanks. In the meantime, yesterday we downloaded the 17 May 2018 HDP 2.6.5 / Ambari 2.6.2 release. Today I did a hopefully-complete uninstall of HDP 2.6.4 / Ambari 2.6.1.5 from our CentOS 7.4 AM, NN & SNN, and 6 datanodes.
Justin
05-14-2018
03:49 PM
Hi, thanks for your response. Neither the /usr/hdp nor the /etc/hadoop directory exists. Also, I am trying to start HiveServer on the VM box and the command is not running. The attached screenshot is of the hiveserver2 file, and the paths mentioned in the file do not exist. Port 9083 is not listening on the machine. I need to connect to the Hive metastore from a third-party application.
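A quick way to verify the port situation described above is a TCP probe. This is a generic sketch (the hostname is a placeholder), checking whether anything is accepting connections on the metastore's default port 9083:

```python
import socket

def is_port_listening(host, port, timeout=3.0):
    """Return True if a TCP listener accepts connections on host:port.

    Useful for checking whether the Hive metastore (default port 9083)
    is actually up before pointing a third-party tool at it.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host):
# print(is_port_listening("metastore-host.example.com", 9083))
```

If this returns False from the machine running the third-party application, the problem is the metastore process (or a firewall), not the client configuration.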
05-10-2018
06:21 PM
I am creating a DAG in JSON format, ingesting data into Hive and then into Impala. I can't use a SQL statement or the --query option because my framework doesn't support it. I am using Airflow to run these jobs. PS: I am able to run this using a SELECT statement in the sqoop command when running it manually, just not in Airflow. As said, my Airflow framework doesn't support --query, so I am looking for alternatives such as using '' or ``.
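One common source of trouble here is quoting: the query has to survive both JSON escaping and shell word-splitting. A small sketch of embedding a Sqoop free-form query in a JSON task definition (the task shape, paths, and query are all hypothetical, not a specific Airflow framework's format):

```python
import json
import shlex

# Hypothetical sketch: embedding a Sqoop free-form query in a JSON
# task definition. Query, paths, and field names are placeholders.
query = "SELECT id, name FROM src WHERE $CONDITIONS"

task = {
    "task_id": "sqoop_import",
    # shlex.quote wraps the query in single quotes so the shell passes
    # it to sqoop as a single argument (and $CONDITIONS is not expanded
    # by the shell); json.dumps then escapes the whole string for JSON.
    "bash_command": "sqoop import --query " + shlex.quote(query)
                    + " --target-dir /tmp/src --split-by id",
}

dag_json = json.dumps(task, indent=2)
```

Single quotes around the query are important for Sqoop: the literal `$CONDITIONS` token must reach Sqoop unexpanded.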
05-10-2018
07:05 AM
I tried and verified this in my 10-node cluster. It worked perfectly.
05-08-2018
05:14 PM
Very comprehensive answer here: https://community.hortonworks.com/questions/73302/hdinsight-vs-hdp-service-on-azure-vs-hdp-on-azure.html
02-14-2018
02:07 AM
I found we can configure the ZooKeeper server address for Spark worker/slave nodes by setting spark.hadoop.hive.zookeeper.quorum under `Custom spark2-defaults`.
02-13-2018
09:11 PM
Usually, by default, the ticket expires after 24 hours and the cache (renewable lifetime) expires after 7 days; the exact values depend on your directory services policies. Within those 7 days you can run kinit -R for users; klist will show the ticket and cache expiry times. Or you can use keytabs to automate ticket renewal. You don't ever have to kinit for Hadoop services; their ticket renewal is managed automatically.
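The keytab-based approach mentioned above can be sketched as follows: build a non-interactive kinit command and put it on a schedule shorter than the ticket lifetime. The keytab path and principal below are placeholders.

```python
import shlex

def kinit_command(keytab_path, principal):
    """Build the kinit command line used to (re)obtain a ticket
    non-interactively from a keytab, e.g. from cron.

    keytab_path and principal are placeholders for your environment.
    """
    return "kinit -kt {} {}".format(shlex.quote(keytab_path),
                                    shlex.quote(principal))

cmd = kinit_command("/etc/security/keytabs/myuser.keytab",
                    "myuser@EXAMPLE.COM")
# Schedule `cmd` (e.g. via cron) more often than the 24h ticket
# lifetime, such as every 12 hours, and renewal never needs kinit -R.
```

Unlike kinit -R, which only works within the 7-day renewable window, re-acquiring from a keytab works indefinitely.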
11-07-2017
08:35 PM
This is because I am batching the messages for better throughput; the Batch Messaging Delimiter is \n (by default). Is there any way I can get rid of this? Any help or ideas without using the ReplaceText processor?
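One alternative to rewriting content in the flow is to undo the batching on the consumer side. A minimal sketch, assuming the receiver gets the whole batched payload and the delimiter is the default newline:

```python
def split_batched_payload(payload: bytes, delimiter: bytes = b"\n"):
    """Split a batched message payload back into individual messages.

    Consumer-side sketch: if the producer batches messages with a
    newline delimiter, the receiver can undo the batching like this
    instead of stripping delimiters inside the flow.
    """
    # Filter out empty chunks caused by a trailing delimiter.
    return [msg for msg in payload.split(delimiter) if msg]

messages = split_batched_payload(b"msg1\nmsg2\nmsg3\n")
```

This keeps the throughput benefit of batching while giving the consumer clean per-message records.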
11-13-2017
07:03 AM
@Umair Khan
Thanks for the template. Another question: suppose I have a threshold interval of 10 min but would like to monitor only twice (10 min + 10 min), not every 10 min. How can that be done?
11-07-2017
07:25 PM
1 Kudo
Try stopping NiFi, purging everything within your provenance repository, then starting NiFi. Check the nifi-app.log file for any provenance-related events. Check whether the user running the NiFi process has read/write access to the configured directory. I had a similar issue today, except my provenance implementation was set to Volatile, which I changed to WriteAhead. Also note that the default implementation is PersistentProvenanceRepository, and if you have been switching implementations back and forth you will need to delete the provenance data (WriteAhead can read PersistentProvenanceRepository data, but not the other way around).
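For reference, the relevant settings live in nifi.properties; a sketch of the WriteAhead configuration described above (the directory value shown is the usual default, adjust to your install):

```properties
# nifi.properties: provenance repository implementation.
# Default is PersistentProvenanceRepository; WriteAhead is the newer option.
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
# Directory the user running NiFi must be able to read and write.
nifi.provenance.repository.directory.default=./provenance_repository
```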
05-02-2018
01:49 PM
This issue is not limited to IE. A similar issue with cached credentials/session may occur in Chrome. Restarting Chrome resolves the issue as well. To restart Chrome without losing current tabs, type chrome://restart in a new tab.
03-17-2017
01:05 PM
@Constantin Stanca Thanks for the follow-up. This was very helpful, in addition to Umair's answer. I changed the query to pick up the max rowid for the new records. Again, a very helpful tip!
02-27-2017
03:46 PM
Here is the solution: https://community.hortonworks.com/articles/82106/ambari-manually-disable-kerberos-authentication-fo.html
04-05-2018
12:27 AM
1 Kudo
Pierre's solution is correct. If you installed Atlas after Ranger UserSync was configured to use LDAP, new local users like atlas will not get synced into Ranger. This user is needed to set up the HBase tables. To fix it: revert UserSync to UNIX, restart only Ranger UserSync, then switch back to the LDAP UserSync config. In Ranger, add the atlas user to the HBase "all" policy. Restart Atlas.
10-17-2016
10:32 PM
The user does have permission; when I run klist before and after calling my script, I find a valid ticket, which means the cron job was able to read the keytab file. I used the link to be able to call multiple commands on the same cron job line. It still does not explain why I am getting this error, I'm afraid 😞
03-03-2016
06:05 PM
1 Kudo
@Umair Khan You are correct! It is a problem with the Hive view. I created a new Hive view and am now able to see my databases again and run queries. Thanks so much for your help!
03-03-2016
07:09 AM
1 Kudo
@Alan Gates This is continued from the previous post: I have made the required changes in hive-site.xml on the datanode, but when I restart the Hive service from Ambari the changes are not reflected in hive-site.xml; it reverts to the previous working configuration.