Member since
06-20-2018
50
Posts
9
Kudos Received
8
Solutions
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1064 | 10-02-2018 07:54 AM |
| | 743 | 09-24-2018 08:54 AM |
| | 977 | 08-20-2018 06:52 AM |
| | 1058 | 08-20-2018 05:39 AM |
| | 1180 | 08-14-2018 12:21 PM |
10-03-2018
01:20 PM
Yes, you can run ephemeral Spark jobs via NiFi. Please refer to https://community.hortonworks.com/repos/64179/launching-spark-jobs-from-nifi.html and https://github.com/diegobaez/PUBLIC/tree/master/NiFi-SnapSpark Note: Please upvote or mark this answer as accepted if you found it useful.
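For a rough idea of what the linked approach boils down to, a NiFi ExecuteProcess or ExecuteStreamCommand processor can simply shell out to spark-submit; the master, deploy mode, class name, and jar path below are placeholders, not taken from the linked repo:

```bash
# Hypothetical command a NiFi ExecuteProcess/ExecuteStreamCommand processor could run.
# Master, deploy mode, class name, and jar path are placeholders; adjust to your cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.EphemeralJob \
  /tmp/ephemeral-job.jar
```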
10-03-2018
01:14 PM
You cannot create the jar from Hive View; you will need to create it using a Java IDE such as Eclipse, or through command-line tools. I have shared a video that explains how to create the UDF jar file; you can refer to that tutorial: https://www.youtube.com/watch?v=BDbMPfNw_Tc
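For the command-line route, a minimal sketch of compiling and packaging a UDF looks like the following; MyUpperUDF.java, the classpath entries, and the jar name are illustrative assumptions, not from the video:

```bash
# Compile a hypothetical UDF source file and package it into a jar.
# The classpath assumes an HDP-style layout; adjust paths to your installation.
mkdir -p build
javac -cp "$(hadoop classpath):/usr/hdp/current/hive-client/lib/*" \
      -d build MyUpperUDF.java
jar -cf my-udf.jar -C build .
```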
10-02-2018
07:54 AM
@Sami Ahmad You can create the Hive UDF jar file using this video tutorial https://www.youtube.com/watch?v=BDbMPfNw_Tc and then follow the tutorial https://community.hortonworks.com/articles/138964/how-to-use-udfs-to-run-hive-queries-in-ambari-hive.html to run the jar file from the Hive View. Note: Please mark this answer as accepted and upvote it if you found it useful.
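If you also want to try it outside the Hive View, registering and calling the UDF from the Hive CLI would look roughly like this; the jar path, class name, and table are placeholders:

```bash
# Hypothetical example of registering and calling the UDF from the Hive CLI.
hive -e "ADD JAR /tmp/my-udf.jar;
CREATE TEMPORARY FUNCTION my_upper AS 'com.example.MyUpperUDF';
SELECT my_upper(name) FROM sample_table LIMIT 5;"
```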
09-24-2018
08:54 AM
1 Kudo
Technical preview components or features are generally not ready for production deployment. You should explore these features or components in a non-production cluster. Note: Please upvote or mark this answer as accepted if you found it useful.
09-20-2018
05:48 AM
Please refer to the answer below from https://community.hortonworks.com/questions/140030/druid-installation.html First, yes, you can co-locate all those services together. Second, in order to get high availability you need at least two different physical nodes running all the services; that gives you HA with a replication of 2. Alternatively, you can choose another co-location combination where each service runs on at least two different nodes. Ideally, though, you want something like this:
Node1: Broker
Node2: Broker
Node3: Router/Overlord/Coordinator/Superset
Node4: Router/Overlord/Coordinator/Superset
The reason you want the Broker alone is that it usually needs far more memory than all the other services together, so you may want dedicated hardware for it. But to keep it simple, you can start by co-locating all the services across two nodes and make sure the Broker is not running with another service that also needs a lot of RAM.
09-13-2018
11:08 AM
It is generally set to 3
09-12-2018
11:38 AM
Please manually install the missing libtirpc-devel package on the nodes first, and then try to proceed with the installation:
yum install libtirpc-devel
Let me know how this goes.
09-06-2018
08:24 AM
Please make sure that you are using the server hostname to access the Storm UI. You will also need a valid Kerberos ticket on the node you are connecting from; after that you will be able to access the Storm UI fine. Note: Please mark this answer as accepted if you found it useful.
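For example, from the client node (the principal and hostname are placeholders, and 8744 is only the default Storm UI port):

```bash
# Obtain a Kerberos ticket (placeholder principal) and test the Storm UI with SPNEGO auth.
kinit myuser@EXAMPLE.COM
curl --negotiate -u : -I "http://storm-ui-host.example.com:8744/index.html"
```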
09-03-2018
10:33 AM
Were there any JDK changes made on the server, such as a Java upgrade or Java security changes? Can you please paste the value of the jdk.tls.disabledAlgorithms property from the file $JRE_HOME/lib/security/java.security? I believe changes in that parameter may have caused this issue.
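For example, you can pull the value out with a quick grep, assuming $JRE_HOME points at the JRE actually in use:

```bash
# Show the disabled TLS algorithms configured for this JRE
# (the property usually spans a few continuation lines, hence -A2).
grep -A2 '^jdk.tls.disabledAlgorithms' "$JRE_HOME/lib/security/java.security"
```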
08-31-2018
12:34 PM
Have you tried connecting to port 6667 from outside? You can configure additional listeners according to your needs and use those ports to connect to Kafka from outside.
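As a quick check from the external host, plus a sketch of the broker-side settings that usually matter here (the hostnames and the EXTERNAL listener name are illustrative, not taken from your configuration):

```bash
# Check whether the broker port is reachable from the external host.
nc -vz kafka-broker.example.com 6667

# Broker-side server.properties settings that typically need attention for external access,
# shown only as a sketch:
#   listeners=PLAINTEXT://0.0.0.0:6667,EXTERNAL://0.0.0.0:9092
#   advertised.listeners=PLAINTEXT://broker.internal:6667,EXTERNAL://broker.example.com:9092
#   listener.security.protocol.map=PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
```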
08-29-2018
06:17 PM
No, it will not require those details. I have upgraded a Kerberized HDP cluster from HDP-2.6.4.0 to HDP-2.6.5.0 using an express upgrade, and there was no need for the admin principal password at any point. The upgrade went smoothly. Note: Please upvote and accept this answer if you found it useful.
08-26-2018
02:30 PM
The Ranger plugin jar seems to be missing from the server. You can copy it over and then start the NameNode.
08-23-2018
08:34 AM
Refer to https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/bhvr_changes_upgrade_hdp3_amb27.html The Tez View and Pig View have now been removed from Ambari. Note: Please upvote and accept this answer if this solves your issue.
08-23-2018
06:06 AM
1 Kudo
8 characters, with at least one letter and one digit. Note: Please upvote or accept the answer if you found this useful.
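If you want to sanity-check a candidate password against that rule, here is a rough shell sketch; it interprets the rule as a minimum of 8 characters with at least one letter and one digit, which is an assumption on my part:

```bash
# Check a candidate password against the stated policy (interpretation assumed).
pw='Examp1epw'
if [[ ${#pw} -ge 8 && "$pw" =~ [A-Za-z] && "$pw" =~ [0-9] ]]; then
  echo "password meets the policy"
else
  echo "password does not meet the policy"
fi
```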
08-22-2018
05:21 AM
Are you able to ping the domain newsapi.org from the NiFi host? Please confirm that first. Your use of InvokeHTTP seems correct, but somehow that domain name is not resolving.
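For example, run these from the NiFi host:

```bash
# Confirm DNS resolution and basic reachability from the NiFi host.
nslookup newsapi.org
ping -c 3 newsapi.org
```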
08-21-2018
07:59 AM
Authorization params impacted: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_cloud-data-access/content/wasb-authorization.html
Authentication params impacted: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_cloud-data-access/content/authentication-wasb.html
Auditing params impacted: set these values in ranger-hdfs-audit.xml for Audit to HDFS on WASB:
<property>
  <name>xasecure.audit.destination.hdfs</name>
  <value>enabled</value>
</property>
<property>
  <name>xasecure.audit.destination.hdfs.dir</name>
  <value>wasb://ranger-audit1@youraccount.blob.core.windows.net</value>
</property>
<property>
  <name>xasecure.audit.destination.hdfs.config.fs.azure.account.key.youraccount.blob.core.windows.net</name>
  <value>YOUR ENCRYPTED ACCESS KEY</value>
</property>
<property>
  <name>xasecure.audit.destination.hdfs.config.fs.azure.account.keyprovider.youraccount.blob.core.windows.net</name>
  <value>org.apache.hadoop.fs.azure.ShellDecryptionKeyProvider</value>
</property>
<property>
  <name>xasecure.audit.destination.hdfs.config.fs.azure.shellkeyprovider.script</name>
  <value>/usr/lib/python2.7/dist-packages/hdinsight_common/decrypt.sh</value>
</property>
Note: Please upvote or accept the answer if you found it useful
08-21-2018
06:39 AM
In the installation process there is a step for configuring services. You can go back to that step, correct the NameNode directory and DataNode directory paths, save that configuration, and then proceed further with the installation. Note: Please upvote and accept the answer if you find it useful.
08-20-2018
12:27 PM
Yes, you can do this by using a wildcard pattern. The pattern below should suffice for your need:
/homepath/customer_*/inbox/
If there is only a specific set of customers, then you can use something like:
/homepath/customer_[0-9]/inbox/
/homepath/customer_[A-Z]/inbox/
Note: Please upvote and accept this answer if you found it useful.
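Assuming the directories live on HDFS, you can verify what the pattern matches before wiring it into your flow:

```bash
# List the directories the wildcard pattern expands to on HDFS
# (the quotes keep the local shell from expanding the * itself).
hdfs dfs -ls '/homepath/customer_*/inbox/'
```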
08-20-2018
06:52 AM
1 Kudo
It was removed in Ambari 2.7; refer to https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/bhvr_changes_upgrade_hdp3_amb27.html Note: Please upvote or accept this answer if you found it useful.
08-20-2018
05:39 AM
1 Kudo
Please check out the KB articles below: https://community.hortonworks.com/articles/46258/iot-example-in-apache-nifi-consuming-and-producing.html https://community.hortonworks.com/articles/178747/mqtt-with-apache-nifi.html Note: Please upvote and accept this answer if you found it useful.
08-20-2018
05:28 AM
This is expected behavior if your cluster is stopped for an extended period of time. When you start the cluster back up, the last checkpoint is very old relative to the current time on the cluster server, hence the alert. Note: Please upvote and accept this answer if you found it useful.
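Assuming this is the NameNode "last checkpoint" alert, one way to clear it quickly is to force a fresh checkpoint as the hdfs user (note that this briefly puts HDFS in safemode):

```bash
# Force a fresh HDFS checkpoint so the "last checkpoint" alert clears.
su - hdfs -c 'hdfs dfsadmin -safemode enter'
su - hdfs -c 'hdfs dfsadmin -saveNamespace'
su - hdfs -c 'hdfs dfsadmin -safemode leave'
```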
08-17-2018
04:11 AM
Is your cluster also running Ranger? If so, you will need to grant access in the Ranger policies too. If you have Kerberos enabled, then please run kinit after logging in as the hbase user.
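On a Kerberized cluster the kinit step looks roughly like this; the keytab path and principal follow typical HDP defaults and may differ on your cluster:

```bash
# Switch to the hbase user and obtain a ticket from its headless keytab.
# The keytab path and principal are typical HDP defaults; adjust for your cluster and realm.
su - hbase
kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-mycluster@EXAMPLE.COM
```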
08-16-2018
10:03 AM
@Sanchit Arora As Ambari Server is now installed, you can follow the tutorial https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-installation/content/ch_Deploy_and_Configure_a_HDP_Cluster.html to create the cluster using Ambari Server. The above document contains step-by-step instructions for creating a cluster through the Apache Ambari Server UI. Note: Please accept and upvote this answer if it helps you.
08-16-2018
09:20 AM
1 Kudo
@ibrahima diattara You can get that information using the API call http://hostname:9090/nifi-api/processors/c4305271-3f44-1644-0000-00005c2a9cc7 A sample output would look like:
{"revision":{"clientId":"41fc6ab0-0165-1000-a1fc-4df6aad14b3f","version":2},"id":"42010d71-0165-1000-0000-000011d8c855","uri":"http://hostname:9090/nifi-api/processors/42010d71-0165-1000-0000-000011d8c855","position":{"x":572.0,"y":402.0},"permissions":{"canRead":true,"canWrite":true},"bulletins":[],"component":{"id":"42010d71-0165-1000-0000-000011d8c855","parentGroupId":"329be04d-0165-1000-a159-adcb3e6903f8","position":{"x":572.0,"y":402.0},"name":"GetFile","type":"org.apache.nifi.processors.standard.GetFile","bundle":{"group":"org.apache.nifi","artifact":"nifi-standard-nar","version":"1.5.0.3.1.2.0-7"},"state":"STOPPED","style":{},"relationships":[{"name":"success","description":"All files are routed to ...
The name field under component gives you the processor name. Note: Please mark this answer as accepted and upvote it if it resolves your issue.
08-16-2018
07:14 AM
@sganeshkumar Databases are kept in the folder /apps/hive/warehouse If you want to check it in HDFS to see the files associated with the database, use the command below:
hdfs dfs -ls /apps/hive/warehouse
If you want to access the Hive command line from the sandbox shell, first change to the hive user with su - hive and then run the command hive. On the Hive command line:
hive> SHOW DATABASES;
Note: Please mark this answer as accepted and upvote it if you find it useful.
08-14-2018
06:57 PM
Please mark this answer as accepted if you found it useful
08-14-2018
12:21 PM
1 Kudo
You can check out the KB article https://community.hortonworks.com/articles/177561/streaming-tweets-with-nifi-kafka-tranquility-druid.html
08-13-2018
06:59 AM
Can you please give us more information, such as exactly which task the upgrade is stuck at? Click on the upgrade-in-progress link and provide a screenshot of the tasks it has executed and the task it is stuck at, so we can troubleshoot this further.
08-11-2018
05:10 AM
I'm glad that it is all sorted now. Another way would have been to delete the particular node from the cluster, re-add it, and then add the Spark client on it. I did that on one of my test clusters recently and it worked.
08-10-2018
06:16 AM
1 Kudo
The correct way to pull a Docker repository is docker pull "$registry/$name:$version", hence the correct command to pull the sandbox image is:
docker pull hortonworks/sandbox-hdp:2.6.5
The docker deploy script also uses this command to pull the sandbox image, so "docker pull hortonworks/sandbox-hdp:2.6.5" is the official command to pull it.