Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2824 | 04-27-2020 03:48 AM |
| | 5478 | 04-26-2020 06:18 PM |
| | 4659 | 04-26-2020 06:05 PM |
| | 3700 | 04-13-2020 08:53 PM |
| | 5604 | 03-31-2020 02:10 AM |
07-05-2019 05:48 AM
@Jay - I think we need the uninstall option. Let's say, in the worst case, we need to uninstall because of some problems.
07-05-2019 07:35 AM
@Michael Bronson You do not need to delete the whole Spark2 service. You can delete the "SparkThrift Server" host component that is installed on the two hosts (as per your initial screenshot) and then check whether Spark2 is still working fine. Prior to the HDP upgrade, please run the Spark2 service check. If everything is green and running, then you can start upgrading to HDP 2.6.5. Once the upgrade is completed, you can at any time go to the Ambari UI --> Hosts (tab) --> specific host --> "Add" (button) and re-add the "SparkThrift Server" component when you need it.
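As an aside, the same host-component removal can also be done through Ambari's REST API rather than the UI. A minimal sketch of building that endpoint (the cluster name, hostname, and Ambari address below are hypothetical stand-ins; `SPARK2_THRIFTSERVER` is assumed to be the relevant component name in your stack, and the component should be stopped before deleting):

```python
def host_component_url(ambari: str, cluster: str, host: str, component: str) -> str:
    """Build the Ambari REST endpoint for a single host component."""
    return f"{ambari}/api/v1/clusters/{cluster}/hosts/{host}/host_components/{component}"

# Hypothetical cluster/host names for illustration only.
url = host_component_url(
    "http://ambari.example.com:8080",
    "mycluster",
    "node1.example.com",
    "SPARK2_THRIFTSERVER",
)

# Sending an HTTP DELETE to this URL (with admin credentials and the
# X-Requested-By header Ambari requires) removes the component from that host:
#   curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "<url>"
print(url)
```

This only builds the URL; whether you then issue the DELETE with `curl` or a Python HTTP client is a matter of preference.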
07-04-2019 11:13 PM
@Michael Bronson Yes, it is possible. But the strongly recommended approach is to upgrade Ambari to 2.6.2.2 first and then upgrade HDP to 2.6.5.
07-15-2019 07:18 PM
@Jay Kumar SenSharma After changing it to `if params.nifi_registry_url and params.stack_support_nifi_auto_client_registration and False:` it worked. Thank you so much!
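For context, the edit works because of plain Python short-circuit logic: appending `and False` forces the whole condition to evaluate falsy, so the guarded block is never entered regardless of the other values. A minimal sketch (the two variables below are stand-ins, not the real NiFi `params` fields):

```python
# Stand-in values for params.nifi_registry_url and
# params.stack_support_nifi_auto_client_registration (hypothetical).
nifi_registry_url = "http://nifi-registry.example.com:18080"
stack_support = True

original = bool(nifi_registry_url and stack_support)            # block would run
patched = bool(nifi_registry_url and stack_support and False)   # block is skipped

print(original, patched)  # True False
```

In other words, `and False` is a blunt but effective way to disable that code path without deleting it.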
07-01-2019 04:32 AM
@Jay - Is Apache Kafka 2.0.0 the same version as Confluent 2.0.0?
06-20-2019 01:28 PM
1 Kudo
@Gulshan Agivetova At least one problem I can see in your config is causing the following error: Executable command awk ended in an error: awk: fatal: cannot open file `print $0}" for reading (No such file or directory). This is because in your "Command Arguments" you are using a semicolon, and in "ExecuteStreamCommand" the "Argument Delimiter" is also set to ";" (which is the default delimiter). Maybe you can try changing the "Argument Delimiter" to something else and then check whether you still get the same error.
06-18-2019 02:30 AM
1 Kudo
@Sandeep Gunda One approach may be to use the "EvaluateJsonPath" processor to get the total number of results (the result is an array here). We can try something like the following to store the size of the array in a new attribute: resultCount = $.result.length() Later you can read the resultCount attribute in another processor as ${resultCount}.
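For reference, a quick sketch of what the JsonPath expression `$.result.length()` computes, using plain Python on a hypothetical response body with a `result` array:

```python
import json

# Hypothetical flowfile content with a "result" array, standing in for the
# real payload. $.result.length() evaluates to the length of this array.
body = '{"result": [{"id": 1}, {"id": 2}, {"id": 3}]}'

result_count = len(json.loads(body)["result"])
print(result_count)  # 3
```

EvaluateJsonPath would write this number into the `resultCount` flowfile attribute, which downstream processors then read via `${resultCount}`.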
06-11-2019 10:52 AM
@Adil BAKKOURI The following message seems to be causing the DataNode startup failure: 2019-06-11 12:30:52,832 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/hadoop/hdfs/data
java.io.IOException: Incompatible clusterIDs in /hadoop/hdfs/data: namenode clusterID = CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc; datanode clusterID = CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851 It looks like the VERSION files on your NameNode and DataNode contain different cluster IDs, which need to be corrected. Please copy the clusterID from the namenode's "<dfs.namenode.name.dir>/current/VERSION" file, put it in the datanode's "<dfs.datanode.data.dir>/current/VERSION" file, and then try again. Also please check the following link: https://community.hortonworks.com/questions/79432/datanode-goes-dows-after-few-secs-of-starting-1.html
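The VERSION-file fix described above can be scripted. A hedged sketch (the temp files below stand in for the real `<dfs.namenode.name.dir>/current/VERSION` and `<dfs.datanode.data.dir>/current/VERSION` paths; on a real cluster you would stop the DataNode first and back up the file):

```python
import os
import re
import tempfile

def sync_cluster_id(namenode_version: str, datanode_version: str) -> str:
    """Copy the clusterID line from the NameNode VERSION file into the DataNode's."""
    with open(namenode_version) as f:
        cluster_id = re.search(r"^clusterID=(\S+)$", f.read(), re.M).group(1)
    with open(datanode_version) as f:
        dn_text = f.read()
    patched = re.sub(r"^clusterID=\S+$", f"clusterID={cluster_id}", dn_text, flags=re.M)
    with open(datanode_version, "w") as f:
        f.write(patched)
    return cluster_id

# Demo with throwaway files mimicking the mismatch from the log above.
tmp = tempfile.mkdtemp()
nn = os.path.join(tmp, "nn_VERSION")
dn = os.path.join(tmp, "dn_VERSION")
with open(nn, "w") as f:
    f.write("namespaceID=1\nclusterID=CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc\n")
with open(dn, "w") as f:
    f.write("storageID=DS-1\nclusterID=CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851\n")

cid = sync_cluster_id(nn, dn)
```

After this runs, the DataNode's VERSION file carries the NameNode's clusterID and the DataNode should register cleanly on restart.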
06-04-2019 03:43 AM
The above question and the entire reply thread below was originally posted in the Community Help Track. On Tue Jun 4 03:37 UTC 2019, a member of the HCC moderation staff moved it to the Cloud & Operations track. The Community Help Track is intended for questions about using the HCC site itself.