Member since: 07-19-2018
Posts: 613
Kudos Received: 100
Solutions: 117
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3144 | 01-11-2021 05:54 AM
 | 2247 | 01-11-2021 05:52 AM
 | 6001 | 01-08-2021 05:23 AM
 | 5574 | 01-04-2021 04:08 AM
 | 25793 | 12-18-2020 05:42 AM
06-24-2020 06:34 AM
@math23 I suggest trying without /opt/, maybe in /tmp or /root (~/). Also make sure you are executing with the right user permissions (sudo if needed). You can also add --verbose to the main command to get more useful output. What versions of Ambari and HDP are you running? I love playing with mpacks and have made quite a few of my own. I am going to experiment with this Airflow one (link) and report back once I get it installed.
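For reference, a minimal sketch of the kind of command I mean; the tarball path is a placeholder for wherever you staged the mpack:

```sh
# install an mpack from a local tarball, with verbose output for debugging
ambari-server install-mpack \
  --mpack=/tmp/airflow-ambari-mpack.tar.gz \
  --verbose
```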
06-24-2020 05:11 AM
@bjornmy Thanks for the updated info. It appears you are using ReplaceText to replace the content of the flowfile with just the ${inn hold}. This overwrites the existing flowfile content (the entire original json object) with just the encoded data.content value, which you then decode and convert to json. What I have suggested is that you do this: EvaluateJsonPath -> UpdateAttribute (decode here) -> AttributesToJSON (Destination set to flowfile-content) -> PutS3Object. This eliminates the ReplaceText and Base64EncodeContent processors and still gives you the decoded xml for the content object as json. Next, you can decide to apply this same process to the rest of the original json object, for example if you want the header or metadata from the original json in the S3 object. You add these values to EvaluateJsonPath to get the data into attributes, then in AttributesToJSON you send the original attributes along with the decoded attribute. If you carry over all of the data values, you end up with the exact same json object, just with the decoded XML. The way you have it now and the way I suggest are just two ways to do this. Neither is right or wrong; mine just gives you the ability to break down the entire object and rebuild the original json with your modification. You may also want to look into JoltTransformJSON and/or UpdateRecord. I do not have much experience with Jolt, but you may find it can accomplish a similar process and act only on data.content, versus parsing large xml into an attribute. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.10.0/org.apache.nifi.processors.standard.JoltTransformJSON/ http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.UpdateRecord/index.html
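To illustrate with a hypothetical payload (the field names here are invented for the example), a flowfile whose content is:

```json
{"header": {"id": "123"}, "data": {"content": "PHhtbD48L3htbD4="}}
```

run through EvaluateJsonPath (header.id and data.content to attributes), UpdateAttribute (${content:base64Decode()}), and AttributesToJSON (Destination = flowfile-content) would come out roughly as:

```json
{"id": "123", "content": "<xml></xml>"}
```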
06-24-2020 05:02 AM
@Zerath I agree NiFi is a great tool for this, but you can also do it right in Hive. One solution you could try: create a Hive table matching the original data format and schema (source_table). Make sure you can select * from this source_table and see the desired results. Next, create a table with the Avro data format and the same schema (final_table). With source_table and final_table created, you simply execute: insert into final_table select * from source_table; The results in final_table will be stored in Avro format. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
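A minimal sketch of the two-table approach; the columns and the source format are placeholders for whatever your data actually looks like:

```sql
-- source table in the original (here: delimited text) format
CREATE TABLE source_table (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- target table with the same schema, stored as Avro
CREATE TABLE final_table (id INT, name STRING)
STORED AS AVRO;

-- Hive rewrites the rows into Avro on insert
INSERT INTO final_table SELECT * FROM source_table;
```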
06-23-2020 01:29 PM
@DavidGM You have a few options here:

1. Your YARN UI probably should not be wide open to vulnerability scans. Consider securing the UI and blocking external access by unauthorized parties; check out Kerberos, YARN + SSL, LDAP/AD, etc. If the scanning application cannot see the UI, it cannot see or try to read the jQuery versions, and the scan becomes a pass. This is standard practice for internally facing applications, versus live public web/IP applications that are vulnerable to automated version exploits. That said, I am an advocate for passing the scans, not just firewalling them away.

2. You could build YARN from source yourself with jQuery versions that satisfy your scan requirements. This requires some serious thought and planning, as it is not a simple task and would not be supported through traditional channels.

3. You can go into the file system and change the files directly. Similar to #2, this is going to be unsupported, but sometimes you just have to do whatever it takes to pass a vulnerability scan.

For example, let's look under the hood at where these files exist for #3:

[root@c7301 /]# find . -name 'jquery-3.3.1.min.js'
./usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/jquery-3.3.1.min.js
./hadoop/yarn/local/filecache/10/mapreduce.tar.gz/hadoop/share/hadoop/hdfs/webapps/static/jquery-3.3.1.min.js

[root@c7301 hadoop-hdfs]# grep -lr 'jquery-3.3.1.min.js' *
hadoop-hdfs-3.1.1.3.1.0.0-78-tests.jar
hadoop-hdfs-tests.jar
webapps/datanode/datanode.html
webapps/hdfs/dfshealth.html
webapps/hdfs/explorer.html
webapps/journal/index.html
webapps/router/federationhealth.html
webapps/secondary/status.html

For #2, these are the relevant file searches on the source code:

[root@c7301 hadoop-3.2.1-src]# find . -name *.min.js | grep jquery
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.12.1.custom.min.js
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-3.3.1.min.js
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/js/jquery.dataTables.min.js
./hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
./hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.3.1.min.js

[root@c7301 hadoop-3.2.1-src]# grep -lr '.min.js' *
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestUpgradeDomainBlockPlacementPolicy.java
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
hadoop-hdfs-project/hadoop-hdfs/pom.xml
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
hadoop-tools/hadoop-sls/src/test/resources/simulate.html.template
hadoop-tools/hadoop-sls/src/test/resources/track.html.template
hadoop-tools/hadoop-sls/src/main/html/simulate.html.template
hadoop-tools/hadoop-sls/src/main/html/showSimulationTrace.html
hadoop-tools/hadoop-sls/src/main/html/track.html.template
hadoop-tools/hadoop-sls/pom.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
LICENSE.txt

[root@c7301 hadoop-3.2.1-src]# grep -lr 'jquery-3.3.1.min.js' *
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
hadoop-hdfs-project/hadoop-hdfs/pom.xml
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
LICENSE.txt

If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
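To make option #3 concrete, here is a rough sketch of a drop-in file swap; the HDP path and the patched jQuery file are placeholders, this is unsupported, and you should back everything up first:

```sh
# location of the static assets served by the HDFS/YARN web UIs
STATIC=/usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static

# keep a backup of the shipped file
cp "$STATIC/jquery-3.3.1.min.js" "$STATIC/jquery-3.3.1.min.js.bak"

# drop the patched jQuery build in under the old file name, so every
# HTML page that references jquery-3.3.1.min.js keeps working unchanged
cp /tmp/jquery-3.5.1.min.js "$STATIC/jquery-3.3.1.min.js"
```

Note that this satisfies scanners that fingerprint file contents; a scanner that matches only on the file name may still flag it, in which case you would rename the file and update the HTML references as well.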
06-23-2020 05:11 AM
@Bibhusisa You can often find an existing solution here just by searching, for example: https://community.cloudera.com/t5/Support-Questions/migrate-postgres-database-ambari-to-mysql/td-p/121142 Hope this helps, Steven
06-23-2020 05:02 AM
@bjornmy The solution you are looking for is to use UpdateAttribute to operate on the attribute you want to modify, using the NiFi Expression Language's base64Encode/base64Decode functions. This operates on a flowfile attribute, in contrast to the Base64EncodeContent processor, which acts on the flowfile content. Assuming the encoded content => $.data.content in EvaluateJsonPath, the UpdateAttribute expression for content will look like: ${content:base64Decode()} You may need to evaluate all the attributes you need to use. Once you have the attribute(s) formatted the way you want, you then use AttributesToJSON (Destination set to flowfile-content) to rebuild the json object you want as the content of the flowfile and send it downstream to the final S3 bucket. I am sorry I can't be more specific on the last parts, as I did not create a test sample, and I cannot see exactly what you are doing with ReplaceText -> Base64EncodeContent -> ConvertRecord. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
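A sketch of the processor properties involved; the content attribute name and the JSON path are assumptions based on your description:

```
EvaluateJsonPath
  Destination:       flowfile-attribute
  content (dynamic): $.data.content

UpdateAttribute
  content (dynamic): ${content:base64Decode()}

AttributesToJSON
  Attributes List:   content
  Destination:       flowfile-content
```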
06-21-2020 05:50 AM
@ajay_mamgain200 You need to quote your JSON string values, like this:

{
  "format": "CSV",
  "filter": {
    "createdAt": {
      "startAt": "2020-06-20 16:33:19.780Z",
      "endAt": "2020-06-21 16:33:19.780Z"
    },
    "activityTypeIds": [
      1
    ]
  }
}

You may need to modify the activityTypeIds too. I commend your use of Postman for testing before implementing in NiFi. This is exactly what I do to work out a known-good base sample call to a service before starting to implement in NiFi. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
06-19-2020 07:28 AM
<edit> Sorry, I thought this was a new post. @guido, please do not respond to old posts with new solutions; the Ambari 2.6.0 bug is not applicable here. </edit> @apappu The version on the end of the package name is how they handle having multiple versions available in the repos, as well as facilitating the upgrade process from one version to another. Using version variables in the Python scripts allows the Ambari code to be dynamic across all the different versions, environments, etc. If your repos are set up right, this should not be an issue. I have seen some failures in the public repos lately (slow to respond, or blocked by certain cloud providers) causing the "no package found" errors. If you are running your own private repos and have an issue like this, you can just create the packages you need, in the version you want, using the rpmrebuild command on an existing rpm.
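A minimal sketch of that last approach; the package name is a placeholder, and you should double-check the flags against the rpmrebuild man page for your version:

```sh
# open the full spec of an existing rpm for editing (e.g. to bump the
# version/release), then rebuild a new package from it
rpmrebuild -e -p some-package-3.1.0.0-78.x86_64.rpm
```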
06-15-2020 08:02 AM
@JohnA Per the documentation: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-email-nar/1.11.0/org.apache.nifi.processors.email.ConsumeIMAP/ State management: This component does not store state. If you click Additional Details: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-email-nar/1.11.0/org.apache.nifi.processors.email.ConsumeIMAP/additionalDetails.html it goes into some debug steps that may help you understand what is going on. Unfortunately, I think the only way to reset it would be to delete the processor and create it again.
06-10-2020 06:36 AM
@Mondi I have not seen this specific 800 error. You should be able to find deeper information in the actual application logs. They are hard to find, several clicks deep in the YARN UI. Depending on your application, there can be many log files, and you will need to inspect each one for deeper detail on the failure.
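If log aggregation is enabled on the cluster, you can also skip the UI clicking and pull every container's logs from the command line; the application id below is a placeholder for the one shown in the YARN UI:

```sh
yarn logs -applicationId application_1591000000000_0001
```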