Member since: 08-16-2015
Posts: 97
Kudos Received: 16
Solutions: 12
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 893 | 07-11-2021 08:05 PM |
| | 1676 | 07-11-2021 06:37 PM |
| | 39627 | 06-04-2021 12:01 AM |
| | 1058 | 06-03-2021 11:43 PM |
| | 3467 | 04-26-2021 06:58 PM |
12-24-2024
08:46 AM
1 Kudo
Hi @Riyadbank, if you want to install the psycopg2 package, see https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/installation/topics/cdpdc-installing-psycopg2-package.html
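As a rough illustration (a minimal sketch, not copied from the linked doc), the install usually boils down to pulling the package in with pip; the exact version and whether the -binary wheel is acceptable depend on your CDP release, so verify against the documentation:

```bash
# Sketch only: install psycopg2 via pip on a RHEL-like host; version and the
# -binary variant are assumptions, check the linked Cloudera doc for your release.
sudo yum install -y python3-pip
sudo pip3 install psycopg2-binary==2.9.3
```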
02-26-2023
10:29 PM
Hello Mike, your existing 3 ZooKeeper nodes can likely serve your expansion requirements. Monitor the CPU and network of the ZooKeeper nodes as your Kafka cluster grows; when you reach the throughput limit, you can expand the ensemble to 5 nodes. Remember that the ZooKeeper nodes need to stay in sync at all times, so the more ZooKeeper nodes you have, the more traffic is added to keep them in sync while those same nodes are handling Kafka requests; more nodes is not automatically better. I would suggest staying with 3 ZooKeeper nodes while expanding your Kafka cluster under close monitoring, and consider growing to 5 when CPU or network throughput reaches its limit. You can also tune the ZooKeeper nodes, e.g. dedicated disks, better network throughput, isolating the ZooKeeper process, and disabling swap.
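A minimal monitoring sketch, assuming the four-letter-word commands are whitelisted on the ensemble (4lw.commands.whitelist=ruok,mntr), nc is installed, and the hostnames below are placeholders for your ZooKeeper nodes:

```bash
# Spot-check ZooKeeper health and load while the Kafka cluster grows.
for zk in zk1.example.com zk2.example.com zk3.example.com; do
  echo "--- ${zk} ---"
  echo ruok | nc "${zk}" 2181    # a healthy node answers "imok"
  echo mntr | nc "${zk}" 2181 | \
    grep -E 'zk_avg_latency|zk_outstanding_requests|zk_num_alive_connections'
done
```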
03-17-2022
11:54 AM
Hello @Koffi, the HDFS Balancer will do the job for you; please refer to the official docs below before configuring it:
1. Overview of the HDFS Balancer
2. Configuring the Balancer
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
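For reference, a minimal sketch of kicking off the balancer from the command line on a host with HDFS client configs (the docs above cover the Cloudera Manager-side configuration and tuning options; a valid hdfs identity or Kerberos ticket is assumed):

```bash
# -threshold is the allowed deviation, in percent, of each DataNode's
# utilization from the cluster average; 10 is the default.
sudo -u hdfs hdfs balancer -threshold 10
```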
03-15-2022
08:30 AM
2 Kudos
Found a dirty workaround: generate a session cookie from an API call and then use it to log in to the console. As an example, to get the HDFS usage report:

```bash
#!/bin/bash -x
# Log in to Cloudera Manager via the API to obtain a session cookie,
# then reuse that cookie to download the HDFS disk-usage report.
COOKIES=./cookies.txt
USER=""
PASS=""
LOGIN="https://<cloudera_manager>:7183/api/version"
REPORT="https://<cloudera_manager>:7183/cmf/services/11/nameservices/nameservice1/reports/currentDiskUsage?groupBy=DIRECTORY&format=CSV"
SALIDA=./hdfs_usage.csv   # output file

function login_cloudera(){
    USER=$1
    PASS=$2
    # Authenticate against the API endpoint and save the session cookie.
    wget --save-cookies ${COOKIES} --keep-session-cookies \
         --user "${USER}" --password "$(echo ${PASS} | base64 -d)" \
         --delete-after ${LOGIN}
}

function download_report(){
    USER=$1
    read -p "Password:" -s PASS
    PASS=$(echo $PASS | base64)   # encoded only to pass it between functions
    login_cloudera $USER $PASS
    # Reuse the saved cookie to fetch the report, then clean up.
    wget --load-cookies ${COOKIES} ${REPORT} -O ${SALIDA}
    rm ${COOKIES}
}

# MAIN
USER=$1
download_report $USER
```
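Usage sketch (the script name and the Cloudera Manager hostname are placeholders): run it with your CM user name, e.g. `./hdfs_usage.sh admin`; it prompts for the password, logs in to obtain the cookie, and writes the report to ./hdfs_usage.csv.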
02-17-2022
10:21 AM
@gdfranco Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
02-08-2022
06:26 PM
Hello, have you tried exporting the metadata first? Once you export the existing metadata, you have a working copy of the template for your version of Atlas; you can fit your bulk metadata into that template and import it back into Atlas.
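A minimal sketch of the Atlas bulk export/import REST endpoints; the hostname, port, credentials, and the hive_db qualifiedName in the payload below are placeholders, not values from this thread:

```bash
ATLAS="https://atlas-host.example.com:31443"

# 1. Export existing metadata to use as a working template.
curl -k -u admin:admin -X POST -H "Content-Type: application/json" \
     -d '{"itemsToExport":[{"typeName":"hive_db","uniqueAttributes":{"qualifiedName":"default@cm"}}]}' \
     -o export.zip "${ATLAS}/api/atlas/admin/export"

# 2. After fitting your bulk metadata into the exported template, import it back.
curl -k -u admin:admin -X POST -H "Content-Type: multipart/form-data" \
     -F "data=@export.zip" "${ATLAS}/api/atlas/admin/import"
```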
01-18-2022
08:34 AM
@Kilynn As I mentioned in my last response, once memory usage goes too high, the OS-level OOM killer was most likely killing the NiFi service to protect the OS. The NiFi bootstrap process would have detected that the main process died and started it again, assuming the OOM killer did not kill the parent process.
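You can confirm this from the kernel logs; a quick sketch:

```bash
# Check whether the kernel OOM killer terminated the NiFi JVM.
dmesg -T | grep -iE 'killed process|out of memory'
# On systemd hosts the kernel ring buffer can also be searched with:
journalctl -k | grep -i oom
```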
07-30-2021
01:09 PM
Hi, once you have extracted the header into a FlowFile attribute using the ExtractText processor, you can either convert the header attribute into FlowFile content or keep the header value as an attribute. The Stack Overflow post explains extracting the header into a FlowFile attribute and then passing the headers as a file to the destination. To convert a FlowFile attribute into FlowFile content, use the ReplaceText processor, where you can reference FlowFile attributes. The success relationship of ReplaceText will then contain only the header as FlowFile content; the original CSV content is replaced with the header. You can transfer that FlowFile to the destination or to the next processor in the flow. Hope this is the information you are looking for. Thanks
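For reference, a minimal sketch of the ReplaceText settings assumed by this approach; the attribute name `csv.header` is hypothetical, use whichever attribute your ExtractText processor populated:

```
Replacement Strategy : Always Replace
Replacement Value    : ${csv.header}
Evaluation Mode      : Entire text
```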
07-30-2021
05:51 AM
Hi Daming, I have configured that change too, but Hue still goes down with a proxy error after 2 hours. Please help me with a solution. Regards, GD
07-29-2021
11:53 PM
Hello, this is not a NiFi problem; you may try an online tool such as https://jolt-demo.appspot.com/#inception to help.