Member since: 02-01-2022
Posts: 257
Kudos Received: 85
Solutions: 57
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 95 | 04-12-2024 06:05 AM |
| | 804 | 12-07-2023 04:50 AM |
| | 448 | 12-05-2023 06:22 AM |
| | 928 | 11-28-2023 10:54 AM |
| | 587 | 11-28-2023 06:01 AM |
03-13-2023
07:18 AM
@larsfrancke Unfortunately I do not have the exact solution or information you need. However, I do have multiple customers who have gotten their CDP on Isilon kerberized and into production. There were some tickets on our support side walking through the Kerberos setup, but the specific technical solution came from Dell's side, since this is a supported solution for Isilon. My recommendation is to work with Cloudera Support to see if they have suggestions, and then work with Dell Support from there. Your Cloudera account team and Dell partner should have access to deeper resources if neither support team can resolve it.
03-02-2023
06:44 AM
1 Kudo
@fahed What you see with the CDP Public Cloud Data Hubs using GCS (or any object store) is a modernization of the platform around object storage. This removes differences across AWS, Azure, and on-prem (when Ozone is used). It is a change driven by customer demand, so that workloads can be built and deployed with minimal changes from on-prem to cloud or cloud to cloud. Unfortunately that creates the difference you describe above, but that is a trade-off we are willing to make in favor of a modern data architecture. If you are looking for performance, you should take a look at some of the newer options for databases: Impala and Kudu (the latter uses local disk). We also have Iceberg coming into this space.
03-01-2023
04:14 AM
Nice and Quick! Excellent!
03-01-2023
04:13 AM
@Pierro6AS The first thing you should do is increase the size of the message queue (the back pressure threshold). The default is quite low (10,000 flowfiles and 1 GB). It is possible to see this error if flowfiles have been sitting in the queue for too long. It is also possible to see this error if the file system has other usage outside of NiFi. For best performance, NiFi's backing folder structure (the content and flowfile repositories) should live on dedicated disks that are larger than the demand of the flow (especially during heavy, unexpected volume). You can find more about this in these posts: https://community.cloudera.com/t5/Support-Questions/Unable-to-write-flowfile-content-to-content-repository/td-p/346984 https://community.cloudera.com/t5/Support-Questions/Problem-with-Merge-Content-Processor-after-switch-to-v-1-16/m-p/346096/highlight/true#M234750
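As a sketch of the queue-size change above: in recent NiFi releases the default back pressure thresholds applied to newly created connections can be raised in nifi.properties (property names and values below are from memory; verify them against your version's Admin Guide, and note that existing connections keep their own per-connection settings, editable in the UI).

```properties
# nifi.properties -- defaults for NEW connections only; existing
# connections are adjusted per-connection in the NiFi UI.
# Stock defaults are 10000 objects and 1 GB.
nifi.queue.backpressure.count=50000
nifi.queue.backpressure.size=5 GB
```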
02-24-2023
05:59 AM
@saketa Magic sauce right here, great article!!
02-24-2023
05:55 AM
1 Kudo
@kishan1 In order to restart a specific process group you will need to use some command-line magic against the NiFi REST API. For example, this could be done by calling the API to stop the process group, then restarting NiFi, then starting the process group again. You can certainly be creative in how you handle that approach once you have experimented with the API. https://nifi.apache.org/docs/nifi-docs/rest-api/index.html
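As a rough sketch of the stop/start calls: the REST API's `PUT /nifi-api/flow/process-groups/{id}` endpoint accepts a small JSON body with the target run state. The host and process-group id below are hypothetical placeholders; check the endpoint and body shape against the REST API docs for your NiFi version, and add authentication (a bearer token header) on secured clusters.

```python
import json
import urllib.request

def schedule_request(base_url, pg_id, state):
    """Build a PUT request that sets the run state (RUNNING/STOPPED)
    of a process group via NiFi's /flow/process-groups endpoint."""
    assert state in ("RUNNING", "STOPPED")
    body = json.dumps({"id": pg_id, "state": state}).encode()
    return urllib.request.Request(
        url=f"{base_url}/nifi-api/flow/process-groups/{pg_id}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# Hypothetical host and process-group id for illustration:
req = schedule_request("http://localhost:8080",
                       "0b1c2d3e-0000-1000-8000-abcdef012345", "STOPPED")
# urllib.request.urlopen(req)  # send it once a live NiFi instance is reachable
```

You would issue the `STOPPED` call, restart NiFi, then repeat with `RUNNING` to bring the group back up.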
02-24-2023
05:51 AM
@mmaher22 You may want to run the Python job inside of ExecuteScript. That way you can send output to a flowfile during your loop's iterations with session.commit(). This call happens implicitly at the end of code execution in ExecuteScript, sending output to the next processor as a single flowfile. So if you put it inline in your loop, the script will run and send a flowfile for every iteration. For a full rundown of how to use ExecuteScript, be sure to see these great articles: https://community.hortonworks.com/articles/75032/executescript-cookbook-part-1.html https://community.hortonworks.com/articles/75545/executescript-cookbook-part-2.html https://community.hortonworks.com/articles/77739/executescript-cookbook-part-3.html
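The per-iteration commit pattern looks roughly like this. In real ExecuteScript, `session` and `REL_SUCCESS` are bindings injected by NiFi (and flowfile content is written via the session API, not a dict); the stub class below is a stand-in so the sketch runs on its own for illustration.

```python
# Pattern: create/transfer/commit one flowfile per loop iteration so each
# record leaves ExecuteScript immediately instead of waiting for the whole
# script to finish.

class StubSession:
    """Stand-in for NiFi's injected session binding (illustration only)."""
    def __init__(self):
        self.committed = []
        self._pending = []
    def create(self):
        return {}
    def transfer(self, flowfile, relationship):
        self._pending.append(flowfile)
    def commit(self):
        self.committed.extend(self._pending)
        self._pending = []

session = StubSession()   # injected binding in real ExecuteScript
REL_SUCCESS = "success"   # injected binding in real ExecuteScript

for record in ["a", "b", "c"]:        # your loop over work items
    ff = session.create()             # new flowfile for this iteration
    ff["content"] = record
    session.transfer(ff, REL_SUCCESS)
    session.commit()                  # ship it now, not at script end

print(len(session.committed))  # 3 flowfiles emitted, one per iteration
```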
02-23-2023
05:09 AM
1 Kudo
@fahed That size is there so the node can grow and serve in a production manner; at first, disk usage may well be low. For Data Hubs, my recommendation is to start small and grow as needed. Most of your workload data should be in object store(s), so don't think of that "hdfs" disk as constraining the hub to its initial size.
02-22-2023
08:28 AM
1 Kudo
@merlioncurry Lacking a bit of detail here, so I'm assuming you used the Ambari UI to upload to HDFS. Those files are going to be in hdfs://users/maria_dev, not at the same path on the actual machine. You will need to use hdfs commands to view them. If those do not work, the path you uploaded to may be different. From the sandbox prompt:

hdfs dfs -ls /users/
hdfs dfs -ls /users/maria_dev
02-22-2023
08:16 AM
1 Kudo
@fahed The HDFS service inside the Data Lake supports the environment and its services, for example Atlas, Ranger, Solr, and HBase. Its size is based on the environment's scale. You are correct in the assumption that your end-user HDFS service is part of the Data Hubs deployed around the environment. You should not try to use the environment's HDFS service for applications and workloads that belong in Data Hubs.