Member since: 07-21-2021
Posts: 405
Kudos Received: 10
Solutions: 17
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 803 | 05-06-2022 11:10 AM
 | 1238 | 04-12-2022 11:59 PM
 | 971 | 03-17-2022 09:57 AM
 | 438 | 03-17-2022 09:54 AM
 | 736 | 03-14-2022 08:49 AM
10-26-2022
06:43 AM
Are you able to log in with the default username and password (username: admin, password: admin)?
10-26-2022
05:01 AM
1 Kudo
Hello @D5ha We have a community article that explains NiFi's content repository: https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418 An easy way to retrieve your file is through NiFi Data Provenance: https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.1.0/bk_getting-started-with-apache-nifi/content/data-provenance.html Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
10-26-2022
04:45 AM
Hello @stefankambiz_be To access the repository, you will need paywall credentials. These credentials are provided by the Cloudera accounts team.
10-26-2022
04:37 AM
Do you see any errors in the ambari-server logs?
05-13-2022
03:31 AM
Hello @jacektrocinski I see there is a similar discussion here: https://community.cloudera.com/t5/Support-Questions/Getting-Unable-to-obtain-listing-of-buckets-org-apache-nifi/m-p/227856#M189716 Thanks, Azhar Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
05-06-2022
11:10 AM
Hello @jacektrocinski I understand you want to connect to a CDP Data Hub NiFi Registry from a local machine.
- To connect NiFi to a Registry, select Controller Settings from the Global Menu.
- This displays the NiFi Settings window. Select the Registry Clients tab and click the + button in the upper-right corner to register a new Registry client.
- In the Add Registry Client window, provide a name and URL.
- Click "Add" to complete the registration.
You can refer to: https://docs.cloudera.com/cdf-datahub/7.2.8/nifi-user-guide/topics/nifi-connecting-to-nifi-registry.html Thanks, Azhar Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
04-13-2022
12:05 AM
Hello Please refer to https://community.cloudera.com/t5/Community-Articles/Using-RStudio-as-an-Editor-with-ML-Runtimes/ta-p/325166 Was your question answered on the Cloudera Community portal? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
04-12-2022
11:59 PM
Hello @sa Yes, you can create a custom processor in NiFi. You can refer to https://stackoverflow.com/questions/68937735/how-to-convert-csv-to-excel-using-python-with-pandas-in-apache-nifi Thanks, Azhar Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
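As a rough illustration of the pattern in the linked thread: a script like the one below can be wired into NiFi through a processor such as ExecuteStreamCommand, which pipes the flowfile content to the script's stdin and replaces the content with its stdout. This sketch uses only the standard library; the transformation itself (CSV to tab-separated) is a hypothetical stand-in, since the thread's real conversion uses pandas.read_csv and DataFrame.to_excel, which require pandas and openpyxl on the NiFi host.

```python
import csv
import io

def transform(csv_text: str) -> str:
    """Rewrite CSV text as tab-separated text.

    Illustrative stand-in for the conversion step; the linked thread
    uses pandas to produce an Excel file instead.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    out = io.StringIO()
    csv.writer(out, delimiter="\t", lineterminator="\n").writerows(rows)
    return out.getvalue()

# In NiFi, ExecuteStreamCommand would feed the flowfile content to the
# script's stdin and take its stdout as the new flowfile content,
# e.g. sys.stdout.write(transform(sys.stdin.read())).
sample = "id,name\n1,alice\n2,bob\n"
converted = transform(sample)
```

The advantage of this pattern over a full custom processor (a Java NAR) is that the script can be changed without rebuilding or redeploying anything on the NiFi nodes.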
03-23-2022
02:18 AM
Thanks @grlzz Was your question answered on the Cloudera Community portal? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
03-17-2022
09:57 AM
Hello @Koffi Once the disk is dynamically increased (from 1 TB to 3 TB), there should be no impact; once the space is added, you can run the HDFS balancer to rebalance the data across the DataNodes. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
03-17-2022
09:54 AM
Hello @Soa
Hive partitioning divides a table into a number of partitions, and these partitions can be further subdivided into more manageable parts known as buckets (or clusters). The bucketing concept is based on a hash function, which depends on the type of the bucketing column; records with the same value in the bucketing column are always stored in the same bucket. The CLUSTERED BY clause is used to divide the table into buckets. Each partition is created as a directory, whereas each bucket is created as a file. Bucketing can also be used without partitioning the Hive table.
Bucketed tables allow much more efficient sampling than non-bucketed tables, making it possible to query a section of the data for testing and debugging when the original data set is very large. The user can fix the number of buckets according to need, and bucketing also provides the flexibility to keep the records in each bucket sorted by one or more columns. Since the data files are roughly equal-sized parts, map-side joins are faster on bucketed tables.
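The bucket-assignment rule can be sketched in a few lines. This is a simplified illustration, not Hive's implementation: it assumes an int bucketing column (for which the hash code is, roughly, the value itself) and takes the bucket number as the hash modulo the bucket count, with the sign masked off.

```python
def bucket_for(hash_code: int, num_buckets: int) -> int:
    # Simplified sketch of Hive-style bucketing: mask to a non-negative
    # 31-bit value, then take it modulo the number of buckets.
    return (hash_code & 0x7FFFFFFF) % num_buckets

# Duplicate values (101 and 205 below) always land in the same bucket,
# which is what lets map-side joins pair up bucket files directly.
records = [101, 205, 101, 302, 205]
assignments = {v: bucket_for(v, 4) for v in records}
```

Because the mapping is deterministic, two tables bucketed the same way on the join key have matching rows in matching bucket files.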
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
03-17-2022
09:43 AM
Hello @grlzz It would be great if you could open a support case for this; we will need to investigate and see the options available to implement it. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
03-14-2022
08:49 AM
@RajeshReddy For tag-based policies, you can refer to https://docs.cloudera.com/runtime/7.2.10/security-ranger-authorization/topics/security-ranger-tag-based-policies.html Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
03-14-2022
01:47 AM
Hello @grlzz Thanks for sharing your findings with us. I understand RStudio needs a user on the host, and that user should be present in /etc/passwd (where all local users are listed). As a workaround, you can create a local user on the host, but if the node where you added the user is repaired or restarted, the user will be lost from /etc/passwd and you will need to add it again. Cloudbreak uses its predefined images and steps, and since the user was created manually, it will not be recreated in case of a node reboot/repair. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button. Thanks, Azhar
03-13-2022
12:50 AM
@RajeshReddy Can you please try changing the role to "Environment Admin"?
03-13-2022
12:48 AM
1 Kudo
Hello @Griggsy From the error (Service: Amazon S3, Error code: 403, Access Denied), error code 403 indicates an authorization issue on the cloud provider's end. If you are using a service account with the processor, can you please check and confirm that the service account has access to the bucket you have configured in NiFi? Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button. Thanks, Azhar
03-10-2022
03:18 AM
1 Kudo
@mehmetersoy CM does not have a dependency on Samba and does not use any Samba packages. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
03-10-2022
03:13 AM
Hello @grlzz The workload user has access to the instances in the environment. Refer to https://docs.cloudera.com/management-console/cloud/user-management/topics/mc-create-machine-user.html; the documentation will help you create a machine user. If you found this response helpful, please take a moment to log in and click "Accept as Solution" below this post.
03-10-2022
03:05 AM
Hello @vishal_ Yes, you are right. Machine users in CDP have programmatic access. If you have IdP integration with CDP, you can create a user in Azure, add the user to the Azure AD group that is mapped to CDP, and ask the user to log in from the Azure side to access the CDP application. If you are using CDP local users (users created directly in CDP), you can reach out to your accounts team or open an administrative case from the support portal to add the user to CDP, and then you can manage access accordingly. I hope I have answered your question. If you found this response helpful, please take a moment to log in and click "Accept as Solution" below this post.
03-03-2022
12:45 AM
Hello Andrea, You can create a machine user for programmatic access. URL: https://docs.cloudera.com/management-console/cloud/user-management/topics/mc-create-machine-user.html If you want a completely new user who can access the application (CDP) from the support portal, you will need to open an administrative case for new-user creation, and then you can enable SSO for that user. I hope I have answered your question. If your question is answered, can you please mark this comment as "Accept as Solution"? Thanks, Azhar
03-03-2022
12:05 AM
Hello @corestack Good Day. We have a community article on Keycloak and CDP integration: https://community.cloudera.com/t5/Community-Articles/How-to-configure-Single-Sign-On-SSO-for-CDP-Public-Cloud-the/ta-p/300222 Can you please try it and validate that you have followed the steps mentioned in the article? Thanks, Azhar
03-02-2022
11:27 PM
Hello @andrea_pretotto Good Day. CDP Account Administrator: You can open an administrative case with Cloudera Support, providing the name and email address of the user you want as the CDP admin user. You can also reach out to your Cloudera accounts team. I understand you already have an external IdP integrated with CDP and would like to use the new user (CDP admin user) with SSO. You can later enable SSO for the user from the User Management tab. Regards, Azhar Shaikh
01-14-2022
09:37 AM
Hello @jludac Thanks for letting us know. Yes, to access the archive repos, you will need to access them with the paywall credentials.
01-13-2022
02:39 AM
Hello @jludac, I understand you are not able to get an "archive" build with your Cloudera login credentials. You will need different credentials (paywall credentials) to download the build. Can you please contact the Cloudera support team to progress? Thanks,
01-07-2022
12:35 AM
Hello @Pravin93
1) Within Cloudera Navigator, you can apply metadata and tags via the Navigator UI or the Navigator API. Please see the links below for each: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cn_iu_metadata_modify.html#xd_583c10... https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cn_nav_hive-hdfs_api.html#tag_hive_h...
2) Apache Sentry provides only Role-Based Access Control (RBAC) policies. Creating access policies based on metadata and tags is more commonly referred to as Attribute-Based Access Control (ABAC). Navigator plus Sentry in CDH 6.3 does not provide ABAC functionality; you will need Atlas and Ranger, available in CDP, which support both RBAC and ABAC policies. Please see the overview in the Cloudera YouTube video on ABAC. Thanks, Azhar
12-29-2021
07:34 AM
Hello @ta Good Day. Were you able to access the Cloudera Manager UI earlier?
- Do you have access to the environment? You can check your access at the bottom left of the environment page (it will display your username). Click on your username to see the access you have to the environment.
- You can grant yourself environment access as per your role. If you want complete access, you can choose the "Environment Admin" access role. Once you update this, please synchronize the user (you can find this at the top right of your screen).
- Log out from the CDP console and try to access the UI again.
Please let us know if you are able to access it. Thanks, Azhar
12-16-2021
12:07 AM
Hello @Sumitra Can you please share a snippet of the PutSFTP processor's configuration section? Also, are you able to SFTP to the server from outside NiFi with the user configured on the processor?
12-08-2021
08:35 AM
Hello @KhangNguyen, Was there any recent commissioning/decommissioning of nodes? Do you see any health alerts on the DataNodes related to space? Can you share the output of the hdfs dfsadmin -report command?
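To spot uneven usage quickly, the report output can be scanned for the per-node usage lines. A small sketch: the sample text below is a hypothetical excerpt, and the "DFS Used%:" line format is an assumption based on typical hdfs dfsadmin -report output.

```python
import re

def dfs_used_percent(report_text: str) -> list[float]:
    """Pull every 'DFS Used%' figure out of dfsadmin -report text.

    The first match is the cluster-wide total; the rest are the
    per-DataNode figures.
    """
    return [float(m) for m in re.findall(r"DFS Used%:\s*([\d.]+)%", report_text)]

# Hypothetical excerpt of a report, for illustration only.
sample = """\
Configured Capacity: 1000000000 (953.67 MB)
DFS Used%: 41.20%
-------------------------------------------------
Name: 10.0.0.11:9866 (dn1.example.com)
DFS Used%: 72.50%
Name: 10.0.0.12:9866 (dn2.example.com)
DFS Used%: 9.90%
"""

usage = dfs_used_percent(sample)
```

A wide spread between per-node figures (here 72.5% vs 9.9%) is the kind of imbalance the HDFS balancer is meant to fix.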
12-08-2021
08:27 AM
Hello @KhangNguyen Can you please share the command you are using to list the Kafka topics? I also just wanted to confirm whether ZooKeeper is in an active state.
12-08-2021
08:15 AM
Hello @IAJ, Good Day. From the error in the community post I see "Proposed configuration is not inheritable by the flow controller because of flow differences: Found difference in Flows: Local Fingerprint".
- The flow.xml.gz is not in sync with the coordinator (NiFi node).
- You can follow the steps below to get the node connected to the cluster.
1) SSH to the NiFi node that is disconnected from the NiFi cluster.
2) Take a backup of the existing flow.xml.gz and move it to a different location.
3) Remove the flow.xml.gz after taking the backup (make sure you note the permissions and ownership of the flow.xml.gz).
4) SSH into the coordinator NiFi node; you can identify the coordinator in the NiFi cluster view, where you can also see which nodes are connected or disconnected.
5) SCP the flow.xml.gz from the coordinator node to the disconnected NiFi node (into your home folder).
6) Copy the flow.xml.gz to the exact location from which you removed it (step 3); once it is back in the original location, make sure the permissions and ownership are restored.
7) Once these steps are done, restart the node from the backend using ./nifi.sh start (you will need to find where these scripts are located in your cluster). Do not connect the node from the NiFi UI.
Another way is to take a backup of flow.xml.gz on the disconnected node, remove the flow.xml.gz from its location, and start the NiFi service. Make sure there is no defunct/zombie NiFi process already running on the disconnected node.
===
Reason for the disconnection and reconnection:
- Can you please confirm whether there are multiple processors in a disabled state? And how many templates are there in your NiFi Registry?
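The backup-and-restore part of the procedure (setting the stale flow.xml.gz aside, then putting the coordinator's copy in place with the original permissions) can be sketched as below. The paths are hypothetical, and the actual transfer from the coordinator happens out of band via scp.

```python
import os
import shutil

def backup_flow(flow_path: str, backup_dir: str) -> tuple[str, int]:
    """Move flow.xml.gz aside, noting its permission bits so they
    can be reapplied to the replacement copy later."""
    mode = os.stat(flow_path).st_mode & 0o777       # remember permissions
    os.makedirs(backup_dir, exist_ok=True)
    backup_path = os.path.join(backup_dir, os.path.basename(flow_path) + ".bak")
    shutil.move(flow_path, backup_path)             # original path is now empty
    return backup_path, mode

def restore_flow(coordinator_copy: str, flow_path: str, mode: int) -> None:
    """Place the coordinator's flow.xml.gz at the original path and
    reapply the noted permissions (ownership may also need a chown,
    typically run as root)."""
    shutil.copy2(coordinator_copy, flow_path)
    os.chmod(flow_path, mode)
```

Keeping the backup around matters: if the coordinator's copy turns out to be wrong, the original file and its permission bits can still be recovered.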