Member since
04-30-2020
43
Posts
2
Kudos Received
3
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 999 | 03-20-2023 10:55 PM
 | 1720 | 12-01-2022 09:57 PM
 | 2194 | 09-21-2021 11:59 PM
03-20-2023
10:55 PM
Hi @BrianChan , In the screenshot, we can see that you have added the CFM parcel itself. Instead, please add the parcel directory like the one below and try again: https://archive.cloudera.com/p/cfm2/2.1.5.0/redhat7/yum/tars/parcel/
12-25-2022
09:22 PM
Hello @jannat , All Cloudera software requires a valid subscription and is only accessible from behind the paywall. You can use paywall credentials to access the repos. If you don't have them, you can reach out to Cloudera Support for more details. You can also refer to the article below to learn more: https://my.cloudera.com/knowledge/Cloudera-Customer-Advisory-Paywall-Update-External?id=306085 Regards, Pallavi
12-01-2022
09:57 PM
1 Kudo
@Nigal , Currently, when you create a Hive/Sqoop/Falcon/Storm entity that has an association with an HDFS path, it shows up in Atlas. Otherwise, files and folders created in HDFS don't show up in Atlas. For example, when you create a directory in HDFS, Atlas doesn't ingest it. But when you create a Hive table like: "CREATE EXTERNAL TABLE test_table (id int, value string) LOCATION '/user/cloudera/text'", Atlas creates a lineage graph that shows the relationship between the Hive table and the HDFS path. You can see the HDFS directories by searching "hdfs_path" and the Hive tables by searching "hive_table".
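The same searches the Atlas UI performs can also be issued against Atlas's v2 REST API. A minimal sketch, assuming the standard basic-search endpoint and a placeholder `atlas-host` URL (adjust host, port, and authentication for your cluster):

```python
import json
from urllib import request

ATLAS_URL = "http://atlas-host:21000"  # assumption: your Atlas server endpoint

def basic_search_payload(type_name, query="", limit=25):
    """Build the JSON body for Atlas's v2 basic-search endpoint."""
    return {"typeName": type_name, "query": query, "limit": limit}

def search_entities(type_name):
    """POST the search to /api/atlas/v2/search/basic (needs a running Atlas)."""
    body = json.dumps(basic_search_payload(type_name)).encode()
    req = request.Request(
        f"{ATLAS_URL}/api/atlas/v2/search/basic",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Searching "hdfs_path" lists the HDFS directories known to Atlas;
# searching "hive_table" lists the Hive tables linked to them.
payload = basic_search_payload("hdfs_path")
```

Searching `"hive_table"` with the same helper returns the tables whose lineage references those paths.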
11-29-2022
11:56 PM
Hello @Nigal , Yes, that's right. There is no pre-defined 'HDFS hook' in Atlas. Atlas mainly collects information from Hive, Spark, HBase, and Impala: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/cdp-governance-overview/topics/atlas-metadata-collection-overview.html An hdfs_path entity is synced only if it belongs to a Hive table's lineage (as explained in https://issues.apache.org/jira/browse/ATLAS-599). By default, Atlas won't fetch HDFS paths. Unlike Hive entities, HDFS entities in Atlas are created manually using the Create Entity link in the Atlas Web UI. Please check the list of available hooks in Atlas: https://atlas.apache.org Here's a document on creating hdfs_path entities manually in Atlas: https://atlas.apache.org/2.0.0/Export-HDFS-API.html
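Besides the Create Entity link in the UI, an hdfs_path entity can also be created through Atlas's v2 REST API. A minimal sketch of the request body for POST /api/atlas/v2/entity; the path and the cluster name "cl1" are placeholders, and the attribute set mirrors what the UI form submits:

```python
def hdfs_path_entity(path, cluster):
    """Build the request body for POST /api/atlas/v2/entity,
    mirroring the fields the 'Create Entity' UI form submits."""
    return {
        "entity": {
            "typeName": "hdfs_path",
            "attributes": {
                # qualifiedName conventionally embeds the cluster name
                "qualifiedName": f"{path}@{cluster}",
                "name": path,
                "path": path,
                "clusterName": cluster,
            },
        }
    }

body = hdfs_path_entity("/user/cloudera/text", "cl1")
```

POSTing this body (with appropriate authentication) creates the entity, after which it appears in "hdfs_path" searches like any hook-synced entity.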
08-03-2022
04:38 AM
Hi All, You can check the article below for more details on configuring Apache Ranger to send syslog to a SIEM system: https://community.cloudera.com/t5/Support-Questions/Configuring-Apache-Ranger-to-send-Syslog-to-a-SIEM-system/m-p/181733#M143921 We officially support two destination locations for Ranger audits, i.e. HDFS and Solr. Audits to log4j are a community feature and are not certified by Cloudera Engineering. Other KB reference: https://my.cloudera.com/knowledge/How-to-send-Ranger-audit-logs-to-log4j-appenders?id=276802
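For illustration, the audit destinations are toggled through `xasecure.audit.destination.*` properties in the Ranger plugin audit configuration. A sketch of the relevant fragment; the HDFS URI and logger name here are placeholders, and the log4j entries are the community (uncertified) path described above:

```properties
# Supported destinations
xasecure.audit.destination.solr=true
xasecure.audit.destination.hdfs=true
xasecure.audit.destination.hdfs.dir=hdfs://namenode:8020/ranger/audit

# Community feature, not certified by Cloudera Engineering
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=ranger_audit_logger
```

A log4j appender bound to that logger can then forward the audit events to syslog for the SIEM system.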
09-21-2021
11:59 PM
1 Kudo
Please follow the steps below whenever you face such an issue. Cloudera Manager provides an "Initialize Atlas" action, which creates all the required backend resources (HBase table, Solr collection, and Kafka topics):

1. Navigate to Atlas in the Cloudera Manager UI. Under Actions, the "Initialize Atlas" option appears.
2. Initiate it; it will create the required resources.
3. Once done, verify the Atlas UI.

If the issue is still not resolved, perform the following steps:

1. Log in to the Solr host and kinit with the Solr principal as follows:

# klist -kt /var/run/cloudera-scm-agent/process/XXX-solr-SOLR_SERVER/solr.keytab
# kinit -kt /var/run/cloudera-scm-agent/process/XXX-solr-SOLR_SERVER/solr.keytab solr/host_name.principal.com@COE.CLOUDERA.COM

2. Delete the /solr and /solr-infra znodes using the ZooKeeper client:

# /opt/cloudera/parcels/CDH/bin/zookeeper-client
# ls /
[solr, solr-infra]
# deleteall /solr
# deleteall /solr-infra
# ls /
(should not show any solr or solr-infra znodes)

While performing the above steps, if you face an error such as "Authentication is not valid : /solr-infra/security" or "Node not empty: /solr-infra/security/zkdtsm/ZKDTSMRoot", perform the following sub-steps; otherwise skip to Step 3:

a. Temporarily disable ACLs by setting 'zookeeper.skipACL=yes' in ZooKeeper's Java advanced configuration in Cloudera Manager: navigate to Cloudera Manager > ZooKeeper service; in Configuration, add '-Dzookeeper.skipACL=yes' to "Java Configuration Options for ZooKeeper Server"; restart ZooKeeper with the new configuration.
b. Run the following:

# export JAVA_HOME=/usr/java/jdk1.8.0
# /opt/cloudera/parcels/CDH-X.X.X/lib/solr/server/scripts/cloud-scripts/zkcli.sh -cmd clear -z "localhost:2181" /solr-infra

c. Repeat Step a.

3. Stop the Solr service.
4. Execute the "Initialize Solr" action from the Solr service.
5. Start the Solr service.
6. Stop the Atlas service.
7. Execute the "Initialize Atlas" action from the Atlas service.
8. On the Solr host, check whether the following command returns the atlas_configs instancedir:

# solrctl instancedir --list
(sample output)
_default
atlas_configs
managedTemplate

If it returns atlas_configs, skip this create command. If it does not, execute the following create command:

# solrctl instancedir --create atlas_configs /opt/cloudera/parcels/CDH/etc/atlas/conf.dist/solr/

9. Check whether the following command returns the edge_index, fulltext_index, and vertex_index collections:

# solrctl collection --list
(sample output)
vertex_index (5)
edge_index (5)
fulltext_index (5)

If it returns the three collections above, skip the create commands below. If it does not, execute the following commands:

# solrctl collection --create edge_index -c atlas_configs -s 1 -r 1 -m 1
# solrctl collection --create fulltext_index -c atlas_configs -s 1 -r 1 -m 1
# solrctl collection --create vertex_index -c atlas_configs -s 1 -r 1 -m 1

10. Start the Atlas service and bring up the UI.

For reference:
https://my.cloudera.com/knowledge/quotError-from-server-at-httpsolr-hostapacheorg8993solr-Can?id=300885
https://community.cloudera.com/t5/Customer/ERROR-quot-Can-not-find-the-specified-config-set-vertex/ta-p/310611
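If you want to double-check the collection listing programmatically, here is a small sketch that parses the text output of `solrctl collection --list` and reports which of the three Atlas collections are still missing (the parsing assumes the "name (shards)" output shape shown above):

```python
# The three collections Atlas requires in Solr
REQUIRED = {"edge_index", "fulltext_index", "vertex_index"}

def missing_collections(solrctl_output):
    """Given the text output of `solrctl collection --list`, return the
    Atlas collections that still need to be created, sorted by name."""
    present = {
        line.split()[0]                       # collection name is the first token
        for line in solrctl_output.splitlines()
        if line.strip()                       # skip blank lines
    }
    return sorted(REQUIRED - present)

sample = """vertex_index (5)
edge_index (5)
"""
# fulltext_index is absent from the sample output above
print(missing_collections(sample))  # → ['fulltext_index']
```

Each name the helper returns corresponds to one `solrctl collection --create ...` command from step 9.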