Member since: 08-24-2018
Posts: 91
Kudos Received: 5
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3560 | 09-16-2021 06:57 AM
| 1763 | 01-08-2021 04:40 AM
| 859 | 03-25-2020 10:07 PM
10-11-2022
07:44 PM
@Profred, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. A new thread is also an opportunity to provide details specific to your environment, which could help others give you a more accurate answer. You can link this thread as a reference in your new post.
08-08-2022
02:31 AM
@ammukana, did any of the replies help resolve your issue? If so, please mark the appropriate reply as the solution; this will make it easier for others to find the answer in the future.
10-27-2021
08:43 AM
Hi @slambe, still related to this thread, I would like to know how to see the data lineage of an existing Hive DB. To explain better: if I generate the lineage by running the scripts from the article https://community.cloudera.com/t5/Community-Articles/Using-Apache-Atlas-to-view-Data-Lineage/ta-p/246305, it works perfectly. But imagine suggesting this solution for an existing production environment with tons of tables and relations: the creation scripts are hidden from me, though I would guess Atlas can read table fields and dependencies. How could the data lineage be made available in Atlas in this case? The logic between the tables is hidden from Atlas, because the structures were defined when the Hive DB was created years ago, and that is not a live process. Thanks, Best Regards, Daniele.
04-22-2021
03:22 AM
Understood. In our case, the culprit turned out to be port 50075 being blocked by the firewall. I've added it as an answer. In any case, thanks for your support. Thanks, Megh
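For anyone hitting a similar symptom, a plain TCP connect check is a quick way to confirm whether a firewall is blocking a service port (50075 is the default HDFS DataNode web UI port in Hadoop 2.x). This is only an illustrative sketch, not part of the original thread; the helper name and the throwaway listener are invented for the example:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    A quick way to confirm a firewall is not blocking a service port,
    e.g. 50075 (the HDFS DataNode web UI in Hadoop 2.x).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: start a throwaway local listener so the check has a target.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

print(port_open("127.0.0.1", port))  # True: the listener accepts connections
server.close()
```

Against a real cluster you would call `port_open("datanode-host", 50075)` from the machine that reported the failure; `False` points at the network path rather than the service itself.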
01-08-2021
07:46 AM
CM uses BitTorrent to distribute parcels, so it is already very efficient. My recommendation would be to let CM do the work rather than trying to save yourself a few minutes.
12-26-2020
01:39 PM
@slambe @woodcock_mike Take a backup of the DB first, just in case. This has to be done in the Ranger DB. The following queries may help to get rid of a deleted server from the plugin status page:

select id, service_name, app_type, host_name from x_plugin_info where host_name='HOST_NAME';

where HOST_NAME is the host that has been removed from the cluster; verify the details returned. Then, to remove the plugin status info for the removed host, run the following query:

DELETE FROM x_plugin_info WHERE host_name='HOST_NAME';
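A minimal sketch of what the cleanup does, using an in-memory SQLite table as a stand-in for the Ranger database. The table and column names come from the queries above; the schema is reduced to the referenced columns, and the row data is invented purely for illustration:

```python
import sqlite3

# Stand-in for the Ranger DB; x_plugin_info reduced to the queried columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE x_plugin_info
                (id INTEGER, service_name TEXT, app_type TEXT, host_name TEXT)""")
conn.executemany(
    "INSERT INTO x_plugin_info VALUES (?, ?, ?, ?)",
    [(1, "cl1_hadoop", "hdfs", "node1.example.com"),
     (2, "cl1_hive",   "hive", "node1.example.com"),
     (3, "cl1_hadoop", "hdfs", "node2.example.com")])

removed_host = "node1.example.com"   # host that was removed from the cluster

# Step 1: verify which rows belong to the removed host.
rows = conn.execute(
    "SELECT id, service_name, app_type, host_name FROM x_plugin_info "
    "WHERE host_name = ?", (removed_host,)).fetchall()
print(rows)        # the two entries for node1.example.com

# Step 2: delete them so the plugin status page no longer lists the host.
conn.execute("DELETE FROM x_plugin_info WHERE host_name = ?", (removed_host,))
conn.commit()

remaining = conn.execute("SELECT host_name FROM x_plugin_info").fetchall()
print(remaining)   # only node2.example.com remains
```

The two-step pattern (SELECT to verify, then DELETE with the identical WHERE clause) is the point: it ensures the delete touches exactly the rows you inspected.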
11-23-2020
02:33 AM
Hi J, is NetIQ a SAML authentication mechanism? If yes, then the link below could be useful to you: https://knox.apache.org/books/knox-1-4-0/user-guide.html#Pac4j+Provider+-+CAS+/+OAuth+/+SAML+/+OpenID+Connect Generally, Knox supports two types of identity providers: authentication providers and federation providers. Does NetIQ fall under the federation provider category? Please correct me if I'm wrong. Read more here: https://knox.apache.org/books/knox-1-4-0/user-guide.html#Authentication Thanks, Saurabh
11-02-2020
02:57 AM
Asif, please see if the table creation command below helps. It is used while restoring the HBase table; all six column families ('e', 'g', 'i', 'l', 'm', 's') use identical settings:

create 'atlas_titan',
{NAME => 'e', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2592000', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
{NAME => 'g', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2592000', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
{NAME => 'i', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2592000', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
{NAME => 'l', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2592000', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
{NAME => 'm', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2592000', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
{NAME => 's', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2592000', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
10-27-2020
07:12 AM
@itsmeram, can you share the complete failure log from Ambari > Operations?
03-25-2020
10:25 PM
This is an HDP setup, and I have now solved the problem. 'atlas.graph.index.search.solr.zookeeper-url' had the wrong value: localhost:2181/solr -> localhost:2181/infra-solr. I also added atlas.graph.storage.backend=hbase and atlas.graph.index.search.backend=solr. It works well now. Thanks
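Put together, the relevant entries in atlas-application.properties would look like the fragment below. The property names and values are taken from the post above; `localhost:2181` is kept from the post and would be your ZooKeeper quorum in a real cluster:

```properties
# Ambari Infra Solr registers under the /infra-solr znode, not /solr
atlas.graph.index.search.solr.zookeeper-url=localhost:2181/infra-solr
atlas.graph.storage.backend=hbase
atlas.graph.index.search.backend=solr
```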