Member since: 10-06-2015
Posts: 273
Kudos Received: 202
Solutions: 81
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4047 | 10-11-2017 09:33 PM |
| | 3568 | 10-11-2017 07:46 PM |
| | 2576 | 08-04-2017 01:37 PM |
| | 2217 | 08-03-2017 03:36 PM |
| | 2243 | 08-03-2017 12:52 PM |
12-21-2016
12:50 AM
1 Kudo
Thanks @Anders Boje. Below are some additional examples. Suppose we want to annotate a table (GUID f4019a65-8948-46f1-afcf-545baa2df99f) with the trait PublicData to indicate it is a data asset created by crawling public sites. Also, suppose we want to set a "Retainable" trait on the column family contents (GUID 9e6308c6-1006-48f8-95a8-a605968e64d2) with a retention period of 100 days. These are the requests to send:
POST http://<atlas-server-host:port>/api/atlas/entities/f4019a65-8948-46f1-afcf-545baa2df99f/traits
BODY
{
  "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Struct",
  "typeName":"PublicData",
  "values":{}
}

POST http://<atlas-server-host:port>/api/atlas/entities/9e6308c6-1006-48f8-95a8-a605968e64d2/traits
BODY
{
  "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Struct",
  "typeName":"Retainable",
  "values":{"retentionPeriod":"100"}
}
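For reference, here is one way to send the first request with curl. This is just a sketch: the host name, port 21000 (the Atlas default), and the admin:admin credentials are placeholders for whatever your cluster uses.

# host, port, and credentials below are assumptions; substitute your own
curl -u admin:admin -X POST \
  -H "Content-Type: application/json" \
  http://atlas-host:21000/api/atlas/entities/f4019a65-8948-46f1-afcf-545baa2df99f/traits \
  -d '{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Struct","typeName":"PublicData","values":{}}'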
@Stig Hammeken To associate a term/taxonomy with an entity, use the following POST:
POST http://<atlas-server-host:port>/api/atlas/v1/entities/{entity_guid}/tags/{fully_qualified_name_of_term}
For example:
POST http://<atlas-server-host:port>/api/atlas/v1/entities/f4019a65-8948-46f1-afcf-545baa2df99f/tags/d.term1.term12
You can get the fully qualified name of a term using:
# Listing all terms under the Catalog taxonomy
GET http://<atlas-server-host:port>/api/atlas/v1/taxonomies/Catalog/terms
# Listing all terms under a given term
GET http://<atlas-server-host:port>/api/atlas/v1/taxonomies/Catalog/terms/term_name/terms/.../terms/term_name/terms
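With curl, the whole flow looks like this (again a sketch; the host, port, and credentials are assumptions, as above):

# list terms under the Catalog taxonomy to find the fully qualified name
curl -u admin:admin http://atlas-host:21000/api/atlas/v1/taxonomies/Catalog/terms
# associate the term with the entity
curl -u admin:admin -X POST http://atlas-host:21000/api/atlas/v1/entities/f4019a65-8948-46f1-afcf-545baa2df99f/tags/d.term1.term12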
12-21-2016
12:00 AM
I have an HDP 2.5 Docker instance running on Azure. Atlas was working fine, but after running for a while it became inaccessible. Looking at the logs, I see the HBase permission error below:

... 38 more
Caused by: org.apache.hadoop.hbase.security.AccessDeniedException: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=atlas, scope=default, params=[namespace=default,table=default:atlas_titan,family=s],action=CREATE)
at org.apache.hadoop.hbase.security.access.AccessController.requireNamespacePermission(AccessController.java:624)
at ...

However, in the Ranger UI the correct permissions seem to be in place: user "atlas" has full permissions on the HBase table "atlas_titan". Any thoughts or pointers on this? Thanks
Labels:
- Apache Atlas
- Apache Ranger
12-19-2016
10:03 PM
Great! Can you share it with the community?
12-16-2016
05:12 PM
I haven't tested it, but I don't believe it should be an issue, since decryption is handled transparently by the platform before the data is passed to the processor.
12-16-2016
03:33 PM
You are not doing anything wrong, and neither is the book. The limitation is with the formula itself: it does not account for scenarios where [ (caNT-dsII)/dsF ] yields a fraction. In such situations, caNT will not match current(0) through calculation alone; you have to eyeball it. The textbook acknowledges this: "Notably, the nominal time 2014-10-19T06:00Z and current(0) do not exactly match in this example."
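To make the fraction concrete (my own arithmetic, using the values from the example): caNT - dsII = 2014-10-19T06:00Z - 2014-10-06T06:00Z = 13 days, and 13 / 3 ≈ 4.33, which is not a whole number. Rounding down to 4 whole frequencies gives current(0) = dsII + 4 * 3 days = 2014-10-18T06:00Z, one day before the nominal time.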
12-16-2016
03:10 PM
In your calculation, the initial instance (dsII) is 2014-10-06T06:00Z, the frequency (dsF) is 3 days, and the coordinator's nominal time (caNT) is 2014-10-19T06:00Z. Using that information, you'll have data instances for 2014-10-06T06:00Z, 2014-10-09T06:00Z, 2014-10-12T06:00Z, 2014-10-15T06:00Z, and 2014-10-18T06:00Z. The next data instance would occur at 2014-10-21T06:00Z, which is after the caNT. So the last "usable" data instance occurs at 2014-10-18T06:00Z.
12-16-2016
08:04 AM
@Vijaya Narayana Reddy Bhoomi Reddy At this point the Ranger KMS import/export scripts do not allow copying a subset of keys; it's all or nothing. See the sketch below for what that flow looks like.
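Roughly, the all-or-nothing flow is as follows. The script names and paths here are from memory and may differ by HDP version, so verify them against your install before relying on this:

# on the source cluster's Ranger KMS host (script names/paths are assumptions)
cd /usr/hdp/current/ranger-kms
./exportKeysToJCEKS.sh /tmp/all-kms-keys.jceks
# copy the JCEKS file to the target KMS host, then import every key there
./importJCEKSKeys.sh /tmp/all-kms-keys.jceks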
12-16-2016
07:58 AM
@rbiswas If you just use the GetHDFS processor, it should decrypt the data before moving it to DR (assuming it has the necessary read permissions), and once there you can write it into an encryption zone. The data in flight will be decrypted, though. Alternatively, you can copy the data in its encrypted form: you'd need to share the keys between the clusters and use HDF or DistCp to copy the files from the "/.reserved/..." folder rather than the regular folder, as in the sketch below. Take a look at the article here for clarification: https://community.hortonworks.com/articles/51909/how-to-copy-encrypted-data-between-two-hdp-cluster.html
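A minimal sketch of the raw-copy variant (the namenode addresses and zone paths are placeholders; run it as the HDFS superuser so DistCp can read the raw bytes):

# /.reserved/raw exposes the still-encrypted bytes of the encryption zone;
# -px preserves the extended attributes carrying the file encryption metadata
hadoop distcp -px \
  hdfs://source-nn:8020/.reserved/raw/zone1/data \
  hdfs://dr-nn:8020/.reserved/raw/zone1/data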
12-16-2016
07:39 AM
@Smart Solutions Not sure of the answer to that, but if you're concerned about tmp data being unencrypted/intercepted, then you may consider copying it over in its encrypted form. This will also reduce the decryption/re-encryption overhead. The link below talks about the different options for doing this: https://community.hortonworks.com/articles/51909/how-to-copy-encrypted-data-between-two-hdp-cluster.html
12-16-2016
07:31 AM
1 Kudo
@Rahul Reddy You can use the NiFi SOAP processor from the link below: https://github.com/apsaltis/nifi-soap
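If you haven't deployed a community processor before, the usual flow is to build the NAR and drop it into NiFi's lib directory. This is a sketch; the repo's build layout and your NiFi install path are assumptions:

# build the processor bundle (assumes git, Maven, and a JDK are available)
git clone https://github.com/apsaltis/nifi-soap
cd nifi-soap
mvn clean package
# copy the built .nar into NiFi's lib directory (path is an assumption), then restart NiFi
cp $(find . -name "*.nar") /path/to/nifi/lib/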