Member since: 08-07-2018
Posts: 12
Kudos Received: 1
Solutions: 2

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7601 | 10-31-2018 05:42 PM
 | 6000 | 08-24-2018 09:41 PM
11-05-2018
06:43 PM
This does not work in HDP 3.0.1:
hbase(main):001:0> disable 'atlas_titan'
ERROR: Table atlas_titan does not exist.
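A hedged note for context: Atlas on HDP 3.x moved from Titan to JanusGraph, so the backing HBase table usually has a different name. The table name below is an assumption; list the tables first to confirm what this cluster actually uses:

hbase(main):001:0> list                    # confirm the actual Atlas table name first
hbase(main):002:0> disable 'atlas_janus'   # assumed JanusGraph-era name; verify via list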
10-31-2018
05:42 PM
1 Kudo
Changing offsets.topic.replication.factor in the Kafka config to 1 (the number of brokers) addressed the issue.
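A minimal sketch of that change, assuming a single-broker cluster and the stock HDP config path; the describe call verifies the internal offsets topic afterwards:

# /etc/kafka/conf/server.properties (path assumed)
offsets.topic.replication.factor=1

# after restarting the broker, verify the internal offsets topic
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets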
10-13-2018
09:07 AM
I'm trying to build Atlas from source: https://github.com/hortonworks/atlas-release/tree/HDP-3.0.1.0-187-tag
When executing mvn clean install, the build fails with:
[ERROR] Failed to execute goal on project atlas-testtools: Could not resolve dependencies for project org.apache.atlas:atlas-testtools:jar:1.0.0.3.0.0.0-SNAPSHOT: The following artifacts could not be resolved: org.apache.hadoop:hadoop-annotations:jar:3.0.0.3.0.0.0-SNAPSHOT, org.apache.hadoop:hadoop-auth:jar:3.0.0.3.0.0.0-SNAPSHOT, org.apache.hadoop:hadoop-common:jar:3.0.0.3.0.0.0-SNAPSHOT, org.apache.hadoop:hadoop-hdfs:jar:3.0.0.3.0.0.0-SNAPSHOT, org.apache.zookeeper:zookeeper:jar:3.4.6.3.0.0.0-SNAPSHOT: Could not find artifact org.apache.hadoop:hadoop-annotations:jar:3.0.0.3.0.0.0-SNAPSHOT in apache.snapshots.repo (https://repository.apache.org/content/groups/snapshots) -> [Help 1]
How can I make it resolve these dependencies?
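Not an answer from the thread, just a hedged guess: HDP-versioned artifacts (the *.3.0.0.0 version suffix) are generally published to the Hortonworks public repository rather than apache.snapshots.repo, so the build may need a repository entry along these lines merged into the build's pom or ~/.m2/settings.xml (repository id and URL are assumptions; verify them before relying on this):

# hypothetical sketch: write the fragment, then merge it by hand into the
# <repositories> section of the pom or an active profile in ~/.m2/settings.xml
cat > hortonworks-repo.xml <<'EOF'
<repository>
  <id>hortonworks-releases</id>
  <url>https://repo.hortonworks.com/content/repositories/releases/</url>
</repository>
EOF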
Labels:
- Apache Atlas
09-12-2018
11:28 PM
@Jay Kumar SenSharma Yes, I have Hive clients installed on all nodes. I do have Ranger installed, but all plugins are disabled, so I do not think it affects this in any way. I do see the messages about the new database posted to the Kafka ATLAS_HOOK topic with:
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic ATLAS_HOOK --from-beginning
but for some reason it does not work with --bootstrap-server:
./kafka-console-consumer.sh --bootstrap-server localhost:6667 --topic ATLAS_HOOK --from-beginning
[2018-09-12 14:41:32,010] WARN [Consumer clientId=consumer-1, groupId=console-consumer-67769] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
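A hedged first check (HDP default paths assumed): "Connection to node -1 could not be established" usually means the bootstrap address is not reachable as given, so confirm what the broker actually binds and advertises on 6667:

grep -E '^(listeners|advertised\.listeners|port)' /etc/kafka/conf/server.properties
netstat -tlnp | grep 6667   # is anything listening on 6667 on this host?
# note: on a Kerberized HDP cluster the 6667 listener may be PLAINTEXTSASL,
# which the plain console consumer cannot speak without --security-protocol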
09-12-2018
02:58 AM
I have Hive and Atlas installed on HDP 2.6.5, with the Hive hook for Atlas enabled and no changes to the configuration. I am able to successfully import Hive metadata with import-hive.sh, but the Hive hook does not seem to work: when I create a database in Hive, it does not show up in Atlas. The only things I see in the logs are the following. In atlas/application.log:
ERROR - [pool-2-thread-5 - de30e17a-1db7-4aad-8f34-a61a27b33cff:] ~ graph rollback due to exception AtlasBaseException:Instance __AtlasUserProfile with unique attribute {name=admin} does not exist (GraphTransactionInterceptor:73)
In the Hive logs:
./hiveserver2.log.2018-09-10:2018-09-10 14:55:42,587 INFO [HiveServer2-Background-Pool: Thread-61]: hook.AtlasHook (AtlasHook.java:<clinit>(99)) - Created Atlas Hook
./hiveserver2.log:2018-09-11 09:04:20,100 INFO [HiveServer2-Background-Pool: Thread-3201]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=PostHook.org.apache.atlas.hive.hook.HiveHook from=org.apache.hadoop.hive.ql.Driver>
./hiveserver2.log:2018-09-11 09:04:20,100 INFO [HiveServer2-Background-Pool: Thread-3201]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=PostHook.org.apache.atlas.hive.hook.HiveHook from=org.apache.hadoop.hive.ql.Driver>
When I search for the database name in /kafka-logs/, it only shows up in ./ATLAS_HOOK-0/00000000000000000014.log, and the entry looks like:
{"msgSourceIP":"172.18.181.235","msgCreatedBy":"hive","msgCreationTime":1536681860109,"message":{"entities":{"referredEntities":{},"entities":[{"typeName":"hive_db","attributes":{"owner":"hive","ownerType":"USER","qualifiedName":"oyster9@bigcentos","clusterName":"bigcentos","name":"oyster9","location":"hdfs://host.com:8020/apps/hive/warehouse/oyster9.db","parameters":{}},"guid":"-82688521591125","version":0}]},"type":"ENTITY_CREATE_V2","user":"hive"},"version":{"version":"1.0.0"},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1}
This tells me that the message about the new database gets written to the Kafka topic but is not read by Atlas. I do not know where to look next; my goal is that when a database is created in Hive, it shows up in Atlas automatically.
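Since the message reaches the ATLAS_HOOK topic but Atlas never processes it, one hedged place to look next (paths assumed from HDP defaults) is the consumer side, i.e. Atlas's own Kafka notification settings and its consumer's log output:

# which broker/ZooKeeper does Atlas consume notifications from?
grep '^atlas.kafka' /etc/atlas/conf/atlas-application.properties

# does the notification consumer log errors while polling?
grep -i 'NotificationHookConsumer\|ATLAS_HOOK' /var/log/atlas/application.log | tail -20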
Labels:
- Apache Atlas
09-10-2018
01:16 AM
Thank you for your answer, @Vinicius Higa Murakami, this was very helpful! It looks like I was missing the last part: I had to add the root and intermediate certificates to the default Java keystore, "cacerts". I wonder why we need to add the certs to the default Java keystore and not the rangeradmin & client keystores?
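For reference, a minimal sketch of why cacerts comes into play: any outbound TLS call a JVM makes is validated against the default cacerts unless a truststore is supplied explicitly through system properties, e.g. (paths and values assumed from the setup above):

# without flags like these, the JVM falls back to $JAVA_HOME/jre/lib/security/cacerts
export JAVA_OPTS="$JAVA_OPTS \
  -Djavax.net.ssl.trustStore=/etc/ranger/admin/conf/ranger-admin-truststore.jks \
  -Djavax.net.ssl.trustStorePassword=pass2word"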
09-07-2018
07:52 PM
I am trying to set up SSL between Ranger Admin and Ranger HDFS plugin (HDP 2.6.5)
I am able to successfully have Ranger Admin UI served via HTTPS (browser says certificate is valid), but my HDFS plugin is not able to sync the policies with Ranger.
My setup:
4-node cluster
Ranger Admin on Node 1
HDFS NameNode on Node 1
HDFS SNameNode on Node 2
HDFS DataNode on Node 4
As far as I understand, I need to set up keystores and truststores on Node 1 and configure Ranger & HDFS to use those keystores and truststores.
Steps performed:
Obtained the PKCS7b (.pem) certificate file from my CA (my CA only offers DER, PKCS7b and CRT files)
Created a keystore for Ranger Admin Service from the certificate
cp cert.pem cert.p7b
openssl pkcs7 -print_certs -in cert.p7b -out cert.cer
openssl pkcs12 -export -in cert.cer -inkey certkey.key -out rangeradmin.p12 -name rangeradmin
keytool -importkeystore -deststorepass pass2word -destkeypass pass2word -srckeystore rangeradmin.p12 -srcstoretype PKCS12 -destkeystore ranger-admin-keystore.jks -deststoretype JKS -alias rangeradmin
Similarly, created a keystore for HDFS:
keytool -importkeystore -deststorepass pass2word -destkeypass pass2word -srckeystore rangeradmin.p12 -srcstoretype PKCS12 -destkeystore hdfs-plugin-keystore.jks -deststoretype JKS -alias hdfsplugin
Created the Ranger Admin truststore:
keytool -export -keystore hdfs-plugin-keystore.jks -alias hdfsplugin -file hdfsplugin.cer -storepass pass2word
keytool -import -file hdfsplugin.cer -alias hdfsplugin -keystore /etc/ranger/admin/conf/ranger-admin-truststore.jks -storepass pass2word
Similarly, created the HDFS truststore:
keytool -export -keystore ranger-admin-keystore.jks -alias rangeradmin -file rangeradmin.cer -storepass pass2word
keytool -import -file rangeradmin.cer -alias rangeradmin -keystore /etc/hadoop/conf/hdfs-plugin-truststore.jks -storepass pass2word
At this point I have my Ranger keystore and truststore files in /etc/ranger/admin/conf, and HDFS has its own in /etc/hadoop/conf/.
Set up Ranger to use SSL with the Ranger Admin keystore by following https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_security/content/configure_ambari_ranger_ssl_public_ca_certs_admin.html . At this point the Ranger Admin UI is served via HTTPS (the browser says the certificate is valid).
Set up HDFS to use SSL via Ambari
HDFS > Configs > Advanced > ranger-hdfs-policymgr-ssl and set the following properties:
xasecure.policymgr.clientssl.keystore = /etc/hadoop/conf/hdfs-plugin-keystore.jks
xasecure.policymgr.clientssl.keystore.password = pass2word
xasecure.policymgr.clientssl.truststore = hdfs-plugin-truststore.jks
xasecure.policymgr.clientssl.truststore.password = pass2word
HDFS > Configs > Advanced > Advanced ranger-hdfs-plugin-properties:
common.name.for.certificate = hdfsplugin
The problem is that the HDFS Ranger policies do not get synced. Ranger Admin's xa_portal.log says "Unauthorized access. Unable to get client certificate."
What am I missing here?
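Some hedged sanity checks for that error (aliases and passwords taken from the steps above; the 6182 admin HTTPS port and hostname are assumptions): "Unable to get client certificate" suggests the plugin never presented a certificate Ranger trusts, so confirm what each store actually holds and that the handshake reaches the admin port:

# the plugin keystore must hold a PrivateKeyEntry under the expected alias,
# and the files must be readable by the hdfs user
keytool -list -keystore /etc/hadoop/conf/hdfs-plugin-keystore.jks -storepass pass2word

# Ranger's truststore must hold the plugin certificate as a trustedCertEntry
keytool -list -keystore /etc/ranger/admin/conf/ranger-admin-truststore.jks -storepass pass2word

# inspect the TLS handshake against the admin HTTPS port (hostname/port assumed)
openssl s_client -connect node1:6182 </dev/null | head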
Labels:
- Apache Ranger
08-24-2018
09:41 PM
It looks like I was missing Configuring SQL Standard-Based Authorization.
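For reference, a hedged sketch of what that configuration involves; these are the standard SQL Standard-Based Authorization properties for hiveserver2-site.xml (the admin user list is an assumption for this cluster):

# hiveserver2-site.xml (HiveServer2-only settings)
hive.security.authorization.enabled=true
hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
hive.security.authenticator.manager=org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
hive.users.in.admin.role=hive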
08-24-2018
07:54 PM
It appears that neither "hive" nor "root" is in the admin group by default. Here is what I tried:
su - hive
beeline
beeline> !connect jdbc:...
Enter username for jdbc:...: hive
(entered the password)
0: jdbc:...> set role admin;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. hive doesn't belong to role admin (state=08S01,code=1)
I have tried the same as user "root", but also got "root doesn't belong to role admin".
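A hedged way to confirm that (HDP config path assumed): admin-role membership is driven by hive.users.in.admin.role in hiveserver2-site.xml, so check whether the user is listed there; after adding it and restarting HiveServer2, `set role admin;` should succeed:

grep -B1 -A2 'hive.users.in.admin.role' /etc/hive/conf/conf.server/hiveserver2-site.xml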
Labels:
- Apache Hive