Member since: 07-24-2019
Posts: 46
Kudos Received: 31
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1412 | 01-30-2017 09:57 PM |
| | 9018 | 12-17-2016 12:11 AM |
| | 2673 | 07-06-2016 06:54 PM |
| | 2550 | 07-05-2016 05:41 PM |
| | 3126 | 06-16-2016 04:03 PM |
11-15-2023
09:55 AM
Hello everyone, I updated the krb5-* packages to their latest version and that fixed the problem. Regards.
09-04-2020
10:03 PM
@Wynner What do you mean by "pull the data" and "reads the data" with the GetTCP processor?
11-01-2018
09:02 PM
2 Kudos
Below are some FAQs to help you quickly find important information for a DPS-DLM deployment.

Prerequisites for DPS and DLM:
- DB version: Postgres 9.3 to 9.6
- OS: RHEL 7.0 and above
- Ambari: 2.6.2
- HDP: 2.6.5
- DistCp should work between the source and target clusters.
- The Beacon user should be created in AD; there is no option to use a custom user in the DLM 1.1 release.
- The onboarded service user for your application should exist in AD and must resolve (id <username>) on both the source and target clusters.
- Docker must be installed (check with docker version).
- The required ports need to be open between the source and target clusters, and also to access the DPS UI.

Where do I install the DLM and DPS software components?
The DLM engine needs to be installed as an m-pack on both clusters using the Ambari server. The DLM app is a dockerized container and needs to be installed on the DPS host.

Which URL needs to be given to register a cluster in the DPS UI?
The Ambari URL integrated with Knox: http://<>:8443

I'm unable to see the DLM icon in the DPS UI after enabling the DLM component in DPS.
The user needs to be part of the Infra-admin role. Verify the DLM Engine install, and verify that Beacon was added as a user to the HDFS superuser group:
hdfs groups beacon
The output should display hdfs (or the value of the dfs.permissions.superusergroup config) as one of the groups. The Beacon user should also be part of the Ranger policies.
https://docs.hortonworks.com/HDPDocuments/DLM1/DLM-1.2.0/installation/content/dlm_verify_the_dlm_engine_installation.html

Most-used commands for troubleshooting. On the DPS host, use the commands below:
docker ps                                                # check ports, containers and uptime
docker images
docker exec -it <docker-name>
docker exec -it 029ec380bb3d /bin/ls -alrt /usr/dp-app/
docker logs --follow dp-app
docker exec -it d6390b6c0c50 /bin/ls -alrt /usr/dp-app/

Required machine config for DPS and DLM:
DPS runs on a separate machine which runs all the Docker containers. A master-node configuration is recommended for this host, with at least 64 GB of memory. If you are using an external database on the same host, consider more memory and CPU.

For Hive replication, Beacon auto-creates a deny policy in Ranger on the target cluster. Is this expected behavior or a bug in DLM 1.1?
This is expected: it prevents any writes outside of replication to the target database. The deny policy is only on the replication target database.

For Hive replication, can we schedule jobs on a per-table basis?
No, the current DLM 1.1 release supports only database-level replication.

Please upvote if this is helpful.
08-27-2018
07:36 PM
4 Kudos
HIVE LLAP - a one-page architecture overview
https://community.hortonworks.com/articles/149894/llap-a-one-page-architecture-overview.html

Hive - understanding concurrent sessions + queue allocation + preemption
https://community.hortonworks.com/articles/56636/hive-understanding-concurrent-sessions-queue-alloc.html

Hive LLAP dashboards
https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-operations/content/grafana_hive_llap_dashboards.html

Hive LLAP logs info
https://community.hortonworks.com/articles/149896/llap-debugging-overview-logs-uis-etc.html

Monitoring LLAP metrics
http://www.kartikramalingam.com/hive-llap/

Debugging a Hive LLAP query
https://community.hortonworks.com/articles/149896/llap-debugging-overview-logs-uis-etc.html

Question on Hive LLAP benchmarks: please share any Hive LLAP benchmarks.
https://hortonworks.com/blog/3x-faster-interactive-query-hive-llap/

LLAP tuning: here is an excellent article on LLAP tuning.
https://community.hortonworks.com/articles/149486/llap-sizing-and-setup.html
09-30-2017
10:18 AM
2 Kudos
Thanks to @Matt Clarke for resolving this major issue. In a typical customer environment there is a challenge when deploying an HDF cluster and enabling LDAPS authentication, because of username case. In Active Directory the userid exists in upper case (e.g., for EmpId X1122), but when I imported users into Ranger with lowercase=true, all imported users were displayed in lower case (x1122). I created all the required policies for Kafka and NiFi, and the smoke tests for Kafka PASSED. But the smoke tests for NiFi FAILED, because NiFi respects only the AD value (X1122) and has no built-in intelligence to do a case conversion. All the NiFi Ranger policies have the userid as (x1122), so the Ranger NiFi policies are not applicable in this scenario and the Ranger NiFi plugin authorization does not work correctly. As a result, NiFi Ranger authorization failed to grant access to view the NiFi UI under the /flow Ranger policy.

NiFi does not have an option to change the case of returned results, but the ldap-provider has two configuration options for "Identity Strategy":
1. (default) USE_DN: this strategy uses the user's complete DN returned by LDAP upon successful authentication for authorization.
2. USE_USERNAME: this strategy uses the username as typed in the login screen for authorization upon successful authentication with LDAP.

No matter which method of authentication is used, the resulting value is passed through any identity-mapping patterns configured in NiFi, and the result is sent to the configured authorizer, which in this case is Ranger.

We resolved the issue by using "USE_USERNAME": as long as the user logs in with an all-lowercase username, it works. We also changed the user search filter to:
<property name="User Search Filter">(&(sAMAccountName={0})(memberOf=CN=hwx,OU=Groups,OU=Global,OU=XX,DC=XX,DC=XX))</property>
and the proper search base needed to be:
<property name="User Search Base">OU=Users,OU=XX,DC=XX,DC=XX</property>
09-16-2017
11:52 PM
2 Kudos
HIVE Beeline:
============
Binary mode:
!connect 'jdbc:hive2://prod07.app.hwx.com:10000/;transportMode=binary'

HTTP mode:
beeline -u 'jdbc:hive2://prod07.app.hwx.com:10001/;transportMode=http;httpPath=cliservice'

In an HS2 HA environment with ZooKeeper auto-discovery:
!connect jdbc:hive2://prod09.app.hwx.com:2181,prod10.app.hwx.com:2181,prod11.app.hwx.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

In a Kerberos environment:
!connect 'jdbc:hive2://prod07.app.hwx.com:10001/default;principal=hive/prod07.app.hwx.com@EXAMPLE.COM;transportMode=http;httpPath=cliservice'

Knox with Beeline:
!connect jdbc:hive2://knox101.app.hwx.com:8443/default;transportMode=http;httpPath=gateway/default/hive;ssl=true

Knox with WebHDFS:
curl -iku raj_ops -X GET https://knox101.app.hwx.com:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS

Please upvote if this article helps you.
11-13-2018
08:30 PM
Hi Avoma, Does this mean that we need to disable SSL in Knox? Is it an HTTP connection between the load balancer and the Knox gateway? Thanks, Balu Rajendran
04-27-2017
06:45 PM
1 Kudo
If you want to verify the certificate contents of the Knox server, execute the command below:
openssl s_client -showcerts -connect 127.0.0.1:8443

If developers want to connect to Knox with SSL enabled, copy the cert contents from the command above into a knox.crt file and import it into a keystore by executing:
keytool -import -keystore myLocalTrustStore.jks -file knox.crt

Now developers can connect as below:
beeline> !connect "jdbc:hive2://hadoop-knox.dev.XXXX.com:8443/default;transportMode=http;httpPath=gateway/default/hive;ssl=true;sslTrustStore=/tmp/knoxhacerts/new/myLocalTrustStore.jks;trustStorePassword=knoxdev"

Hive JDBC URL template:
jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password};transportMode=http;httpPath={gateway-path}/{cluster-name}/hive

If you want to list the certs imported in a JKS file, execute:
keytool -v -list -keystore gateway.jks

Command to create a new truststore myNewTrustStore.jks (knox-cert.pem is the saved knox.crt certificate in PEM format):
keytool -import -alias knox -keystore ./myNewTrustStore.jks -file ./knox-cert.pem

If you want to change the SSL certificate for Knox:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/knox_ca_signed_certificates_production.html

Please upvote if this article helps.
07-06-2016
06:54 PM
The problem got fixed: instead of passing <java-opts> -Dp1=v1 -Dp2=v2 </java-opts>, we wrote it as a one-liner, <java-opts>-Dp1=v1 -Dp2=v2 </java-opts>, with no leading space after the opening tag.