Member since: 09-01-2016
Posts: 52
Kudos Received: 13
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 16945 | 02-26-2018 02:37 PM |
| | 3237 | 01-25-2017 07:41 PM |
04-26-2024
07:12 AM
2 Kudos
If you want to force CDP onto Rocky 8:

echo "ID_LIKE=\"Red Hat Enterprise Linux release 8.7 (Ootpa)\"" >> /etc/rocky-release

This fixes the install-agents hang.

echo "ID=\"rhel\"" >> /usr/lib/os-release

This fixes the install-parcels hang. Note that removing this ID variable from os-release after deployment will cause Hadoop to fail to restart.
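Putting the two workarounds together, a minimal sketch of the pre-install steps could look like this (assuming it is run as root on every Rocky 8 host before installing the agents and parcels):

```bash
#!/bin/bash
# Sketch of the Rocky 8 / CDP workaround described above (run as root on each host).

# Make the host report itself as RHEL-like: fixes the install-agents hang.
echo 'ID_LIKE="Red Hat Enterprise Linux release 8.7 (Ootpa)"' >> /etc/rocky-release

# Advertise ID=rhel in os-release: fixes the install-parcels hang.
echo 'ID="rhel"' >> /usr/lib/os-release

# Sanity check: this ID line must stay in place after deployment,
# otherwise Hadoop services will fail to restart.
grep '^ID=' /usr/lib/os-release
```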
06-02-2023
04:22 PM
1 Kudo
This response is NOT about fixing "files with corrupt replicas" but about finding and fixing files that are completely corrupt, i.e. files with no good replicas left to recover from. The "files with corrupt replicas" warning means a file has at least one corrupt replica but can still be recovered from the remaining ones. In that case hdfs fsck /path ... will not show these files because it considers them healthy. These files and their corrupted replicas are only reported by the command hdfs dfsadmin -report, and as far as I know there is no direct command to fix this. The only way I have found is to wait for the Hadoop cluster to heal itself by re-replicating the bad replicas from the good ones.
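For reference, a rough sketch of how to see the difference between the two reports (the path and grep pattern are only illustrative):

```bash
# fsck lists files it considers unhealthy; a file with one corrupt replica but
# other good replicas is still reported as healthy here.
hdfs fsck /path -files -blocks -locations

# dfsadmin -report exposes the cluster-wide counters, including
# "Blocks with corrupt replicas" and "Missing blocks".
hdfs dfsadmin -report | grep -iE 'corrupt|missing|under replicated'
```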
11-16-2022
01:24 AM
How would you check logs related to LDAP? In my case, all the Docker containers (superset_app, superset-worker) show no errors, but I am not able to log in with either a normal user or an LDAP one. My configuration:

from flask_appbuilder.security.manager import AUTH_LDAP
AUTH_TYPE = AUTH_LDAP
AUTH_USER_REGISTRATION = True
AUTH_LDAP_SERVER = "ldap://localhost:389"
# AUTH_LDAP_SEARCH="ou=people,dc=superset,dc=com"
AUTH_LDAP_SEARCH= "cn=admin,dc=ramhlocal,dc=com"
# AUTH_LDAP_APPEND_DOMAIN = "XXX.com"
AUTH_LDAP_UID_FIELD="cn"
AUTH_LDAP_FIRSTNAME_FIELD= "Rohit"
AUTH_LDAP_LASTTNAME_FIELD= "sn"
AUTH_LDAP_USE_TLS = False
# AUTH_LDAP_UID_FIELD=sAMAccountName
# AUTH_LDAP_BIND_USER=CN=Bind,OU=Admin,dc=our,dc=domain
AUTH_LDAP_ALLOW_SELF_SIGNED= True
AUTH_LDAP_APPEND_DOMAIN= False
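For context, this is roughly how the container logs were checked so far (the container names are the ones from my docker-compose setup, mentioned above):

```bash
# Grep the Superset containers' logs for anything LDAP-related.
# Container names (superset_app, superset-worker) come from my docker-compose setup.
docker logs superset_app 2>&1 | grep -i ldap
docker logs superset-worker 2>&1 | grep -i ldap
```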
01-14-2019
08:37 PM
1 Kudo
@Fernando Lopez Bello Based on the above configuration file for the Zeppelin queue: although user A, submitting a job first, initially utilizes all the resources, the queue's minimum-user-limit-percent is set to 20, so the queue resources will be shared among subsequent users. Below is a link that explains this with an example: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_yarn-resource-management/content/setting_user_limits.html If you don't want all the resources to be taken by user A even when there are no other users, you can use user-limit-factor; below is a link to a nice article about it. I can see the user-limit-factor is 3 for the Zeppelin queue, which means each user can utilize up to 3 times the queue capacity if resources are available and elasticity permits: https://community.hortonworks.com/content/supportkb/49640/what-does-the-user-limit-factor-do-when-used-in-ya.html In a nutshell, minimum-user-limit-percent is a soft limit and user-limit-factor is a hard limit.
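As a rough worked example with assumed numbers: with minimum-user-limit-percent = 20, once five users are active in the Zeppelin queue, each one's share is limited to about 100/5 = 20% of the queue; and with user-limit-factor = 3 and a hypothetical queue capacity of 25% of the cluster, a single user can grow to at most 3 x 25% = 75% of cluster resources, provided resources are free and elasticity/maximum-capacity allow it.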
03-07-2018
12:40 PM
For me, proxy settings (no matter whether they were set in IntelliJ, SBT.conf, or environment variables) did not work. A couple of considerations that solved this issue (for me at least):
- use SBT 0.13.16 (not newer than that)
- set "Use Auto Import"
Then no "FAILED DOWNLOADS" messages appear.
12-12-2017
11:23 AM
2 Kudos
Hi @Fernando Lopez Bello Sharing of interpreter processes is easily adjustable. Go to the interpreter settings page and scroll down to the Spark interpreter. By default, interpreters are shared globally, i.e. all notes/users share the same interpreter instance (and hence the same Spark context). Change the setting to either "per note" or "per user" depending on your use case:
- Per Note: each note instantiates a separate interpreter process.
- Per User: each user instantiates a separate interpreter process, which is shared among the notes he/she owns.
Below is an article written by one of the original developers of Zeppelin describing interpreter modes: https://medium.com/@leemoonsoo/apache-zeppelin-interpreter-mode-explained-bae0525d0555 Zeppelin documentation: https://zeppelin.apache.org/docs/latest/manual/interpreters.html#interpreter-binding-mode
10-24-2017
04:03 PM
User holger_gov does not have privileges to create policies. The user has to have the ADMIN role in Ranger or needs to be a delegated admin for the specified resource. Can you check this? See https://community.hortonworks.com/content/kbentry/88202/apache-ranger-delegated-admin.html for the delegated admin feature. Also, the Ranger version in HDP 2.6.1 should be Ranger 0.7.
07-19-2018
02:41 PM
I copied /usr/hdp/current/atlas-client/hook/storm/atlas-storm-plugin-impl/storm-bridge-xxx.xxxxxxxxx.jar to /usr/hdp/current/storm-client/lib and /usr/hdp/current/storm-client/extlib, but it didn't work.
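For clarity, the copy steps were roughly the following (the exact jar version string is environment-specific, hence the wildcard):

```bash
# Rough sketch of the copy described above; the real storm-bridge jar name
# contains a version string that is elided here with a wildcard.
SRC=/usr/hdp/current/atlas-client/hook/storm/atlas-storm-plugin-impl
cp "$SRC"/storm-bridge-*.jar /usr/hdp/current/storm-client/lib/
cp "$SRC"/storm-bridge-*.jar /usr/hdp/current/storm-client/extlib/
```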
01-26-2017
12:26 PM
1 Kudo
@Fernando Lopez Bello To push the graph to the customer you can do this:
- in the Zeppelin UI where you have the graph generated, at the top-right corner, change Default to Report
- now get the URL and send it to the customer
In my test it looks like this: