Member since
01-19-2017
3676 Posts
632 Kudos Received
372 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 503 | 06-04-2025 11:36 PM |
| | 1047 | 03-23-2025 05:23 AM |
| | 547 | 03-17-2025 10:18 AM |
| | 2045 | 03-05-2025 01:34 PM |
| | 1280 | 03-03-2025 01:09 PM |
01-19-2020
12:04 AM
1 Kudo
@anki63 Can you share the updates on this thread?
01-14-2020
11:43 PM
1 Kudo
@Seaport Since the zeppelin user falls under [other] in the directory's permissions, you will need to grant access at the user level. Remember, fine-grained security should only grant what is necessary!

```shell
$ hdfs dfs -getfacl /warehouse/tablespace/managed/hive
# file: /warehouse/tablespace/managed/hive
# owner: hive
# group: hadoop
user::rwx
group::---
other::---
default:user::rwx
default:user:hive:rwx
default:group::---
default:mask::rwx
default:other::---
```

The command below sets the [r-x] bits in the ACL; you can change them to rwx if you wish:

```shell
hdfs dfs -setfacl -R -m user:zeppelin:r-x /warehouse/tablespace/managed/hive
```

Thereafter the zeppelin user can list the directory:

```shell
[zeppelin~]$ hdfs dfs -ls /warehouse/tablespace/managed/hive
Found 3 items
drwxrwx---+  - hive hadoop  0 2018-12-12 23:42 /warehouse/tablespace/managed/hive/information_schema.db
drwxrwx---+  - hive hadoop  0 2018-12-12 23:41 /warehouse/tablespace/managed/hive/sys.db
drwxrwx---+  - hive hadoop  0 2020-01-15 00:20 /warehouse/tablespace/managed/hive/zepp.db
```

The earlier error is gone:

```text
ls: Permission denied: user=zeppelin, access=READ_EXECUTE, inode="/warehouse/tablespace/managed/hive":hive:hadoop:drwx------
```

Happy hadooping
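One detail worth knowing: a plain `-setfacl -R` only covers files and directories that already exist. Pairing it with a `default:` ACL makes databases and tables created later inherit the grant as well. A minimal sketch, printed as a dry run since it needs a live cluster; the path and user are the ones from this thread:

```shell
# Grant zeppelin read+execute on existing data AND on files created later,
# by pairing an access ACL with a default ACL.
TARGET="/warehouse/tablespace/managed/hive"
GRANTEE="zeppelin"

# Dry run: print the commands (remove 'echo' to really run them as the hdfs superuser).
echo hdfs dfs -setfacl -R -m "user:${GRANTEE}:r-x" "$TARGET"
echo hdfs dfs -setfacl -R -m "default:user:${GRANTEE}:r-x" "$TARGET"
```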
01-14-2020
07:42 AM
@TVGanesh Great, it worked out for you. If you think my answer helped resolve the issue, please accept it to close the thread. Happy hadooping.
01-14-2020
01:11 AM
@TVGanesh Isn't the PySpark file expected in HDFS when using YARN instead of LOCAL? What is the configuration of your livy.conf? If you don't have one in place, do the following.

For YARN you should add test1.py to HDFS and point to the HDFS location rather than the local filesystem, since the local file won't be present for Livy:

```json
{
  "pyFiles": ["/user/tvganesh/test1.py"]
}
```

Go to the Livy conf directory (cd /usr/hdp/3.1.0.0-78/etc/livy2/conf.dist/conf), copy livy.conf.template to livy.conf (i.e. strip off the .template suffix), then make sure the following configurations are present in it. Also make sure the whitelist path ends with a forward slash.

```properties
# What spark master Livy sessions should use.
livy.spark.master = local

# What spark deploy mode Livy sessions should use.
livy.spark.deploy-mode =

# Whether to enable HiveContext in the Livy interpreter; if true, hive-site.xml will be
# detected on user request and added to the Livy server classpath automatically.
livy.repl.enable-hive-context =

# List of local directories from which files are allowed to be added to user sessions. By
# default it's empty, meaning users can only reference remote URIs when starting their
# sessions.
livy.file.local-dir-whitelist =
```

For local execution:

```properties
livy.spark.master = local
livy.file.local-dir-whitelist = /home/tvganesh/
```

For YARN execution:

```properties
livy.spark.master = yarn
livy.file.local-dir-whitelist = /user/tvganesh/
```

Please do that and revert.
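Once livy.conf is in place, you can sanity-check the setup by submitting the script as a batch job through Livy's REST API (POST to /batches). A dry-run sketch; the host name is an assumption for illustration, and in YARN mode the file must be an HDFS path as advised above:

```shell
# Assumption: Livy listens on livy-host.example.com:8998 (adjust to your cluster).
LIVY_URL="http://livy-host.example.com:8998"
PAYLOAD='{"file": "/user/tvganesh/test1.py"}'

# Dry run: print the request instead of sending it (drop 'echo' to really submit).
echo curl -s -X POST -H 'Content-Type: application/json' \
     -d "$PAYLOAD" "$LIVY_URL/batches"
```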
01-13-2020
10:11 AM
@peterpiller I think you will need to set up cross-realm trust between the two MIT KDCs for REALM_01 and REALM_02. If you have a mix of MIT KDC and AD, then have a look at this MIT/AD Kerberos setup; this will ensure you have a valid ticket for both realms. HTH
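For two MIT realms, the trust itself is established by creating matching `krbtgt/REALM_02@REALM_01` and `krbtgt/REALM_01@REALM_02` principals (with identical passwords) in both KDCs, and then telling clients how to walk the trust path. A hedged sketch of the `[capaths]` section in /etc/krb5.conf, using the realm names from the question (`.` means a direct trust with no intermediate realm):

```ini
[capaths]
    REALM_01 = {
        REALM_02 = .
    }
    REALM_02 = {
        REALM_01 = .
    }
```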
01-13-2020
09:32 AM
@SShubhendu Without sharing the command being executed, it's difficult to help. Please include the Kafka version and whether it is standalone or CDH/HDP. Your kafka-console-producer.sh command could be the source of the problem. HTH
01-12-2020
12:56 PM
1 Kudo
@mike_bronson7 When your cluster is in HA, it uses a nameservice (namespace) that acts as a load balancer to facilitate the switch from active to standby and vice versa. The hdfs-site.xml holds these values; filter using dfs.nameservices. The nameservice ID should be your namespace; in HA also look for dfs.ha.namenodes.[nameservice ID], e.g. dfs.ha.namenodes.mycluster. And that's the value to set, e.g. hdfs://mycluster_namespace/user/ams/hbase. Then refresh the stale configs, and HBase should send the metrics to that directory. HTH
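For illustration, the relevant hdfs-site.xml entries look roughly like this; the property names are the standard HDFS HA ones, and `mycluster` stands in for your own nameservice ID:

```xml
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
```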
01-11-2020
01:19 PM
1 Kudo
@anki63 Could you try this solution? Go to the folder "C:\Users\COMPUTER_NAME\.VirtualBox\Machines\VM_NAME\" and check whether you see two xml files with different suffixes: 1. VM_NAME.xml-prev 2. VM_NAME.xml-tmp If so, it simply means the Sandbox couldn't find "VM_NAME.xml" because it didn't exist. Make a copy of the "VM_NAME.xml-prev" file and rename the copy to "VM_NAME.xml". Restart VirtualBox and it should work just fine.
01-11-2020
06:23 AM
@Niruu Here is a link on changing Ambari hostnames that should help you. I have used it successfully before, but there are two hidden, undocumented caveats: you should manually change the hostname in ambari.properties (there should be 2 or 3 properties to match the new VM hostname), and you also have to run some SQL (ALTER/UPDATE statements) in the Ambari, Ranger, Oozie, and Hive databases, i.e. for all the aforementioned components.

The contents of your host_names_changes.json should look like below; make sure you have the correct cluster name. You will be prompted to confirm that you have already backed up your database, etc. In my case I usually just accept; with a VM you can easily create another snapshot and you are good to go. You must have completely stopped Ambari and the agents.

host_names_changes.json:

```json
{
  "cluster1" : {
    "ambari01.example.com" : "ambari02.example.com"
  }
}
```

The command should look like this:

```shell
# ambari-server update-host-names host_names_changes.json
```

After completion, you should see "successful".

Note: Remember to update ambari-agent.ini to point to the new ambari02.example.com. If you use the above document you can stop at step 8; you don't need to format your ZooKeeper.

For the Ambari database, I am assuming you are using MariaDB or MySQL:

```shell
mysql -u <ambari_user> -p<ambari_user_password>
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'<new_FQDN_new_VM>';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'<new_FQDN_new_VM>' IDENTIFIED BY '<ambari_user_password>';
```

For Hive, do the same (and likewise for all the rest):

```shell
mysql -u <hive_user> -p<hive_user_password>
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'<new_FQDN_new_VM>';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'<new_FQDN_new_VM>' IDENTIFIED BY '<hive_user_password>';
```

Hope that helps. Happy hadooping
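The ambari-agent.ini change mentioned above is the `[server]` section on every agent host; a minimal sketch, assuming the usual path /etc/ambari-agent/conf/ambari-agent.ini and the example hostname from this thread:

```ini
[server]
hostname=ambari02.example.com
```

Restart the agents after editing so they register against the new server name.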
01-11-2020
05:33 AM
@wernermarcel CM doesn't have that capability, as compared to Ambari, where you simply add ambari.post.user.creation.hook=/var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh in /etc/ambari-server/conf/ambari.properties and that auto-creates the user's home directory in HDFS. In contrast, if you add users in Hue, it presents an option to do this for you in its user-add wizard screen. But there is a nice article you could try out, though I haven't tested it myself: automatic creation of AD user directories in HDFS. Hope that helps; please share your success with the community. Happy hadooping
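Absent a built-in hook in CM, the usual workaround is a small script run after users are synced that creates any missing HDFS home directories. A dry-run sketch; the user names are examples, and in practice you might pull the list from AD or `getent passwd`:

```shell
# Hypothetical user list; substitute your real AD users.
USERS="alice bob"

for u in $USERS; do
  # Dry run: print the commands (remove 'echo' to execute as the hdfs superuser).
  echo hdfs dfs -mkdir -p "/user/${u}"
  echo hdfs dfs -chown "${u}:${u}" "/user/${u}"
done
```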