Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 994 | 06-04-2025 11:36 PM |
| | 1566 | 03-23-2025 05:23 AM |
| | 782 | 03-17-2025 10:18 AM |
| | 2816 | 03-05-2025 01:34 PM |
| | 1859 | 03-03-2025 01:09 PM |
01-20-2020
06:21 PM
@saivenkatg55 That's the desired result once you enable the Ranger plugin for Hive. As you said, permissions are managed in Ranger. Guessing from your scrambled URL jdbc:hive2://w0lxqhdp03:2181/w0lxq, check the Ranger policies and ensure the user executing the SQL has SELECT on the underlying database w0lxq. Have a look at this Hive/Ranger security guidance. HTH
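Once the SELECT policy is in place in Ranger, a quick way to confirm it is to connect with Beeline and query the database. A minimal sketch, assuming HiveServer2 is reached through ZooKeeper service discovery; the ZooKeeper quorum, namespace and user below are placeholders rather than values recovered from the scrambled URL:
# Connect via ZooKeeper discovery and run a quick check (quorum, namespace and user are placeholders)
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" \
  -n some_user -e "show tables in w0lxq;"
# If the Ranger SELECT policy is effective this lists the tables,
# otherwise it fails with a HiveAccessControlException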
01-20-2020
01:29 AM
@kirkade I thought you had the output directory and it was the connect command that was giving you issues! If you take the time to read through the Sqoop documentation you will see that you need to give Sqoop the destination directory; see the highlighted options. This is how your command should have been. The below assumes you have the privileges to write to the HDFS directory /user/kirkade:
sqoop import \
--connect jdbc:postgresql://127.0.0.1:54322/postgres \
--username postgres \
--password xxxx \
--table dim_parameters \
--schema dwh \
--target-dir /user/kirkade
Hope that helps
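Once the import completes, a quick sanity check is to list the target directory and peek at the generated part files. A minimal sketch, assuming the command above succeeded and the default Sqoop file naming:
# List the files Sqoop wrote to the target directory
hdfs dfs -ls /user/kirkade
# Spot-check the first imported records (comma-delimited text by default)
hdfs dfs -cat /user/kirkade/part-m-00000 | head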
01-19-2020
09:42 AM
@kirkade I can see 2 syntax errors: -table dim_parameters should use a double dash [ -- ], and between dim_parameters and --schema dwh there is the [ -- -- ]:
sqoop import --connect jdbc:postgresql://127.0.0.1:54322/postgres --username postgres -P -table dim_parameters -- --schema dwh
Could you try the below syntax? I intentionally added the --password:
sqoop import \
--connect jdbc:postgresql://127.0.0.1:54322/postgres \
--username postgres \
--password xxxx \
--table dim_parameters \
--schema dwh
Please let me know if that helped.
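Before re-running the import, it can also help to confirm the JDBC connection and credentials on their own. A minimal sketch, assuming the same PostgreSQL endpoint as in your command:
# Verify connectivity and credentials before importing (-P prompts for the password)
sqoop list-tables \
--connect jdbc:postgresql://127.0.0.1:54322/postgres \
--username postgres \
-P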
01-19-2020
12:04 AM
1 Kudo
@anki63 Can you share the updates on this thread?
01-14-2020
11:43 PM
1 Kudo
@Seaport As far as permissions go, the zeppelin user falls under [other], so you will need to grant access at the user level. Remember, with fine-grained security you ONLY give what is necessary!
$ hdfs dfs -getfacl /warehouse/tablespace/managed/hive
# file: /warehouse/tablespace/managed/hive
# owner: hive
# group: hadoop
user::rwx
group::---
other::---
default:user::rwx
default:user:hive:rwx
default:group::---
default:mask::rwx
default:other::---
The command below will set the [ r-x ] bits in the ACL; you can change this to rwx if you wish:
hdfs dfs -setfacl -R -m user:zeppelin:r-x /warehouse/tablespace/managed/hive
Thereafter the zeppelin user can list the directory:
[zeppelin~]$ hdfs dfs -ls /warehouse/tablespace/managed/hive
Found 3 items
drwxrwx---+ - hive hadoop 0 2018-12-12 23:42 /warehouse/tablespace/managed/hive/information_schema.db
drwxrwx---+ - hive hadoop 0 2018-12-12 23:41 /warehouse/tablespace/managed/hive/sys.db
drwxrwx---+ - hive hadoop 0 2020-01-15 00:20 /warehouse/tablespace/managed/hive/zepp.db
The earlier error is gone:
ls: Permission denied: user=zeppelin, access=READ_EXECUTE, inode="/warehouse/tablespace/managed/hive":hive:hadoop:drwx------
Happy hadooping
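To confirm the new entry took effect (and that the ACL mask does not filter it out), it is worth re-reading the ACL after the -setfacl. A minimal sketch, assuming the command above has already been run:
# Re-check the ACL; a user:zeppelin:r-x entry should now be listed
hdfs dfs -getfacl /warehouse/tablespace/managed/hive
# Run as the zeppelin user to confirm read access on one of the databases
hdfs dfs -ls /warehouse/tablespace/managed/hive/sys.db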
01-14-2020
07:42 AM
@TVGanesh Great, it worked out for you. If you think my answer helped resolve the issue, then accept it to close the thread. Happy hadooping.
01-14-2020
01:11 AM
@TVGanesh Isn't the PySpark file expected in HDFS if you are using YARN instead of LOCAL? For the pyFiles you should add test1.py to HDFS and point to the HDFS location instead of the local file system, since it won't be present for Livy locally:
{
"pyFiles": ["/user/tvganesh/test1.py"]
}
What is the configuration of your livy.conf? If you don't have it in place, do the following. Go to the Livy conf directory cd /usr/hdp/3.1.0.0-78/etc/livy2/conf.dist/conf, then copy livy.conf.template to livy.conf (i.e. strip off the .template suffix). Then make sure the following configurations are present in it, and make sure a forward slash is present at the end of the whitelist path.
# What spark master Livy sessions should use.
livy.spark.master = local
# What spark deploy mode Livy sessions should use.
livy.spark.deploy-mode =
# Whether to enable HiveContext in livy interpreter, if it is true hive-site.xml will be detected
# on user request and then livy server classpath automatically.
livy.repl.enable-hive-context =
# List of local directories from where files are allowed to be added to user sessions. By
# default it's empty, meaning users can only reference remote URIs when starting their
# sessions.
livy.file.local-dir-whitelist =
For LOCAL execution:
livy.spark.master = local
livy.file.local-dir-whitelist = /home/tvganesh/
For YARN execution (path in HDFS):
livy.spark.master = yarn
livy.file.local-dir-whitelist = /user/tvganesh/
Please do that and revert.
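Once livy.conf is in place and Livy has been restarted, one way to test the YARN path is to put the script in HDFS and submit it as a batch over Livy's REST API (this is only an illustration of the HDFS-versus-local point, not necessarily how you are calling Livy). A minimal sketch, assuming Livy listens on its default port 8999 on localhost:
# Put the script where a YARN-backed Livy session can reach it
hdfs dfs -put test1.py /user/tvganesh/test1.py
# Submit it as a batch over the Livy REST API
curl -X POST -H "Content-Type: application/json" \
  -d '{"file": "/user/tvganesh/test1.py"}' \
  http://localhost:8999/batches
# Check the state of submitted batches
curl http://localhost:8999/batches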
01-13-2020
10:11 AM
@peterpiller I think you will need to set up a cross-realm trust between the two MIT KDCs for REALM_01 and REALM_02. If you have a mix of MIT KDC and AD, then have a look at this MIT/AD Kerberos setup. This will ensure you have a valid ticket for both domains. HTH
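For the MIT-to-MIT case, the trust itself is created by adding matching cross-realm krbtgt principals on both KDCs. A minimal sketch, assuming kadmin access on both sides and that the password (and kvno) is identical everywhere; REALM_01 and REALM_02 are the realm names from your question:
# On the REALM_01 KDC: lets REALM_01 principals reach services in REALM_02
kadmin.local -q "addprinc -pw SameSecret krbtgt/REALM_02@REALM_01"
# On the REALM_02 KDC: the very same principal with the very same password
kadmin.local -q "addprinc -pw SameSecret krbtgt/REALM_02@REALM_01"
# For a two-way trust, also create krbtgt/REALM_01@REALM_02 on both KDCs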
01-13-2020
09:32 AM
@SShubhendu Without sharing the command being executed it's difficult to help. Please include the Kafka version and whether it is standalone or CDH/HDP. Your kafka-console-producer.sh command could be the source of the problems. HTH
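For comparison, a typical invocation looks like the sketch below. A minimal example, assuming an unsecured broker on localhost:9092 and an existing topic called test (both placeholders):
# Older Kafka releases (as shipped with HDP/CDH at the time) use --broker-list
kafka-console-producer.sh --broker-list localhost:9092 --topic test
# Newer Kafka releases use --bootstrap-server instead
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test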
01-12-2020
12:56 PM
1 Kudo
@mike_bronson7 When your cluster is in HA it uses a namespace (nameservice) that acts as a load balancer to facilitate the switch from active to standby and vice versa. hdfs-site.xml holds these values; filter using dfs.nameservices. The nameservice ID should be your namespace, or in HA look for dfs.ha.namenodes.[nameservice ID], e.g. dfs.ha.namenodes.mycluster. And that's the value to set, e.g. hdfs://mycluster_namespace/user/ams/hbase. Then refresh the stale configs; HBase should now send the metrics to that directory. HTH
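A quick way to read those values straight from the client configuration instead of opening hdfs-site.xml by hand is shown below. A minimal sketch, assuming the HDFS client config is deployed on the node; mycluster and nn1 are placeholders for your own nameservice and NameNode IDs:
# Print the configured nameservice ID (this is the HA namespace to use in the URL)
hdfs getconf -confKey dfs.nameservices
# List the NameNode IDs behind that nameservice
hdfs getconf -confKey dfs.ha.namenodes.mycluster
# Optionally confirm which NameNode is currently active
hdfs haadmin -getServiceState nn1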