Member since: 09-25-2015
Posts: 82
Kudos Received: 93
Solutions: 17
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3848 | 06-06-2017 09:57 AM |
| | 1066 | 03-01-2017 10:26 AM |
| | 1083 | 11-22-2016 10:32 AM |
| | 902 | 08-09-2016 12:05 PM |
| | 1579 | 08-08-2016 03:57 PM |
07-05-2021
05:01 AM
Hi, I have a requirement to create a Hive policy with two groups: one group granting "all" permissions to user "x", and a second group granting "select" permission to user "y". I have created the policy through the REST API with one group with "all" permissions, but how do I specify the second group with "select" permission in the same create-policy call? Thanks in advance! Srini Podili
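A minimal sketch of what this could look like against the Ranger public v2 REST API: each entry in the `policyItems` array carries its own groups, users, and accesses, so the second group with "select" goes in as a second item in the same policy. The host, credentials, service name, group names, and database below are placeholders, not values from the original post.

```bash
# Sketch only: create one Hive policy with two policy items, one per group.
curl -u admin:password -H "Content-Type: application/json" \
  -X POST http://<ranger_host>:6080/service/public/v2/api/policy \
  -d '{
    "service": "hive_service",
    "name": "two_group_policy",
    "resources": {
      "database": { "values": ["mydb"] },
      "table":    { "values": ["*"] },
      "column":   { "values": ["*"] }
    },
    "policyItems": [
      { "groups": ["group_x"], "users": ["x"],
        "accesses": [ { "type": "all",    "isAllowed": true } ] },
      { "groups": ["group_y"], "users": ["y"],
        "accesses": [ { "type": "select", "isAllowed": true } ] }
    ]
  }'
```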
12-01-2020
03:47 AM
I'm trying to run a DAG with Airflow 1.10.12 and HDP 3.0.0. When I run the DAG, it gets stuck at:

```
Connecting to jdbc:hive2://[Server2_FQDN]:2181,[Server1_FQDN]:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
```

When I run

```
beeline -u "jdbc:hive2://[Server1_FQDN]:2181,[Server2_FQDN]:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```

from a shell, it connects to Hive with no problem. I've also created a connection like this:

```
Conn Id:         hive_jdbc
Conn Type:
Connection URL:  jdbc:hive2://centosserver.son.ir:2181,centosclient.son.ir:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Login:           hive
Password:        ******
Driver Path:     /usr/hdp/3.0.0.0-1634/hive/jdbc/hive-jdbc-3.1.0.3.0.0.0-1634-standalone.jar
Driver Class:    org.apache.hive.jdbc.HiveDriver
```

I'm not using Kerberos. I've also added `hive.security.authorization.sqlstd.confwhitelist.append` to the Ambari `Custom hive-site`:

```
radoop\.operation\.id|mapred\.job\.name|airflow\.ctx\.dag_id|airflow\.ctx\.task_id|airflow\.ctx\.execution_date|airflow\.ctx\.dag_run_id|airflow\.ctx\.dag_owner|airflow\.ctx\.dag_email|hive\.warehouse\.subdir\.inherit\.perms|hive\.exec\.max\.dynamic\.partitions|hive\.exec\.max\.dynamic\.partitions\.pernode|spark\.app\.name
```

Any suggestions? I'm desperate; I've tried every way I know, but still nothing. @nsabharwal @agillan @msumbul1 @deepesh1
08-13-2020
09:08 AM
@torafca5 Could you please try downloading the jar from the link below?

http://www.congiu.net/hive-json-serde/1.3.8/hdp23/json-serde-1.3.8-jar-with-dependencies.jar

Once the jar is downloaded, move it to /usr/hdp/3.0.1.0-187/hive/lib. Please place the jar on all the nodes hosting Hive services. Also, please make sure you are not using LLAP (HiveServer2 Interactive) to connect to Hive; the ADD JAR command does not work with LLAP. Implementing the above recommendation should help overcome this issue.
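A quick sketch of those staging steps as shell commands, assuming `wget` is available and that a HiveServer2 restart follows (the lib path comes from the original post):

```bash
# Download the SerDe jar and stage it into the Hive lib directory;
# repeat on every node hosting Hive services.
wget http://www.congiu.net/hive-json-serde/1.3.8/hdp23/json-serde-1.3.8-jar-with-dependencies.jar
cp json-serde-1.3.8-jar-with-dependencies.jar /usr/hdp/3.0.1.0-187/hive/lib/
# Restart HiveServer2 (e.g. from Ambari) so the new jar is picked up.
```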
04-28-2020
07:28 AM
Please check the command below. Here `2> /dev/null` consumes all the logs and errors on stderr, while still allowing standard output to be shown: beeline -u jdbc:hive2://somehost_ip/ -f hive.hql 2> /dev/null > op.txt If this helps, please give me kudos. Thanks!!!
03-16-2020
12:48 AM
Hi agillan, sorry to bother you. I figured out the issue: I had to grant access to my IAM user through the grant access option. Thanks.
01-07-2019
05:31 AM
@Owez Mujawar Could you run the Kafka console consumer on the ATLAS_HOOK and ATLAS_ENTITIES topics when you create a table, and check whether the messages are flowing to the topics?
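A minimal sketch of that check, assuming an HDP-style Kafka install and a placeholder broker address (6667 is the usual HDP broker port):

```bash
# Watch the Atlas hook topic while creating the Hive table in another session;
# repeat with --topic ATLAS_ENTITIES to check the second topic.
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --bootstrap-server <broker_host>:6667 \
  --topic ATLAS_HOOK --from-beginning
```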
03-07-2017
04:44 PM
8 Kudos
Picture the scene... You've run the HDFS Balancer on your cluster and have your data balanced nicely across your DataNodes on HDFS. Your cluster is humming along nicely, but your system administrator runs across the office to you with alerts about full disks on one of your DataNode machines! What now?

The Low-Down

Uneven data distribution amongst disks isn't dangerous as such, though in some rare cases you may start to notice the fuller disks becoming bottlenecks for I/O. As of Apache Hadoop 2.7.3, it is not possible to balance disks within a single node (aka intra-node balancing): the HDFS Balancer only balances across DataNodes, not within them. HDFS-1312 tracks the work to introduce this functionality into Apache HDFS, but it will not be available before Hadoop 3.0.

The conservative approach

Set the following property in your HDFS configuration, or add it if it isn't already there: dfs.datanode.du.reserved (reserved space in bytes per volume). This always leaves that much space free on every DataNode disk. Set it to a value that will make your sysadmin happy, and continue to use the HDFS Balancer as before until HDFS-1312 is complete (see the first sketch below).

The brute force method (careful!)

Run fsck and MAKE SURE there are no under-replicated blocks (IMPORTANT!!), as in the second sketch below. Then just wipe the contents of the offending disk. HDFS will re-replicate those blocks elsewhere automatically! NOTE: Do not wipe more than one disk across the cluster at a time!!
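A minimal sketch of the conservative approach. The ~50 GB figure is only an example; pick whatever value keeps your sysadmin happy:

```bash
# Check the current reserved-space setting (prints 0 if it was never set).
hdfs getconf -confKey dfs.datanode.du.reserved

# In Ambari (or hdfs-site.xml), set dfs.datanode.du.reserved to e.g.
# 53687091200 bytes (~50 GB) per volume, then restart the DataNodes
# for the change to take effect.
```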
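And a sketch of the pre-flight check for the brute force method:

```bash
# Confirm there are ZERO under-replicated blocks before touching any disk.
hdfs fsck / | grep -i 'under-replicated'

# Only if the count is 0: stop the DataNode, wipe the one offending disk,
# and start the DataNode again -- one disk across the cluster at a time.
```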
03-01-2017
10:26 AM
OK, it turns out it's because the "availability flag" property is now mandatory and the old ingest script didn't generate "_success" to trigger the feed. I modified ingest.sh to generate the flag:

```
curl -sS http://bailando.sims.berkeley.edu/enron/enron_with_categories.tar.gz | tar xz && hadoop fs -mkdir -p $1 && hadoop fs -chmod 777 $1 && hadoop fs -put enron_with_categories/*/*.txt $1 && hadoop fs -touchz $1/_success
```
06-07-2017
09:19 AM
1 Kudo
@Silvio del Val, clearing /etc/resolv.conf fixed it. I think the problem was DNS resolution.
09-26-2017
08:54 PM
1 Kudo
This procedure has been replaced by https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/about_ranger_policies.html#enable_deny_conditions_for_policies, located on a page about implementing tag-based policies. It describes enabling deny and exception conditions in Ranger policies via the enableDenyAndExceptionsInPolicies=true option on the service definition.
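A hedged sketch of how that option is typically toggled on a Ranger service definition through the public REST API. The host, credentials, and the service-def id are placeholders; check the linked doc for the exact steps for your version:

```bash
# Fetch the Hive service definition to edit locally, and note its "id" field.
curl -u admin:password \
  http://<ranger_host>:6080/service/public/v2/api/servicedef/name/hive \
  -o hive-servicedef.json

# Add "enableDenyAndExceptionsInPolicies": "true" under "options" in the
# JSON, then PUT the edited definition back by id:
curl -u admin:password -H "Content-Type: application/json" -X PUT \
  http://<ranger_host>:6080/service/public/v2/api/servicedef/<servicedef_id> \
  -d @hive-servicedef.json
```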