Member since: 04-07-2016
Posts: 22
Kudos Received: 1
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
| 1627 | 03-20-2017 03:06 PM |
06-21-2017
07:04 PM
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=35
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.queues=ds,default,sp
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.ds.acl_administer_queue=*
yarn.scheduler.capacity.root.ds.acl_submit_applications=*
yarn.scheduler.capacity.root.ds.capacity=35
yarn.scheduler.capacity.root.ds.ds1.acl_administer_queue=*
yarn.scheduler.capacity.root.ds.ds1.acl_submit_applications=*
yarn.scheduler.capacity.root.ds.ds1.capacity=50
yarn.scheduler.capacity.root.ds.ds1.maximum-capacity=100
yarn.scheduler.capacity.root.ds.ds1.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.ds.ds1.ordering-policy=fair
yarn.scheduler.capacity.root.ds.ds1.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.ds.ds1.priority=0
yarn.scheduler.capacity.root.ds.ds1.state=RUNNING
yarn.scheduler.capacity.root.ds.ds1.user-limit-factor=3
yarn.scheduler.capacity.root.ds.ds2.acl_administer_queue=*
yarn.scheduler.capacity.root.ds.ds2.acl_submit_applications=*
yarn.scheduler.capacity.root.ds.ds2.capacity=50
yarn.scheduler.capacity.root.ds.ds2.maximum-capacity=100
yarn.scheduler.capacity.root.ds.ds2.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.ds.ds2.ordering-policy=fair
yarn.scheduler.capacity.root.ds.ds2.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.ds.ds2.priority=0
yarn.scheduler.capacity.root.ds.ds2.state=RUNNING
yarn.scheduler.capacity.root.ds.ds2.user-limit-factor=3
yarn.scheduler.capacity.root.ds.maximum-capacity=100
yarn.scheduler.capacity.root.ds.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.ds.priority=0
yarn.scheduler.capacity.root.ds.queues=ds1,ds2
yarn.scheduler.capacity.root.ds.state=RUNNING
yarn.scheduler.capacity.root.ds.user-limit-factor=2
yarn.scheduler.capacity.root.default.ordering-policy=fair
yarn.scheduler.capacity.root.default.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.default.priority=0
yarn.scheduler.capacity.root.maximum-capacity=100
yarn.scheduler.capacity.root.priority=0
yarn.scheduler.capacity.root.sp.acl_administer_queue=*
yarn.scheduler.capacity.root.sp.acl_submit_applications=*
yarn.scheduler.capacity.root.sp.capacity=30
yarn.scheduler.capacity.root.sp.maximum-capacity=30
yarn.scheduler.capacity.root.sp.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.sp.ordering-policy=fair
yarn.scheduler.capacity.root.sp.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.sp.priority=0
yarn.scheduler.capacity.root.sp.state=RUNNING
yarn.scheduler.capacity.root.sp.user-limit-factor=1
06-21-2017
04:39 PM
Hello, we installed HDP 2.6.1 and would like to set up SSL for Zeppelin. On the server where Zeppelin is installed, port 8443 is already in use by another service. How do I change the SSL port for Zeppelin?
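A hedged sketch, assuming Zeppelin's stock configuration keys apply: zeppelin.server.ssl.port in zeppelin-site controls the HTTPS port, so moving it off 8443 could look like the following (8444 is just an arbitrary free port, not a recommendation):

zeppelin.ssl=true
# hypothetical replacement port; pick any port not already in use on the host
zeppelin.server.ssl.port=8444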
Labels:
- Apache Zeppelin
06-21-2017
02:58 PM
Hello, we had set up 3 queues:
- default (minimum 30%, maximum 100%, priority 0, ordering policy FIFO)
- queue 1 (minimum 35%, maximum 35%, priority 0, ordering policy FIFO)
- queue 2 (minimum 35%, maximum 100%, priority 0, ordering policy FIFO)
Before the upgrade, all jobs were assigned to the default queue by default. After the upgrade, all jobs are going to queue 1. How do I make sure that jobs go to the default queue unless another queue is explicitly specified?
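A minimal sketch, assuming the misrouted jobs are being submitted without an explicit queue: the Capacity Scheduler queue-mappings property (set in capacity-scheduler.xml, shown here in Ambari's key=value form) can pin unmapped submissions to the default queue.

# map every user's otherwise-unmapped jobs to the default queue
yarn.scheduler.capacity.queue-mappings=u:%user:default
# a queue named explicitly at submit time still wins; the mapping does not override it
yarn.scheduler.capacity.queue-mappings-override.enable=false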
Labels:
- Apache Tez
- Apache YARN
06-19-2017
08:15 PM
Hello @Dominika Bialek, thanks for the response. That was the issue. After adding the S3 location, the issue was resolved.
06-19-2017
04:31 PM
1 Kudo
Hello, when I run the following command:

alter table btest.testtable add IF NOT EXISTS partition (load_date='2017-06-19') location 's3a://testbucket/data/xxx/load_date=2017-06-19';

I get this error:

Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hive] does not have [READ] privilege on [s3a://testbucket/data/xxx/load_date=2017-06-19]

FYI: select statements work fine; I can query data located in S3. It is only this statement that fails. We are using Ranger for authorization, but the hive user has full permission on all databases and tables.
Labels:
- Apache Hive
- Apache Ranger
06-09-2017
03:22 PM
Hello Eyad, I don't see that option in my Ranger.
06-09-2017
02:55 PM
Hello, I upgraded HDP from 2.4.2 to 2.6.1, but I don't see row-level filtering when I log in to the Ranger UI as admin. Do I need to configure anything? The Ranger version in 2.6.1 shows 0.7; the Ranger version in 2.4.2 was 0.5.
Labels:
- Apache Ranger
04-19-2017
08:33 PM
Hello, we would like to periodically ingest MS SQL/Oracle databases into Hive. I am looking for a NiFi processor that can read the database/table details from the source database, create an equivalent external table in Hive, and copy the data from the source table to the Hive table in ORC format.
Labels:
- Apache NiFi
04-19-2017
08:23 PM
Thank you for the response. I did it by creating a temp Hive table.
04-17-2017
04:09 AM
Hello, is there a way to split a 2 GB ORC file into 50 MB files? We have many ORC files (larger than 1 GB) in HDFS. We are planning to move those files to S3 and point a Hive external table at S3. Copying the larger files significantly hurts performance. If I split those files into multiple files of 50 MB or less and copy them to S3, the performance is comparable to HDFS. (To test, I created another table stored as ORC and inserted the existing table's data, which produced multiple files, but that is not a viable solution since I have many tables, some with multiple partitions.) Is it possible to split the ORC files into multiple files?
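A hedged sketch of the insert-based rewrite with explicit size control, assuming a per-table/partition rewrite is still tolerable despite the overhead noted above; target_orc and source_orc are hypothetical table names.

-- cap each reducer's input at ~50 MB so every output ORC file lands near that size
SET hive.exec.reducers.bytes.per.reducer=52428800;
-- spread rows randomly across reducers so the files come out evenly sized
INSERT OVERWRITE TABLE target_orc
SELECT * FROM source_orc
DISTRIBUTE BY rand();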
Labels:
- Apache Hadoop