Member since: 09-29-2015
Posts: 186
Kudos Received: 63
Solutions: 12
03-18-2021 10:03 PM
@ryu Are you able to connect to beeline? Also, can you make sure that the output of the below command shows all components pointing to the current version in your cluster: hdp-select
Here is the command help: https://docs.cloudera.com/HDPDocuments/Ambari-2.1.2.1/bk_upgrading_Ambari/content/_Run_HDP_Select_mamiu.html
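For example (the output line below is illustrative; the version string is a placeholder, not from any real cluster):

```
# Show the version each HDP component currently points to:
hdp-select status
# Every component line should show the same version, e.g.:
#   hadoop-client - <hdp-version>
```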
03-18-2021 09:51 PM
@balo Please refer to "Generating download credentials": https://docs.cloudera.com/csa/1.2.0/download/topics/csa-download-cred.html
05-11-2020 05:52 PM
1 Kudo
To change the --health-percent of LLAP, do the following:
1. On the HiveServer2 Interactive server nodes, edit /usr/hdp/<hdp-version>/hive/scripts/llap/yarn/package.py
Example: /usr/hdp/3.1.0.224-3/hive/scripts/llap/yarn/package.py. The --health-percent defaults to 80; change this to the desired number.
2. Move the following to a temporary backup location:
a. /usr/hdp/<hdp-version>/hive/scripts/llap/yarn/package.pyc
b. /usr/hdp/<hdp-version>/hive/scripts/llap/yarn/package.pyo
3. Restart Hive.
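A sketch of the steps above, assuming HDP version 3.1.0.224-3 and a target of 90 percent (both are examples; adjust for your cluster):

```
# 1. Edit package.py and change the --health-percent default from 80 to 90:
vi /usr/hdp/3.1.0.224-3/hive/scripts/llap/yarn/package.py
# 2. Move the compiled copies aside so they are regenerated from the edited source:
mkdir -p /tmp/llap-pkg-backup
mv /usr/hdp/3.1.0.224-3/hive/scripts/llap/yarn/package.py[co] /tmp/llap-pkg-backup/
# 3. Restart Hive from Ambari.
```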
11-19-2018 10:11 PM
1 Kudo
This article gives an example of how 'grant'/'revoke' works when the Hive plugin is enabled with Ranger in CDP.
A user who is an 'Admin' in Ranger can manage access to Hive tables via 'grant'/'revoke' operations.
In the Ranger UI, go to Settings > Users and Groups > Users.
Note: User 'hive' is in the role 'Admin'.
In beeline, log in as user 'hive' and run the grant command to give select privileges on a table:
0: jdbc:hive2://a.b.c.co> grant select on table mix to user mugdha;
INFO : Compiling command(queryId=hive_20211021024819_c3de84a7-a312-4a1f-9a8d-8b328cced054): grant select on table mix to user mugdha
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20211021024819_c3de84a7-a312-4a1f-9a8d-8b328cced054); Time taken: 0.022 seconds
INFO : Executing command(queryId=hive_20211021024819_c3de84a7-a312-4a1f-9a8d-8b328cced054): grant select on table mix to user mugdha
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20211021024819_c3de84a7-a312-4a1f-9a8d-8b328cced054); Time taken: 0.548 seconds
INFO : OK
No rows affected (0.634 seconds)
In Ranger, a new policy is created by that command.
Similarly, on a 'revoke' run, user 'mugdha' is removed from the policy:
0: jdbc:hive2://a.b.c.co> revoke select on table mix from user mugdha;
INFO : Compiling command(queryId=hive_20211021025423_cdf81a8a-df0d-4c40-9509-f4325d3ba112): revoke select on table mix from user mugdha
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20211021025423_cdf81a8a-df0d-4c40-9509-f4325d3ba112); Time taken: 0.032 seconds
INFO : Executing command(queryId=hive_20211021025423_cdf81a8a-df0d-4c40-9509-f4325d3ba112): revoke select on table mix from user mugdha
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20211021025423_cdf81a8a-df0d-4c40-9509-f4325d3ba112); Time taken: 0.274 seconds
INFO : OK
No rows affected (0.323 seconds)
This also works the same way in HDP; see Provide User Access to Hive Database Tables from the Command Line.
06-27-2018 06:49 PM
5 Kudos
Step-by-step instructions to set up ACLs on a queue. For adding/removing queues, see: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-views/content/ch_using_yarn_queue_manager_view.html
Setting up queue ACLs:
1. Enable the YARN ACL:
a. In YARN -> Configs -> Advanced -> Resource Manager, set yarn.acl.enable to true and Save.
b. Restart the YARN service.
2. Restrict access on the "root" queue first. Child queues inherit the access configuration from the root queue; if this is not done, all users will be able to submit jobs to the child queues. On the YARN Queue Manager view instance configuration page:
a. Click the "root" queue.
b. Under "Access Control and Status" -> Submit Applications, choose Custom and leave the field blank.
c. Now click the child queue.
d. Under "Access Control and Status" -> Submit Applications, choose Custom and enter the username in Users/Groups.
e. Save and Refresh Queue.
3. Notice that in the capacity-scheduler config (YARN -> Configs -> Advanced), two properties are changed:
a. yarn.scheduler.capacity.root.acl_submit_applications= 
Note: this value is not blank in the config; there is a space at the end. If this property is removed from the config, acl_submit_applications resets to * for the root queue. If the parent queue uses the "*" (asterisk) value (or is not specified) to allow access to all users and groups, its child queues cannot restrict access.
b. yarn.scheduler.capacity.root.test.acl_submit_applications=hive
Confirming that the ACL is set: now that the ACL is set, to confirm that it is active for the user, log in to a Linux terminal as the hive user and run:
hadoop queue -showacls (this command is deprecated, but works)
mapred queue -showacls (alternative command)
Output for the hive user and for any other user: (screenshots not shown)
The same can be done for Administer Queue: restrict access on the "root" queue first, then under "Access Control and Status" -> Administer Queue, choose Custom and enter the username/group name in Users/Groups. Now when you run the mapred queue -showacls command, it will show the access of all users (root, hive, yarn).
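As a sketch, the two submit-ACL properties above would appear in capacity-scheduler.xml like this (the child queue name "test" comes from the example; the single-space value on the root queue is intentional):

```
<property>
  <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
  <value> </value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.test.acl_submit_applications</name>
  <value>hive</value>
</property>
```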
08-17-2017 12:12 AM
@tsharma In my cluster, I do see that yarn.acl.enable was set to true after enabling Kerberos. I am not sure, though.
08-11-2017 05:27 PM
@tsharma Please check this:
This is the default ACL; right now it has only 3 users: dr.who,activity_analyzer,yarn.
The user logged in to Ambari is "admin", and it says "No records available!" This is because no jobs have been run by the 'admin' user yet.
Say you have a user hr1 who also runs jobs in the cluster. When I log in to Ambari as the hr1 user, I can see the jobs only for hr1.
Now I added the "admin" user to yarn.admin.acl:
Then I can see all the jobs run by all the users.
If you want users who log in to Ambari but do not run any jobs in the cluster to see the queries here, either add those users to "yarn.admin.acl" or set it to "*".
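As a hedged sketch of the resulting setting (the first three users are the defaults mentioned above; "admin" is the user being added):

```
yarn.admin.acl=dr.who,activity_analyzer,yarn,admin
```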
06-30-2017 11:49 PM
PROBLEM: Click Alerts, then Actions > Manage Alert Groups > Custom Alert Group. Click the + sign on the right side, pick any alert definition, and press OK.
Click Save and you will see a 500 (Server Error) on the alert group screen. In ambari-server.log there is the error: WARN [qtp-ambari-client-510524] ServletHandler:563 - /api/v1/clusters/<cluster-name>/alert_groups/155
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1429)
at java.util.HashMap$KeyIterator.next(HashMap.java:1453)
at org.eclipse.persistence.indirection.IndirectSet$1.next(IndirectSet.java:471)
at org.apache.ambari.server.orm.entities.AlertGroupEntity.setAlertTargets(AlertGroupEntity.java:313)
at org.apache.ambari.server.controller.internal.AlertGroupResourceProvider.updateAlertGroups(AlertGroupResourceProvider.java:344)
at org.apache.ambari.server.controller.internal.AlertGroupResourceProvider.access$100(AlertGroupResourceProvider.java:60)
at org.apache.ambari.server.controller.internal.AlertGroupResourceProvider$2.invoke(AlertGroupResourceProvider.java:187)
at org.apache.ambari.server.controller.internal.AlertGroupResourceProvider$2.invoke(AlertGroupResourceProvider.java:184)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:450)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.modifyResources(AbstractResourceProvider.java:331)
ROOT CAUSE: https://issues.apache.org/jira/browse/AMBARI-19259
RESOLUTION: Upgrade Ambari to 2.5.
06-30-2017 11:48 PM
Consider this example: total input paths = 7, input size for the job = 510 K.
1) We are using a custom InputFormat which extends org.apache.hadoop.mapred.FileInputFormat and has isSplitable return false.
Expected: 7 splits (since FileInputFormat doesn't split files smaller than the block size of 128 MB, there should be one split per file).
Actual: 4 splits.
2) The default value for hive.input.format is CombineHiveInputFormat. After running set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;, there are 7 splits, as expected.
From these two points, it looks like Hive uses CombineHiveInputFormat on top of the custom InputFormat to determine the number of splits.
How the splits were calculated: when using CombineInputFormat, data locality plays a role in deciding the number of mappers. To find which nodes those files' blocks are on, run:
hadoop fsck /<file-path> -files -blocks -locations
1. On a.a.a.a:
/user/user1/hive/split/file1_0000 [/default-rack/a.a.a.a:1019, /default-rack/e.e.e.e:1019]
/user/user1/hive/split/file1_0002 [/default-rack/a.a.a.a:1019, /default-rack/e.e.e.e:1019]
2. On b.b.b.b:
/user/user1/hive/split/file1_0003 [/default-rack/b.b.b.b:1019, /default-rack/a.a.a.a:1019]
/user/user1/hive/split/file1_0005 [/default-rack/b.b.b.b:1019, /default-rack/a.a.a.a:1019]
/user/user1/hive/split/file1_0006 [/default-rack/b.b.b.b:1019, /default-rack/e.e.e.e:1019]
3. On c.c.c.c:
/user/user1/hive/split/file1_0001 [/default-rack/c.c.c.c:1019, /default-rack/a.a.a.a:1019]
4. On d.d.d.d:
/user/user1/hive/split/file1_0004 [/default-rack/d.d.d.d:1019, /default-rack/a.a.a.a:1019]
Hive is picking up blocks from these 4 DataNodes; files whose blocks are on the same node are combined into one task. If a maxSplitSize is specified, blocks on the same node are combined to form a single split, and blocks that are left over are then combined with other blocks in the same rack. If maxSplitSize is not specified, blocks from the same rack are combined into a single split; no attempt is made to create node-local splits. If maxSplitSize equals the block size, this class behaves like the default splitting in Hadoop: each block is a locally processed split.
Ref: https://hadoop.apache.org/docs/r1.2.1/api/org/apache/hadoop/mapred/lib/CombineFileInputFormat.html
The reason the first block location was picked for each block while combining is that any Hadoop client uses the first block location and considers the next one only if reading the first fails. The NameNode returns all block locations of a block, sorted by the distance between the client and the location, but CombineHiveInputFormat (like any Hadoop client or MapReduce program) uses the first one.
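As a toy model of the grouping described above (hypothetical illustration, not Hive's actual code): assign each block to its first replica location, then combine blocks sharing that node into one split, which yields the 4 splits observed:

```python
from collections import defaultdict

# First-replica locations taken from the fsck output above.
block_locations = {
    "file1_0000": ["a.a.a.a", "e.e.e.e"],
    "file1_0001": ["c.c.c.c", "a.a.a.a"],
    "file1_0002": ["a.a.a.a", "e.e.e.e"],
    "file1_0003": ["b.b.b.b", "a.a.a.a"],
    "file1_0004": ["d.d.d.d", "a.a.a.a"],
    "file1_0005": ["b.b.b.b", "a.a.a.a"],
    "file1_0006": ["b.b.b.b", "e.e.e.e"],
}

def combine_node_local(blocks):
    """Group each block under its first (closest) replica location."""
    splits = defaultdict(list)
    for name, locations in sorted(blocks.items()):
        splits[locations[0]].append(name)
    return dict(splits)

splits = combine_node_local(block_locations)
print(len(splits))  # -> 4 splits, one per distinct first-replica node
```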
06-30-2017 11:46 PM
PROBLEM: After moving the ZooKeeper servers and setting them correctly in the YARN configs, the ResourceManagers come up but stay in standby state. Even after removing the rmstore znode, neither node transitions to active.
ROOT CAUSE: The znode yarn-leader-election, which is used for RM leader election, holds stale data about the ZooKeeper leader.
RESOLUTION:
1. Log in to zkCli.
2. rmr /yarn-leader-election
3. Restart the ResourceManagers.
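The resolution can be sketched as follows (the zkCli.sh path and ZooKeeper host are assumptions; adjust for your cluster):

```
# Connect to ZooKeeper:
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zk-host>:2181
# Inside the zkCli shell, remove the stale leader-election znode:
rmr /yarn-leader-election
# Then restart both ResourceManagers.
```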