Member since: 02-18-2016
Posts: 72
Kudos Received: 19
Solutions: 7

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 948 | 07-10-2017 04:10 PM |
| | 1992 | 07-10-2017 04:01 PM |
| | 4893 | 04-25-2017 05:01 PM |
| | 5145 | 03-02-2017 06:35 PM |
| | 6562 | 12-20-2016 02:13 PM |
07-10-2017
04:10 PM
2 Kudos
I believe this is related to your Capacity Scheduler settings. If you haven't set up the scheduler, all resources are allocated to the first user and other users have to wait. Please refer to the documentation for details: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_hive-performance-tuning/content/section_create_configure_yarn_capacity_scheduler_queues.html
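As a rough illustration (queue names and percentages below are only examples, not your actual configuration), the Capacity Scheduler properties you would set in Ambari (YARN > Configs, or the Capacity Scheduler view) might look something like this to split resources and keep one user from taking everything:

```
# Illustrative only: two queues sharing the cluster, with per-user limits inside "default"
yarn.scheduler.capacity.root.queues=default,analytics
yarn.scheduler.capacity.root.default.capacity=60
yarn.scheduler.capacity.root.analytics.capacity=40
# With two active users in "default", each gets roughly half of the queue
yarn.scheduler.capacity.root.default.minimum-user-limit-percent=50
yarn.scheduler.capacity.root.default.user-limit-factor=1
```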
07-10-2017
04:01 PM
So far there are not many mechanisms for what you are asking for:
1. You can check the LLAP UI to see which queries are running and how long they have been running. You can then dig into each node to see how many executors are in use.
2. You can tail the LLAP log to see which mappers/reducers are running for the queries. This is the closest to what you are asking for; see the sketch below.
3. In the Tez view, the owner of the query can check its running status as well, and the swimlane view helps you understand the cost of each step within the query. However, this is only available after the query has finished.
The intention of LLAP is queries with a short turnaround time; long-running/larger queries should not go through LLAP.
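For item 2, a rough sketch of what that looks like from the command line (the application id and log path below are placeholders/assumptions and depend on your installation):

```
# Find the YARN application that hosts the LLAP daemons
yarn application -list -appStates RUNNING

# On an LLAP node, tail the daemon log to watch query fragments being scheduled onto executors.
# The log lives under the NodeManager log directory for the LLAP container; the path here is
# only an example and varies by installation.
tail -f /hadoop/yarn/log/<llap_application_id>/<container_id>/llap-daemon-*.log
```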
04-25-2017
05:01 PM
1 Kudo
I am not sure about your use case. If you want to include only file1 in the Hive table, you have to copy those files into separate folders. Alternatively, you can include all of the data in the Hive table and let Hive control which data can be selected/seen.
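A minimal sketch of the two options, with made-up table names, paths, and a single-column schema (INPUT__FILE__NAME is a standard Hive virtual column; everything else here is illustrative and would need to match your real layout):

```
-- Option 1: copy file1 into its own folder and point an external table at that folder
CREATE EXTERNAL TABLE data_file1 (line STRING)
LOCATION '/data/landing/file1_only';

-- Option 2: load everything into one table and control what is visible through a view
CREATE EXTERNAL TABLE data_all (line STRING)
LOCATION '/data/landing/all_files';

CREATE VIEW data_visible AS
SELECT line FROM data_all
WHERE INPUT__FILE__NAME LIKE '%file1%';
```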
03-07-2017
02:58 PM
Could you open a new thread, so that somebody else can chime in if I am not available?
03-03-2017
06:18 PM
The interfaces are a required component. When you enter that page, it gives you a sample value, for example hftp://sandbox.hortonworks.com:50070; you then need to change the URL to your own setting if you are not on the sandbox. Detailed information on these settings can be found at https://falcon.apache.org/EntitySpecification.html. The properties are optional; you can define your own properties. You can check the link listed above for those as well.
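For orientation, the interfaces section of a cluster entity might look roughly like this on the sandbox. All hostnames, ports, and versions below are sandbox-style examples and assumptions; adjust them to your environment according to the specification linked above:

```
<cluster name="primaryCluster" colo="primaryColo" xmlns="uri:falcon:cluster:0.1">
  <!-- All endpoints and versions here are example values; replace with your own -->
  <interfaces>
    <interface type="readonly"  endpoint="hftp://sandbox.hortonworks.com:50070" version="2.2.0"/>
    <interface type="write"     endpoint="hdfs://sandbox.hortonworks.com:8020" version="2.2.0"/>
    <interface type="execute"   endpoint="sandbox.hortonworks.com:8050" version="2.2.0"/>
    <interface type="workflow"  endpoint="http://sandbox.hortonworks.com:11000/oozie/" version="4.0.0"/>
    <interface type="messaging" endpoint="tcp://sandbox.hortonworks.com:61616?daemon=true" version="5.1.6"/>
  </interfaces>
  <locations>
    <location name="staging" path="/apps/falcon/primaryCluster/staging"/>
    <location name="working" path="/apps/falcon/primaryCluster/working"/>
  </locations>
  <!-- Optional, user-defined properties -->
  <properties>
    <property name="myCustomProperty" value="myValue"/>
  </properties>
</cluster>
```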
03-03-2017
02:59 PM
Have you seen any other views after setting the correct permissions? If it is only the Files view, you can follow the steps below. You need to set up an HDFS proxy user for the Ambari daemon account. For example, if the ambari-server daemon is running as root, you set up a proxy user for root in core-site by adding and changing properties in HDFS > Configs > Custom core-site:
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*
Restart the required components as indicated by Ambari.
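For reference, the same two settings expressed as core-site.xml entries, in case you manage the file by hand rather than through Ambari (the root user here only applies if that is what ambari-server runs as):

```
<!-- Allow the user running ambari-server (root in this example) to impersonate other users -->
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
```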
03-02-2017
09:22 PM
"views" is a group that was pre-defined in a tutorial before this one, so you need to create a group called views before you can assign the falcon user to it. To grant a user or group access to views, follow this path: Admin -> Manage Ambari -> Views -> click one of the views -> click that view link -> go to Permissions -> add the user or group that should have access to that view.
03-02-2017
06:54 PM
I am copying part of the tutorial below for your reference: You can see the newly added falcon user. Click on it to assign it a group so that it can access Ambari views.
Write "views" and select it in the Local Group Membership box, then click the tick mark to add the falcon user to the "views" group.
03-02-2017
06:35 PM
First, I think you meant HDFS/Falcon replication; HBase has its own replication method. Regarding the empty screen, that is a permissions issue. I think you missed the step in the tutorial that sets up permissions (assigning the views group, in the tutorial) for the newly created falcon user.
02-24-2017
06:05 PM
You can kill it with a Linux command on the server that runs Ambari Metrics and Grafana, but you may also want to check the log to see why it is not able to shut down:
pkill -KILL -u ams
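A quick sketch of checking before and after the kill (the collector log path below is typical for HDP but may differ on your system):

```
# See which processes are still running as the ams user
ps -u ams -f

# Check the collector log for shutdown errors (log path is an assumption)
tail -n 100 /var/log/ambari-metrics-collector/ambari-metrics-collector.log

# Force-kill everything owned by the ams user, as above
pkill -KILL -u ams
```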