Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2452 | 04-27-2020 03:48 AM |
| | 4891 | 04-26-2020 06:18 PM |
| | 3977 | 04-26-2020 06:05 PM |
| | 3222 | 04-13-2020 08:53 PM |
| | 4930 | 03-31-2020 02:10 AM |
02-20-2019
10:21 AM
@Joe There is a documented behavior change from the Ranger perspective when upgrading from HDP 2.4 to HDP 2.5 (and you have upgraded to HDP 2.6), so please refer to the Release Notes (HDP 2.5 Behavior Changes): https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/behavior_changes.html

Scenario: Ranger Audit users who are currently using Audit to DB must migrate to Audit to Solr.

Previous behavior: Ranger Audit could be configured to go to any of the following destinations: DB, Solr, and HDFS.

New behavior: Ranger Audit can no longer be configured with DB as a destination; only Solr and HDFS are supported. If you had not enabled Ranger audit to Solr before upgrading to HDP 2.5, you must configure audit to Solr post-upgrade, otherwise you will not see audit activity in the Ranger UI. You can use either an externally managed Solr or an Ambari-managed Solr. For details on configuring these, refer to the Solr audit configuration section of the installation guide: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_security/content/migrating_audit_logs_from_db_to_solr_in_ambari_clusters.html
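For reference, the post-upgrade switch boils down to pointing the audit destination at Solr in the Ranger audit configuration. A hedged sketch of the relevant properties from Ranger's xasecure audit framework (the ZooKeeper connect string below is a placeholder for your environment, and exact property names should be verified against the linked guide):

```
# Illustrative Ranger audit settings (values are placeholders)
xasecure.audit.destination.solr=true
xasecure.audit.destination.solr.zookeepers=zk1.example.com:2181/infra-solr
xasecure.audit.destination.db=false
```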
02-20-2019
02:49 AM
@Dan Hops Can you try changing:

"yarn.scheduler.capacity.root.default.capacity=0.0" to "yarn.scheduler.capacity.root.default.capacity=75"
"yarn.scheduler.capacity.root.default.maximum-capacity=0.0" to "yarn.scheduler.capacity.root.default.maximum-capacity=100"

Similarly, change "yarn.scheduler.capacity.root.llap.capacity=25.0" to "yarn.scheduler.capacity.root.llap.capacity=25".

Example:

yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=75
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.queues=default,llap
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.acl_submit_applications=*
yarn.scheduler.capacity.root.default.priority=0
yarn.scheduler.capacity.root.llap.acl_administer_queue=hive
yarn.scheduler.capacity.root.llap.acl_submit_applications=hive
yarn.scheduler.capacity.root.llap.capacity=25
yarn.scheduler.capacity.root.llap.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.llap.maximum-capacity=100
yarn.scheduler.capacity.root.llap.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.llap.ordering-policy=fifo
yarn.scheduler.capacity.root.llap.priority=0
yarn.scheduler.capacity.root.llap.state=RUNNING
yarn.scheduler.capacity.root.llap.user-limit-factor=1
yarn.scheduler.capacity.root.priority=0
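A quick sanity check on edited values: the capacities of root's direct child queues (default and llap) must sum to 100, which is why 0.0/25.0 combinations are rejected. A minimal, self-contained sketch of that check, run against a throwaway copy of the two relevant properties (the file path is illustrative):

```shell
# Illustrative check: child-queue capacities under root must add up to 100.
cat > /tmp/capacity-check.properties <<'EOF'
yarn.scheduler.capacity.root.default.capacity=75
yarn.scheduler.capacity.root.llap.capacity=25
EOF
awk -F= '/root\.(default|llap)\.capacity=/ {sum += $2} END {print "sum of child capacities:", sum}' /tmp/capacity-check.properties
```

On a real cluster you would run the same awk against the capacity-scheduler configuration exported from Ambari.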
02-20-2019
01:14 AM
1 Kudo
@sanyun di For testing purposes, can you please try this:

1. In the Ambari UI, go to "admin" --> "Manage Ambari" --> "Users" --> "Add User".
2. Add a new user and give that user the following access:
   User Access: Cluster Administrator
   Is this user an Ambari Admin?: Yes
   User Status: Active
3. Now log out of the browser, log back in with the new user's credentials, and check whether you are able to see the "Metrics" dashboard.
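Equivalently, the test user can be created over Ambari's REST API. A hedged sketch of the request (the endpoint and field names follow Ambari's /api/v1/users resource; the host, admin credentials, user name, and password are placeholders you must replace):

```
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d '{"Users/user_name":"testadmin","Users/password":"CHANGEME","Users/admin":true,"Users/active":true}' \
  http://ambari-host:8080/api/v1/users
```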
02-19-2019
10:44 AM
@sanyun di Have you tried clearing your browser cache? Or you can try opening the Ambari UI in "Incognito Mode" (in Google Chrome) or "Private Mode" (in Firefox) to see if that works. Also, in your ambari-server.log, do you see any AMS connectivity-related errors?
02-18-2019
10:15 AM
1 Kudo
@Ilia K Maybe you can try commenting out the following four lines of the script mentioned below on all your cluster nodes, followed by a "metrics-monitor" restart on all the cluster nodes.

# grep -iR 'disk_io_counters' /usr/lib/python2.6/site-packages/resource_monitoring/core/metric_collector.py
metrics.update(self.host_info.get_combined_disk_io_counters())
metrics.update(self.host_info.get_disk_io_counters_per_disk())
metrics.update(self.host_info.get_combined_disk_io_counters())
metrics.update(self.host_info.get_disk_io_counters_per_disk())

After commenting out / removing those lines, you will need to restart the Metrics Monitor from the Ambari UI on all hosts (or, better, restart the AMS service).
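If you would rather not edit the file by hand on every node, the commenting can be scripted with sed. A sketch, demonstrated on a throwaway dummy file (on a real host you would target the metric_collector.py path shown above, and the -i.bak flag keeps a backup copy first):

```shell
# Dummy stand-in for the real metric_collector.py, for illustration only.
cat > /tmp/metric_collector_snippet.py <<'EOF'
metrics.update(self.host_info.get_combined_disk_io_counters())
metrics.update(self.host_info.get_disk_io_counters_per_disk())
metrics.update(self.host_info.get_combined_disk_io_counters())
metrics.update(self.host_info.get_disk_io_counters_per_disk())
EOF
# Prefix every disk_io_counters line with a Python comment marker.
sed -i.bak '/disk_io_counters/ s/^/# /' /tmp/metric_collector_snippet.py
grep -c '^# ' /tmp/metric_collector_snippet.py
```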
02-16-2019
12:02 PM
Similar threads: https://superuser.com/questions/1153470/vt-x-is-not-available-but-is-enabled-in-bios
02-16-2019
12:01 PM
1 Kudo
@Adam J VT-x is Intel's technology for virtualization on the x86 platform. VT-x allows multiple operating systems to share x86 processor resources simultaneously in a safe and efficient manner. Make sure virtualization is enabled in your BIOS.

To check the status of Hyper-V in Windows 10, right-click Start | Run | OptionalFeatures.exe, and look for the "Hyper-V" option. The box should be empty, not checked or shaded. If you want to be absolutely sure that Hyper-V is gone, open an administrator command console and type "bcdedit /set hypervisorlaunchtype off". Make sure to fully power down and reboot the host after changing the Hyper-V setting.

On some Windows hosts with an EFI BIOS, Device Guard or Credential Guard may be active by default and interferes with OS-level virtualization apps in the same way that Hyper-V does. These features need to be disabled. On Pro versions of Windows you can do this using gpedit.msc (set Local Computer Policy > Computer Configuration > Administrative Templates > System > Device Guard > Turn on Virtualization Based Security to Disabled). Credential Guard is a subset of Device Guard, so disabling the former should be enough. If you cannot use gpedit for some reason, the equivalent registry change is to find the key HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard\EnableVirtualizationBasedSecurity\Enabled and set it to 0.

On Win10 hosts, check Windows Defender > Device Security > Core Isolation Details and make sure the settings in this panel are turned off; reboot the host from power down if you needed to make changes. "Core isolation [includes] security features available on your device that use virtualization-based security"
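Put together, the command-line route looks roughly like the sketch below (run from an elevated command prompt on the Windows host; the registry path is the Device Guard key discussed above). Verify each step against your Windows version before applying, and reboot afterwards:

```
bcdedit /set hypervisorlaunchtype off
reg add "HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard" /v EnableVirtualizationBasedSecurity /t REG_DWORD /d 0 /f
```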
02-14-2019
10:26 AM
@Sampath Kumar However, Ambari provides users an option to register their own custom actions and execute them via API calls. For that you will need to write your own Python script, and inside that Python script you can implement your logic to force a failover (for example, invoking "yarn rmadmin -failover") and have it run as part of the script.

You can find information on how to register custom commands and execute them via Ambari API calls in the following article: https://community.hortonworks.com/articles/139788/running-custom-scripts-through-ambari.html
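Once such a custom action is registered (per the article linked above), it is triggered with a POST to the cluster's requests endpoint. A hedged sketch of that call; the action name "force_rm_failover", cluster name, hosts, and credentials are illustrative, and the exact payload shape should be checked against the article:

```
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d '{"RequestInfo":{"context":"Force RM failover","action":"force_rm_failover"},"Requests/resource_filters":[{"hosts":"rm-host.example.com"}]}' \
  http://ambari-host:8080/api/v1/clusters/MyCluster/requests
```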
02-14-2019
10:22 AM
@Sampath Kumar Ambari does not control this part, hence it does not have any inbuilt API to achieve this.

The RMs have an option to embed the ZooKeeper-based ActiveStandbyElector to decide which RM should be Active. When the Active goes down or becomes unresponsive, another RM is automatically elected to be the Active and takes over. The ActiveStandbyElector embedded in the RMs acts as a failure detector. Similarly, in the case of NameNode HA, a separate ZKFC daemon is needed, which is responsible for deciding when to fail over.
02-14-2019
04:52 AM
@Dan Hops
As we see the following error: Permission denied: user=admin, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x . Which means you are trying to run the job as 'admin' user. Hence please make sure that the "/user/admin" directory is created to the HDFS using supreuser and then you can run your jobs. So please do the following: # su - hdfs -c "hdfs dfs -mkdir /user/admin"
# su - hdfs -c "hdfs dfs -chown -R admin:hdfs /user/admin"
# su - hdfs -c "hdfs dfs -chmod 755 /user/admin" . Then try running your jobs. .