Member since: 09-11-2015
Posts: 17
Kudos Received: 7
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1651 | 03-10-2016 10:17 PM |
| | 1446 | 09-30-2015 02:33 PM |
10-13-2023
07:31 PM
@vaishaakb I noticed this same activity after deploying the latest version of CM and after distributing parcels in my lab cluster: I started getting P2P violations from my IDS and IPS. Is there any way to control the external P2P process? I've attached screen captures from my firewall. CDP: 7.1.9-1.cdh7.1.9.p0.44702451, CM: 7.11.3. Example of the detection: all 5 of my nodes repeatedly trying to talk to hosts across the globe.
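In the meantime, a rough way to tie the flagged connections back to a local process on the nodes (a sketch only; 7190/7191 are the ports I believe CM agents use for P2P parcel distribution, so treat them as an assumption and substitute whatever ports the IDS is actually reporting):

# Run on one of the nodes: map flagged outbound connections to a local process.
# Ports 7190/7191 are an assumption about CM's P2P parcel distribution traffic.
ss -tunap | grep -E ':719[01]'

# Once the owning PID is known, list everything it has open on the network
# (replace <PID> with the value reported above).
lsof -nP -i -a -p <PID>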
06-13-2019
02:25 PM
I ran into a similar issue as well. It looks like Knox derives the regex when the whitelist is left at DEFAULT: it builds it from the URL of the provider we used when setting up the SSO client, and we had typed in just the hostname for the Knox server. That resulted in the following derived whitelist:

INFO knox.gateway (WhitelistUtils.java:getDispatchWhitelist(63)) - Applying a derived dispatch whitelist because none is configured in gateway-site: ^/.*$;^https?://ambari30l:[0-9]+/?.*$

After redoing the SSO setup with the FQDN for the host, the "DEFAULT" lookup resolved correctly and Knox was able to find the hosts properly:

INFO knox.gateway (WhitelistUtils.java:getDispatchWhitelist(63)) - Applying a derived dispatch whitelist because none is configured in gateway-site: ^/.*$;^https?://ambari30l.mydomain.com:[0-9]+/?.*$

Hopefully this helps others troubleshooting this issue in the future.
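For anyone who would rather not rely on the derived behavior, a minimal sketch of pinning the whitelist explicitly via the gateway.dispatch.whitelist property in gateway-site.xml (the mydomain.com pattern below just mirrors the example above; match it to your own hosts):

<!-- Sketch only: an explicit dispatch whitelist so Knox never has to derive one. -->
<property>
  <name>gateway.dispatch.whitelist</name>
  <value>^/.*$;^https?://(.+\.mydomain\.com):[0-9]+/?.*$</value>
</property>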
03-09-2017
07:25 AM
1 Kudo
I'm trying to build a NameNode HA cluster on OpenStack. I have two mount points for my instances when they're created: / for all the OS-related bits and /hadoopfs/fs1 for all the HDFS/YARN data. I think the /hadoopfs/fs{1..n} convention is standard. When I deploy my cluster and it completes, I end up with dfs.datanode.data.dir set to /hadoopfs/fs1/hdfs/data, but all the config groups that get generated during the build process have null values set. This is causing the DataNode process to create its data dir in /tmp/hadoop-hdfs/dfs/data/, which is on the root filesystem instead of the 20TB data store for the instance. What am I missing that could be causing this to happen? From Ambari: From the command line: Finally, here's a copy of my blueprint:

{
"Blueprints": {
"blueprint_name": "ha-hdfs",
"stack_name": "HDP",
"stack_version": "2.5"
},
"host_groups": [
{
"name": "gateway",
"cardinality" : "1",
"components": [
{ "name": "HDFS_CLIENT" },
{ "name": "MAPREDUCE2_CLIENT" },
{ "name": "METRICS_COLLECTOR" },
{ "name": "METRICS_MONITOR" },
{ "name": "TEZ_CLIENT" },
{ "name": "YARN_CLIENT" },
{ "name": "ZOOKEEPER_CLIENT" }
]
},
{
"name": "master_1",
"cardinality" : "1",
"components": [
{ "name": "HISTORYSERVER" },
{ "name": "JOURNALNODE" },
{ "name": "METRICS_MONITOR" },
{ "name": "NAMENODE" },
{ "name": "ZKFC" },
{ "name": "ZOOKEEPER_SERVER" }
]
},
{
"name": "master_2",
"cardinality" : "1",
"components": [
{ "name": "APP_TIMELINE_SERVER" },
{ "name": "JOURNALNODE" },
{ "name": "METRICS_MONITOR" },
{ "name": "RESOURCEMANAGER" },
{ "name": "ZOOKEEPER_SERVER" }
]
},
{
"name": "master_3",
"cardinality" : "1",
"components": [
{ "name": "JOURNALNODE" },
{ "name": "METRICS_MONITOR" },
{ "name": "NAMENODE" },
{ "name": "ZKFC" },
{ "name": "ZOOKEEPER_SERVER" }
]
},
{
"name": "slave_1",
"components": [
{ "name": "DATANODE" },
{ "name": "METRICS_MONITOR" },
{ "name": "NODEMANAGER" }
]
}
],
"configurations": [
{
"core-site": {
"properties" : {
"fs.defaultFS" : "hdfs://myclusterhaha",
"ha.zookeeper.quorum" : "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_3%:2181"
}}
},
{
"hdfs-site": {
"properties" : {
"dfs.client.failover.proxy.provider.myclusterhaha" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"dfs.ha.automatic-failover.enabled" : "true",
"dfs.ha.fencing.methods" : "shell(/bin/true)",
"dfs.ha.namenodes.myclusterhaha" : "nn1,nn2",
"dfs.namenode.http-address" : "%HOSTGROUP::master_1%:50070",
"dfs.namenode.http-address.myclusterhaha.nn1" : "%HOSTGROUP::master_1%:50070",
"dfs.namenode.http-address.myclusterhaha.nn2" : "%HOSTGROUP::master_3%:50070",
"dfs.namenode.https-address" : "%HOSTGROUP::master_1%:50470",
"dfs.namenode.https-address.myclusterhaha.nn1" : "%HOSTGROUP::master_1%:50470",
"dfs.namenode.https-address.myclusterhaha.nn2" : "%HOSTGROUP::master_3%:50470",
"dfs.namenode.rpc-address.myclusterhaha.nn1" : "%HOSTGROUP::master_1%:8020",
"dfs.namenode.rpc-address.myclusterhaha.nn2" : "%HOSTGROUP::master_3%:8020",
"dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::master_2%:8485;%HOSTGROUP::master_3%:8485/myclusterhaha",
"dfs.nameservices" : "myclusterhaha",
"dfs.datanode.data.dir" : "/hadoopfs/fs1/hdfs/data"
}
}
},
{
"hadoop-env": {
"properties": {
"hadoop_heapsize": "4096",
"dtnode_heapsize": "8192m",
"namenode_heapsize": "32768m"
}
}
}
]
}
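A sketch of how the generated configuration can be inspected through the Ambari REST API (the host, cluster name, and credentials below are placeholders):

# List the config groups created during the build (where the null values show up):
curl -u admin:admin -H 'X-Requested-By: ambari' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/config_groups?fields=*"

# Pull the current HDFS service configuration and check dfs.datanode.data.dir:
curl -u admin:admin -H 'X-Requested-By: ambari' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/configurations/service_config_versions?service_name=HDFS&is_current=true"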
Any advice you can provide would be great. Thanks, Scott
Labels:
- Apache Ambari
- Hortonworks Cloudbreak
01-24-2017
05:57 PM
I'm using Ambari Infra for auditing of the Ranger instance inside my HDP cluster. I understand that adding multiple Ambari Infra servers through Ambari will provide a certain level of redundancy, scalability, and backup for my SolrCloud instance. What I'm trying to determine is whether there is a recommended way of backing up the core data from the Solr instance. Since it's using SolrCloud, should I develop my backup procedure based on the documentation at https://cwiki.apache.org/confluence/display/solr/Making+and+Restoring+Backups ? Looking through that doc, it looks like I could create a snapshot and then save it offline somewhere. Would that be the best practice for the Ambari Infra instance, or is Ambari handling backups of that data some other way? Thanks for your input, everyone!
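To make the question concrete, this is the kind of call that doc describes for snapshotting a single core through the replication handler (a sketch only; the 8886 port, the core name, and the backup location are placeholders based on my understanding of how Ambari Infra lays things out):

# Sketch: snapshot one Solr core via the replication handler, then copy it off-cluster.
# Port 8886, the core name, and the location are assumptions; substitute your own values.
curl "http://INFRA_SOLR_HOST:8886/solr/ranger_audits_shard1_replica1/replication?command=backup&location=/backup/infra-solr&name=ranger_audits_$(date +%Y%m%d)"

# Confirm the snapshot finished before shipping it anywhere:
curl "http://INFRA_SOLR_HOST:8886/solr/ranger_audits_shard1_replica1/replication?command=details"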
Labels:
- Apache Ambari
11-02-2016
02:45 PM
I'm in need of some advice regarding the hardware configuration of the management nodes in a soon-to-be production cluster. The hardware was purchased for an HDP production cluster prior to my involvement. The specs for the 6 management nodes are as follows:

HP DL380
- 2x 10-core Xeon
- 128 GB of RAM
- 2x 300 GB SAS for the OS
- 4x 400 GB SAS SSDs in a RAID 10 configuration for application-related services

We're trying to determine whether the disk space will be adequate to handle a 150-node cluster, with expansion up to 250 nodes in the future.
Labels:
- Hortonworks Data Platform (HDP)
03-10-2016
10:17 PM
3 Kudos
If nothing was installed to the cluster, you can do an ambari-server reset and effectively start over:

1. Stop Ambari Server: ambari-server stop
2. Reset Ambari Server: ambari-server reset
3. Start Ambari Server: ambari-server start

I would verify that nothing got deployed to any of the hosts in the cluster prior to re-installing.
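A rough way to do that verification on each host (a sketch; the package-name patterns are assumptions and will vary with your stack and repo naming):

# Sketch: confirm a host is still clean before re-registering it with Ambari.
ambari-agent status
rpm -qa | grep -iE 'hdp|hadoop|ambari-agent'
ls /usr/hdp 2>/dev/null   # should not exist if no stack bits were laid down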
10-21-2015
04:47 PM
1 Kudo
In HDP 2.2.8, is there any way to enable quota management within HBase that would limit namespaces to a specific size? For example, create two namespaces and apply a quota limit to each:

- Namespace1 (50 TB): Table1, Table2
- Namespace2 (150 TB): Table3, Table4, Table5

The sum of the tables in each namespace would be allowed to use up to that limit:

- Table1 + Table2 <= 50 TB
- Table3 + Table4 + Table5 <= 150 TB

Thank you for any advice and guidance.
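To make the ask concrete, this is roughly the kind of declaration I have in mind, sketched in the space-quota shell syntax that later HBase releases document; I don't know whether anything like it is available in the HBase that ships with HDP 2.2.8:

# Sketch only, using syntax from newer HBase releases (not confirmed for HDP 2.2.8).
set_quota TYPE => SPACE, NAMESPACE => 'Namespace1', LIMIT => '50T', POLICY => NO_WRITES
set_quota TYPE => SPACE, NAMESPACE => 'Namespace2', LIMIT => '150T', POLICY => NO_WRITES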
09-30-2015
02:33 PM
@terry@hortonworks.com I seem to remember having some issues with Quicklinks in version 2.0.0/2.0.1, but I haven't seen any issues with 2.1.x. It could come down to how your cluster is configured; reading through some of the 2.1.x JIRAs, it sounds like there can be an issue if you use non-standard ports for services. Could you elaborate on your configuration? Examples of the bugs I saw: BUG-43906, BUG-44507.