Member since: 12-01-2017
Posts: 8
Kudos Received: 1
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1905 | 12-05-2017 02:32 PM |
12-05-2017 02:32 PM · 1 Kudo
@Aditya Sirna I found the error. By default "hbase_regionserver_heapsize" was set to 4096m, which is more memory than my server has, so the region servers were not able to start. I changed that value to 1024 and everything went OK:
"hbase_regionserver_heapsize" : "4096m",   (default)
"hbase_regionserver_heapsize" : "1024",    (changed value)
12-05-2017 12:16 PM
@Aditya Sirna When the region servers are restarted, tailing the log shows only the following (same as above) :(
Tue Dec 5 13:11:53 CET 2017 Starting regionserver on datanode2
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 13671
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 16000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
12-05-2017 12:00 PM
@Aditya Sirna This is the output:
$ cat /var/log/hbase/hbase-hbase-regionserver-namenode1.log
2017-12-05 12:11:29,525 INFO [timeline] availability.MetricSinkWriteShardHostnameHashingStrategy: Calculated collector shard namenode2 based on hostname: namenode1
2017-12-05 12:15:09,962 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=29, evicted=0, evictedPerRun=0.0
2017-12-05 12:20:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=59, evicted=0, evictedPerRun=0.0
2017-12-05 12:25:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=89, evicted=0, evictedPerRun=0.0
2017-12-05 12:30:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=119, evicted=0, evictedPerRun=0.0
2017-12-05 12:35:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=149, evicted=0, evictedPerRun=0.0
2017-12-05 12:40:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=179, evicted=0, evictedPerRun=0.0
2017-12-05 12:45:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=209, evicted=0, evictedPerRun=0.0
2017-12-05 12:50:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=239, evicted=0, evictedPerRun=0.0
2017-12-05 12:55:09,961 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.59 GB, max=1.59 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=269, evicted=0, evictedPerRun=0.0
$ cat /var/log/hbase/hbase-hbase-regionserver-datanode1.log
Tue Dec 5 12:08:28 CET 2017 Starting regionserver on datanode1
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 13671
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 16000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Tue Dec 5 12:18:37 CET 2017 Starting regionserver on datanode1
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 13671
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 16000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Tue Dec 5 12:21:20 CET 2017 Starting regionserver on datanode1
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 13671
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 16000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Tue Dec 5 12:46:37 CET 2017 Starting regionserver on datanode1
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 13671
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 16000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
12-05-2017 11:35 AM
Hello, I have the following blueprint:
cluster_configuration.json
{
"Blueprints": {
"stack_name": "HDP",
"stack_version": "2.6"
},
"host_groups": [
{
"name": "namenode1",
"cardinality" : "1",
"components": [
{ "name" : "HST_AGENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "ZKFC" },
{ "name" : "ZOOKEEPER_SERVER" },
{ "name" : "HST_SERVER" },
{ "name" : "HBASE_CLIENT"},
{ "name" : "METRICS_MONITOR" },
{ "name" : "JOURNALNODE" },
{ "name" : "HBASE_MASTER"},
{ "name" : "NAMENODE" },
{ "name" : "APP_TIMELINE_SERVER" },
{ "name" : "METRICS_GRAFANA" }
]
},
{
"name": "namenode2",
"cardinality" : "1",
"components": [
{ "name" : "ACTIVITY_EXPLORER" },
{ "name" : "HST_AGENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "ZKFC" },
{ "name" : "ZOOKEEPER_SERVER" },
{ "name" : "HBASE_CLIENT"},
{ "name" : "HISTORYSERVER" },
{ "name" : "METRICS_MONITOR" },
{ "name" : "JOURNALNODE" },
{ "name" : "HBASE_MASTER"},
{ "name" : "NAMENODE" },
{ "name" : "METRICS_COLLECTOR" }
]
},
{
"name": "namenode3",
"cardinality" : "1",
"components": [
{ "name" : "ACTIVITY_ANALYZER" },
{ "name" : "HST_AGENT" },
{ "name" : "MAPREDUCE2_CLIENT" },
{ "name" : "YARN_CLIENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "ZOOKEEPER_SERVER" },
{ "name" : "HBASE_CLIENT"},
{ "name" : "METRICS_MONITOR" },
{ "name" : "JOURNALNODE" },
{ "name" : "RESOURCEMANAGER" }
]
},
{
"name": "hosts_group",
"cardinality" : "3",
"components": [
{ "name" : "NODEMANAGER" },
{ "name" : "HST_AGENT" },
{ "name" : "MAPREDUCE2_CLIENT" },
{ "name" : "YARN_CLIENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "HBASE_REGIONSERVER"},
{ "name" : "DATANODE" },
{ "name" : "HBASE_CLIENT"},
{ "name" : "METRICS_MONITOR" },
{ "name" : "ZOOKEEPER_CLIENT" }
]
}
],
"configurations": [
{
"core-site": {
"properties" : {
"fs.defaultFS" : "hdfs://HACluster",
"ha.zookeeper.quorum": "%HOSTGROUP::namenode1%:2181,%HOSTGROUP::namenode2%:2181,%HOSTGROUP::namenode3%:2181",
"hadoop.proxyuser.yarn.hosts": "%HOSTGROUP::namenode2%,%HOSTGROUP::namenode3%"
}
}
},
{ "hdfs-site": {
"properties" : {
"dfs.client.failover.proxy.provider.HACluster" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"dfs.ha.automatic-failover.enabled" : "true",
"dfs.ha.fencing.methods" : "shell(/bin/true)",
"dfs.ha.namenodes.HACluster" : "nn1,nn2",
"dfs.namenode.http-address" : "%HOSTGROUP::namenode1%:50070",
"dfs.namenode.http-address.HACluster.nn1" : "%HOSTGROUP::namenode1%:50070",
"dfs.namenode.http-address.HACluster.nn2" : "%HOSTGROUP::namenode2%:50070",
"dfs.namenode.https-address" : "%HOSTGROUP::namenode1%:50470",
"dfs.namenode.https-address.HACluster.nn1" : "%HOSTGROUP::namenode1%:50470",
"dfs.namenode.https-address.HACluster.nn2" : "%HOSTGROUP::namenode2%:50470",
"dfs.namenode.rpc-address.HACluster.nn1" : "%HOSTGROUP::namenode1%:8020",
"dfs.namenode.rpc-address.HACluster.nn2" : "%HOSTGROUP::namenode2%:8020",
"dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::namenode1%:8485;%HOSTGROUP::namenode2%:8485;%HOSTGROUP::namenode3%:8485/mycluster",
"dfs.nameservices" : "HACluster"
}
}
},
{ "yarn-site": {
"properties": {
"yarn.resourcemanager.ha.enabled": "true",
"yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
"yarn.resourcemanager.hostname.rm1": "%HOSTGROUP::namenode2%",
"yarn.resourcemanager.hostname.rm2": "%HOSTGROUP::namenode3%",
"yarn.resourcemanager.webapp.address.rm1": "%HOSTGROUP::namenode2%:8088",
"yarn.resourcemanager.webapp.address.rm2": "%HOSTGROUP::namenode3%:8088",
"yarn.resourcemanager.webapp.https.address.rm1": "%HOSTGROUP::namenode2%:8090",
"yarn.resourcemanager.webapp.https.address.rm2": "%HOSTGROUP::namenode3%:8090",
"yarn.resourcemanager.recovery.enabled": "true",
"yarn.resourcemanager.store.class": "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
"yarn.resourcemanager.zk-address": "%HOSTGROUP::namenode1%:2181,%HOSTGROUP::namenode2%:2181,%HOSTGROUP::namenode3%:2181",
"yarn.client.failover-proxy-provider": "org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider",
"yarn.resourcemanager.cluster-id": "yarn-cluster",
"yarn.resourcemanager.ha.automatic-failover.zk-base-path": "/yarn-leader-election"
}
}
},
{
"hdfs-site" : {
"properties_attributes" : { },
"properties" : {
"dfs.datanode.data.dir" : "/mnt/secondary1,/mnt/secondary2"
}
}
},
{
"hadoop-env" : {
"properties_attributes" : { },
"properties" : {
"namenode_heapsize" : "2048m"
}
}
},
{
"activity-zeppelin-shiro": {
"properties": {
"users.admin": "admin"
}
}
},
{
"hbase-site" : {
"properties" : {
"hbase.rootdir" : "hdfs://HACluster/apps/hbase/data"
}
}
}
]
}
hostmap.json
{
"blueprint":"HACluster",
"default_password":"admin",
"host_groups": [
{
"name": "namenode1",
"hosts":
[
{ "fqdn": "namenode1" }
]
},
{
"name": "namenode2",
"hosts":
[
{ "fqdn": "namenode2" }
]
},
{
"name": "namenode3",
"hosts":
[
{ "fqdn": "namenode3" }
]
},
{
"name": "hosts_group",
"hosts":
[
{ "fqdn": "datanode1" },
{ "fqdn": "datanode2" },
{ "fqdn": "datanode3" }
]
}
]
}
When I launch this configuration, HBase is the only service that doesn't work. I get the errors shown in the attached screenshot. What am I missing? Thank you.
Labels:
- Apache Ambari
- Apache HBase
12-01-2017 12:34 PM
Thanks @Aditya Sirna!!! It worked!! I didn't notice that small detail! Thanks a lot!
12-01-2017 12:04 PM
Hello, I have the following files:
cluster_configuration.json
{
"Blueprints": {
"stack_name": "HDP",
"stack_version": "2.6"
},
"host_groups": [
{
"name": "namenode1",
"cardinality" : "1",
"components": [
{ "name" : "HST_AGENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "ZKFC" },
{ "name" : "ZOOKEEPER_SERVER" },
{ "name" : "HST_SERVER" },
{ "name" : "METRICS_MONITOR" },
{ "name" : "JOURNALNODE" },
{ "name" : "NAMENODE" },
{ "name" : "APP_TIMELINE_SERVER" },
{ "name" : "METRICS_GRAFANA" }
]
},
{
"name": "namenode2",
"cardinality" : "1",
"components": [
{ "name" : "ACTIVITY_EXPLORER" },
{ "name" : "HST_AGENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "ZKFC" },
{ "name" : "ZOOKEEPER_SERVER" },
{ "name" : "HISTORYSERVER" },
{ "name" : "METRICS_MONITOR" },
{ "name" : "JOURNALNODE" },
{ "name" : "NAMENODE" },
{ "name" : "METRICS_COLLECTOR" }
]
},
{
"name": "namenode3",
"cardinality" : "1",
"components": [
{ "name" : "ACTIVITY_ANALYZER" },
{ "name" : "HST_AGENT" },
{ "name" : "MAPREDUCE2_CLIENT" },
{ "name" : "YARN_CLIENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "ZOOKEEPER_SERVER" },
{ "name" : "METRICS_MONITOR" },
{ "name" : "JOURNALNODE" },
{ "name" : "RESOURCEMANAGER" }
]
},
{
"name": "hosts_group",
"cardinality" : "3",
"components": [
{ "name" : "NODEMANAGER" },
{ "name" : "HST_AGENT" },
{ "name" : "MAPREDUCE2_CLIENT" },
{ "name" : "YARN_CLIENT" },
{ "name" : "HDFS_CLIENT" },
{ "name" : "DATANODE" },
{ "name" : "METRICS_MONITOR" },
{ "name" : "ZOOKEEPER_CLIENT" }
]
}
],
"configurations": [
{
"core-site": {
"properties" : {
"fs.defaultFS" : "HACluster",
"ha.zookeeper.quorum": "%HOSTGROUP::namenode1%:2181,%HOSTGROUP::namenode2%:2181,%HOSTGROUP::namenode3%:2181",
"hadoop.proxyuser.yarn.hosts": "%HOSTGROUP::namenode2%,%HOSTGROUP::namenode3%"
}}
},
{ "hdfs-site": {
"properties" : {
"dfs.client.failover.proxy.provider.mycluster" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"dfs.ha.automatic-failover.enabled" : "true",
"dfs.ha.fencing.methods" : "shell(/bin/true)",
"dfs.ha.namenodes.HACluster" : "nn1,nn2",
"dfs.namenode.http-address" : "%HOSTGROUP::namenode1%:50070",
"dfs.namenode.http-address.HACluster.nn1" : "%HOSTGROUP::namenode1%:50070",
"dfs.namenode.http-address.HACluster.nn2" : "%HOSTGROUP::namenode2%:50070",
"dfs.namenode.https-address" : "%HOSTGROUP::namenode1%:50470",
"dfs.namenode.https-address.HACluster.nn1" : "%HOSTGROUP::namenode1%:50470",
"dfs.namenode.https-address.HACluster.nn2" : "%HOSTGROUP::namenode2%:50470",
"dfs.namenode.rpc-address.HACluster.nn1" : "%HOSTGROUP::namenode1%:8020",
"dfs.namenode.rpc-address.HACluster.nn2" : "%HOSTGROUP::namenode2%:8020",
"dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::namenode1:8485;%HOSTGROUP::namenode2%:8485;%HOSTGROUP::namenode3%:8485/mycluster",
"dfs.nameservices" : "HACluster"
}}
},
{ "yarn-site": {
"properties": {
"yarn.resourcemanager.ha.enabled": "true",
"yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
"yarn.resourcemanager.hostname.rm1": "%HOSTGROUP::namenode2%",
"yarn.resourcemanager.hostname.rm2": "%HOSTGROUP::namenode3%",
"yarn.resourcemanager.webapp.address.rm1": "%HOSTGROUP::namenode2%:8088",
"yarn.resourcemanager.webapp.address.rm2": "%HOSTGROUP::namenode3%:8088",
"yarn.resourcemanager.webapp.https.address.rm1": "%HOSTGROUP::namenode2%:8090",
"yarn.resourcemanager.webapp.https.address.rm2": "%HOSTGROUP::namenode3%:8090",
"yarn.resourcemanager.recovery.enabled": "true",
"yarn.resourcemanager.store.class": "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
"yarn.resourcemanager.zk-address": "%HOSTGROUP::namenode1%:2181,%HOSTGROUP::namenode2%:2181,%HOSTGROUP::namenode3%:2181",
"yarn.client.failover-proxy-provider": "org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider",
"yarn.resourcemanager.cluster-id": "yarn-cluster",
"yarn.resourcemanager.ha.automatic-failover.zk-base-path": "/yarn-leader-election"
}
}
}
]
}
hdputils-repo.json
{
"Repositories":{
"base_url":"http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu16",
"verify_base_url":true
}
}
hostmap.json
{
"blueprint":"HACluster",
"default_password":"admin",
"host_groups": [
{
"name": "namenode1",
"hosts":
[
{ "fqdn": "namenode1" }
]
},
{
"name": "namenode2",
"hosts":
[
{ "fqdn": "namenode2" }
]
},
{
"name": "namenode3",
"hosts":
[
{ "fqdn": "namenode3" }
]
},
{
"name": "hosts_group",
"hosts":
[
{ "fqdn": "datanode1" },
{ "fqdn": "datanode2" },
{ "fqdn": "datanode3" }
]
}
]
}
repo.json
{
"Repositories":{
"base_url":"http://public-repo-1.hortonworks.com/HDP/ubuntu16/2.x/updates/2.6.3.0/",
"verify_base_url":true
}
}
Then I launch this blueprint this way:
$ curl -i -H "X-Requested-By: ambari" -X POST -u admin:admin http://ambariserver:8080/api/v1/blueprints/HACluster -d @cluster_configuration.json
$ curl -i -H "X-Requested-By: ambari" -X PUT -u admin:admin http://ambariserver:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/ubuntu16/repositories/HDP-2.6 -d @repo.json
$ curl -i -H "X-Requested-By: ambari" -X PUT -u admin:admin http://ambariserver:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/ubuntu16/repositories/HDP-UTILS-1.1.0.21 -d @hdputils-repo.json
$ curl -i -H "X-Requested-By: ambari" -X POST -u admin:admin http://ambariserver:8080/api/v1/clusters/HACluster -d @hostmap.json
I am trying to create a structure with 3 namenodes and 3 datanodes and HA HDFS. When I launch the blueprint, no errors are displayed, but the hosts are not registered, as you can see in the attached screenshots. I get the following error in /var/log/ambari-server/ambari-server.log:
java.lang.IllegalArgumentException: Unable to match blueprint host group token to a host group: namenode1:8485;
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.getHostStrings(BlueprintConfigurationProcessor.java:1248)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.access$700(BlueprintConfigurationProcessor.java:63)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor$MultipleHostTopologyUpdater.updateForClusterCreate(BlueprintConfigurationProcessor.java:1870)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.doUpdateForClusterCreate(BlueprintConfigurationProcessor.java:355)
at org.apache.ambari.server.topology.ClusterConfigurationRequest.process(ClusterConfigurationRequest.java:152)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:79)
at org.apache.ambari.server.security.authorization.internal.InternalAuthenticationInterceptor.invoke(InternalAuthenticationInterceptor.java:45)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:45)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
01 Dec 2017 12:51:15,602 INFO [ambari-client-thread-32] TopologyManager:963 - TopologyManager.processAcceptedHostOffer: queue tasks for host = namenode1 which responded ACCEPTED
01 Dec 2017 12:51:15,603 INFO [ambari-client-thread-32] TopologyManager:988 - TopologyManager.processAcceptedHostOffer: queueing tasks for host = namenode1
01 Dec 2017 12:51:15,603 INFO [ambari-client-thread-32] TopologyManager:863 - TopologyManager.processRequest: host name = datanode1 is mapped to LogicalRequest ID = 1 and will be removed from the reserved hosts.
01 Dec 2017 12:51:15,603 INFO [ambari-client-thread-32] TopologyManager:876 - TopologyManager.processRequest: offering host name = datanode1 to LogicalRequest ID = 1
01 Dec 2017 12:51:15,608 INFO [ambari-client-thread-32] LogicalRequest:101 - LogicalRequest.offer: attempting to match a request to a request for a reserved host to hostname = datanode1
01 Dec 2017 12:51:15,608 INFO [ambari-client-thread-32] LogicalRequest:110 - LogicalRequest.offer: request mapping ACCEPTED for host = datanode1
01 Dec 2017 12:51:15,608 INFO [ambari-client-thread-32] LogicalRequest:113 - LogicalRequest.offer returning response, reservedHost list size = 0
01 Dec 2017 12:51:15,617 INFO [ambari-client-thread-32] TopologyManager:886 - TopologyManager.processRequest: host name = datanode1 was ACCEPTED by LogicalRequest ID = 1 , host has been removed from available hosts.
01 Dec 2017 12:51:15,618 INFO [ambari-client-thread-32] ClusterTopologyImpl:158 - ClusterTopologyImpl.addHostTopology: added host = datanode1 to host group = hosts_group
01 Dec 2017 12:51:15,634 INFO [ambari-client-thread-32] TopologyManager:963 - TopologyManager.processAcceptedHostOffer: queue tasks for host = datanode1 which responded ACCEPTED
01 Dec 2017 12:51:15,638 INFO [ambari-client-thread-32] TopologyManager:988 - TopologyManager.processAcceptedHostOffer: queueing tasks for host = datanode1
01 Dec 2017 12:51:15,638 INFO [ambari-client-thread-32] TopologyManager:904 - TopologyManager.processRequest: not all required hosts have been matched, so adding LogicalRequest ID = 1 to outstanding requests
01 Dec 2017 12:51:15,640 INFO [ambari-client-thread-32] AmbariManagementControllerImpl:1624 - Received a updateCluster request, clusterId=null, clusterName=HACluster, securityType=null, request={ clusterName=HACluster, clusterId=null, provisioningState=INSTALLED, securityType=null, stackVersion=HDP-2.6, desired_scv=null, hosts=[] }
01 Dec 2017 12:51:16,594 WARN [pool-20-thread-1] BlueprintConfigurationProcessor:1546 - The property 'dfs.namenode.secondary.http-address' is associated with the component 'SECONDARY_NAMENODE' which isn't mapped to any host group. This may affect configuration topology resolution.
01 Dec 2017 12:51:16,598 ERROR [pool-20-thread-1] ConfigureClusterTask:116 - Could not determine required host groups
java.lang.IllegalArgumentException: Unable to match blueprint host group token to a host group: namenode1:8485;
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor$MultipleHostTopologyUpdater.getRequiredHostGroups(BlueprintConfigurationProcessor.java:2037)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.getRequiredHostGroups(BlueprintConfigurationProcessor.java:303)
at org.apache.ambari.server.topology.ClusterConfigurationRequest.getRequiredHostGroups(ClusterConfigurationRequest.java:126)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.getTopologyRequiredHostGroups(ConfigureClusterTask.java:113)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:71)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask$$EnhancerByGuice$$3479de41.CGLIB$call$1(<generated>)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask$$EnhancerByGuice$$3479de41$$FastClassByGuice$$a8ae8ea0.invoke(<generated>)
at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:72)
at org.apache.ambari.server.security.authorization.internal.InternalAuthenticationInterceptor.invoke(InternalAuthenticationInterceptor.java:45)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:72)
at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:52)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask$$EnhancerByGuice$$3479de41.call(<generated>)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:45)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
01 Dec 2017 12:51:16,598 INFO [pool-20-thread-1] ConfigureClusterTask:78 - All required host groups are complete, cluster configuration can now begin
01 Dec 2017 12:51:16,598 INFO [pool-20-thread-1] BlueprintConfigurationProcessor:579 - Config recommendation strategy being used is NEVER_APPLY)
01 Dec 2017 12:51:16,598 INFO [pool-20-thread-1] BlueprintConfigurationProcessor:598 - No recommended configurations are applied. (strategy: NEVER_APPLY)
01 Dec 2017 12:51:16,829 INFO [pool-4-thread-1] AsyncCallableService:92 - Task ConfigureClusterTask exception during execution
java.lang.IllegalArgumentException: Unable to match blueprint host group token to a host group: namenode1:8485;
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.getHostStrings(BlueprintConfigurationProcessor.java:1248)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.access$700(BlueprintConfigurationProcessor.java:63)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor$MultipleHostTopologyUpdater.updateForClusterCreate(BlueprintConfigurationProcessor.java:1870)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.doUpdateForClusterCreate(BlueprintConfigurationProcessor.java:355)
at org.apache.ambari.server.topology.ClusterConfigurationRequest.process(ClusterConfigurationRequest.java:152)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:79)
at org.apache.ambari.server.security.authorization.internal.InternalAuthenticationInterceptor.invoke(InternalAuthenticationInterceptor.java:45)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:45)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
01 Dec 2017 12:51:17,845 WARN [pool-20-thread-1] BlueprintConfigurationProcessor:1546 - The property 'dfs.namenode.secondary.http-address' is associated with the component 'SECONDARY_NAMENODE' which isn't mapped to any host group. This may affect configuration topology resolution.
01 Dec 2017 12:51:17,852 ERROR [pool-20-thread-1] ConfigureClusterTask:116 - Could not determine required host groups
java.lang.IllegalArgumentException: Unable to match blueprint host group token to a host group: namenode1:8485;
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor$MultipleHostTopologyUpdater.getRequiredHostGroups(BlueprintConfigurationProcessor.java:2037)
at org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor.getRequiredHostGroups(BlueprintConfigurationProcessor.java:303)
at org.apache.ambari.server.topology.ClusterConfigurationRequest.getRequiredHostGroups(ClusterConfigurationRequest.java:126)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.getTopologyRequiredHostGroups(ConfigureClusterTask.java:113)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:71)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask$$EnhancerByGuice$$3479de41.CGLIB$call$1(<generated>)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask$$EnhancerByGuice$$3479de41$$FastClassByGuice$$a8ae8ea0.invoke(<generated>)
at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:72)
at org.apache.ambari.server.security.authorization.internal.InternalAuthenticationInterceptor.invoke(InternalAuthenticationInterceptor.java:45)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:72)
at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:52)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask$$EnhancerByGuice$$3479de41.call(<generated>)
at org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call(ConfigureClusterTask.java:45)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
01 Dec 2017 12:51:17,860 INFO [pool-20-thread-1] ConfigureClusterTask:78 - All required host groups are complete, cluster configuration can now begin
01 Dec 2017 12:51:17,860 INFO [pool-20-thread-1] BlueprintConfigurationProcessor:579 - Config recommendation strategy being used is NEVER_APPLY)
01 Dec 2017 12:51:17,860 INFO [pool-20-thread-1] BlueprintConfigurationProcessor:598 - No recommended configurations are applied. (strategy: NEVER_APPLY)
01 Dec 2017 12:51:18,000 INFO [pool-4-thread-1] AsyncCallableService:92 - Task ConfigureClusterTask exception during execution
What am I missing? How can I register the hosts? Can someone please help me? Thanks.
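For comparison, the 12-05 blueprint posted later in this thread writes every host-group token in dfs.namenode.shared.edits.dir with a closing % delimiter, while the qjournal value above has no % after namenode1, which matches the unmatched token "namenode1:8485;" in the stack trace. Assuming that is the small detail the accepted answer pointed out, the corrected property would look like this:
"dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::namenode1%:8485;%HOSTGROUP::namenode2%:8485;%HOSTGROUP::namenode3%:8485/mycluster"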
Labels:
- Apache Ambari