Member since: 09-21-2017
Posts: 7
Kudos Received: 1
Solutions: 0
11-13-2017
05:50 PM
Hi there, we are looking to autoscale a Hadoop cluster up and down using time-based mode. We are able to upscale the cluster successfully, but the downscale never gets triggered automatically; if we use the remove-nodes feature instead, it works fine. The host group we are trying to downscale does not contain datanodes. Can someone tell us whether we are doing something incorrectly? I am attaching screenshots of the alerts, scaling policies, cluster scaling configuration, and history. Thanks! Screenshots: screen-shot-2017-11-13-at-094106.png, screen-shot-2017-11-13-at-094119.png, screen-shot-2017-11-13-at-094135.png
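For context, the time-based alerts and policies are set up roughly along the lines of the sketch below (the cron expressions, host group name, and node counts are illustrative placeholders, not the exact values in the screenshots):

upscale alert:    cron = 0 30 5 ? * *   (05:30 daily), time zone = UTC
  scaling policy: host group = worker, adjustment type = exact, target = 10 nodes
downscale alert:  cron = 0 45 18 ? * *  (18:45 daily), time zone = UTC
  scaling policy: host group = worker, adjustment type = exact, target = 2 nodes

We are also not sure whether the cooldown time or the minimum cluster size in the cluster scaling configuration could be blocking the downscale.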
Labels:
- Apache Hadoop
- Hortonworks Cloudbreak
10-11-2017
07:53 PM
Hi there, we are looking to use an ephemeral Hadoop cluster/infrastructure approach for one of our applications. The application runs at 6am and 6pm, and each run takes around 30 minutes. We want to bring up the infrastructure with a particular blueprint, let the application run end-to-end, and then tear the cluster down until the next run 12 hours later. Does Cloudbreak provide this capability, and if so, can someone point us to a resource or documentation for it? Or do we need to keep a small cluster running all the time with a minimum number of nodes and scale it up/down on the application schedule using time-based autoscaling? Thanks!
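To make the intent concrete, the schedule we have in mind is roughly the sketch below; the script names are hypothetical wrappers around whatever cluster create/terminate mechanism (UI, CLI, or API) Cloudbreak recommends, which is exactly what we are asking about:

# crontab on a small scheduler host, for the 6am/6pm runs
45 5,17 * * *  /opt/etl/create-cluster.sh     # spin the cluster up ~15 minutes before the run
0  6,18 * * *  /opt/etl/run-application.sh    # submit the ~30-minute application
0  7,19 * * *  /opt/etl/delete-cluster.sh     # tear the cluster down after the run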
10-03-2017
06:24 PM
Hi there, we are trying to run two different Hadoop clusters: one MapReduce/Spark cluster for computation and one HBase cluster for storage. For the MapReduce cluster we want only the client-side jars for HBase and Phoenix. Is it possible to add just the HBase/Phoenix clients on the worker or core nodes from the blueprint itself, or will we need a separate bootstrap action? Thanks!
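For illustration, we would like the worker host group on the compute cluster to look roughly like the snippet below (the host group name and cardinality are placeholders); in particular we are unsure whether HBASE_CLIENT alone pulls in the Phoenix client jars, whether a Phoenix component such as PHOENIX_QUERY_SERVER is also needed, and whether Ambari will even accept HBASE_CLIENT in a blueprint that has no HBASE_MASTER:

{
    "name": "host_group_compute_worker",
    "configurations": [],
    "components": [
        { "name": "NODEMANAGER" },
        { "name": "DATANODE" },
        { "name": "HDFS_CLIENT" },
        { "name": "HBASE_CLIENT" },
        { "name": "METRICS_MONITOR" }
    ],
    "cardinality": "4"
}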
Labels:
- Apache Ambari
- Apache HBase
- Apache Phoenix
09-22-2017
10:58 PM
1 Kudo
Hi there, we are unable to add Phoenix to the slave nodes of our currently running cluster, which was spun up using Cloudbreak. We were under the impression that if we added HBase as a service, Phoenix would be available to add later when we needed it. We would like to know if we have missed something. I am attaching the blueprint as well: {
"host_groups": [
{
"name": "host_group_master_1",
"configurations": [],
"components": [
{
"name": "HBASE_MASTER"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "NAMENODE"
},
{
"name": "HDFS_CLIENT"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "HBASE_CLIENT"
}
],
"cardinality": "1"
},
{
"name": "host_group_master_2",
"configurations": [],
"components": [
{
"name": "HBASE_MASTER"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "PIG"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "HDFS_CLIENT"
},
{
"name": "HBASE_CLIENT"
},
{
"name": "SECONDARY_NAMENODE"
}
],
"cardinality": "1"
},
{
"name": "host_group_master_3",
"configurations": [],
"components": [
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "HBASE_CLIENT"
},
{
"name": "HDFS_CLIENT"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "METRICS_COLLECTOR"
}
],
"cardinality": "1"
},
{
"name": "host_group_client_1",
"configurations": [],
"components": [
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "PIG"
},
{
"name": "HBASE_CLIENT"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "SQOOP"
},
{
"name": "HDFS_CLIENT"
}
],
"cardinality": "1"
},
{
"name": "host_group_slave_1",
"configurations": [],
"components": [
{
"name": "HBASE_REGIONSERVER"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "HBASE_CLIENT"
},
{
"name": "DATANODE"
},
{
"name": "HDFS_CLIENT"
}
],
"cardinality": "4"
}
],
"Blueprints": {
"blueprint_name": "hbase-sec-nn",
"stack_name": "HDP",
"stack_version": "2.6"
}
}

Thank you!
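One guess on our side, which we have not verified, is that the blueprint may also need Phoenix switched on explicitly at the cluster level, for example via a top-level configurations block like the one below; we are not certain that phoenix_sql_enabled is the right hbase-env property, or whether a Phoenix component (such as PHOENIX_QUERY_SERVER) has to be listed in a host group instead:

"configurations": [
    {
        "hbase-env": {
            "phoenix_sql_enabled": "true"
        }
    }
],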
09-22-2017
10:06 PM
I was using an incorrect job-tracker IP address. Make sure the job-tracker setting points to the ResourceManager's address.
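For anyone hitting the same thing, the working Oozie job.properties ends up looking roughly like the sketch below; the hostnames are placeholders, and 8050 / 8020 are the HDP defaults for the ResourceManager and NameNode RPC ports:

# job.properties -- on YARN, jobTracker must point at the ResourceManager address
nameNode=hdfs://namenode-host.example.com:8020
jobTracker=resourcemanager-host.example.com:8050
queueName=default
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/wordcount-wf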
09-21-2017
04:36 PM
Hi, we have spun up a cluster using Cloudbreak on EC2 instances. We tried the sample wordcount application with the hadoop jar command and the MapReduce job succeeded, as did a sqoop import from the command line. But when we run these same actions through Oozie, the jobs get stuck in the PREP state forever. The issue is similar to one posted previously: https://community.hortonworks.com/questions/114512/oozie-actions-stuck-in-prep-state.html Can someone help us determine what might be causing this problem? Thanks.
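For reference, the map-reduce action in our workflow is structured roughly as in the minimal sketch below (names are placeholders, and the mapper/reducer configuration properties are omitted); ${jobTracker} and ${nameNode} come from job.properties:

<!-- minimal sketch of the Oozie map-reduce action -->
<action name="wordcount">
    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>  <!-- on YARN this should resolve to the ResourceManager address -->
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- mapper/reducer classes and input/output path properties go here -->
        </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
</action>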
Labels:
- Apache Oozie