Member since: 08-10-2016
Posts: 170
Kudos Received: 14
Solutions: 6
My Accepted Solutions
Title | Views | Posted
---|---|---
  | 20328 | 01-31-2018 04:55 PM
  | 4356 | 11-29-2017 03:28 PM
  | 1939 | 09-27-2017 02:43 PM
  | 2135 | 09-12-2016 06:36 PM
  | 2029 | 09-02-2016 01:58 PM
09-15-2017 07:10 PM
I have reproduced this with the "Data Science: Apache Spark 2.1, Apache Zeppelin 0.7.0" blueprint.
09-15-2017 12:55 PM
I mean that if I don't install with security, the cluster starts up without issues. Yes, my security group does have 9443 enabled. HiveServer2 fails to install. stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 227, in <module>
HiveServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 81, in start
self.configure(env) # FOR SECURITY
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 117, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 52, in configure
hive(name='hiveserver2')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 141, in hive
copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, skip=params.sysprep_skip_copy_tarballs_hdfs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py", line 267, in copy_to_hdfs
replace_existing_files=replace_existing_files,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 555, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 552, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 303, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 418, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 199, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.5.5.0-157/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' --negotiate -u : 'http://had-m1.bt52pnivtndublvux4s5oursrh.ux.internal.cloudapp.net:50070/webhdfs/v1/hdp/apps/2.5.5.0-157/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
"RemoteException": {
"exception": "IOException",
"javaClassName": "java.io.IOException",
"message": "Failed to find datanode, suggest to check cluster health."
}
}
[this is repeated multiple times as it retries]
...
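For what it's worth, the 403 seems to come from WebHDFS itself rather than from Kerberos: the NameNode accepted the PUT but reports it has no live DataNode to write the tarball to. A quick way I'd check this, assuming SSH access to the NameNode host (the host name is the one from the curl above):

# Summarize live vs. dead DataNodes as registered with the NameNode
sudo -u hdfs hdfs dfsadmin -report

# Or ask WebHDFS directly; a healthy cluster returns a directory listing here, not a 403
curl -sS 'http://had-m1.bt52pnivtndublvux4s5oursrh.ux.internal.cloudapp.net:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs'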
09-15-2017 12:37 PM
Using Cloudbreak, I install a cluster and check that it works. When I then reinstall the same cluster with security & Knox enabled, the cluster no longer installs correctly. Any help would be appreciated; I'm sure I have forgotten a step. I tried doing this through the UI and from a script. (Blueprint attached if that helps... but as my comment below says, I was also able to replicate this with one of the default blueprints: "Data Science: Apache Spark 2.1, Apache Zeppelin 0.7.0") Here's how I built the cluster:
credential select --name cloudbreakcredential
blueprint select --name "HA, zepplin and Ooziev2.7"
instancegroup configure --AZURE --instanceGroup master1 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup master2 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup master3 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup master4 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup Utility1 --nodecount 1 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer true
instancegroup configure --AZURE --instanceGroup worker --nodecount 5 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer false
#hostgroup configure --recipeNames ranger-pre-installation --hostgroup master4 --timeout 15
network select --name default-azure-network
stack create --AZURE --name hadoop-pilot-oozie-rg --region "Canada East" --wait true --attachedStorageType PER_VM
cluster create --description "Haoop Pilot" --password [password] --wait true --enableKnoxGateway --enableSecurity true --kerberosAdmin admin --kerberosMasterKey [masterkey] --kerberosPassword [password]
Labels:
- Hortonworks Cloudbreak
09-13-2017 02:44 PM
Thanks for the fast answer. If I use the Cloudbreak shell, is there a way to tell it to ignore the validation?
09-13-2017 02:20 PM
I want to enable NameNode high availability in my cluster but can't seem to find the right way to enable this from the blueprint I'm supplying to Cloudbreak.
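For reference, this is roughly the shape I was expecting to be able to drop into the blueprint's configurations section (a sketch only; the nameservice name "mycluster" and the %HOSTGROUP% targets are placeholders of mine, and the blueprint would also need NAMENODE in two host groups plus JOURNALNODE and ZKFC components):

"configurations": [
  { "hdfs-site": {
      "dfs.nameservices": "mycluster",
      "dfs.ha.namenodes.mycluster": "nn1,nn2",
      "dfs.namenode.rpc-address.mycluster.nn1": "%HOSTGROUP::master1%:8020",
      "dfs.namenode.rpc-address.mycluster.nn2": "%HOSTGROUP::master2%:8020",
      "dfs.namenode.http-address.mycluster.nn1": "%HOSTGROUP::master1%:50070",
      "dfs.namenode.http-address.mycluster.nn2": "%HOSTGROUP::master2%:50070",
      "dfs.namenode.shared.edits.dir": "qjournal://%HOSTGROUP::master1%:8485;%HOSTGROUP::master2%:8485;%HOSTGROUP::master3%:8485/mycluster",
      "dfs.client.failover.proxy.provider.mycluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
      "dfs.ha.automatic-failover.enabled": "true",
      "dfs.ha.fencing.methods": "shell(/bin/true)"
  } },
  { "core-site": {
      "fs.defaultFS": "hdfs://mycluster",
      "ha.zookeeper.quorum": "%HOSTGROUP::master1%:2181,%HOSTGROUP::master2%:2181,%HOSTGROUP::master3%:2181"
  } }
]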
Labels:
- Hortonworks Cloudbreak
09-12-2017 02:18 PM
Thank you, that was totally it.
09-12-2017 02:03 PM
3 host groups; cbd-shell version 1.6.13
09-12-2017 01:44 PM
I can create everything without issue in the UI but I need to script this... Here's what I did:
cloudbreak-shell>credential select --id 1
Credential selected, id: 1
cloudbreak-shell>blueprint select --id 10
Blueprint has been selected, id: 10
cloudbreak-shell>instancegroup configure --AZURE --instanceGroup master --nodecount 1 --ambariServer true --securityGroupName internalports --templateName mincanadianeast
instanceGroup templateId nodeCount type securityGroupId attributes
------------- ---------- --------- ------- --------------- ----------
master 4 1 GATEWAY 5 {}
cloudbreak-shell>instancegroup configure --AZURE --instanceGroup worker --nodecount 9 --ambariServer false --securityGroupName internalports --templateName mincanadianeast
instanceGroup templateId nodeCount type securityGroupId attributes
------------- ---------- --------- ------- --------------- ----------
worker 4 9 CORE 5 {}
master 4 1 GATEWAY 5 {}
cloudbreak-shell>network select --name default-azure-network
Network is selected with name: default-azure-network
cloudbreak-shell>stack create --AZURE --name pilot-2nd-cluster --region "Canada East" --wait true
Command 'stack create --AZURE --name pilot-2nd-cluster --region "Canada East" --wait true' was found but is not currently available (type 'help' then ENTER to learn about this command)
I'm following the documentation on the site pretty closely. Happy to provide any log that may be helpful. Why can't I create a stack?
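One thing I tried, assuming the 1.6.x shell still ships its built-in hint command, which prints the next step the shell expects before later commands unlock:

cloudbreak-shell>hint

As far as I understand it, the shell only enables stack create once everything it needs has been selected; in particular, every host group in the selected blueprint seems to need a matching configured instance group, so a blueprint with three host groups and only two instancegroup configure calls would keep the command unavailable.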
Labels:
- Hortonworks Cloudbreak
08-17-2017 07:08 PM
I think this really depends on the workload... I'd actually consider turning the replication up, given the following condition: the data does not change frequently but is queried repeatedly. If you aren't constantly writing to the cluster and you have spare capacity, why not increase the replication factor to decrease network traffic? By spreading the data wider across the cluster you improve data locality, which can actually reduce traffic on the network. Yes, you'd pay a higher upfront cost for writing the data, but if the workload is write once, read 1000 times, you may be better off increasing the replication factor. Thoughts? I want to acknowledge that in a situation where you are doing write-heavy operations, your article is on point.
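To put rough numbers on the read-heavy case (a back-of-envelope, assuming a 10-node cluster, a 1 GB dataset, uniform block placement, and read tasks landing on uniformly random nodes):

extra write cost going from replication 3 to 6: (6 - 1) - (3 - 1) = 3 GB of extra pipeline copies
network per full read at replication 3: (1 - 3/10) x 1 GB = 0.7 GB
network per full read at replication 6: (1 - 6/10) x 1 GB = 0.4 GB
break-even: 3 GB / (0.3 GB saved per read) = 10 reads

So for a write-once, read-1000-times workload the extra copies pay for themselves almost immediately.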
08-08-2017 05:08 PM
Sorry, mandatory @Aaron Dunlap