Member since
09-25-2015
25
Posts
31
Kudos Received
4
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 357 | 04-06-2017 07:10 PM |
| | 528 | 08-24-2016 06:55 PM |
| | 999 | 12-16-2015 03:44 PM |
| | 567 | 12-08-2015 12:54 PM |
05-05-2017
01:15 PM
All, this is actually a bug: Ambari removes the Falcon service but not its hooks in the Oozie configs. The proper way to fix this, so it does not hit you on each upgrade, is to remove the values listed below from your Oozie configs. I have also opened an internal (to Hortonworks) JIRA to have this fixed in future versions of Ambari.
oozie.service.ProxyUserService.proxyuser.falcon.hosts
oozie.service.ProxyUserService.proxyuser.falcon.groups
oozie.service.URIHandlerService.uri.handlers
oozie.services.ext
oozie.service.ELService.ext.functions.coord-job-submit-instances
oozie.service.ELService.ext.functions.coord-action-create-inst
oozie.service.ELService.ext.functions.coord-action-create
oozie.service.ELService.ext.functions.coord-job-submit-data
oozie.service.ELService.ext.functions.coord-action-start
oozie.service.ELService.ext.functions.coord-sla-submit
oozie.service.ELService.ext.functions.coord-sla-create
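If you prefer doing this from the command line, Ambari's configs.sh script can remove each property from oozie-site. Here is a dry-run sketch; the host, cluster name, and admin credentials are placeholders, and I am assuming the stock `delete` action of configs.sh for your Ambari version, so confirm the list against your own oozie-site before dropping the echo:

```shell
# Dry run: print one configs.sh "delete" per leftover Falcon property.
# AMBARI_HOST, CLUSTER, and admin/admin are placeholders for your setup.
AMBARI_HOST=ambari.example.com
CLUSTER=mycluster
for prop in \
    oozie.service.ProxyUserService.proxyuser.falcon.hosts \
    oozie.service.ProxyUserService.proxyuser.falcon.groups \
    oozie.service.URIHandlerService.uri.handlers \
    oozie.services.ext \
    oozie.service.ELService.ext.functions.coord-job-submit-instances \
    oozie.service.ELService.ext.functions.coord-action-create-inst \
    oozie.service.ELService.ext.functions.coord-action-create \
    oozie.service.ELService.ext.functions.coord-job-submit-data \
    oozie.service.ELService.ext.functions.coord-action-start \
    oozie.service.ELService.ext.functions.coord-sla-submit \
    oozie.service.ELService.ext.functions.coord-sla-create
do
    # Remove the leading "echo" to execute for real.
    echo /var/lib/ambari-server/resources/scripts/configs.sh \
        -u admin -p admin delete "$AMBARI_HOST" "$CLUSTER" oozie-site "$prop"
done
```

Restart Oozie afterward so the change takes effect.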
04-06-2017
07:10 PM
1 Kudo
While we do not publish a best practice for this, I suggest the following:
1. Bring down the entire cluster cleanly and perform the RAM upgrade.
2. Use this guide to determine what all your new config settings should be: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_command-line-installation/content/determine-hdp-memory-config.html (or the matching page for your HDP release; this one is from 2.6.0).
3. Once you have changed your memory settings, restart the cluster.
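The sizing script in that doc boils down to a simple formula. Here is a minimal shell sketch of it for a hypothetical 16-core / 64 GB / 8-disk worker node; the reserved-RAM (8 GB) and minimum-container (2048 MB) values come from the guide's lookup tables, so verify them against your hardware and HDP release:

```shell
# containers = min(2*CORES, 1.8*DISKS, available_RAM / min_container_size)
# RAM per container = max(min_container_size, available_RAM / containers)
CORES=16; RAM_GB=64; DISKS=8
RESERVED_GB=8; MIN_CONTAINER_MB=2048   # from the doc's lookup tables

AVAIL_MB=$(( (RAM_GB - RESERVED_GB) * 1024 ))
BY_CORES=$(( 2 * CORES ))
BY_DISKS=$(( 18 * DISKS / 10 ))        # 1.8 * disks, integer math
BY_RAM=$(( AVAIL_MB / MIN_CONTAINER_MB ))

CONTAINERS=$BY_CORES
if [ "$BY_DISKS" -lt "$CONTAINERS" ]; then CONTAINERS=$BY_DISKS; fi
if [ "$BY_RAM"  -lt "$CONTAINERS" ]; then CONTAINERS=$BY_RAM; fi

PER_CONTAINER_MB=$(( AVAIL_MB / CONTAINERS ))
if [ "$PER_CONTAINER_MB" -lt "$MIN_CONTAINER_MB" ]; then
    PER_CONTAINER_MB=$MIN_CONTAINER_MB
fi

echo "yarn.nodemanager.resource.memory-mb = $(( CONTAINERS * PER_CONTAINER_MB ))"
echo "yarn.scheduler.minimum-allocation-mb = $PER_CONTAINER_MB"
```

For these example numbers that works out to 14 containers of 4096 MB each; the doc's script also derives the mapreduce.* memory settings from the same per-container figure.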
03-21-2017
03:58 PM
2 Kudos
In the past, if a SELECT returned no rows, Hadoop would still create an empty file, and some customers used this file in their workflows. This was changed in https://issues.apache.org/jira/browse/HIVE-13040: because anyone using cloud-based storage could be charged for empty files, it was decided that Hadoop should not write them.
01-12-2017
07:39 PM
3 Kudos
I can confirm for you that the deletes look correct. I normally do not do the alters, but I don't see them causing an issue. Remember to stop ambari-server before editing, and take a good backup of the database. Then, when done, start ambari-server again.
01-12-2017
05:27 PM
3 Kudos
No, the code used in HDP is the same across the 6.x releases; there will be no difference in how Hadoop works on these systems. Also, HDP 2.1.x is compatible with RHEL 6.x per: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_installing_manually_book/content/rpm-chap1-2.html
08-24-2016
07:42 PM
@Arun A K I just use the web GUI that comes with IPA's LDAP. Keep in mind I am not managing a large user base, but rather just doing small recreations to help customers. I would think the GUI would get cumbersome if you were managing an entire enterprise.
08-24-2016
07:15 PM
@Arun A K, first let's fix your admin user. Simply go into the database and run:
update users set ldap_user = 0 where user_name = 'admin';
then reset the password as described here: https://community.hortonworks.com/questions/449/how-to-reset-ambari-admin-password.html
Here is the output of an ldapsearch on a user in my IPA, to show you where dn is:
# orlando, users, accounts, ipa.example.com
dn: uid=orlando,cn=users,cn=accounts,dc=ipa,dc=example,dc=com
displayName: Orlando Teixeira
cn: Orlando Teixeira
objectClass: top
objectClass: person
objectClass: organizationalperson
objectClass: inetorgperson
objectClass: inetuser
objectClass: posixaccount
objectClass: krbprincipalaux
objectClass: krbticketpolicyaux
objectClass: ipaobject
objectClass: ipasshuser
objectClass: ipaSshGroupOfPubKeys
objectClass: mepOriginEntry
loginShell: /bin/sh
sn: Teixeira
gecos: Orlando Teixeira
homeDirectory: /home/orlando
krbPwdPolicyReference: cn=global_policy,cn=IPA.EXAMPLE.COM,cn=kerberos,dc=ipa,
dc=example,dc=com
mail: orlando@ipa.example.com
krbPrincipalName: orlando@IPA.EXAMPLE.COM
givenName: Orlando
uid: orlando
initials: OT
ipaUniqueID: 3b9308de-895c-11e5-a188-0800274e577d
uidNumber: 1690200001
gidNumber: 1690200001
memberOf: cn=ipausers,cn=groups,cn=accounts,dc=ipa,dc=example,dc=com
memberOf: cn=test,cn=groups,cn=accounts,dc=ipa,dc=example,dc=com
memberOf: cn=test2,cn=groups,cn=accounts,dc=ipa,dc=example,dc=com
mepManagedEntry: cn=orlando,cn=groups,cn=accounts,dc=ipa,dc=example,dc=com
krbLoginFailedCount: 6
krbLastFailedAuth: 20160601185034Z
# orlando, groups, accounts, ipa.example.com
dn: cn=orlando,cn=groups,cn=accounts,dc=ipa,dc=example,dc=com
objectClass: posixgroup
objectClass: ipaobject
objectClass: mepManagedEntry
objectClass: top
cn: orlando
gidNumber: 1690200001
description: User private group for orlando
mepManagedBy: uid=orlando,cn=users,cn=accounts,dc=ipa,dc=example,dc=com
ipaUniqueID: 3b9b8388-895c-11e5-a188-0800274e577d
08-24-2016
06:55 PM
2 Kudos
Here are the default IPA values (if you used an out-of-the-box, no-changes IPA) that work for me:
authentication.ldap.dnAttribute=dn
authentication.ldap.groupMembershipAttr=memberUid
authentication.ldap.groupObjectClass=posixGroup
authentication.ldap.userObjectClass=mepManagedEntry
authentication.ldap.usernameAttribute=cn
08-03-2016
02:26 PM
@david serafini ARM64 will be part of the default Debian release as of 8.5. Currently we support Debian 7.x, but I am sure we will work to include Debian 8.x going forward. We have an internal JIRA tracking this, and the latest comment is that it is more than 6 months out, just so you know.
08-03-2016
02:16 PM
Hello @rkrishna, I am not sure this is Ambari so much as your setup. A Cycle Detected error normally comes about when you have a proxy pointed to itself. Can you tell me, do you have an ambari proxy set in your /var/lib/ambari-server/ambari-env.sh? If so, did you exclude the ldap server that you configured in your setup-ldap? This might be the issue.
07-20-2016
06:02 PM
1 Kudo
In the Ambari DB you would run:
update host_role_command set status = 'COMPLETED' where request_id = 1;
Also, instead of the other options you took, you could have set task 17829 to COMPLETED with almost the same command:
update host_role_command set status = 'COMPLETED' where task_id = 17829;
07-19-2016
01:46 PM
@Felix M Hello, I would like to collect more information so we can work towards solving this issue.
1. On the IBM SPSS host, can you give me the OS/vendor (i.e. CentOS 6.6, etc.) and the version of Python (python --version)?
2. You say this happens during install of the server, so looking at your Ambari stack, is Ranger installed, install failed, or not there at all?
3. rpm -qa | grep ranger
4. Attach the /var/log/ambari-agent/ambari-agent.log to this thread.
-- Thank you, Orlando Teixeira, Hortonworks, Inc., Senior Technical Support Engineer
07-05-2016
01:31 PM
Guilhemme, could you please run:
yum whatprovides 'ambari-admin-password-reset'
I just checked my 2.2.2 install and I do not see it:
[root@mon-orlan ~]# ambari (hit tab to complete)
ambari-python-wrap ambari-server ambari_server_main.py ambari-server.py
02-26-2016
04:33 PM
3 Kudos
Most customers actually use the user ambari for the database, so it would actually be: 2. Run 'psql -U ambari ambari'
12-17-2015
06:36 PM
1 Kudo
OK, you are in a bad way. Can you go to Admin > Kerberos and tell me, do you have a Disable Kerberos button? I think your best bet would be to disable it and then redeploy it.
12-17-2015
01:03 PM
1 Kudo
What do you get as output if you do: /var/lib/ambari-server/resources/scripts/configs.sh -u AMBARI_USER -p AMBARI_PASS get AMBARI_SERVER CLUSTER_NAME kerberos-env
12-16-2015
04:38 PM
3 Kudos
You do in fact have one HBase installed: the one internal to Ambari Metrics, and it needs to stay running. I think you may have an issue I have seen before. Can you add this to your ambari.properties file:
server.timeline.metrics.cache.disabled = true
and then restart Ambari? Let me know if this helps.
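To illustrate the change, here is a quick sketch against a scratch file; on a real server the file is /etc/ambari-server/conf/ambari.properties (the default RPM-install path), and you restart ambari-server afterward:

```shell
# Scratch copy standing in for /etc/ambari-server/conf/ambari.properties.
PROPS=$(mktemp)
# Append the property...
echo "server.timeline.metrics.cache.disabled=true" >> "$PROPS"
# ...and confirm it is there before restarting.
grep -c 'server.timeline.metrics.cache.disabled' "$PROPS"
# On the real server, finish with: ambari-server restart
```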
12-16-2015
03:44 PM
3 Kudos
There is a similar issue in our database of issues. Can you follow this: manually add the kerberos-env/kdc_type property back to the current kerberos-env configuration. The value must be either "mit-kdc" or "active-directory" and must be the correct one for your setup. Once this is done, Ambari should be restarted so that any cached configuration data is refreshed. This can be done with the configs.sh command:
/var/lib/ambari-server/resources/scripts/configs.sh -u AMBARI_USER -p AMBARI_PASS set hdp23 hdp23 kerberos-env kdc_type "<kdc_type_value>"
12-08-2015
05:12 PM
7 Kudos
In a browser, go to: http://<AMBARI-HOST>:8080/api/v1/clusters/<CLUSTER-NAME>/requests/ and find the highest request number in the output. Now we can go to each request by appending its number; an example would be: http://<AMBARI-HOST>:8080/api/v1/clusters/<CLUSTER-NAME>/requests/110 The output should look like: {
"href" : "http://mon-orlan:8080/api/v1/clusters/test/requests/110",
"Requests" : {
"aborted_task_count" : 0,
"cluster_name" : "test",
"completed_task_count" : 3,
"create_time" : 1444340395849,
"end_time" : 1444340449936,
"exclusive" : false,
"failed_task_count" : 0,
"id" : 110,
"inputs" : "{}",
"operation_level" : "SERVICE",
"progress_percent" : 100.0,
"queued_task_count" : 0,
"request_context" : "Restart all components with Stale Configs for HIVE",
.....
If that is not your hung operation, go back one by one from the highest request number. What we are looking for is the request whose request_context matches your hung operation; that is the one we want to abort. Once you find that request ID, go to the Ambari server command line as root and run:
curl -u admin:admin -i -H "X-RequestedBy:ambari" -X PUT -d '{"Requests":{"request_status":"ABORTED","abort_reason":"Aborted by user"}}' http://localhost:8080/api/v1/clusters/<CLUSTER_NAME>/requests/<REQUEST_ID>
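Putting that last step together, here is a dry-run sketch that builds the abort call; the cluster name and request ID are the placeholders from the example above, and removing the echo actually sends it:

```shell
# Build the abort request for the hung operation found above.
# CLUSTER and REQUEST_ID are example values; substitute your own.
CLUSTER=test
REQUEST_ID=110
PAYLOAD='{"Requests":{"request_status":"ABORTED","abort_reason":"Aborted by user"}}'
URL="http://localhost:8080/api/v1/clusters/${CLUSTER}/requests/${REQUEST_ID}"
# Remove the leading "echo" to actually abort the request.
echo curl -u admin:admin -i -H "X-RequestedBy:ambari" -X PUT -d "$PAYLOAD" "$URL"
```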
12-08-2015
01:02 PM
All, I think you may have missed an important thing:
sudo /usr/sbin/ambari-server setup-ldap
Using python /usr/bin/python2.7
Setting up LDAP properties...
We only support Python 2.6.6; using 2.7 causes many issues. Please use Python 2.6.6. Also, I suggest you do an ldapsearch to test your settings:
ldapsearch -x -H ldap://ldap.xxxxx.com -b dc=CENTENE,dc=com -D "CN=xxxxxx,OU=LDAP,DC=xxxxxxx,DC=com" -W "(sAMAccountName=<User-to-search-for>)"
12-08-2015
12:54 PM
So the official support answer, per the doc listed above, is no. We support MySQL 5.6, but not 5.7. It has issues with at least Ranger and has not been tested at all. I would think the more heavily the cluster is used, the more incompatibilities you would find. Please use the supported version.
12-08-2015
12:40 PM
Pardeep, we do not currently support MySQL HA, and it is my understanding that going forward MySQL is going away (from the Linux distros), so you would have to add it manually.
12-03-2015
07:53 PM
1 Kudo
Problem: When trying to distcp to AWS, this error is reported:
2015-12-03 09:50:01,132 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.s3a.S3AFileSystem could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:224)
and also:
Caused by: java.lang.ClassNotFoundException: com.amazonaws.event.ProgressListener
Solution: In Ambari, under the YARN configs, add
/usr/hdp/2.3.2.0-2950/hadoop/* and /usr/hdp/2.3.2.0-2950/hadoop/lib*
to yarn.application.classpath. Then under the MapReduce configs, do the same for mapreduce.application.classpath. Restart all affected services and it should work.
12-03-2015
04:24 PM
Currently Ambari (as of 2.1.2.1) does not support Postgres HA.
10-01-2015
04:31 PM
The Ambari admin account can be disabled and, if you really like, deleted from the Manage Ambari users area of the Ambari GUI. Of course, you cannot disable it while logged in as admin; you need to log in with another admin account to do it.