Member since
09-18-2015
100
Posts
98
Kudos Received
11
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 633 | 03-22-2016 02:05 AM |
| | 546 | 03-17-2016 06:16 AM |
| | 871 | 03-17-2016 06:13 AM |
| | 674 | 03-12-2016 04:48 AM |
| | 3099 | 03-10-2016 08:04 PM |
12-13-2017
03:37 AM
@Robert Levas I have posted a YouTube link with a recording of the error. FreeIPA Video link (YouTube link showing the issue)
12-13-2017
03:25 AM
@Geoffrey Shelton Okot - The instruction above does not solve the issue. I still have the same problem.
12-13-2017
03:20 AM
@Robert Levas - The IPA version is VERSION: 4.5.0, API_VERSION: 2.228
12-13-2017
03:20 AM
@Robert Levas - I am using FreeIPA VERSION: 4.5.0, API_VERSION: 2.228
12-12-2017
12:28 AM
Below is the exception that shows up when starting the Ambari Infra service for the Solr instance. I am not sure where kerberos-env needs to be set, or what the underlying issue is. The ambari-infra keytabs are present in the /etc/security/keytabs folder. =========================================================================================== Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 123, in <module>
InfraSolr().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 46, in start
self.configure(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 41, in configure
setup_infra_solr(name = 'server')
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/setup_infra_solr.py", line 101, in setup_infra_solr
mode=0640)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 123, in action_create
content = self._get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 160, in _get_content
return content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 52, in __call__
return self.get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 144, in get_content
rendered = self.template.render(self.context)
File "/usr/lib/python2.6/site-packages/ambari_jinja2/environment.py", line 891, in render
return self.environment.handle_exception(exc_info, True)
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/templates/infra-solr-security.json.j2", line 28, in top-level template code
"{{atlas_kerberos_service_user}}@{{kerberos_realm}}": ["{{infra_solr_role_atlas}}", "{{infra_solr_role_ranger_audit}}", "{{infra_solr_role_dev}}"],
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'kerberos-env' was not found in configurations dictionary!
==============================================================================================
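The failure mode behind the traceback can be illustrated with a minimal sketch. This is not Ambari's actual code; it is a simplified stand-in showing how a strict configurations dictionary raises `Fail` the moment the Jinja template asks for the absent `kerberos-env` section:

```python
# Simplified illustration of Ambari's config lookup failure; class and
# message mirror the traceback above, but the code is a hypothetical sketch.
class Fail(Exception):
    pass

class ConfigDictionary(dict):
    """Dict that fails loudly when a template asks for an absent section."""
    def get_section(self, name):
        if name not in self:
            raise Fail("Configuration parameter '" + name +
                       "' was not found in configurations dictionary!")
        return self[name]

configs = ConfigDictionary({"cluster-env": {"security_enabled": "true"}})
try:
    configs.get_section("kerberos-env")  # the section the template renders
except Fail as e:
    print(e)  # prints the same message as the traceback
```

In other words, the template itself is fine; the `kerberos-env` config type was simply never delivered to the agent for this service.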
Labels:
- Apache Ambari
- Apache Solr
12-11-2017
08:30 PM
Thanks for the response. I have removed the user a couple of times and retried, and there is still an error. The problem seems to be in how the password is tested and in the IPA password policy. The principal does get created by Ambari in IPA; it is the password test that stops progress. When I kinit with a new user from the command line, it asks for a password change on the first kinit. I am unsure how to get around this. Do you know whether the password policy is correct? Do we need to do anything else for it?
12-11-2017
06:09 PM
I am trying to secure an HDP 2.6 install with FreeIPA, using the experimental feature in Ambari: https://community.hortonworks.com/articles/59645/ambari-24-kerberos-with-freeipa.html I am running into an issue when a test principal is being created. I changed the password policy in IPA to set the max life and min life to 0 in the global_policy. In the Ambari server logs I see the exception below. ============================================================================================== 11 Dec 2017 17:44:23,020 WARN [Server Action Executor Worker 315] IPAKerberosOperationHandler:310 - demo-121117 is not in lowercase. FreeIPA does not recognize user principals that are not entirely in lowercase. This can lead to issues with kinit and keytabs. Make sure users are in lowercase 11 Dec 2017 17:44:29,865 ERROR [Server Action Executor Worker 315] CreatePrincipalsServerAction:299 - Failed to create principal, demo-121117@US-WEST-1.COMPUTE.INTERNAL - Unexpected response from kinit while trying to password for demo-121117 got: org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Unexpected response from kinit while trying to password for demo-121117 got: at org.apache.ambari.server.serveraction.kerberos.IPAKerberosOperationHandler.updatePassword(IPAKerberosOperationHandler.java:575) at org.apache.ambari.server.serveraction.kerberos.IPAKerberosOperationHandler.createPrincipal(IPAKerberosOperationHandler.java:337) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:258) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:161) at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processRecord(KerberosServerAction.java:538) at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:420) at
org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:91) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:516) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:453) at java.lang.Thread.run(Thread.java:745) ============================================================================================ I am looking for the IPA settings that will resolve this issue. Thanks for all the help.
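For reference, the password-policy change described above can be applied with the FreeIPA CLI roughly as follows. This is a hedged sketch: the flags are per FreeIPA 4.x, so verify against `ipa pwpolicy-mod --help` on your version before running it.

```shell
# Run on the IPA server with an admin ticket.
kinit admin

# Set min life (hours) and max life (days) to 0 in the global policy,
# so newly created principals are not forced through a password change.
ipa pwpolicy-mod global_policy --minlife=0 --maxlife=0

# Confirm the change took effect.
ipa pwpolicy-show global_policy
```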
Labels:
- Apache Ambari
06-24-2016
08:05 PM
We are interested in querying and exporting the YARN Application Timeline Server content for external reporting. Is there any documentation or method to query and export the data? What is the best way to do this? I found this on Apache's website: http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Accessing_generic-data_via_command-line
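One option is the Timeline Server's generic-data REST API covered in the linked docs. Below is a hedged sketch of an export script: the hostname, the entity type, and the default webapp port 8188 are placeholders to swap for your cluster's values.

```python
# Sketch of exporting Application Timeline Server entities via its REST API.
# Host and entity type below are hypothetical examples.
import json
import urllib.request

def timeline_url(host, entity_type, limit=100, port=8188):
    """Build the generic-data REST URL for the Timeline Server.
    8188 is the usual ATS webapp port; adjust for your cluster."""
    return "http://%s:%d/ws/v1/timeline/%s?limit=%d" % (host, port, entity_type, limit)

def export_entities(host, entity_type, out_path):
    """Fetch entities of one type and dump them to JSON for external reporting."""
    with urllib.request.urlopen(timeline_url(host, entity_type)) as resp:
        entities = json.load(resp).get("entities", [])
    with open(out_path, "w") as f:
        json.dump(entities, f, indent=2)
    return len(entities)

print(timeline_url("ats-host.example.com", "TEZ_DAG_ID", limit=10))
# -> http://ats-host.example.com:8188/ws/v1/timeline/TEZ_DAG_ID?limit=10
```

From there the JSON file can be loaded into whatever reporting tool you use.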
Labels:
- Apache YARN
05-04-2016
01:17 AM
This is the Hortonworks doc that references it: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/deploy-hue-rm-ha-cluster.html
03-22-2016
02:27 AM
@Richard Xu Can you please remove the hostname? :) It is a security risk and the customer may not like it.
03-22-2016
02:05 AM
1 Kudo
@rcicak Yes, you are right, 3x replication does not make sense here; 3 is simply the default. But another way of looking at replication: if you are going after the same table and a node is busy (which does not exactly apply in this case), you can run the same query on another node where a replica is available. I would leave it at 3, so that if someone adds more nodes to the VMs, the data gets replicated correctly.
03-17-2016
06:16 AM
1 Kudo
@Mohammed Ibrahim a) I would go with the Hortonworks Sandbox if you are doing standalone. b) Here is a lab for setting up HDP on a single node: https://github.com/shivajid/HortonworksOperationsWorkshop/blob/master/Lab1.md It uses HDP 2.3; you can make modifications for HDP 2.4.
03-17-2016
06:13 AM
1 Kudo
@ARUNKUMAR RAMASAMY Please provide the documentation link you are using. Also, why are you managing the Kerberos keytabs manually? I would let the wizard create them, whether you are on AD or an MIT KDC. That is a much cleaner process.
03-15-2016
01:55 AM
@Sunile Manjee Why not reuse the same ZooKeeper? What is the advantage of a separate ZooKeeper? I have deployed Solr Cloud with the same ZooKeeper as HDP and did not see any issues.
03-14-2016
09:32 PM
1 Kudo
@S Srinivasa Try running HostCleanup to clean your host; it removes any old versions, etc. https://cwiki.apache.org/confluence/display/AMBARI/Host+Cleanup+for+Ambari+and+Stack See if this helps. It has come to my rescue in the past.
03-14-2016
04:56 AM
1 Kudo
@Teng Geok Keh You are getting the following exception: port 4040 is hitting a BindException, which means you already have a Spark context running on your local machine. A simple restart of Spark should solve it for now. Probably the spark-shell is running under another user. 16/03/13 14:07:53 INFO SparkEnv: Registering OutputCommitCoordinator
16/03/13 14:07:53 INFO Server: jetty-8.y.z-SNAPSHOT
16/03/13 14:07:54 WARN AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:463) at sun.nio.ch.Net.bind(Net.java:455)
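A quick way to confirm the diagnosis above, that some other process already holds 4040, is a small port probe. This is a generic sketch, not Spark-specific:

```python
# Check whether the default Spark UI port (4040) is already bound.
import socket

def port_in_use(port, host="0.0.0.0"):
    """Return True if binding the port fails, i.e. another process holds it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return False
    except OSError:  # EADDRINUSE -> the BindException seen in the Spark log
        return True
    finally:
        s.close()

if port_in_use(4040):
    print("4040 busy: another SparkContext (possibly another user's) holds the UI port")
else:
    print("4040 free: safe to start spark-shell")
```

Besides restarting Spark, you can also move the UI out of the way with `spark-shell --conf spark.ui.port=4041`.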
03-12-2016
04:48 AM
2 Kudos
@Madhu V You can add custom metadata. I have written a CLI that uses the Atlas API to do this. The best code sample is QuickStart.java, which is part of the Atlas code base; take a look at that. There is also support for REST calls.
03-10-2016
11:09 PM
2 Kudos
@Sunile Manjee We ran into a similar issue with a customer. To clean up, stop all services and run the cleanup script: https://cwiki.apache.org/confluence/display/AMBARI/Host+Cleanup+for+Ambari+and+Stack
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py
The script will warn: "You have elected to remove all users as well. If it is not intended then use option --skip "users". Do you want to continue [y/n] (n)". Run the above on each host. Next, do an Ambari reset and follow the steps that Scott mentioned.
03-10-2016
08:04 PM
2 Kudos
@Ram D Did you take a look at this? https://slider.incubator.apache.org/docs/manpage.html Under "Commands for testing" (operations that are there primarily for testing): kill-container <name> --id container-id kills a YARN container belonging to the application. This is useful primarily for testing resilience to failures. Container IDs can be determined from the application instance status JSON document.
03-10-2016
07:51 PM
1 Kudo
java.net.ConnectException: Connection refused - This is a connection error, and it is unclear why you are getting it. Look at the HiveServer2 logs; it may be that HiveServer2 has tripped. If it is not obvious, take a tcpdump of the traffic and see where it is failing. I hope you do not have to go that far.
03-10-2016
03:26 PM
1 Kudo
@Andrew Watson - The ACID features have been pulled back by the community. They are not currently recommended for customer use.
03-03-2016
06:42 AM
1 Kudo
http://hortonworks.com/hadoop/cloudbreak/ - Check out this video. If you use S3 you should be fine, though you will not get stellar performance; it will be slower than HDFS on local storage. If you like the answer, please hit "Accept" and give a vote :).
03-03-2016
05:02 AM
1 Kudo
@shahab nasir Best is to use the ambari-qa user. It is a special user with super powers 🙂 su ambari-qa Overall, please take the time to understand the Hadoop security model and its user permissions; it is mostly like Unix. The accounts hdfs, yarn, etc. are service accounts that belong to the hadoop group. Spend some time on the Hadoop HDFS section; it will improve your understanding.
03-03-2016
04:59 AM
The tutorial needs to be fixed.
03-03-2016
04:52 AM
3 Kudos
@ARUNKUMAR RAMASAMY In the Hortonworks platform we have Cloudbreak. It is open source: http://sequenceiq.com/cloudbreak-docs/latest/ You can use it to launch clusters on Amazon, Azure, and Google Cloud. It needs a host on which to install the Cloudbreak software, and it then spins up the nodes for you. One thing you have to understand: if you have data in HDFS, it is not easy to bring down nodes, because HDFS will kick off a rebalance, which takes time. An elastic cluster works well when you put detached storage, like blob storage, behind it. Note that scaling up is not an issue; it is scaling down where you will have a rough time :).
03-03-2016
04:37 AM
2 Kudos
@John D. As Divakar pointed out, the Hortonworks products page is a great place to start; it has step-by-step tutorials and an overview of HDP. More generally, Apache Hadoop is a good place to learn: http://hadoop.apache.org/. As for a book, http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/1449311520 is a very good one. Hortonworks is part of the open source community and the only distribution with 100% open source projects. Hadoop has grown in both breadth and depth: traditionally a lot of work was done in MapReduce, and people have now moved toward Hive (the SQL interface) and Spark. If you are starting out as a developer, start with Spark. Coming from C#, picking up Scala will not be hard, and you may enjoy working with Spark. Understanding the MapReduce framework, HDFS, and the internals of YARN is important to be skilled in Hadoop. Best of luck on your journey.
02-25-2016
06:12 PM
1 Kudo
Looking for guidance on setting up Solr with a Kerberized HDP cluster, especially when the indexes are in HDFS.
Labels:
- Apache Hadoop
- Apache Solr
02-23-2016
06:38 PM
1 Kudo
I have done the reading. Let me know if it works for you. The user-to-queue mapping is working, but not the above.