12-24-2016
07:35 PM
5 Kudos
SYMPTOM: The NameNode failed to start. When troubleshooting a NameNode startup issue, check the logs for GC-related errors, and at the same time compare the number of files/blocks on the cluster against the recommended heap size in the Hortonworks manual: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html
ROOT CAUSE: GC errors were preventing the NameNode from starting. RESOLUTION: Below are the steps taken to resolve the issue:
1. Logged in to the NameNode host CLI.
2. When checked from the CLI using "ps -ef | grep -i namenode", no NameNode process was listed.
3. It appeared the NameNode process was being killed after a specific interval, but Ambari still showed the process state in the UI as "starting".
4. Cancelled the NameNode start operation from the Ambari UI.
5. Tried starting the whole HDFS service while simultaneously running "iostat" against the fsimage disk.
6. In the iostat output, "Blk_read/s" was not showing any value.
7. The NameNode process was still being killed.
8. Enabled debug logging using "export HADOOP_ROOT_LOGGER=DEBUG,console" and ran the command "hadoop namenode".
9. The logs from that command showed the NameNode was hitting a GC issue.
10. We suggested the customer increase the NameNode heap size from 3 GB to 4 GB, and the customer was then able to start the NameNodes.
11. See the NameNode heap recommendations: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html
12. The NameNode heap size was later increased from "5376m" to "8072m", as there were approximately 10 million files on the cluster.
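The debug step above (step 8) can be sketched as a small shell snippet. HADOOP_ROOT_LOGGER is the standard Hadoop log4j override honored by the hadoop launcher scripts; the grep pattern for spotting GC/heap messages is an assumption, not from the original post:

```shell
# Enable Hadoop's DEBUG console logging for the current shell session.
export HADOOP_ROOT_LOGGER=DEBUG,console
echo "logger set to: $HADOOP_ROOT_LOGGER"

# Then start the NameNode in the foreground and watch for GC/heap messages
# (commented out here since it requires a live HDP node):
#   hadoop namenode 2>&1 | grep -iE 'gc|outofmemory|heap'
```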
12-24-2016
07:13 PM
1 Kudo
@Ashnee Sharma Turn off pagination by setting authentication.ldap.pagination.enabled=false in /etc/ambari-server/conf/ambari.properties
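For reference, the change is a one-line addition to Ambari's properties file (default path assumed); restart ambari-server afterwards for it to take effect:

```
# /etc/ambari-server/conf/ambari.properties
authentication.ldap.pagination.enabled=false
```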
12-24-2016
06:59 PM
3 Kudos
@Ashnee Sharma Please check below and let me know if this is what you are looking for. Install expect first:
# yum install expect -y
(or use your Linux distribution's package manager if you're not using CentOS or RHEL)
Then create and run the following expect script:
# cat /tmp/ambari-server-sync-ldap-unattended.sh
#!/usr/bin/expect
set timeout 20
spawn /usr/sbin/ambari-server sync-ldap --groups=/etc/ambari-server/ambari-groups.csv
expect "Enter Ambari Admin login:" { send "admin\n" }
expect "Enter Ambari Admin password:" { send "notTheRealPasswordOfCourse\n" }
interact
If the customer wants the password NOT to be stored in plain text, have them look at something like Ansible, which can decrypt passwords from a file. Let me know if that works for you.
12-24-2016
06:49 PM
2 Kudos
Problem Statement: Running ambari-server sync-ldap --groups=<your file> brings over the groups but not the users in them. ROOT CAUSE: When troubleshooting why the group members were not being synced with FreeIPA, a packet trace helped identify the issue. With ActiveDirectory the user's DN is exposed as an attribute, "distinguishedName"; this is not the case in FreeIPA/RHEL IdM (which uses 389 DS as its directory server implementation). There, the DN is not an attribute on the user and cannot be used in a filter like this: (&(objectClass=posixaccount)(|(dn=uid=dstreev,cn=users,cn=accounts,dc=hdp,dc=local)(uid=uid=dstreev,cn=users,cn=accounts,dc=hdp,dc=local)))
If we want to retrieve a specific object by DN, we have to set the DN as the search base and use a base search scope:
ldapsearch -H ldap://ad.hortonworks.local:389 -x -D "CN=hadoopsvc,CN=Users,dc=hortonworks,dc=local" -W -b "CN=paul,CN=Users,DC=hortonworks,DC=local" -s base -a always "(objectClass=user)"
In this case I'm looking for the user with DN CN=paul,CN=Users,DC=hortonworks,DC=local. My bind user is hadoopsvc, and because this is AD the objectClass is user. RESOLUTION: This is a known bug: https://hortonworks.jira.com/browse/BUG-45536 (an internal Hortonworks link, published here for reference purposes). There is no workaround; per the bug, this is fixed in Ambari version 2.1.3.
12-24-2016
06:10 PM
3 Kudos
SYMPTOM: When Ambari is upgraded to 2.4.x, SmartSense has to be upgraded before upgrading the HDP stack. Failing to do so causes SmartSense to capture a very small bundle (a few KB) containing very little data. ERROR: ERROR 2016-12-01 23:02:02,279 anonymize.py:63 - Execution of command returned 1. Exception in thread "main" java.lang.NullPointerException
at com.hortonworks.smartsense.anonymization.rules.RuleType.from(RuleType.java:47)
at com.hortonworks.smartsense.anonymization.rules.Rules.createRule(Rules.java:161)
at com.hortonworks.smartsense.anonymization.rules.Rules.createRules(Rules.java:148)
at com.hortonworks.smartsense.anonymization.rules.Rules.createRules(Rules.java:132)
at com.hortonworks.smartsense.anonymization.AnonymizerFactory.createAnonymizer(AnonymizerFactory.java:94)
at com.hortonworks.smartsense.anonymization.Main.start(Main.java:202)
at com.hortonworks.smartsense.anonymization.Main.main(Main.java:294)
ERROR 2016-12-01 23:02:02,280 AnonymizeBundleCommand.py:60 - Anonymization failed. Please check logs.
Traceback (most recent call last):
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/command/AnonymizeBundleCommand.py", line 56, in execute
context['bundle_dir'] = anonymizer.anonymize(bundle_dir)
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/anonymize.py", line 64, in anonymize
raise Exception("Anonymization failed.")
ROOT CAUSE: When HDP is upgraded to 2.5.x before the SmartSense upgrade procedure is followed, the smartsense-hst package also gets upgraded to 1.3.x. With the old configurations and anonymization rules, any bundles captured are either empty or contain very little data.
RESOLUTION: Follow the steps below to complete the SmartSense upgrade to 1.3.0.
Step 1
Using the Ambari UI, replace the SmartSense anonymization rules content (under Services > SmartSense > Configs > Data Capture) with the following:
{
"rules":[
{
"name":"ip_address",
"ruleId": "Pattern",
"path":null,
"pattern": "([^a-z0-9\\.]|^)[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}([^a-z0-9\\.\\-]|(\\.[^0-9])|$)",
"extract": "[ :\\/]?([0-9\\.]+)[ :\\/]?",
"excludes": ["hdp-select*.*", "*version.txt"],
"shared":true
},
{
"name":"domain",
"ruleId": "Domain",
"path":null,
"pattern": "$DOMAIN_RULE$",
"shared":true
},
{
"name":"delete_oozie_jdbc_password",
"ruleId": "Property",
"path":"oozie-site.xml",
"property": "oozie.service.JPAService.jdbc.password",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_sqoop_metastore_password",
"ruleId": "Property",
"path":"sqoop-site.xml",
"property": "sqoop.metastore.client.autoconnect.password",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_hive_metastore_password",
"ruleId": "Property",
"path":"hive-site.xml",
"property": "javax.jdo.option.ConnectionPassword",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_s3_accesskey",
"ruleId": "Property",
"path":"core-site.xml",
"property": "fs.s3.awsAccessKeyId",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_s3_secret_accesskey",
"ruleId": "Property",
"path":"core-site.xml",
"property": "fs.s3.awsSecretAccessKey",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_s3n_accesskey",
"ruleId": "Property",
"path":"core-site.xml",
"property": "fs.s3n.awsAccessKeyId",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_s3n_secret_accesskey",
"ruleId": "Property",
"path":"core-site.xml",
"property": "fs.s3n.awsSecretAccessKey",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_azure_account_key",
"ruleId": "Property",
"path":"core-site.xml",
"property": "fs.azure.account.key.*",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"delete_ldap_password",
"ruleId": "Property",
"path":"core-site.xml",
"property": "hadoop.security.group.mapping.ldap.bind.password",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"hide_ssl_client_keystore_pwd",
"ruleId": "Property",
"path":"ssl-client.xml",
"property": "ssl.client.keystore.password",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"hide_ssl_client_truststore_pwd",
"ruleId": "Property",
"path":"ssl-client.xml",
"property": "ssl.client.truststore.password",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"hide_ssl_server_keystore_keypwd",
"ruleId": "Property",
"path":"ssl-server.xml",
"property": "ssl.server.keystore.keypassword",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"hide_ssl_server_keystore_pwd",
"ruleId": "Property",
"path":"ssl-server.xml",
"property": "ssl.server.keystore.password",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"hide_ssl_server_truststore_pwd",
"ruleId": "Property",
"path":"ssl-server.xml",
"property": "ssl.server.truststore.password",
"operation":"REPLACE",
"value":"Hidden"
},
{
"name":"hide_oozie_pwd_in_java_process_info",
"ruleId": "Pattern",
"path":"java_process.txt",
"pattern": "oozie.https.keystore.pass=([^ ]*)",
"extract": "=([^ ]*)",
"shared":false
},
{
"name":"hide_oozie_pwd_in_process_info",
"ruleId": "Pattern",
"path":"pid.txt",
"pattern": "oozie.https.keystore.pass=([^ ]*)",
"extract": "=([^ ]*)",
"shared":false
},
{
"name":"hide_oozie_pwd_in_ambariagent_log",
"ruleId": "Pattern",
"path":"ambari-agent.log",
"pattern": "oozie.https.keystore.pass=([^ ]*)",
"extract": "=([^ ]*)",
"shared":false
},
{
"name":"delete_oozie_https_keystore_pass",
"ruleId": "Pattern",
"path":"oozie-env.cmd",
"pattern":"OOZIE_HTTPS_KEYSTORE_PASS=([^ ]*)",
"extract": "=([^ ]*)",
"shared":false
},
{
"name":"java_process_ganglia_password",
"ruleId": "Pattern",
"path":"java_process.txt",
"pattern":"ganglia_password=([^ ]*)",
"extract": "=([^ ]*)",
"shared":false
},
{
"name":"hide_ssn_from_logs",
"ruleId": "Pattern",
"path":"*\\.log*",
"pattern": "(^|[^0-9x])[0-9x]{3}-[0-9x]{2}-[0-9]{4}($|[^0-9x])",
"extract": "(?<![0-9x])([0-9x-]+)(?![0-9x])",
"shared":true
},
{
"name":"hide_credit_card_from_logs",
"ruleId": "Pattern",
"path":"*\\.log*",
"pattern": "(^|[^0-9x])(18|21|3[04678]|4[0-9x]|5[1-5]|60|65)[0-9x]{2}[- ]([0-9x]{4}[- ]){2}[0-9]{0,4}($|[^0-9x])",
"extract": "(?<![0-9x])([0-9x -]+)(?![0-9x])",
"shared":true
},
{
"name": "hide_knox_ldap_password",
"ruleId": "Property",
"path": "services/KNOX/components/KnoxGateway/DEFAULT/conf/topologies/*.xml",
"property": "main.ldapRealm.contextFactory.systemPassword",
"parentTag": "param",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name": "delete_kms_https_keystore_pass",
"ruleId": "Property",
"path": "kms-log4j.properties",
"property": "KMS_SSL_KEYSTORE_PASS",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name": "hide_kms_keystore_provider_pwd",
"ruleId": "Property",
"path": "kms-site.xml",
"property": "hadoop.security.keystore.JavaKeyStoreProvider.password",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name": "hide_ranger_https_keystore_pwd",
"ruleId": "Property",
"path": "xasecure-policymgr-ssl.xml",
"property": "xasecure.policymgr.clientssl.keystore.password",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name": "hide_ranger_https_truststore_pwd",
"ruleId": "Property",
"path": "xasecure-policymgr-ssl.xml",
"property": "xasecure.policymgr.clientssl.truststore.password",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name":"delete_ranger_webserver_https_keystore_pwd",
"ruleId": "Property",
"path":"ranger_webserver.properties",
"property":"HTTPS_KEYSTORE_PASS",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name":"delete_ranger_webserver_attrib_https_keystore_pwd",
"ruleId": "Property",
"path":"ranger_webserver.properties",
"property":"https.attrib.keystorePass",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name":"delete_ranger_keystore_pwd",
"ruleId": "Property",
"path":"unixauthservice.properties",
"property":"keyStorePassword",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name":"delete_ranger_truststore_pwd",
"ruleId": "Property",
"path":"unixauthservice.properties",
"property":"trustStorePassword",
"operation": "REPLACE",
"value": "Hidden"
},
{
"name": "hide_ambari_ssl_truststore_pwd",
"ruleId": "Property",
"path": "ambari.properties",
"property": "ssl.trustStore.password",
"operation": "REPLACE",
"value": "Hidden"
}
]
}
Step 2 (this assumes the upgrade steps have not been run before): update the capture-level configurations in Ambari by running the following command
# /var/lib/ambari-server/resources/scripts/configs.sh -u <ambari_user_id> -p <ambari_userid_password> set localhost <hdp_cluster_name> capture-levels capture-levels-content '[\n {\n \"name\": \"L1\",\n \"description\": \"Configurations\",\n \"filter\": [\"CONFIGS\"]\n },\n {\n \"name\": \"L2\",\n \"description\": \"Configurations, reports and metrics\",\n \"filter\": [\"CONFIGS\", \"REPORTS\", \"METRICS\"]\n },\n {\n \"name\": \"L3\",\n \"description\": \"Configurations, reports, metrics and Logs\",\n \"filter\": [\"CONFIGS\", \"REPORTS\", \"METRICS\", \"LOGS\", \"DIAGNOSTICS\"]\n }\n]'
If the above command fails, manually update the /etc/hst/conf/capture_levels.json file on the HST server with the content below:
[
{
"name": "L1",
"description": "Configurations",
"filter": ["CONFIGS"]
},
{
"name": "L2",
"description": "Configurations, reports and metrics",
"filter": ["CONFIGS", "REPORTS", "METRICS"]
},
{
"name": "L3",
"description": "Configurations, reports, metrics and Logs",
"filter": ["CONFIGS", "REPORTS", "METRICS", "LOGS", "DIAGNOSTICS"]
}
]
Step 3
Restart the SmartSense service via Ambari to propagate the above changes to all agents.
Step 4
Confirm the smartsense-hst rpm package has been upgraded to version 1.3.0:
# rpm -qi smartsense-hst
Then, on the Ambari server host, as the root user, run the following command:
# hst upgrade-ambari-service
Step 5
Restart Ambari Server
# ambari-server restart
Step 6
Restart SmartSense service via Ambari to propagate final changes to agent hosts.
Step 7
Start Bundle Capture from Ambari SmartSense view and confirm the contents and size of the bundle.
12-24-2016
05:46 PM
4 Kudos
SYMPTOM: We are connected to Hue as the hdfs user. After executing an Oozie workflow from Hue, the workflow fails. Below is a sample job definition:
<workflow-app name="Headlight" xmlns="uri:oozie:workflow:0.4">
<start to="Headlight"/>
<action name="Headlight">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<exec>/user/hdfs/oozie/deployments/headlight-exec-all.pl</exec>
</shell>
<ok to="end"/>
<error to="kill"/>
</action>
<kill name="kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
ERROR: We created and ran a simple test script, but the files it creates are owned by the yarn user, as shown below:
#!/bin/sh
/bin/touch /tmp/test.oozie
The script succeeds, but the file is created by the yarn user:
-rw-r--r-- 1 yarn hadoop 0 Jun 3 03:04 /tmp/test.oozie
ROOT CAUSE: The user expects the Oozie job to create files owned by the user who submits the job. However, shell actions are not allowed to run as another user, as sudo is blocked. If you want a YARN application to run as someone other than yarn (i.e. as the submitter), you need to run in a secured environment so that the containers are started by the submitting user.
RESOLUTION: Configure Kerberos on the cluster and re-run the job.
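To see the behavior described above, a minimal diagnostic shell action can print the OS user the container actually runs as. This script is a hypothetical illustration, not part of the original workflow:

```shell
#!/bin/sh
# Print the OS user this shell action executes as. On an unsecured cluster
# this is the NodeManager user (typically yarn); with Kerberos and a secure
# container executor, it is the submitting user.
RUN_AS="$(id -un)"
echo "shell action running as: $RUN_AS"
```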
12-24-2016
11:33 AM
4 Kudos
PROBLEM: The Ambari service check for Solr fails when the active NameNode is nn2. In the stderr you will see the log below. ERROR: stderr: /var/lib/ambari-agent/data/errors-803.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/SOLR/5.5.2.2.5/package/scripts/service_check.py", line 48, in <module>
ServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/SOLR/5.5.2.2.5/package/scripts/service_check.py", line 43, in service_check
user=params.solr_config_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/opt/lucidworks-hdpsearch/solr/bin/solr create_collection -c collection1 -d data_driven_schema_configs -p 8983 -s 2 -rf 1 >> /var/log/service_solr/solr-service.log 2>&1' returned 1.
Below is the error message from the Solr log:
2016-09-15 17:04:49,886 [qtp1192108080-19] ERROR [ ] org.apache.solr.update.SolrIndexWriter (SolrIndexWriter.java:135) - Error closing IndexWriter
java.net.ConnectException: Call From dummyhost/0.0.0.0 to dummyhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy10.getListing(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:554)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1952)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:693)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
at org.apache.solr.store.hdfs.HdfsDirectory.listAll(HdfsDirectory.java:168)
at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57)
at org.apache.lucene.store.NRTCachingDirectory.listAll(NRTCachingDirectory.java:101)
at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57)
at org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:426)
at org.apache.lucene.index.IndexWriter.rollbackInternalNoCommit(IndexWriter.java:2099)
at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2041)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1083)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1125)
at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:130)
at org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:832)
at org.apache.solr.update.DefaultSolrCoreState.closeIndexWriter(DefaultSolrCoreState.java:85)
at org.apache.solr.update.DefaultSolrCoreState.close(DefaultSolrCoreState.java:358)
at org.apache.solr.update.SolrCoreState.decrefSolrCoreState(SolrCoreState.java:73)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1225)
at org.apache.solr.core.SolrCore.closeAndWait(SolrCore.java:1015)
at org.apache.solr.core.CoreContainer.unload(CoreContainer.java:994)
at org.apache.solr.handler.admin.CoreAdminOperation$2.call(CoreAdminOperation.java:144)
at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:354)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:153)
ROOT CAUSE: This is a product defect, BUG-68180. RESOLUTION: Add the line below to the solr-config-env content in Ambari, placing it below JAVA_HOME; this resolved the issue:
export SOLR_HDFS_CONFIG=/etc/hadoop/conf
12-24-2016
11:25 AM
4 Kudos
SYMPTOM: Ranger is installed and managed using Ambari. Services fail to start with the error below. ERROR: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 185, in <module>
HiveServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 85, in start
setup_ranger_hive(rolling_upgrade=rolling_restart)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/setup_ranger_hive.py", line 50, in setup_ranger_hive
hdp_version_override = hdp_version)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/setup_ranger_plugin_xml.py", line 82, in setup_ranger_plugin
policy_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/ranger_functions.py", line 92, in create_ranger_repository
repo = self.get_repository_by_name_urllib2(repo_name, component, 'true', ambari_username_password_for_ranger)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/ranger_functions.py", line 57, in get_repository_by_name_urllib2
response = json.loads(result.read())
File "/usr/lib/python2.6/site-packages/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.6/site-packages/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.6/site-packages/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/ranger_functions.py", line 108, in create_ranger_repository
raise Fail('Ambari admin username and password are blank ')
resource_management.core.exceptions.Fail: Ambari admin username and password are blank
ROOT CAUSE: This error can occur if the user changes the admin password in the Ranger web UI but neglects to also change it in Ambari's Ranger configs. The error message, however, does not describe the actual problem very well. A bug has been filed against Ambari to provide better error reporting for situations like this: https://issues.apache.org/jira/browse/AMBARI-13346 A fix is scheduled for Ambari version 2.1.3 and higher. RESOLUTION: If the password was indeed changed in the Ranger web UI but not in Ambari, then:
1) Log in to the Ambari web UI.
2) Click on the Ranger service.
3) Click on the Configs tab for the Ranger server.
4) Locate the admin_password parameter in Ranger's Advanced ranger-env section.
5) Update the password to match what was entered in the Ranger web UI.
6) Save the settings, restart the Ranger service, then restart any services that were failing.
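If UI access is inconvenient, the same property can in principle be updated with the configs.sh helper that ships with Ambari of this era (the same script used elsewhere on this page); the host, cluster name, and credentials below are placeholders:

```
# Hypothetical CLI alternative to the UI steps (placeholders in angle brackets):
/var/lib/ambari-server/resources/scripts/configs.sh -u <ambari_admin> -p <ambari_password> \
  set localhost <cluster_name> ranger-env admin_password '<new_ranger_password>'
# Then restart Ranger (and any failing services) via Ambari.
```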
12-24-2016
11:10 AM
4 Kudos
SYMPTOM: Ambari Hive View does not reconnect when its connection to Metastore is interrupted. ERROR: 03 May 2016 13:33:39,172 ERROR qtp-ambari-client-41951 ServiceFormattedException:96 - org.apache.ambari.view.hive.client.HiveClientException: H100 Unable to submit statement show databases like '*': org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
org.apache.ambari.view.hive.client.HiveClientException: H100 Unable to submit statement show databases like '*': org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.ambari.view.hive.client.Connection$3.body(Connection.java:608)
at org.apache.ambari.view.hive.client.Connection$3.body(Connection.java:590)
at org.apache.ambari.view.hive.client.HiveCall.call(HiveCall.java:101)
at org.apache.ambari.view.hive.client.Connection.execute(Connection.java:590)
at org.apache.ambari.view.hive.client.Connection.executeSync(Connection.java:629)
at org.apache.ambari.view.hive.client.DDLDelegator.getDBListCursor(DDLDelegator.java:76)
at org.apache.ambari.view.hive.client.DDLDelegator.getDBList(DDLDelegator.java:65)
at org.apache.ambari.view.hive.resources.browser.HiveBrowserService.databases(HiveBrowserService.java:88)
at sun.reflect.GeneratedMethodAccessor759.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:540)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:715)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:196)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:216)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:205)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:152)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.transport.TSaslTransport.flush(TSaslTransport.java:471)
at org.apache.thrift.transport.TSaslClientTransport.flush(TSaslClientTransport.java:37)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
at org.apache.hive.service.cli.thrift.TCLIService$Client.send_ExecuteStatement(TCLIService.java:219)
at org.apache.hive.service.cli.thrift.TCLIService$Client.ExecuteStatement(TCLIService.java:211)
at org.apache.ambari.view.hive.client.Connection$3.body(Connection.java:606)
... 97 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
ROOT CAUSE: The only way we have found to get a disrupted Hive View connection working again is to restart the Ambari server, which forces the view to create a fresh connection. Ideally the view would reconnect automatically after a set interval, or offer a reconnect button for the customer, since restarting the Ambari server is not a viable long-term solution. This is a bug in Ambari 2.2 - https://hortonworks.jira.com/browse/BUG-57145
RESOLUTION: Upgrading to Ambari 2.4 fixes the issue. As a workaround, restart the Ambari server (ambari-server restart).
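The missing "auto reconnect after a set interval" behavior described above can be sketched as a simple retry loop. This is a hypothetical illustration, not code from the Hive View itself; `retry_connect` and the probe command passed to it are names invented for this sketch.

```shell
# Hypothetical sketch of the reconnect-with-interval behavior the Hive View
# lacks (BUG-57145). The command passed after the two numeric arguments is a
# placeholder connectivity probe; in practice it might be a beeline check.
retry_connect() {
  local attempts=$1; shift
  local delay=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    # Run the probe command; success means the connection is usable again.
    if "$@"; then
      echo "connected on attempt $i"
      return 0
    fi
    sleep "$delay"
  done
  echo "giving up after $attempts attempts"
  return 1
}
```

Until such logic exists in the view, the practical workaround remains `ambari-server restart`, which forces a fresh connection.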
12-24-2016 07:17 AM
SYMPTOM: A Pig script runs fine in grunt but fails when executed in the Hue Pig editor. The script is as simple as:

A = load '/tmp/baseball';
dump A;

ERROR: Below are the error logs -

ROOT CAUSE: The property templeton.libjars was pointing to the wrong jar files: /usr/hdp/${hdp.version}/zookeeper,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar/zookeeper.jar. The first entry is a directory rather than a jar, and the second has zookeeper.jar appended to the hive-common.jar path.

RESOLUTION: After changing "templeton.libjars" to /usr/hdp/${hdp.version}/zookeeper/zookeeper.jar,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar, the script ran successfully in the Hue Pig editor. Please also see - https://community.hortonworks.com/articles/15958/templetonlibjars-property-changed-its-value-after.html
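The broken value above fails exactly because one comma-separated entry is not a jar file. A quick sanity check like the following can catch that before the value is applied; `check_libjars` is a helper name invented for this sketch, not part of any HDP tooling.

```shell
# Hypothetical helper: verify that every comma-separated entry in a
# templeton.libjars value points at a .jar file, not a directory.
check_libjars() {
  local entry ok=0
  IFS=',' read -ra entries <<< "$1"
  for entry in "${entries[@]}"; do
    case "$entry" in
      *.jar) ;;                            # looks like a jar, accept
      *) echo "not a jar: $entry"; ok=1 ;; # directory or malformed entry
    esac
  done
  return $ok
}
```

Run against the bad value from the root cause, it flags the bare /usr/hdp/${hdp.version}/zookeeper directory entry; the corrected value passes.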