Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 17735 | 03-08-2019 06:33 PM |
| | 7164 | 02-15-2019 08:47 PM |
04-19-2018
09:40 PM
Thanks @Chad Woodhead - Updated! 🙂
02-21-2017
05:26 PM
SYMPTOM: An Oozie Sqoop action fails with the below error while inserting data into Hive.
20217 [Thread-30] INFO org.apache.sqoop.hive.HiveImport - Sorry ! hive-shell is disabled use 'Beeline' or 'Hive View' instead. Please contact cluster administrators for further information
20218 [main] ERROR org.apache.sqoop.tool.ImportTool - Encountered IOException running import job: java.io.IOException: Hive exited with status 1
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:389)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:342)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:246)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:524)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:243)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:298)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:202)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:182)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:242)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

ROOT CAUSE: Sqoop's Hive import uses the Hive CliDriver class rather than the hive shell script. Oozie could not find that class on the action's classpath, so it fell back to invoking the hive CLI, which is disabled on this cluster.

WORKAROUND: N/A

RESOLUTION: Add the below property to the job.properties file and re-run the failed Oozie workflow.
oozie.action.sharelib.for.sqoop=sqoop,hive
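For reference, a minimal job.properties sketch with the sharelib property in place might look like the following. The nameNode and jobTracker entries are illustrative placeholders, not values taken from the original workflow:

nameNode=hdfs://<namenode-host>:8020
jobTracker=<resourcemanager-host>:8050
oozie.use.system.libpath=true
oozie.action.sharelib.for.sqoop=sqoop,hive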
02-07-2017
07:08 PM
PROBLEM: Ambari Server fails to start because of database inconsistencies. Sample error:
2017-02-06 05:08:43,975 ERROR - You have non selected configs: zeppelin-ambari-config for service ZEPPELIN from cluster XXXX!
2017-02-06 05:08:43,976 INFO - ******************************* Check database completed *******************************
2017-02-06 05:10:12,834 INFO - Checking DB store version
2017-02-06 05:10:14,094 INFO - DB store version is compatible
2017-02-07 13:50:31,769 INFO - ******************************* Check database started *******************************
2017-02-07 13:50:41,247 INFO - Checking for configs not mapped to any cluster
2017-02-07 13:50:41,322 INFO - Checking for configs selected more than once
2017-02-07 13:50:41,326 INFO - Checking for hosts without state
2017-02-07 13:50:41,330 INFO - Checking host component states count equals host component desired states count
2017-02-07 13:50:41,334 INFO - Checking services and their configs
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / SQOOP
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / HDFS
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / MAPREDUCE2
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / TEZ
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / SPARK
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / HBASE
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / ZOOKEEPER
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / YARN
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / KNOX
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / PIG
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / RANGER
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / HIVE
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / SLIDER
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / AMBARI_INFRA
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / KAFKA
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / SMARTSENSE
2017-02-07 13:50:45,809 ERROR - You have non selected configs: zeppelin-ambari-config for service ZEPPELIN from cluster XXXXX!
2017-02-07 13:50:45,810 INFO - ******************************* Check database completed *******************************

BUSINESS IMPACT: It is not recommended to make any changes to service configurations while the backend database is inconsistent.

WORKAROUND: ambari-server start --skip-database-check
Note - This is not recommended for production clusters. If you do this, please do not make any modifications to service configurations until you resolve the conflicts.

RESOLUTION:
1. Stop the Ambari Server: ambari-server stop
2. Take a backup of the Ambari database. For Postgres, use the pg_dump command; for MySQL, use the mysqldump command.
3. Run the below queries to resolve the conflicts:
delete from hostcomponentstate where service_name = 'ZEPPELIN';
delete from hostcomponentdesiredstate where service_name = 'ZEPPELIN';
delete from servicecomponentdesiredstate where service_name = 'ZEPPELIN';
delete from servicedesiredstate where service_name = 'ZEPPELIN';
delete from serviceconfighosts where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZEPPELIN');
delete from serviceconfigmapping where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZEPPELIN');
delete from serviceconfig where service_name = 'ZEPPELIN';
delete from requestresourcefilter where service_name = 'ZEPPELIN';
delete from requestoperationlevel where service_name = 'ZEPPELIN';
delete from clusterservices where service_name ='ZEPPELIN';
delete from clusterconfig where type_name like 'zeppelin%';
delete from clusterconfigmapping where type_name like 'zeppelin%';
4. Start the Ambari Server; it should now come up without any inconsistencies. (An example backup-and-cleanup sequence for a Postgres-backed Ambari is sketched at the end of this post.)

Please feel free to comment if you need any further help on this. Happy Hadooping!!
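As referenced above, for a Postgres-backed Ambari, steps 2 and 3 might look like the following. This is a sketch that assumes the default ambari database and user names, and that the delete statements above have been saved to a hypothetical file named zeppelin_cleanup.sql:

# Step 2 - back up the Ambari database before touching it
pg_dump -U ambari ambari > /tmp/ambari_db_backup_$(date +%F).sql
# Step 3 - run the cleanup statements against the Ambari database
psql -U ambari -d ambari -f zeppelin_cleanup.sql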
03-06-2017
08:38 PM
@Georg Heiler - Yes. Please refer to the below curl command for the same:
curl -H "X-Requested-By: ambari" -X GET -u <admin-user>:<admin-password> http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?format=blueprint
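For example, to save the exported blueprint to a file for later reuse (the output filename here is just illustrative):

curl -H "X-Requested-By: ambari" -X GET -u <admin-user>:<admin-password> "http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?format=blueprint" -o cluster_blueprint.json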
01-06-2017
05:22 PM
Do you know whether the /usr/hdp/smartsense directory was created by the service or created manually?
12-21-2016
05:23 PM
2 Kudos
SYMPTOM: The below error occurs on SUSE Linux while installing the new HDP version packages, prior to upgrading to the latest HDP version.
2016-12-21 13:46:47,919 - Package Manager failed to install packages. Error: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm livy_2_3_2_0_2950' returned 104. File 'repomd.xml' from repository 'AMBARI-2.4.1.0.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'AMBARI-2.4.1.0.repo' because of the above error.
File 'repomd.xml' from repository 'HDP.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'HDP.repo' because of the above error.
No provider of 'livy_2_3_2_0_2950' found.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 376, in install_packages
retry_count=agent_stack_retry_count
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 58, in action_upgrade
self.upgrade_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 62, in upgrade_package
return self.install_package(name, use_repos, skip_repos, is_upgrade)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 57, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm livy_2_3_2_0_2950' returned 104. File 'repomd.xml' from repository 'AMBARI-2.4.1.0.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'AMBARI-2.4.1.0.repo' because of the above error.
File 'repomd.xml' from repository 'HDP.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:

ROOT CAUSE: This is a bug, reported under https://issues.apache.org/jira/browse/AMBARI-19186, that affects SUSE Linux when an unsigned repository is used.

WORKAROUND: N/A

RESOLUTION: Apply the patch given at https://issues.apache.org/jira/browse/AMBARI-19186. Steps to apply the patch:
1. Take a backup of /usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py
2. Edit /usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py with your favorite editor (I use vim).
3. Find the line with "--installed-only", e.g.: ["sudo", "zypper", "search", "--installed-only", "--details"],
4. Replace it with: ["sudo", "zypper", "--no-gpg-checks", "search", "--installed-only", "--details"],
5. Find the line with "--uninstalled-only": ["sudo", "zypper", "search", "--uninstalled-only", "--details"],
6. Replace it with: ["sudo", "zypper", "--no-gpg-checks", "search", "--uninstalled-only", "--details"],
(If you prefer to script this edit, see the sed sketch at the end of this post.)

Note - If the host where you are having this issue is an ambari-agent, you only need to apply the patch to the below file:
/usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py
If the host where you are having the issue is the ambari-server, you need to apply the patch to the below files:
/usr/lib/ambari-server/lib/resource_management/libraries/functions/packages_analyzer.py
/usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py

Hope this information helps! Please comment if you have any questions. Happy Hadooping!! 🙂
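As referenced above, a sed one-liner along these lines should perform steps 3-6 in one pass. This is a sketch that assumes the file still contains the stock command lists shown above; the -i.bak flag keeps a backup copy of the original file:

sed -i.bak 's/"zypper", "search"/"zypper", "--no-gpg-checks", "search"/g' /usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py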
12-20-2016
02:18 PM
3 Kudos
SYMPTOM: Running a Java action via an Oozie workflow fails with the below error: Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, Could not find Yarn tags property (mapreduce.job.tags)
java.lang.RuntimeException: Could not find Yarn tags property (mapreduce.job.tags)
at org.apache.oozie.action.hadoop.LauncherMainHadoopUtils.getChildYarnJobs(LauncherMainHadoopUtils.java:52)
at org.apache.oozie.action.hadoop.LauncherMainHadoopUtils.killChildYarnJobs(LauncherMainHadoopUtils.java:87)
at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:44)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:38)
at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:36)

ROOT CAUSE: Missing or conflicting YARN-related jar files in the Oozie sharelib.

RESOLUTION: Complete the following steps as the oozie user on the Oozie node:
1. Recreate the Oozie sharelib using the below command:
/usr/hdp/<hdp-version>/oozie/bin/oozie-setup.sh sharelib create -locallib /usr/hdp/<hdp-version>/oozie/oozie-sharelib.tar.gz -fs hdfs://<namenode-host>:8020
2. Update the Oozie sharelib using the below command:
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate
3. Restart the Oozie service using Ambari and resubmit the workflow.

Note - If you have put any custom jars in the Oozie sharelib, please make sure to copy them back again after re-creating the sharelib.
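After the sharelib update, you can likely confirm that the refreshed sharelib is registered before resubmitting the workflow by listing the available sharelibs (same host and port placeholders as above):

oozie admin -oozie http://<oozie-host>:11000/oozie -shareliblist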
12-20-2016
02:02 PM
2 Kudos
SYMPTOM: Beeline fails with the below error: $ beeline --verbose
Beeline version 0.14.0.2.2.6.0-2800 by Apache Hive
beeline> !connect jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM
scan complete in 8ms
Connecting to jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM
Enter username for jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM: kuldeepk
Enter password for jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.6.0-2800/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.6.0-2800/hive/lib/hive-jdbc-0.14.0.2.2.6.0-2800-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/02/02 00:35:55 [main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: No common protection layer between client and server
at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:252)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:187)
at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:507)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:264)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:163)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:138)
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
at org.apache.hive.beeline.Commands.connect(Commands.java:1078)
at org.apache.hive.beeline.Commands.connect(Commands.java:999)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:45)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:936)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:801)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:762)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:476)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:459)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAc

ROOT CAUSE: SSL had been enabled for HiveServer2 on this cluster. The customer later disabled it but forgot to revert the below property:
hive.server2.thrift.sasl.qop=auth-conf

WORKAROUND: N/A

RESOLUTION: Revert the value of this property as below via Ambari and restart the required services.
hive.server2.thrift.sasl.qop=auth
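As an aside, the "No common protection layer between client and server" message indicates a SASL QOP mismatch between client and server. Until the server property is reverted, a Beeline client can in principle request a matching protection level explicitly via the saslQop JDBC parameter; the example below reuses the same host and principal from the article and is only an illustration, not a substitute for the resolution above:

beeline -u "jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM;saslQop=auth-conf"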
02-14-2018
05:55 PM
Hi ... Thank you for the post. Is there a way to add node labels and queues through the Java API? We are planning to add node labels and queues on demand, based on job submission.
11-01-2018
01:01 PM
Hi Kuldeep, I updated the MySQL connector jar to mysql-connector-java-5.1.41-bin.jar.
HDP: HDP-2.6.5.0
MySQL: mysql Ver 14.14 Distrib 5.1.73
Performed the above steps and restarted the ambari-server, ambari-agent, hiveserver2, and hive metastore components. However, I am still getting the same error in the logs.
jdbc:hive2://hdpmaster1-dev.<domain>.c> show databases;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
at org.apache.hadoop.hive.metastore.ObjectStore.getDatabases(ObjectStore.java:826)
at org.apache.hadoop.hive.metastore.ObjectStore.getAllDatabases(ObjectStore.java:842)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
at com.sun.proxy.$Proxy8.getAllDatabases(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_all_databases(HiveMetaStore.java:1270)