Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14996 | 03-08-2019 06:33 PM |
| | 6178 | 02-15-2019 08:47 PM |
| | 5098 | 09-26-2018 06:02 PM |
| | 12591 | 09-07-2018 10:33 PM |
| | 7446 | 04-25-2018 01:55 AM |
09-22-2017
09:31 AM
Hi, I added a Spark node to an Oozie workflow, but I am getting this error every time: { reason: Main class [org.apache.oozie.action.hadoop.SparkMain], exit code [101]
01-17-2017
05:47 PM
Thank you so much @rnettleton. So there is no alternative apart from having the passwords set in the cluster creation template. We can always change those passwords after the cluster installation is done (just for security purposes).
01-09-2017
04:43 AM
@Jay SenSharma yes, SASL is enabled (set to true) in hive-site.xml, but it is still showing the error.
01-06-2017
05:22 PM
Do you know whether the /usr/hdp/smartsense directory is created by the service or created manually?
12-21-2016
05:23 PM
2 Kudos
SYMPTOM
We get the below error while installing the new HDP version packages, before upgrading to the latest HDP version, on SUSE Linux:
2016-12-21 13:46:47,919 - Package Manager failed to install packages. Error: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm livy_2_3_2_0_2950' returned 104. File 'repomd.xml' from repository 'AMBARI-2.4.1.0.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'AMBARI-2.4.1.0.repo' because of the above error.
File 'repomd.xml' from repository 'HDP.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'HDP.repo' because of the above error.
No provider of 'livy_2_3_2_0_2950' found.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 376, in install_packages
retry_count=agent_stack_retry_count
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 58, in action_upgrade
self.upgrade_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 62, in upgrade_package
return self.install_package(name, use_repos, skip_repos, is_upgrade)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 57, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm livy_2_3_2_0_2950' returned 104. File 'repomd.xml' from repository 'AMBARI-2.4.1.0.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:
[|] Valid metadata not found at specified URL(s)
Warning: Disabling repository 'AMBARI-2.4.1.0.repo' because of the above error.
File 'repomd.xml' from repository 'HDP.repo' is unsigned, continue? [yes/no] (no): no
Error building the cache:

ROOT CAUSE
This is a bug, reported under https://issues.apache.org/jira/browse/AMBARI-19186, that affects SUSE Linux when an unsigned repository is used.
WORKAROUND
N/A

RESOLUTION
Apply the patch given at https://issues.apache.org/jira/browse/AMBARI-19186.

Steps to apply the patch (a scripted version follows at the end of this article):
1. Take a backup of /usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py
2. Edit /usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py with your favorite editor (I use vim).
3. Find the line with "--installed-only", e.g.:
["sudo", "zypper", "search", "--installed-only", "--details"],
4. Replace it with:
["sudo", "zypper", "--no-gpg-checks", "search", "--installed-only", "--details"],
5. Find the line with "--uninstalled-only":
["sudo", "zypper", "search", "--uninstalled-only", "--details"],
6. Replace it with:
["sudo", "zypper", "--no-gpg-checks", "search", "--uninstalled-only", "--details"],

Note - If the host where you are having this issue is an ambari-agent, you only need to apply the patch to the below file:
/usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py
If the host where you are having the issue is the ambari-server, you need to apply the patch to both of the below files:
/usr/lib/ambari-server/lib/resource_management/libraries/functions/packages_analyzer.py
/usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py

Hope this information helps! Please comment if you have any questions. Happy Hadooping!! 🙂
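Scripted version of the patch steps above - a minimal sketch using sed, assuming the default ambari-agent install path (adjust the path for ambari-server); verify the resulting lines before restarting the agent:

```bash
#!/bin/bash
# Sketch: apply the AMBARI-19186 workaround on one ambari-agent host.
FILE=/usr/lib/ambari-agent/lib/resource_management/libraries/functions/packages_analyzer.py

# Keep a backup so the change can be reverted after upgrading Ambari.
cp "$FILE" "$FILE.bak"

# Insert --no-gpg-checks before 'search' in both zypper invocations.
sed -i 's/"zypper", "search", "--installed-only"/"zypper", "--no-gpg-checks", "search", "--installed-only"/' "$FILE"
sed -i 's/"zypper", "search", "--uninstalled-only"/"zypper", "--no-gpg-checks", "search", "--uninstalled-only"/' "$FILE"

# Confirm the edits took effect.
grep -n 'no-gpg-checks' "$FILE"
```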
12-29-2016
02:45 PM
Thank you so much @irfan aziz for the confirmation. I'm accepting the answer given by @Michael Young. Please feel free to accept the appropriate answer if required.
12-20-2016
02:18 PM
3 Kudos
SYMPTOM
Running a Java action via an Oozie workflow fails with the below error:
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, Could not find Yarn tags property (mapreduce.job.tags)
java.lang.RuntimeException: Could not find Yarn tags property (mapreduce.job.tags)
at org.apache.oozie.action.hadoop.LauncherMainHadoopUtils.getChildYarnJobs(LauncherMainHadoopUtils.java:52)
at org.apache.oozie.action.hadoop.LauncherMainHadoopUtils.killChildYarnJobs(LauncherMainHadoopUtils.java:87)
at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:44)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:38)
at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:36)

ROOT CAUSE
A missing YARN-related jar file, or a jar conflict, in the Oozie sharelib.

RESOLUTION
Complete the following steps as the oozie user on the Oozie node (a consolidated sketch follows these steps):
1. Recreate the Oozie sharelib using the below command:
/usr/hdp/<hdp-version>/oozie/bin/oozie-setup.sh sharelib create -locallib /usr/hdp/<hdp-version>/oozie/oozie-sharelib.tar.gz -fs hdfs://<namenode-host>:8020
2. Update the Oozie sharelib using the below command:
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate
3. Restart the Oozie service using Ambari and resubmit the workflow.

Note - If you have put any custom jars in the Oozie sharelib, please make sure to copy them back after re-creating the sharelib.
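For convenience, the same steps as a single shell sketch. The version string and hostnames below are placeholders (assumptions), not values from any particular cluster; run as the oozie user:

```bash
#!/bin/bash
# Sketch of the Oozie sharelib recreate/update steps.
HDP_VERSION=2.5.3.0-37          # assumption: substitute your HDP version
NAMENODE_HOST=nn1.example.com   # assumption: substitute your NameNode host
OOZIE_HOST=oozie1.example.com   # assumption: substitute your Oozie host

# 1. Recreate the sharelib on HDFS from the local tarball.
/usr/hdp/${HDP_VERSION}/oozie/bin/oozie-setup.sh sharelib create \
  -locallib /usr/hdp/${HDP_VERSION}/oozie/oozie-sharelib.tar.gz \
  -fs hdfs://${NAMENODE_HOST}:8020

# 2. Tell the running Oozie server to pick up the new sharelib.
oozie admin -oozie http://${OOZIE_HOST}:11000/oozie -sharelibupdate

# 3. Restart the Oozie service from Ambari, then resubmit the workflow.
```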
12-20-2016
02:02 PM
2 Kudos
SYMPTOM
Beeline fails with the below error:
$ beeline --verbose
Beeline version 0.14.0.2.2.6.0-2800 by Apache Hive
beeline> !connect jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM
scan complete in 8ms
Connecting to jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM
Enter username for jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM: kuldeepk
Enter password for jdbc:hive2://prodnode1.crazyadmins.com:10000/default;principal=hive/prodnode1.crazyadmins.com@CRAZYADMINS.COM:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.6.0-2800/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.6.0-2800/hive/lib/hive-jdbc-0.14.0.2.2.6.0-2800-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/02/02 00:35:55 [main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: No common protection layer between client and server
at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:252)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:187)
at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:507)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:264)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:163)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:138)
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
at org.apache.hive.beeline.Commands.connect(Commands.java:1078)
at org.apache.hive.beeline.Commands.connect(Commands.java:999)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:45)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:936)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:801)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:762)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:476)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:459)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAc...

ROOT CAUSE
SSL was enabled on this cluster for HiveServer2. The customer later disabled SSL, but forgot to revert the below property:
hive.server2.thrift.sasl.qop=auth-conf

WORKAROUND
N/A

RESOLUTION
Revert the value of this property as below via Ambari and restart the required services:
hive.server2.thrift.sasl.qop=auth
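After reverting the property via Ambari and restarting HiveServer2, you can verify the effective value on the HiveServer2 host. A minimal check, assuming the usual HDP client config path /etc/hive/conf/hive-site.xml:

```bash
# Print the QOP property and the value line that follows it.
grep -A1 'hive.server2.thrift.sasl.qop' /etc/hive/conf/hive-site.xml
# Expected output:
#   <name>hive.server2.thrift.sasl.qop</name>
#   <value>auth</value>
```

The value auth-conf adds confidentiality (wire encryption) on top of authentication, so it only makes sense while SSL/encryption is in use; with SSL disabled, the client and server cannot agree on a protection layer, which is exactly the "No common protection layer" SASL error above.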
02-14-2018
05:55 PM
Hi ... thank you for the post. Is there a way to add node labels and queues through a Java API? We are planning to add node labels and queues on demand, based on job submission.