Member since: 10-31-2017
Posts: 21
Kudos Received: 0
Solutions: 0
03-06-2019
07:57 AM
@Geoffrey Shelton Okot Hi, thanks, but this does not apply to the Hadoop, HBase and Storm configuration. Kind regards Manfred
03-06-2019
07:34 AM
Hi, I am wondering how to protect the keystore and truststore passwords in ssl-server (ssl-client), hbase-site and storm-site. These passwords are stored in plain text by default, which is not very secure. On our cluster we have some edge nodes (which run Spring Boot web applications with HBase access; for those we let Ambari generate the configuration), master nodes (NameNode, HBase Master, etc.) and slave nodes (DataNode, RegionServer). The applications run as a user that is not part of the hadoop group, and the same goes for the YARN/MapReduce jobs. Those users are kerberized. Since the configuration files are generated by Ambari with a 644 file mode, everybody is able to read those files (and the passwords). So here are my questions: 1) Is there a way to change how the passwords are stored? (e.g. CredentialProvider) 2) May I change the file permissions? (It looks like Ambari resets them at every start.) 3) Should I change my cluster architecture (edge node, client config on master and slave nodes, etc.)? Thanks Kind regards
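Regarding question 1, the Hadoop CredentialProvider API can keep the SSL passwords out of the plain-text XML. A minimal sketch — the keystore path and ownership below are assumptions, adapt them to your cluster:

```shell
# Hypothetical path -- adapt to your cluster. Create a JCEKS keystore
# holding the SSL keystore password, keyed by the property name:
hadoop credential create ssl.server.keystore.password \
  -provider jceks://file/etc/security/ssl-server.jceks

# Hadoop resolves the password from the provider when the clear-text value
# is removed from ssl-server.xml and core-site.xml points at the provider:
#   <property>
#     <name>hadoop.security.credential.provider.path</name>
#     <value>jceks://file/etc/security/ssl-server.jceks</value>
#   </property>

# Lock the keystore down so only the service account can read it:
chown hdfs:hadoop /etc/security/ssl-server.jceks
chmod 400 /etc/security/ssl-server.jceks
```

Note that not every component resolves passwords through the provider path, so this would need to be verified per service (HDFS, HBase, Storm).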
Tags: Security
06-11-2018
09:59 AM
Hi, I already did some research on this subject but I am not satisfied with the responses, so I am asking directly. We have a traditional HDP cluster with an admin node (1), master nodes (2) and worker nodes (15 and growing); Kerberos is enabled. We also have some other nodes in our cluster, like web application nodes (Hadoop access), Spring Batch nodes (Hadoop access), cache nodes (Hadoop access) and some others without Hadoop access. These nodes are not managed by Ambari, which makes the Kerberos configuration a bit more complicated. Would it be a good idea to handle these nodes as edge nodes and add them to Ambari (the client config would then come out of the box)? What does this mean in terms of RAM and CPU to allocate for the Ambari Agent (etc.) on each "edge node"? Thanks Manfred
03-02-2018
02:29 PM
Hi, but what about the service configuration? Even when you re-add your host, Ambari won't recognize the old service configuration. Does this have to be redone when "reinstalling" the cluster? Manfred
03-02-2018
09:03 AM
@Geoffrey Shelton Okot
Hi, it's not the Hadoop data I am worried about; it's how Ambari will handle the configuration. Let's say I am using Ambari 2.5 with HDP 2.6: 1) I need to restore a DB dump from Ambari 2.0. 2) All the configuration changes made since the migration (in Ambari and in the HDP services) will be lost. 3) I guess I then have to remove the whole cluster and reinstall it (without removing the data). Manfred
03-01-2018
02:09 PM
Hi, I am in an uncomfortable situation. Some weeks ago I migrated an HDP 2.2 cluster to HDP 2.6 (going through HDP 2.4 in between). I made a backup of the Oracle DB before each upgrade; I just haven't made one at the end. Two days ago, after a disk crash and a hard reboot of the Oracle instance, my Ambari database has a corrupt datafile. We also lost all the redo logs... So I have a DB dump for HDP 2.2 (Ambari 2.0) and HDP 2.4 (Ambari 2.2), but no redo logs. The Oracle people say I have to re-import the old dump, but in that case my cluster won't work anymore (since Ambari will push the old config to a newer HDP installation). Do I have to remove the whole cluster and reinstall, or is there another solution? Thanks Manfred
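For future upgrades, a final logical export of the Ambari schema right after the last step would avoid this situation. A sketch using Oracle Data Pump — the schema name, credentials and directory object are assumptions:

```shell
# Hypothetical schema/directory names -- adapt to your Oracle instance.
# Export only the Ambari schema so it can be re-imported independently:
expdp system/password schemas=AMBARI directory=DATA_PUMP_DIR \
  dumpfile=ambari_$(date +%Y%m%d).dmp logfile=ambari_exp.log
```

Taking such a dump before and after every Ambari/HDP upgrade step gives a restore point that matches each stack version.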
Labels: Apache Ambari
12-08-2017
12:52 PM
@Venkata Sudheer Kumar M Hi, well, I did this on purpose (not generating a Kerberos ticket). Shouldn't this be done by Hive / Slider themselves before starting the LLAP application? I have not read anything about this in the documentation. My goal was to show that anonymous connections are allowed for reading but not for writing, so I cannot explain this exception :
2017-12-05 12:05:48,058 [main] WARN client.SliderClient - Error deleting registry entry /users/hive/services/org-apache-slider/llap0: org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: `/registry/users/hive/services/org-apache-slider/llap0': Not authorized to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for /registry/users/hive/services/org-apache-slider/llap0
Why does Slider run without Kerberos credentials ?
2017-12-05 12:05:47,992 [main] INFO zk.RegistrySecurity - Enabling ZK sasl client: jaasClientEntry = Client, principal = null, keytab = null
Just to recall: the original problem is that LLAP does not start. I'll post the YARN log later. Cheers
12-05-2017
11:40 AM
@Venkata Sudheer Kumar M I already looked at these topics, but everything is correct. Here is what I think is different in my case :
2017-12-05 12:05:47,992 [main] INFO zk.RegistrySecurity - Enabling ZK sasl client: jaasClientEntry = Client, principal = null, keytab = null
2017-12-05 12:05:48,022 [main] INFO imps.CuratorFrameworkImpl - Starting
2017-12-05 12:05:48,033 [main-SendThread(zer332su.distribution.edf.fr:2181)] WARN zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No key to store Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-12-05 12:05:48,035 [main-EventThread] ERROR curator.ConnectionState - Authentication failed
2017-12-05 12:05:48,045 [main-EventThread] INFO state.ConnectionStateManager - State change: CONNECTED
2017-12-05 12:05:48,058 [main] WARN client.SliderClient - Error deleting registry entry /users/hive/services/org-apache-slider/llap0: org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: `/registry/users/hive/services/org-apache-slider/llap0': Not authorized to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for /registry/users/hive/services/org-apache-slider/llap0
org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: `/registry/users/hive/services/org-apache-slider/llap0': Not authorized to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for /registry/users/hive/services/org-apache-slider/llap0
at org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:385)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:364)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:684)
at org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:160)
at org.apache.slider.client.SliderClient.actionDestroy(SliderClient.java:677)
at org.apache.slider.client.SliderClient.exec(SliderClient.java:379)
at org.apache.slider.client.SliderClient.runService(SliderClient.java:333)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:188)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
Slider is not using Kerberos correctly (no principal / keytab)! As written here :
2017-12-05 12:05:47,992 [main] INFO zk.RegistrySecurity - Enabling ZK sasl client: jaasClientEntry = Client, principal = null, keytab = null
When I log in as the hive user, here is what I get (without a principal / keytab, to test) :
sudo su - hive
klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_xxxx)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server $hostname
Connecting to $hostname
Welcome to ZooKeeper!
JLine support is enabled
WATCHER::
WatchedEvent state:AuthFailed type:None path:null
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
Looking for the ACL of /registry/users/hive/services/org-apache-slider/llap0 :
[zk: $hostname(CONNECTED) 0] getAcl /registry/users/hive/services/org-apache-slider/llap0
'world,'anyone
: r
'sasl,'yarn
: cdrwa
'sasl,'jhs
: cdrwa
'sasl,'hdfs
: cdrwa
'sasl,'rm
: cdrwa
'sasl,'hive
: cdrwa
'sasl,'hive/hostname@REALM
: cdrwa
When the Slider application is started, shouldn't it use the keytab of the hive user? @Matt Andruff I am currently testing with Kerberos only; in the end I will use a custom authentication (as I did before). Should I check Slider? Thanks
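To rule out a missing ticket when testing as the hive user, one can obtain a TGT from the service keytab before running zkCli.sh. A sketch — the keytab path and principal are typical HDP defaults, not confirmed from this cluster:

```shell
sudo su - hive
# Keytab path and principal are assumptions -- verify with `klist -kt`:
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)@REALM
klist   # should now show a valid ticket instead of "No credentials cache found"
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server $hostname
```

With a valid ticket, zkCli.sh should authenticate via SASL as `hive`, which the getAcl output above shows has `cdrwa` on the llap0 znode.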
11-30-2017
01:44 PM
Hi, I get this error when running Hive Interactive Server : 2017-11-30 12:22:49,298 [main] INFO tools.SliderUtils - JVM initialized into secure mode with kerberos realm DOMAIN
2017-11-30 12:22:50,202 [main] WARN shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2017-11-30 12:22:50,345 [main] INFO client.AHSProxy - Connecting to Application History server at host/10.121.206.118:10200
2017-11-30 12:22:50,705 [main] INFO client.RequestHedgingRMFailoverProxyProvider - Looking for the active RM in [rm1, rm2]...
2017-11-30 12:22:50,840 [main] INFO client.RequestHedgingRMFailoverProxyProvider - Found active RM [rm1]
2017-11-30 12:22:50,850 [main] INFO client.SliderClient - Cluster llap0 is in a terminated state FAILED
2017-11-30 12:22:50,852 [main] INFO util.ExitUtil - Exiting with status 0
2017-11-30 12:22:53,280 [main] INFO tools.SliderUtils - JVM initialized into secure mode with kerberos realm DOMAIN
2017-11-30 12:22:54,061 [main] WARN shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2017-11-30 12:22:54,168 [main] INFO client.AHSProxy - Connecting to Application History server at host/10.121.206.118:10200
2017-11-30 12:22:54,205 [main] INFO client.RequestHedgingRMFailoverProxyProvider - Looking for the active RM in [rm1, rm2]...
2017-11-30 12:22:54,463 [main] INFO client.RequestHedgingRMFailoverProxyProvider - Found active RM [rm1]
2017-11-30 12:22:54,546 [main] INFO zk.ZKIntegration - Binding ZK client to host:2181,host:2181,host:2181
2017-11-30 12:22:54,566 [main] INFO zk.BlockingZKWatcher - waiting for ZK event
2017-11-30 12:22:54,600 [main-EventThread] INFO zk.BlockingZKWatcher - ZK binding callback received
2017-11-30 12:22:54,631 [main] INFO zk.RegistrySecurity - Enabling ZK sasl client: jaasClientEntry = Client, principal = null, keytab = null
2017-11-30 12:22:54,655 [main] INFO imps.CuratorFrameworkImpl - Starting
2017-11-30 12:22:54,663 [main-SendThread(host:2181)] WARN zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No key to store Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-11-30 12:22:54,666 [main-EventThread] ERROR curator.ConnectionState - Authentication failed
2017-11-30 12:22:54,671 [main-EventThread] INFO state.ConnectionStateManager - State change: CONNECTED
2017-11-30 12:22:54,682 [main] WARN client.SliderClient - Error deleting registry entry /users/hive/services/org-apache-slider/llap0: org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: `/registry/users/hive/services/org-apache-slider/llap0': Not authorized to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for /registry/users/hive/services/org-apache-slider/llap0
org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: `/registry/users/hive/services/org-apache-slider/llap0': Not authorized to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for /registry/users/hive/services/org-apache-slider/llap0
at org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:385)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:364)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:684)
at org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:160)
at org.apache.slider.client.SliderClient.actionDestroy(SliderClient.java:677)
at org.apache.slider.client.SliderClient.exec(SliderClient.java:379)
at org.apache.slider.client.SliderClient.runService(SliderClient.java:333)
at org.apache.slider.core.main.ServiceLauncher.launchService(ServiceLauncher.java:188)
at org.apache.slider.core.main.ServiceLauncher.launchServiceRobustly(ServiceLauncher.java:475)
at org.apache.slider.core.main.ServiceLauncher.launchServiceAndExit(ServiceLauncher.java:403)
at org.apache.slider.core.main.ServiceLauncher.serviceMain(ServiceLauncher.java:630)
at org.apache.slider.Slider.main(Slider.java:49)
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /registry/users/hive/services/org-apache-slider/llap0
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.apache.curator.framework.imps.DeleteBuilderImpl$5.call(DeleteBuilderImpl.java:238)
at org.apache.curator.framework.imps.DeleteBuilderImpl$5.call(DeleteBuilderImpl.java:233)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
at org.apache.curator.framework.imps.DeleteBuilderImpl.pathInForeground(DeleteBuilderImpl.java:230)
at org.apache.curator.framework.imps.DeleteBuilderImpl.forPath(DeleteBuilderImpl.java:214)
at org.apache.curator.framework.imps.DeleteBuilderImpl.forPath(DeleteBuilderImpl.java:41)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:680)
... 9 more
2017-11-30 12:22:54,690 [main] INFO client.SliderClient - Destroyed cluster llap0 Looks like some Kerberos properties were not properly set ? Has anyone already seen this error ? Thanks Manfred
Labels: Apache Hive
11-21-2017
08:12 AM
Hi, I installed an HDP 2.6.3 cluster using only a Java 7 installation. The problem is that when Hive starts I get the following error : Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 211, in <module>
HiveMetastore().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 61, in start
create_metastore_schema()
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 382, in create_metastore_schema
user = params.hive_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.3.0-235/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.3.0-235/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/hive/ql/log/NullAppender : Unsupported major.minor version 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:442)
at java.net.URLClassLoader.access$100(URLClassLoader.java:64)
at java.net.URLClassLoader$1.run(URLClassLoader.java:354)
at java.net.URLClassLoader$1.run(URLClassLoader.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:347)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:312)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.logging.log4j.core.config.plugins.util.PluginRegistry.decodeCacheFiles(PluginRegistry.java:181)
at org.apache.logging.log4j.core.config.plugins.util.PluginRegistry.loadFromMainClassLoader(PluginRegistry.java:119)
at org.apache.logging.log4j.core.config.plugins.util.PluginManager.collectPlugins(PluginManager.java:132)
at org.apache.logging.log4j.core.pattern.PatternParser.<init>(PatternParser.java:131)
at org.apache.logging.log4j.core.pattern.PatternParser.<init>(PatternParser.java:112)
at org.apache.logging.log4j.core.layout.PatternLayout.createPatternParser(PatternLayout.java:209)
at org.apache.logging.log4j.core.layout.PatternLayout.createSerializer(PatternLayout.java:123)
at org.apache.logging.log4j.core.layout.PatternLayout.<init>(PatternLayout.java:111)
at org.apache.logging.log4j.core.layout.PatternLayout.<init>(PatternLayout.java:58)
at org.apache.logging.log4j.core.layout.PatternLayout$Builder.build(PatternLayout.java:494)
at org.apache.logging.log4j.core.config.AbstractConfiguration.setToDefault(AbstractConfiguration.java:548)
at org.apache.logging.log4j.core.config.DefaultConfiguration.<init>(DefaultConfiguration.java:47)
at org.apache.logging.log4j.core.LoggerContext.<init>(LoggerContext.java:74)
at org.apache.logging.log4j.core.selector.ClassLoaderContextSelector.createContext(ClassLoaderContextSelector.java:171)
at org.apache.logging.log4j.core.selector.ClassLoaderContextSelector.locateContext(ClassLoaderContextSelector.java:145)
at org.apache.logging.log4j.core.selector.ClassLoaderContextSelector.getContext(ClassLoaderContextSelector.java:70)
at org.apache.logging.log4j.core.selector.ClassLoaderContextSelector.getContext(ClassLoaderContextSelector.java:57)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:147)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:301)
at org.apache.hadoop.util.RunJar.<clinit>(RunJar.java:54) Looks like Hive needs Java 8. Kind regards Manfred
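The `Unsupported major.minor version 52.0` message confirms this: class file version 52 is Java 8, so Hive 2 in HDP 2.6.3 needs a JDK 8 runtime. A sketch of the fix — the JDK path below is hypothetical:

```shell
# Confirm which JVM the services currently use:
java -version   # a "1.7.0_xx" runtime will trigger "Unsupported major.minor version 52.0"

# Point Ambari at a JDK 8 installation (path is an assumption) and restart,
# then restart the affected services so they pick up the new JAVA_HOME:
ambari-server setup -j /usr/jdk64/jdk1.8.0_112
ambari-server restart
```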
Labels: Apache Hive
11-14-2017
10:20 AM
Thanks, but how should I handle the configuration upgrade (in case of an Express or Rolling Upgrade)? I forgot to mention that we have customized some properties (for example "hadoop-env template"); we added, for example, JMX for all NameNodes/DataNodes, HBase, etc. In this case Ambari won't upgrade those properties, since they have been modified. And here comes the problem: as the properties (hadoop_env, hbase_env, ...) have changed to use Java 8, how should I handle mine? Here is what I could do:
- upgrade Ambari from 2.0 to 2.2
- change the config to the recommended values (I don't know if this feature is already available in 2.2), e.g. Java 8 support
- use an express upgrade to move HDP from 2.2 to 2.4
- upgrade Ambari from 2.2 to 2.5 (or 2.6)
- change the config to the recommended values (do I need this?)
- use an express upgrade to move HDP from 2.4 to 2.6
- apply the custom config on top of the HDP 2.6 cluster using Ambari 2.5 (or 2.6)
If I have to do this for all 15 clusters, it will take some time, so I am looking into automation as well. Thank you
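For scripting the per-cluster config overrides across 15 clusters, the Ambari REST API (or the configs.sh helper shipped with Ambari) can apply each change non-interactively. A sketch — the Ambari host, cluster name, property and value are assumptions:

```shell
# configs.sh ships with Ambari under /var/lib/ambari-server/resources/scripts.
# Dump the current hadoop-env config type for inspection:
/var/lib/ambari-server/resources/scripts/configs.sh \
  -u admin -p admin get localhost MYCLUSTER hadoop-env

# Set a single property (hypothetical key/value), creating a new config version:
/var/lib/ambari-server/resources/scripts/configs.sh \
  -u admin -p admin set localhost MYCLUSTER hadoop-env hadoop_heapsize 2048
```

Looping such calls over a list of clusters would apply the same customizations everywhere without hundreds of clicks.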
11-14-2017
07:49 AM
Hi everybody, I am currently working on a cluster migration from HDP 2.2 to HDP 2.6. All in all I will have to migrate 15 different clusters, so I am thinking about automation. I know blueprints, which can be used for the initial cluster creation but not for cluster modifications, so I am not sure how to do the migrations. Here is what I need to do:
- upgrade from HDP 2.2 to HDP 2.4, and from HDP 2.4 to HDP 2.6 (this is OK)
- migrate the configuration from HDP 2.2 to the default values — or should I do the upgrade first? So: how do I restore the default (recommended) values without doing hundreds of clicks?
- then apply the specific HDP 2.6 configuration — using which API?
- create blueprints of the HDP 2.6 clusters (for later cluster creation) — with all values, or just the overridden ones?
As you can read, my goal is to have all clusters upgraded with the "correct" configuration. I basically have two kinds of clusters (small and large), so I can reuse the configurations. What do you think? Kind regards Manfred PAUL
Labels: Apache Ambari
11-10-2017
03:10 PM
Hi, since I installed this cluster from scratch, I do not have this jar anywhere (except on the old clusters using HDP 2.6.2 or 2.2.9), so I removed the non-working parameters from the config. Question: is this code needed for Ambari Metrics to work correctly? In my case this won't cause any problems, since I do not use Ambari Metrics. Thanks
11-10-2017
12:44 PM
Hello, this bug is similar to this one: https://community.hortonworks.com/questions/144755/storm-supervisor-and-nimbus-dropping-immediately-a.html. I installed HDP 2.6.3 and Nimbus won't start, since the file /usr/hdp/2.6.3.0-235/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar is missing. I checked the rpm (storm_2_6_3_0_235-1.1.0.2.6.3.0-235.x86_64) and it seems the directory storm-jmxetric (and the files it contains) is missing. When using HDP 2.6.2 I get this :
rpm -qf /usr/hdp/2.6.2.0-205/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar
storm_2_6_2_0_205-1.1.0.2.6.2.0-205.x86_64
which means the jar file comes with the Storm rpm. So I guess this looks like a packaging error in HDP 2.6.3? Kind regards Manfred PAUL
Labels: Apache Storm
11-08-2017
03:37 PM
Hello, I recently did an upgrade from HDP 2.2 to HDP 2.6. Storm has received a major upgrade, and it looks like some things haven't been migrated correctly. When I start a Storm topology I always get an error like this : Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[?:1.7.0_111]
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_111]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_111]
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_111]
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_111]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[?:1.7.0_111]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_111]
at java.lang.Class.forName0(Native Method) ~[?:1.7.0_111]
at java.lang.Class.forName(Class.java:195) ~[?:1.7.0_111]
at org.apache.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:70) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
As you can see, the topology does not find the class in one of the following jars:
ambari-metrics-storm-sink-with-common-2.5.2.0.298.jar
ambari-metrics-storm-sink-legacy-with-common-2.5.2.0.298.jar
But the jars are not included in the classpath, and, as described in this article, the symlink does not exist. Who should create the symlink?
/usr/hdp/current/storm-client/lib/ambari-metrics-storm-sink-with-common-2.5.2.0.298.jar -> /usr/lib/storm/lib/ambari-metrics-storm-sink-with-common-2.5.2.0.298.jar
Does anyone have an idea how to get this running? I can just recreate the symlink, but it looks like there are some things missing. Or should I remove the metrics-related lines from the Storm config? Thanks in advance Kind regards Manfred PAUL
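As a workaround until the packaging question is settled, recreating the missing symlink by hand matches what the article describes; the versions below are the ones from the log and may differ on other installs:

```shell
# Restore the link the topology classpath expects (adjust versions to your install):
ln -s /usr/lib/storm/lib/ambari-metrics-storm-sink-with-common-2.5.2.0.298.jar \
      /usr/hdp/current/storm-client/lib/ambari-metrics-storm-sink-with-common-2.5.2.0.298.jar
```

The alternative, if Ambari Metrics is not used, is removing the `topology.metrics.consumer.register` entries from the Storm config so the sink class is never loaded.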
Labels: Apache Storm
11-07-2017
11:27 AM
Hello, I am currently upgrading a cluster to HDP 2.6.2. Now HDP 2.6.3 has been released and I am wondering if I should update to this version. I have one question about it: must I use Ambari 2.6.0, or can I stay with Ambari 2.5.2? In production I do not like to use .0 versions 🙂 What do you think about it?
10-31-2017
03:47 PM
Hello everybody, I am working on upgrading a couple of HDP clusters from 2.2.9 to 2.6.2. As there is no direct upgrade path, I've chosen to upgrade from 2.2 to 2.4 and then from 2.4 to 2.6. There is also the Ambari version to take care of: I've chosen to upgrade from 2.0.2 to 2.2.2.0 (or 2.2.2.18, or 2.4.3.0), and from 2.2.2.0 to 2.5.2.0. What do you think? Is there a "best" path to choose? Thanks Kind regards Manfred
Labels: Apache Ambari