Member since: 04-07-2016
Posts: 16
Kudos Received: 2
Solutions: 0
07-25-2016 04:29 PM
I'm accepting this as the current answer, but might I suggest that this is probably not the best way to handle user creation and administration in general. While I understand the requirement for an email address to send notifications, even following that line of thinking there should be no requirement for the email address to be unique, nor for user creation to be based on the idea of an "invitation" rather than the admin directly creating a user. Dropping the email-uniqueness requirement and giving admins the ability to directly create users would make for a much saner, more general-purpose user system, and would facilitate creating utility users for things like CI pipelines.
07-23-2016 01:36 AM
I need to create a user in the Cloudbreak web interface to tie into our deployment pipeline. This user doesn't have a working email address I can use to initiate the user creation. Is there any way to manually set up a user and password in the web interface or via the command line? Thanks.
Labels:
- Hortonworks Cloudbreak
07-21-2016 08:51 PM
1 Kudo
Hey, just wondering if there's a way in Cloudbreak to adjust or template ambari.properties. I'm enabling LDAP integration so my users can get into Ambari. So far I've had to do this manually once the cluster is up; is there any way to template it with the rest of the cluster? Thanks.
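For context, the manual step described above amounts to appending a handful of LDAP properties to ambari.properties on the Ambari server. A minimal Python sketch, assuming the property names that `ambari-server setup-ldap` writes (the host and DN values are placeholders; verify the keys against your Ambari version):

```python
import os
import tempfile

# Property names follow Ambari's LDAP setup; the values are placeholders.
LDAP_PROPS = {
    "authentication.ldap.primaryUrl": "ldap.example.com:389",
    "authentication.ldap.baseDn": "dc=example,dc=com",
    "authentication.ldap.usernameAttribute": "uid",
    "authentication.ldap.useSSL": "false",
    "ambari.ldap.isConfigured": "true",
}

def append_ldap_props(path, props):
    """Append key=value pairs to an ambari.properties-style file."""
    with open(path, "a") as f:
        for key, value in props.items():
            f.write("%s=%s\n" % (key, value))

# Demo against a scratch file; on a real Ambari host you would point
# `path` at /etc/ambari-server/conf/ambari.properties instead.
demo = os.path.join(tempfile.mkdtemp(), "ambari.properties")
append_ldap_props(demo, LDAP_PROPS)
```

A script like this could be dropped in as a Cloudbreak recipe so the edit happens at cluster creation rather than by hand afterwards.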
Labels:
- Apache Ambari
- Hortonworks Cloudbreak
05-19-2016 09:37 PM
Confirmed this does not resolve the issue.
05-19-2016 04:38 PM
I am using CBD. I just tried changing to "export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3" in my Profile, and it appears to be using the exact same 5.1.17 driver:

cat ./META-INF/MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.1
Created-By: 4.4.6 20120305 (Red Hat 4.4.6-4) (Free Software Foundation
, Inc.)
Built-By: mockbuild
Bundle-Vendor: Sun Microsystems Inc.
Bundle-Classpath: .
Bundle-Version: 5.1.17
Bundle-Name: Sun Microsystems' JDBC Driver for MySQL
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.mysql.jdbc
Export-Package: com.mysql.jdbc;version="5.1.17";uses:="com.mysql.jdbc.
log,javax.naming,javax.net.ssl,javax.xml.transform,org.xml.sax",com.m
ysql.jdbc.jdbc2.optional;version="5.1.17";uses:="com.mysql.jdbc,com.m
ysql.jdbc.log,javax.naming,javax.sql,javax.transaction.xa",com.mysql.
jdbc.log;version="5.1.17",com.mysql.jdbc.profiler;version="5.1.17";us
es:="com.mysql.jdbc",com.mysql.jdbc.util;version="5.1.17";uses:="com.
mysql.jdbc.log",com.mysql.jdbc.exceptions;version="5.1.17",com.mysql.
jdbc.exceptions.jdbc4;version="5.1.17";uses:="com.mysql.jdbc",com.mys
ql.jdbc.interceptors;version="5.1.17";uses:="com.mysql.jdbc",com.mysq
l.jdbc.integration.c3p0;version="5.1.17",com.mysql.jdbc.integration.j
boss;version="5.1.17",com.mysql.jdbc.configs;version="5.1.17",org.gjt
.mm.mysql;version="5.1.17"
Import-Package: javax.net,javax.net.ssl;version="[1.0.1, 2.0.0)";resol
ution:=optional,javax.xml.parsers, javax.xml.stream,javax.xml.transfo
rm,javax.xml.transform.dom,javax.xml.transform.sax,javax.xml.transfor
m.stax,javax.xml.transform.stream,org.w3c.dom,org.xml.sax,org.xml.sax
.helpers;resolution:=optional,javax.naming,javax.naming.spi,javax.sql
,javax.transaction.xa;version="[1.0.1, 2.0.0)";resolution:=optional,c
om.mchange.v2.c3p0;version="[0.9.1.2, 1.0.0)";resolution:=optional,or
g.jboss.resource.adapter.jdbc;resolution:=optional,org.jboss.resource
.adapter.jdbc.vendor;resolution:=optional
Name: common
Specification-Title: JDBC
Specification-Version: 4.0
Specification-Vendor: Sun Microsystems Inc.
Implementation-Title: MySQL Connector/J
Implementation-Version: 5.1.17-SNAPSHOT
Implementation-Vendor-Id: com.mysql
Implementation-Vendor: Oracle
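For what it's worth, this version check can be scripted instead of eyeballing the manifest. A small sketch that parses Bundle-Version out of a jar's manifest — the demo constructs a stand-in jar so it is self-contained; on a real container you would read the bytes of the actual mysql-connector jar:

```python
import io
import zipfile

def bundle_version(jar_bytes):
    """Return the Bundle-Version from a jar's META-INF/MANIFEST.MF, or None."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8")
    for line in manifest.splitlines():
        if line.startswith("Bundle-Version:"):
            return line.split(":", 1)[1].strip()
    return None

# Build a stand-in jar for the demo; swap in the real driver jar's bytes.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF",
                 "Manifest-Version: 1.0\nBundle-Version: 5.1.17\n")
version = bundle_version(buf.getvalue())
print(version)  # → 5.1.17
```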
05-13-2016 10:18 PM
I can't recreate the issue outside of Cloudbreak.
05-13-2016 09:48 PM
Is there a "supported" way to do that with Cloudbreak? I think it's baked into the Docker images.
05-13-2016 09:25 PM
Anyone able to shed any light on this? I'm starting up a cluster using Cloudbreak/Ambari with a blueprint and an external metastore DB, but when I try to run very simple operations via beeline (e.g. "show schemas;") I get an error every second try:

0: jdbc:hive2://x.x.x.x:10000> show schemas;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mysql/jdbc/SQLError (state=08S01,code=1)

See the HiveServer2 logs below. If I restart HiveServer2 it just goes away, and I can reproduce this reliably. Any ideas?

2016-05-13 20:36:51,782 INFO [HiveServer2-Background-Pool: Thread-268]: metastore.ObjectStore (ObjectStore.java:initialize(294)) - ObjectStore, initialize called
2016-05-13 20:36:51,786 ERROR [HiveServer2-Background-Pool: Thread-268]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(159)) - java.lang.NoClassDefFoundError: com/mysql/jdbc/SQLError
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3575)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503)
at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173)
at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378)
at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328)
at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94)
at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430)
at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:621)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1786)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
at org.datanucleus.store.query.Query.execute(Query.java:1654)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:192)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:138)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:300)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:263)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:603)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:581)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_all_databases(HiveMetaStore.java:1188)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy11.get_all_databases(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1037)
at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy12.getAllDatabases(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1237)
at org.apache.hadoop.hive.ql.exec.DDLTask.showDatabases(DDLTask.java:2262)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:390)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1728)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1485)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1262)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1121)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-05-13 20:36:51,787 ERROR [HiveServer2-Background-Pool: Thread-268]: exec.DDLTask (DDLTask.java:failed(525)) - java.lang.NoClassDefFoundError: com/mysql/jdbc/SQLError
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3575)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503)
at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173)
at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378)
at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328)
at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94)
at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430)
at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:621)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1786)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
at org.datanucleus.store.query.Query.execute(Query.java:1654)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:192)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:138)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:300)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:263)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:603)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:581)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_all_databases(HiveMetaStore.java:1188)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy11.get_all_databases(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1037)
at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy12.getAllDatabases(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1237)
at org.apache.hadoop.hive.ql.exec.DDLTask.showDatabases(DDLTask.java:2262)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:390)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1728)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1485)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1262)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1121)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-05-13 20:36:51,787 INFO [HiveServer2-Background-Pool: Thread-268]: hooks.ATSHook (ATSHook.java:<init>(90)) - Created ATS Hook
2016-05-13 20:36:51,787 INFO [HiveServer2-Background-Pool: Thread-268]: log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=FailureHook.org.apache.hadoop.hive.ql.hooks.ATSHook from=org.apache.hadoop.hive.ql.Driver>
2016-05-13 20:36:51,787 INFO [HiveServer2-Background-Pool: Thread-268]: log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) - </PERFLOG method=FailureHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1463171811787 end=1463171811787 duration=0 from=org.apache.hadoop.hive.ql.Driver>
2016-05-13 20:36:51,787 ERROR [HiveServer2-Background-Pool: Thread-268]: ql.Driver (SessionState.java:printError(962)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mysql/jdbc/SQLError
2016-05-13 20:36:51,788 INFO [HiveServer2-Background-Pool: Thread-268]: ql.Driver (Driver.java:execute(1629)) - Resetting the caller context to
Labels:
- Apache Hive
- Hortonworks Cloudbreak
05-06-2016 09:10 PM
What I did was write a boto-based Python script that does all the tagging. I download and run it using a recipe, which is as simple as this:

#!/bin/bash
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
curl "https://<repo with my tagging script>/hdp_tagging.py" -o "hdp_tagging.py"
python ./get-pip.py
pip install boto
pip install argparse
python ./hdp_tagging.py --key '' --secret ''
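The actual hdp_tagging.py isn't shown, but a rough outline of such a boto-based tagging script might look like the following. The tag keys, region default, and the Name-tag filter used to find the cluster's instances are illustrative assumptions, not the real script's contents:

```python
def build_tags(cluster_name, owner):
    """Tags to apply to every instance in the cluster (keys are assumptions)."""
    return {"Cluster": cluster_name, "Owner": owner, "ManagedBy": "cloudbreak"}

def tag_cluster(key, secret, cluster_name, owner, region="us-east-1"):
    """Find the cluster's EC2 instances and apply the tags via boto (v2)."""
    import boto.ec2  # third-party; the recipe installs it with pip
    conn = boto.ec2.connect_to_region(
        region, aws_access_key_id=key, aws_secret_access_key=secret)
    # Assumes Cloudbreak-launched instances carry a Name tag containing
    # the cluster name; adjust the filter to match your naming scheme.
    reservations = conn.get_all_reservations(
        filters={"tag:Name": "*%s*" % cluster_name})
    ids = [i.id for r in reservations for i in r.instances]
    if ids:
        conn.create_tags(ids, build_tags(cluster_name, owner))
```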
04-14-2016 08:27 PM
I have a MapReduce2 jar that I need to run on a cluster we're currently spinning up via Cloudbreak. Previously, the machine we ran the jar from would have had all the Hadoop client libraries and config files on it, so we could just run "hadoop jar jarfile.jar" (via a scheduling framework) and we'd be good to go. Now this won't work, as some values (particularly hostnames/IPs) may change. What's the recommended method for submitting work to the cluster from an external client in this situation? Are there API endpoints where we can pull all the necessary configs once the cluster is spun up, or is there an API we can directly submit the jar to? Thanks.
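One hedged option: Ambari's REST API can return a cluster's client configs as a tarball, which an external box can unpack and point "hadoop jar" at. A sketch, where the host, cluster name, credentials, and service/component choice are placeholders, and the `format=client_config_tar` endpoint should be checked against your Ambari version:

```python
import base64
import urllib.request

def client_config_url(ambari_host, cluster, service, component):
    """Build the Ambari client-config download URL (port 8080 assumed)."""
    return ("http://%s:8080/api/v1/clusters/%s/services/%s/components/%s"
            "?format=client_config_tar"
            % (ambari_host, cluster, service, component))

def fetch_client_configs(ambari_host, cluster, user, password, out_path):
    """Download e.g. the YARN client config tarball to out_path."""
    url = client_config_url(ambari_host, cluster, "YARN", "YARN_CLIENT")
    req = urllib.request.Request(url)
    req.add_header("X-Requested-By", "ambari")
    creds = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    with open(out_path, "wb") as f:
        f.write(urllib.request.urlopen(req).read())
```

Run once after the cluster comes up, this lets a submission host refresh its configs each time the cluster's hostnames/IPs change.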
Labels:
- Hortonworks Cloudbreak