Member since: 02-03-2016
Posts: 119
Kudos Received: 55
Solutions: 8

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1332 | 07-04-2018 07:11 AM
 | 2531 | 07-04-2018 06:48 AM
 | 686 | 06-30-2018 06:50 PM
 | 1145 | 04-04-2018 03:07 AM
 | 967 | 06-06-2016 07:28 AM
03-31-2017
06:28 AM
Since this is a fresh download, I just recreated the VM with a fresh copy of the vdisk and it went OK.
Thanks!
03-31-2017
12:27 AM
Thanks @Jay SenSharma. This is a freshly downloaded VirtualBox sandbox. Just did ambari-server reset 🙂 But now I can't finish the setup; I'm hitting issues with the HDP-UTILS repo:
Failed to set locale, defaulting to C
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
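A 404 on repomd.xml usually means the repo file points at an HDP-UTILS version that is no longer published at that path. A minimal sketch of the fix, under assumptions: the repo file normally lives under /etc/yum.repos.d/, and a later HDP-UTILS version (e.g. 1.1.0.22) actually exists at the mirror — verify the URL in a browser first. Shown here against a copy in /tmp so nothing real is touched.

```shell
# Recreate a copy of the (assumed) repo file in /tmp:
repo=/tmp/HDP-UTILS.repo
cat > "$repo" <<'EOF'
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6
enabled=1
gpgcheck=0
EOF

# Point every occurrence of the dead version at one that resolves:
sed -i 's|HDP-UTILS-1\.1\.0\.21|HDP-UTILS-1.1.0.22|g' "$repo"
grep baseurl "$repo"
# After editing the real file under /etc/yum.repos.d/:
#   yum clean all && yum makecache
```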
03-30-2017
09:11 AM
Hello!
Just downloaded the HDP 2.5 sandbox and successfully started the VM.
ambari-server won't start after following the guides here.
Attaching the logs.
/var/log/ambari-server/ambari-server.log:
30 Mar 2017 17:28:39,533 ERROR [main] AmbariServer:927 - Failed to run the Ambari Server
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: could not read block 0 of relation base/16384/16567: read only 0 of 8192 bytes
Error Code: 0
Call: SELECT cluster_id, current_cluster_state, current_stack_id FROM clusterstate WHERE (cluster_id = ?)
bind => [1 parameter bound]
Query: ReadObjectQuery(name="clusterStateEntity" referenceClass=ClusterStateEntity )
at org.eclipse.persistence.internal.jpa.QueryImpl.getDetailedException(QueryImpl.java:382)
at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:260)
at org.eclipse.persistence.internal.jpa.QueryImpl.getResultList(QueryImpl.java:473)
at org.apache.ambari.server.orm.dao.ClusterDAO.findAll(ClusterDAO.java:92)
at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:53)
at org.apache.ambari.server.state.cluster.ClustersImpl.loadClustersAndHosts(ClustersImpl.java:198)
at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:128)
at org.apache.ambari.server.state.cluster.ClustersImpl.checkLoaded(ClustersImpl.java:187)
at org.apache.ambari.server.state.cluster.ClustersImpl.getClusters(ClustersImpl.java:665)
at org.apache.ambari.server.api.services.AmbariMetaInfo.reconcileAlertDefinitions(AmbariMetaInfo.java:1088)
at org.apache.ambari.server.controller.AmbariServer.run(AmbariServer.java:593)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:925)
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: could not read block 0 of relation base/16384/16567: read only 0 of 8192 bytes
Error Code: 0
Call: SELECT cluster_id, current_cluster_state, current_stack_id FROM clusterstate WHERE (cluster_id = ?)
bind => [1 parameter bound]
Query: ReadObjectQuery(name="clusterStateEntity" referenceClass=ClusterStateEntity )
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:684)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
at org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:570)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectOneRow(DatasourceCallQueryMechanism.java:714)
at org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectOneRowFromTable(ExpressionQueryMechanism.java:2803)
at org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectOneRow(ExpressionQueryMechanism.java:2756)
at org.eclipse.persistence.queries.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:555)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:1175)
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:1134)
at org.eclipse.persistence.queries.ReadObjectQuery.execute(ReadObjectQuery.java:441)
at org.eclipse.persistence.internal.sessions.AbstractSession.internalExecuteQuery(AbstractSession.java:3270)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1857)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1839)
at org.eclipse.persistence.internal.indirection.NoIndirectionPolicy.valueFromQuery(NoIndirectionPolicy.java:326)
at org.eclipse.persistence.mappings.ForeignReferenceMapping.valueFromRowInternal(ForeignReferenceMapping.java:2334)
at org.eclipse.persistence.mappings.OneToOneMapping.valueFromRowInternal(OneToOneMapping.java:1848)
at org.eclipse.persistence.mappings.ForeignReferenceMapping.valueFromRow(ForeignReferenceMapping.java:2178)
at org.eclipse.persistence.mappings.ForeignReferenceMapping.readFromRowIntoObject(ForeignReferenceMapping.java:1505)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildAttributesIntoObject(ObjectBuilder.java:462)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:1005)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildWorkingCopyCloneNormally(ObjectBuilder.java:899)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildObjectInUnitOfWork(ObjectBuilder.java:852)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:735)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:689)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.buildObject(ObjectLevelReadQuery.java:805)
at org.eclipse.persistence.queries.ReadAllQuery.registerResultInUnitOfWork(ReadAllQuery.java:962)
at org.eclipse.persistence.queries.ReadAllQuery.executeObjectLevelReadQuery(ReadAllQuery.java:573)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:1175)
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:1134)
at org.eclipse.persistence.queries.ReadAllQuery.execute(ReadAllQuery.java:460)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:1222)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1857)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1839)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1804)
at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:258)
... 10 more
Caused by: org.postgresql.util.PSQLException: ERROR: could not read block 0 of relation base/16384/16567: read only 0 of 8192 bytes
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:302)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeSelect(DatabaseAccessor.java:1009)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:644)
... 50 more
/var/log/ambari-agent/ambari-agent.out:
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Any help is highly appreciated!
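For anyone hitting the same "could not read block 0 of relation" error: that is Postgres-level corruption of Ambari's embedded database, so on a fresh sandbox (where there is no state worth keeping) the reset sequence below is one recovery path. This is a sketch, not the official fix — `reset` is destructive and wipes Ambari's cluster state.

```shell
# Destructive: only on a fresh sandbox with nothing to preserve.
ambari-server stop
ambari-server reset     # drops and re-creates the embedded Postgres schema
ambari-server start
```

If the corruption is at the filesystem level (as the "read only 0 of 8192 bytes" message suggests), even a reset may fail, and re-importing a fresh vdisk — as was ultimately done here — is the cleaner option.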
12-19-2016
07:05 AM
Hi! What if my current JournalNode went down due to a bad disk and I had to set the machine up again from scratch? What steps do I need to follow? Thank you!
12-19-2016
05:43 AM
@Ron Lee thank you for this article. I'm done restoring my ambari-server. Now I just need to find out how to reinstall the HDP components/clients that are also installed on the same machine. https://community.hortonworks.com/questions/72544/restoring-ambari-server-reinstalling-all-hdp-compo.html
11-16-2016
03:29 AM
WOW. Thank you for laying it out for me.
For #2, how can I recreate the view?
Do I need to resolve issue #1 and #3 first? 🙂
11-15-2016
01:06 AM
Hi @jss 1. Yes, I'm getting errors. See below:
Caused by: KrbException: Cannot locate default realm
at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
... 93 more
15 Nov 2016 09:02:23,706 WARN [qtp-ambari-client-6023] ViewRegistry:855 - Could not find the cluster identified by OLDclusterNAME.
15 Nov 2016 09:02:23,707 ERROR [qtp-ambari-client-6023] ViewContextImpl:241 - Failed to get username
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor617.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:84)
at org.apache.ambari.server.view.ViewContextImpl.getUsername(ViewContextImpl.java:233)
at org.apache.ambari.view.hive.utils.SharedObjectsFactory.getTagName(SharedObjectsFactory.java:145)
2. I'm not in Kerberized mode, but I do have Ranger on my cluster.
11-14-2016
06:49 AM
Hi! I'm trying to create a new instance for my Hive view, but I can't find the "Create Instance" option anywhere under Ambari > Views, as described in the guide below: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_ambari_views_guide/content/creating_the_hive_view_instance.html Please help.
Labels: Apache Ambari, Apache Hive
11-14-2016
03:55 AM
1 Kudo
This happened to me. What I did was run the command below from beeline:
beeline> !connect jdbc:hive2://
It then asks for my user and password, and I'm connected to Hive. I'm still having issues connecting to HiveServer2 from Hue and the Ambari view though. 😞 EDIT: I have 2 HiveServer2 instances, plus Hive Metastore and WebHCat servers.
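For reference, the same connect sequence spelled out — the embedded form (no host) is what worked above; the explicit host/port form is what external clients like Hue or the Ambari view would need. The hostname here is a placeholder.

```shell
beeline
# at the beeline prompt -- embedded mode, no host:
#   beeline> !connect jdbc:hive2://
# or explicitly against a HiveServer2 in binary transport mode:
#   beeline> !connect jdbc:hive2://hiveserver-host:10000/default
# beeline then prompts for the username and password.
```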
10-14-2016
02:31 AM
I know it's already closed, but I just want to share the steps for transferring/adding ZooKeeper to another host:
1. Stop the ZooKeeper server.
2. Select Hosts on the Ambari dashboard, then select the host on which to install the new ZooKeeper server.
3. On the Summary page of the new ZooKeeper host, select Add > ZooKeeper Server.
4. Double-check and update the following properties on the new ZooKeeper server (use the existing ZooKeeper server settings as a reference):
   - ha.zookeeper.quorum
   - hbase.zookeeper.quorum
   - templeton.zookeeper.hosts
   - yarn.resourcemanager.zk-address
   - hive.zookeeper.quorum
   - hive.cluster.delegation.token.store.zookeeper.connectString
5. Select Hosts on the Ambari dashboard, then select the original ZooKeeper server host.
6. Select ZooKeeper > Service Actions > Delete Service to delete the original ZooKeeper server.
7. Save the HDFS namespace.
8. Restart the new ZooKeeper server and the other affected services.
https://community.hortonworks.com/articles/48943/how-to-move-the-zookeeper-server-to-another-host-a.html (thanks to @jkuang) https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-reference/content/ch_amb_ref_moving_the_zookeeper_server.html
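The property updates in step 4 can also be scripted with the configs.sh helper that ships with Ambari 2.x, instead of clicking through each config page. A sketch — the Ambari host, cluster name, credentials, and new ZooKeeper hostname are all placeholders:

```shell
# Placeholders: adjust to your environment.
AMBARI=ambari-host.example.com
CLUSTER=mycluster
NEWZK=zk-new.example.com:2181

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set $AMBARI $CLUSTER core-site ha.zookeeper.quorum "$NEWZK"
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set $AMBARI $CLUSTER hbase-site hbase.zookeeper.quorum "zk-new.example.com"
# ...repeat for templeton.zookeeper.hosts (webhcat-site),
# yarn.resourcemanager.zk-address (yarn-site),
# hive.zookeeper.quorum and
# hive.cluster.delegation.token.store.zookeeper.connectString (hive-site).
```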
08-26-2016
03:14 AM
Hi @apappu, Thank you for this great article. 😄 I have one question though: after enabling HTTPS for HBase, my HBase Master shows as Standby Master in Ambari. Do you have any idea how to remove the 'Standby'? Thanks! Note: I only have 1 HBase Master.
07-28-2016
01:54 AM
1 Kudo
Hi! Sorry for a newbie question, but I just want to ask how to change the timezone in Phoenix. Right now it is set to GMT by default, and I would like to change it to GMT+8. I already tried adding the parameter below to HBase, under custom hbase-site: phoenix.query.dateFormatTimeZone=GMT+08:00 But still no luck. Thanks in advance! Best regards, MD
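One thing worth checking (an assumption, not a confirmed fix): phoenix.query.dateFormatTimeZone is read by the Phoenix *client*, so setting it only on the region servers via Ambari may have no effect — it needs to be in the hbase-site.xml on the classpath of the machine running sqlline or the Phoenix JDBC client, and it mainly affects date-parsing functions such as TO_DATE.

```xml
<!-- Client-side hbase-site.xml (on the sqlline / JDBC client machine);
     property name from the post above, value is the desired GMT+8 offset -->
<property>
  <name>phoenix.query.dateFormatTimeZone</name>
  <value>GMT+08:00</value>
</property>
```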
Labels: Apache HBase, Apache Phoenix
07-05-2016
06:15 AM
Applied the suggested parameter on our cluster, and it works! Thanks! https://community.hortonworks.com/questions/43331/user-drwho-is-not-authorized-to-view-the-logs.html
07-05-2016
06:11 AM
Thanks @Rajeshbabu Chintaguntla. Adding hadoop.http.staticuser.user=yarn (as also mentioned by @rguruvannagari) solved my issue.
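For anyone else hitting the dr.who error: the property above goes into core-site (in Ambari, HDFS > Configs > Custom core-site). It sets the identity the Hadoop web UIs assume for unauthenticated requests, which otherwise defaults to dr.who — a user with no rights to the logs.

```xml
<!-- core-site.xml: treat unauthenticated web-UI users as 'yarn'
     instead of the default 'dr.who' -->
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>yarn</value>
</property>
```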
07-05-2016
03:54 AM
Thanks @rguruvannagari! Same answer from the comment of @Rajeshbabu Chintaguntla.
07-05-2016
03:39 AM
Hi @rguruvannagari, Just verified, and hadoop.security.authorization is set to false.
07-05-2016
03:27 AM
Hi, When browsing the logs from the YARN UI, we're prompted with this message: "User [dr.who] is not authorized to view the logs for container_e29_1466052817052_0956_01_000002 in log file" Please help me solve this issue. Best regards, Thank you!
Labels: Apache HBase, Apache YARN
06-16-2016
02:01 AM
I had the same problem (with a different solution). Just sharing. After I applied the SmartSense recommendations, I got an alert from Ambari (Hive):
Fail: Execution of '! beeline -u 'jdbc:hive2://hivehost:10000/;transportMode=binary' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL'' returned 1. Error: Could not open client transport with JDBC Uri: jdbc:hive2://hivehost:10000/;transportMode=binary: java.net.ConnectException: Connection refused (state=08S01,code=0)
I've been reading all the posts and comments related to this problem. Thanks to you guys! Upon checking hiveserver2.log, the issue was related to the queue name (I've set up a YARN scheduler). The fix was to set a queue name for Hive and allow the 'hive' user to submit applications to that queue. Thanks again!
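A sketch of what such a fix could look like in the CapacityScheduler config — the queue name 'hive' and the capacity split are assumptions, not the poster's actual values:

```properties
# capacity-scheduler entries: a dedicated 'hive' queue alongside 'default'
# (sibling capacities under root must sum to 100)
yarn.scheduler.capacity.root.queues=default,hive
yarn.scheduler.capacity.root.default.capacity=70
yarn.scheduler.capacity.root.hive.capacity=30
yarn.scheduler.capacity.root.hive.maximum-capacity=50
yarn.scheduler.capacity.root.hive.acl_submit_applications=hive
```

On the Hive side, the queue is then referenced via hive-site (e.g. hive.server2.tez.default.queues when running on Tez).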
06-14-2016
09:15 AM
Thanks @Sri Bandaru. Will take note of this.
06-14-2016
09:14 AM
Just ran phoenix-sqlline and it's working. Hmm.. I'll try to kill it and restart the PQS. Thanks @ssoldatov
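A manual restart of the Phoenix Query Server outside Ambari can be sketched as below — the path is the usual HDP layout and the hbase user is an assumption; verify both on your node:

```shell
# Stop and start PQS as the hbase user (path per standard HDP layout):
su - hbase -c '/usr/hdp/current/phoenix-server/bin/queryserver.py stop'
su - hbase -c '/usr/hdp/current/phoenix-server/bin/queryserver.py start'

# Confirm it is listening (8765 is the default PQS port):
netstat -tlnp | grep 8765
```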
06-14-2016
01:43 AM
Hi @Ted Yu, thanks for the response. 1. Noted on enabling Phoenix. 2. jps output:
6896 Main
7731 NiFi
32283 Kafka
22044 ConsoleConsumer
1740 jenkins.war
7709 RunNiFi
28271 Jps
3. I didn't restart the whole cluster, just the HBase component, via Ambari.
06-14-2016
01:20 AM
Hi, I've installed Phoenix Query Server (PQS) on one of the nodes in my cluster based on this answer. At first it was running, but when I restarted HBase, PQS won't start again. Here's the error log below: On the node where PQS is installed, I also installed the clients. Also, when installing PQS, do I need to enable Phoenix, or is this automatic after the PQS install? Thank you in advance! - MD
Labels: Apache HBase, Apache Phoenix
06-06-2016
07:28 AM
@Sagar Shimpi @Jitendra Yadav Issue solved. It was on my end (iptables). Thank you for your responses. Really appreciate it. #caseclosed
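For others with the same symptom, an iptables fix could look like the sketch below — the interface name eth1 is an assumption for the private/IPsec-facing interface, and the exact rule depends on your existing ruleset:

```shell
# Allow inbound TCP 8080 (Ambari web UI) on the private interface:
iptables -I INPUT -i eth1 -p tcp --dport 8080 -j ACCEPT

# Persist the rule across reboots on CentOS 6:
service iptables save
```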
06-06-2016
12:46 AM
Hi @Jitendra Yadav, I've set up a cluster and the nodes have 2 interfaces. I've set up an IPsec tunnel to access them via private IP, but I can't access Ambari using privateip:8080.
06-03-2016
12:11 PM
Hi! How do I bind Ambari port 8080 to specific interfaces? Thank you!
Labels: Apache Ambari
06-02-2016
09:47 AM
Thank you for this great guide!