Member since: 10-14-2016
Posts: 45
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2581 | 09-08-2017 06:36 AM
 | 1197 | 12-15-2016 06:53 AM
01-19-2018
12:59 PM
@rkovacs All looks good:
[root@cloudbreak-deployer-1 cloudbreak-deployment]# cbd doctor
===> Deployer doctor: Checks your environment, and reports a diagnose.
uname: Linux cloudbreak-deployer-1 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Dec 28 14:23:39 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
local version: 1.16.5
latest release: 1.16.5
docker images:
  hortonworks/haveged:1.1.0
  hortonworks/socat:1.0.0
  hortonworks/cbd-smartsense:0.10.0
  hortonworks/cloudbreak-uaa:3.6.5
  hortonworks/cloudbreak:1.16.5
  hortonworks/cb-auth:1.16.5
  hortonworks/cb-web:1.16.5
  hortonworks/cloudbreak-autoscale:1.16.5
docker command exists: OK
docker client version: 17.05.0-ce
docker client version: 17.05.0-ce
ping 8.8.8.8 on host: OK
ping github.com on host: OK
ping 8.8.8.8 in container: OK
ping github.com in container: OK
01-19-2018
12:05 PM
@mmolnar Same error; the screenshot is attached.
01-19-2018
10:48 AM
Hi all, I am using Google Cloud and Cloudbreak 1.16.5. The Cloudbreak UI is working and I can log in to Cloudbreak successfully, but while creating credentials under the [manage credentials] tab I am getting this error:
GCP credential create failed : Failed to verify the credential: Could not verify credential [credential: 'cbddemo'], detailed message: Read timed out
The detailed log is attached.
Thanks,
Bhupesh
Labels:
- Hortonworks Cloudbreak
01-19-2018
10:30 AM
@mmolnar Yes, I generated the private and public keys correctly using PuTTYgen and placed them correctly in PuTTY. I also checked with other team members, and they are all having the same issue.
01-18-2018
12:28 PM
I am using the following document for Cloudbreak deployment: https://hortonworks.github.io/cloudbreak-documentation/latest/gcp-launch/index.html
I created a VM using the Cloudbreak deployer image by executing the following command:
gcloud compute images create cloudbreak-deployer-220-2017-12-19 \
  --source-uri gs://sequenceiqimage/cloudbreak-deployer-220-2017-12-19.tar.gz
After successful creation, I try to SSH to the VM using PuTTY, but I get the following error:
Error - Server refused our key.
Disconnected: No supported authentication methods available (server sent: publickey, gssapi-keyex, gssapi-with-mic)
Firewall rules for port 22 - OK
Private/public key - OK
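In case it helps anyone hitting the same "Server refused our key" message on GCP: a common cause is that the public key is missing from the instance (or project) metadata, or that the PuTTY login username does not match the username prefixed in that metadata entry. A minimal sketch, assuming the key was exported from PuTTYgen in OpenSSH format, the instance name cloudbreak-deployer-1 from this thread, and an illustrative zone:
# keys.txt contains a line of the form: <login-user>:ssh-rsa AAAA... <comment>
gcloud compute instances add-metadata cloudbreak-deployer-1 \
  --zone us-central1-a \
  --metadata-from-file ssh-keys=keys.txt
# alternatively, let gcloud manage the key pair and verify basic SSH access first
gcloud compute ssh cloudbreak-deployer-1 --zone us-central1-a
The username typed into PuTTY has to match the <login-user> prefix in the metadata entry.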
Labels:
- Hortonworks Cloudbreak
09-08-2017
06:54 AM
Hi, could you please execute your query with beeline --verbose=true? What happens if the query is run from the Hive CLI?
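For reference, a minimal sketch of the kind of invocation meant here, with a hypothetical HiveServer2 URL and query (adjust host, database, and statement to your environment):
beeline --verbose=true -u "jdbc:hive2://<hiveserver2-host>:10000/default" -n <user> -e "SELECT count(*) FROM <your_table>;"
The verbose output typically includes the full error stack from the server, which makes it easier to compare against the Hive CLI behaviour.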
09-08-2017
06:36 AM
1. Difference between HDP and HDF: please go through the following link for a better understanding. https://community.hortonworks.com/questions/108829/differences-between-hdp-hortonworks-data-platform.html
2. Cloudera vs. Hortonworks vs. MapR: https://www.dezyre.com/article/cloudera-vs-hortonworks-vs-mapr-hadoop-distribution-comparison-/190
09-08-2017
06:24 AM
I got the answer: yes, Hive LLAP does support optimized non-equi joins. In more detail, Apache Hive 2.2 added support for non-equijoins (HIVE-15211), while Apache Tez recently added the ability to run non-equijoins (aka theta joins) in a parallel fashion (TEZ-2104). This is all enabled in Hive LLAP within HDP 2.6.
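For anyone landing on this later, a non-equi (theta) join is simply a join whose ON condition is not a plain equality. A minimal sketch run through beeline, with hypothetical tables trades and rates and assuming the HDP default HiveServer2 Interactive (LLAP) port 10500:
beeline -u "jdbc:hive2://<llap-host>:10500/default" -e "
SELECT t.id, r.rate
FROM trades t
JOIN rates r
  ON t.trade_time >= r.valid_from AND t.trade_time < r.valid_to;"
On Hive versions before 2.2 the same logic had to be written as a cross join plus a WHERE filter; with Hive 2.x and LLAP the planner accepts the range condition directly in the ON clause.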
09-05-2017
10:56 AM
@James Dinkel, @Dennis Connolly Hi, currently we are using Presto for non-equi joins and are looking for a replacement for Presto. Please let me know whether LLAP supports optimized non-equi joins.
Tags:
- llap
08-02-2017
10:08 AM
@Benjamin Leonhardi, @Sagar Shimpi, @Kuldeep Kulkarni Hi, we have encountered this scenario multiple times and need to identify the root cause of the error: the Tez job shows as complete in the Tez view, but the YARN ResourceManager UI says the job is still running. The particular job had been allocated 21 containers, but is now executing with only 1 container. Screenshots attached: rm-ui-yarn.png, tez-ui.png.
Error in the container log:
2017-08-01 15:46:56,995 [WARN] [AMShutdownThread] |retry.RetryInvocationHandler|: Exception while invoking ClientNamenodeProtocolTranslatorPB.complete over test.com/199.6.0.0:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/hive/sauser/_tez_session_dir/17a1d092-cce6-4b55-a6e3-7862fe7db6b5/.tez/application_1500098274855_236372/recovery/1/summary (inode 305586844): File does not exist. Holder DFSClient_NONMAPREDUCE_-2091640712_1 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy12.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy13.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.tez.dag.history.recovery.RecoveryService.serviceStop(RecoveryService.java:223)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
at org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
at org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
at org.apache.tez.dag.history.HistoryEventHandler.serviceStop(HistoryEventHandler.java:82)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:65)
at org.apache.tez.dag.app.DAGAppMaster.stopServices(DAGAppMaster.java:1768)
at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:1949)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHandler$AMShutdownRunnable.run(DAGAppMaster.java:864)
at java.lang.Thread.run(Thread.java:745)
Thanks,
Bhupesh Khanna
Tags:
- YARN
Labels:
- Apache YARN
06-05-2017
09:58 AM
Hi, I have 2 HBase Master servers (one active and one standby) and 4 RegionServers. When I click on the active HBase Master UI, there are no RegionServers reported to the HBase Master, but when I click on the standby HBase Master UI, I can see the details of all 4 RegionServers. Can anyone help me understand what the actual issue with HBase is? Thanks, Bhupesh Khanna
Labels:
- Apache HBase
03-16-2017
09:46 AM
Hi @Artem Ervits, a support ticket has been created with the same title, "SmartSense / Activity Analyzer"; please have a look. Thanks, Bhupesh Khanna
03-15-2017
10:21 AM
We are facing an issue with the SmartSense / Activity Analyzer dashboard: ERROR 1012 (42M03): Table undefined. tableName=ACTIVITY.
Labels:
- Apache Ambari
- Hortonworks SmartSense
02-09-2017
06:37 AM
We are getting the error below in Presto; can anyone please help me debug the problem and identify the root cause of the error?
Query Error: Query exceeded max memory size of 50GB
I know the simple approach is to increase the memory allocation from 50GB, but there are some concerns:
1. How can we determine the optimum memory allocation?
2. Even if we increase it to 100GB, there is no guarantee that the user will not hit an out-of-memory error again.
3. Is there any way to restrict users from launching such a huge query, or any other preventive approach (i.e. tell end users that we cannot process any query that requires more than 100GB or 150GB)?
Thanks,
Bhupesh Khanna
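For context, a hedged sketch of the knobs usually involved here, assuming a stock Presto deployment (the property names are real Presto settings, but the values below are purely illustrative, not a recommendation):
# etc/config.properties on the coordinator
query.max-memory=50GB             # cluster-wide per-query memory limit (the one in the error)
query.max-memory-per-node=8GB     # per-worker per-query memory limit
# to restrict what individual users or sources may run, resource groups can be enabled:
resource-groups.configuration-manager=file
resource-groups.config-file=etc/resource_groups.json
Resource groups (configured in resource_groups.json) can then cap concurrency and memory per user, which is the usual way to tell end users that queries above a certain size will not run instead of growing the global limit.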
Tags:
- presto
12-21-2016
07:07 AM
1 Kudo
I don't remember the Grafana admin password. Is there any way to reset it?
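One common approach, assuming a standard Grafana installation where grafana-cli is on the PATH (on an Ambari-managed Ambari Metrics Grafana the binary location and recommended procedure may differ):
# reset the built-in admin account's password
grafana-cli admin reset-admin-password <new-password>
You can then log in to the Grafana UI as admin with the new password.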
12-15-2016
06:53 AM
Hi all, thanks for the support. The issue is now resolved; I needed to change some settings in the Google Cloud networking / firewall port configuration.
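For anyone else who hits the same symptom, the kind of change meant here is opening the UI ports in the GCP firewall. A hedged sketch, assuming the default ports (Ranger admin 6080, Ambari Infra Solr 8886) and an illustrative network name and source range:
gcloud compute firewall-rules create allow-ranger-infra-ui \
  --network default \
  --allow tcp:6080,tcp:8886 \
  --source-ranges <admin-cidr>
With an internal-only cluster (no external IPs), --source-ranges should be restricted to the network you browse from rather than 0.0.0.0/0.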
12-14-2016
05:58 AM
@Prashobh Balasundaram
No, the cluster is not Kerberized, and the result is the same using Firefox or Chrome.
12-14-2016
05:56 AM
@apappu
Yes, I am able to connect via curl from a remote host. I am using RHEL 7 and the firewall (iptables/firewalld) is off:
systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
Dec 08 10:52:41 localhost systemd[1]: Starting firewalld - dynamic firewall daemon...
Dec 08 10:52:43 localhost systemd[1]: Started firewalld - dynamic firewall daemon.
Dec 08 13:04:12 nitrogen systemd[1]: Stopping firewalld - dynamic firewall daemon...
Dec 08 13:04:13 nitrogen systemd[1]: Stopped firewalld - dynamic firewall daemon.
12-14-2016
05:51 AM
@james.jones The environment is Google Cloud, with a 14-node cluster and an internal network (no external IPs).
12-12-2016
09:39 AM
I am using the latest Ambari (2.4.2.0) and HDP 2.5.3.0. Ambari Infra and Ranger deployed perfectly and all service checks work as expected, but we have a problem accessing the Ambari Infra Solr UI and the Ranger UI: we get [connection has timed out] in the browser, although the curl command works fine.
Labels:
- Apache Ambari
- Apache Ranger
- Apache Solr
10-21-2016
05:03 AM
Hi @Ayub Khan
Below are the answers:
- Is your cluster Kerberized? - No
- Is Ranger enabled? - Yes
- Which version of HDP have you installed? - HDP-2.5.0.0
Please let me know if more information is required.
Thanks,
Bhupesh Khanna
10-20-2016
02:42 PM
@Ayub Khan I have seen that you resolved a similar kind of problem with Atlas; could you please help me with the same? I used some references:
http://atlas.incubator.apache.org/0.7.0-incubating/Bridge-Hive.html
https://community.hortonworks.com/questions/39839/how-to-import-metadata-from-hive-into-atlas-and-th.html
Thanks in advance,
Bhupesh Khanna
10-20-2016
02:06 PM
Atlas error while running import-hive.sh. Can anyone help me? The complete stack trace is below.
2016-10-20 19:23:36,626 DEBUG - [main:] ~ Using resource http://ServerName:21000/api/atlas/entities/4dc94048-3f95-42f7-ae5f-6e9b5baea06d for 0 times (AtlasClient:784)
2016-10-20 19:24:36,673 WARN - [main:] ~ Handled exception in calling api api/atlas/entities (AtlasClient:791)
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
at com.sun.jersey.api.client.filter.HTTPBasicAuthFilter.handle(HTTPBasicAuthFilter.java:81)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.method(WebResource.java:623)
at org.apache.atlas.AtlasClient.callAPIWithResource(AtlasClient.java:1188)
at org.apache.atlas.AtlasClient.callAPIWithRetries(AtlasClient.java:785)
at org.apache.atlas.AtlasClient.callAPI(AtlasClient.java:1214)
at org.apache.atlas.AtlasClient.updateEntity(AtlasClient.java:808)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.updateInstance(HiveMetaStoreBridge.java:506)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerDatabase(HiveMetaStoreBridge.java:159)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:124)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(HiveMetaStoreBridge.java:118)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:662)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1535)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
... 14 more
2016-10-20 19:24:36,675 WARN - [main:] ~ Exception's cause: class java.net.SocketTimeoutException (AtlasClient:792)
Exception in thread "main" com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
at com.sun.jersey.api.client.filter.HTTPBasicAuthFilter.handle(HTTPBasicAuthFilter.java:81)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.method(WebResource.java:623)
at org.apache.atlas.AtlasClient.callAPIWithResource(AtlasClient.java:1188)
at org.apache.atlas.AtlasClient.callAPIWithRetries(AtlasClient.java:785)
at org.apache.atlas.AtlasClient.callAPI(AtlasClient.java:1214)
at org.apache.atlas.AtlasClient.updateEntity(AtlasClient.java:808)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.updateInstance(HiveMetaStoreBridge.java:506)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerDatabase(HiveMetaStoreBridge.java:159)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:124)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(HiveMetaStoreBridge.java:118)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:662)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1535)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
... 14 more
Failed to import Hive Data Model!!!
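One hedged possibility, since the failure is a client-side read timeout while import-hive.sh is calling the Atlas REST API: the Atlas client timeouts can be raised in atlas-application.properties (the copy on the Hive bridge's classpath). The values below are illustrative only:
# atlas-application.properties
atlas.client.connectTimeoutMSecs=60000
atlas.client.readTimeoutMSecs=60000
If the Atlas server itself is responding slowly (for example while its HBase/Solr backends are under load), raising the timeout only masks the symptom, so the Atlas application log is worth checking as well.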
Labels:
- Apache Atlas