Member since: 06-02-2017
Posts: 39
Kudos Received: 4
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1498 | 09-04-2018 09:07 PM
 | 11590 | 08-30-2018 04:57 PM
 | 4376 | 08-14-2018 03:49 PM
01-03-2023
12:07 PM
Ssumers, can you please share the value you added in gateway.dispatch.whitelist?
08-07-2021
03:23 AM
Add the following properties to yarn-site.xml:

<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>hostname_of_resourcemanager_1:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>hostname_of_resourcemanager_2:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.https.address.rm1</name>
  <value>hostname_of_resourcemanager_1:8090</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.https.address.rm2</name>
  <value>hostname_of_resourcemanager_2:8090</value>
</property>

After adding these properties, restart the ResourceManager and NodeManagers. In the properties above, the RM ids (rm1 and rm2) are the values given for the yarn.resourcemanager.ha.rm-ids property in yarn-site.xml. If you have given values other than rm1,rm2 for yarn.resourcemanager.ha.rm-ids, adjust the webapp properties above accordingly.
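For context, the RM id suffixes come from a property like the following — a minimal sketch, assuming the common two-ResourceManager HA setup (rm1,rm2 are the ids used in most HA examples; substitute your own if they differ):

```xml
<!-- Declares the logical ResourceManager ids; these ids are used as
     suffixes on the per-RM properties (e.g. webapp.address.rm1). -->
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
```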
07-15-2020
06:45 AM
That is the reason I see my fat jar containing Hadoop 2.6.5 jars (rather than 3.1.0): HftpFileSystem was removed from Hadoop 3. I need a Spark 2.3.1 jar built against Hadoop 3.1, but I only see Spark 2.3.1 built against Hadoop 2.7. Where can I get Spark 2.3.1 built with Hadoop 3? Does Spark 2.3.1 support Hadoop 3? Thanks for the help. I solved this issue by using Spark 2.3.1 on the HDP 3.0 cluster under /usr/hdp/current/spark2-client/.
11-26-2019
09:48 AM
Hi @jiangok2006, below is the tree structure of my helium folder with all the permissions.

(base) [zeppelin@xxx-xxx-XXX helium]$ tree
.
├── helium.json
└── zeppelin-toc-spell
    ├── node_modules
    │   ├── zeppelin-spell
    │   │   └── package.json
    │   └── zeppelin-toc-spell
    │       ├── index.js
    │       ├── install.png
    │       ├── package.json
    │       ├── README.md
    │       ├── screenshot.png
    │       └── zeppelin-toc-spell.json
    └── package-lock.json

Will this work as per your steps?
09-04-2018
09:07 PM
It is a Knox bug: https://issues.apache.org/jira/browse/KNOX-1424. After patching in the fix, the SQL interpreter shows results.
08-28-2018
04:21 PM
Thanks, guys. Rolling back to Python 2.7 allowed the Livy server to start successfully.
05-02-2019
11:18 PM
I've tried this and it doesn't seem to work. No matter what I change it to, I get errors in the log indicating it can't find znodes, e.g.:

2019-05-01 11:11:28,748 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server myserver.com/xx.xx.xxx.xxx:2181. Will not attempt to authenticate using SASL (unknown error)
2019-05-01 11:11:28,757 INFO zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(864)) - Socket connection established, initiating session, client: /10.87.130.196:51436, server: myserver.com/xx.xx.xxx.xxx:2181
2019-05-01 11:11:28,772 INFO zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1279)) - Session establishment complete on server myserver.com/xx.xx.xxx.xxx:2181, sessionid = 0x36a72f6d9090007, negotiated timeout = 60000
2019-05-01 11:11:28,792 WARN client.ConnectionImplementation (ConnectionImplementation.java:retrieveClusterId(528)) - Retrieve cluster id failed
java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-unsecure/hbaseid
    at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:526)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:286)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
    at org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.serviceInit(HBaseTimelineReaderImpl.java:88)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer.serviceInit(TimelineReaderServer.java:92)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer.startTimelineReaderServer(TimelineReaderServer.java:233)
    at org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer.main(TimelineReaderServer.java:246)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-unsecure/hbaseid
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:164)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:321)
    at java.lang.Thread.run(Thread.java:745)
2019-05-01 11:11:28,982 INFO common.HBaseTimelineStorageUtils (HBaseTimelineStorageUtils.java:getTimelineServiceHBaseConf(65)) - Using hbase configuration at file:///usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml
2019-05-01 11:11:28,984 INFO zookeeper.ReadOnlyZKClient (ReadOnlyZKClient.java:<init>(130)) - Start read only zookeeper connection 0x534a5a98 to myserver.com:2181,myserver2.com:2181,myserver3.com:2181, session timeout 90000 ms, retries 6, retry interval 1000 ms, keep alive 60000 ms
08-15-2018
11:23 PM
@Lian Jiang For other HCC users' reference, adding a link to the other thread, which describes this issue in more detail: https://community.hortonworks.com/questions/212329/hdp30-timeline-service-v2-reader-cannot-create-zoo.html?childToView=212445#answer-212445
06-03-2017
04:29 AM
@Lian Jiang
Passwordless SSH is not mandatory for agent installation; you can also install the Ambari agent manually without setting it up: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-reference/content/ch_amb_ref_installing_ambari_agents_manually.html . If you want to set up passwordless SSH for localhost, that is also possible:

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): (I kept it EMPTY)
Enter same passphrase again: (I kept it EMPTY)
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
# ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
# ssh root@localhost
06-02-2017
05:02 AM
@Lian Jiang Alternatively, you can try adding the following mirror to your "~/.m2/settings.xml": <settings>
<mirrors>
<mirror>
<id>public</id>
<mirrorOf>*</mirrorOf>
<url>http://repo.hortonworks.com/content/groups/public</url>
</mirror>
</mirrors>
</settings>
I can see the artifact there as well: http://repo.hortonworks.com/content/groups/public/org/apache/maven/wagon/wagon-ssh-external/maven-metadata.xml