Member since: 09-24-2015
Posts: 144
Kudos Received: 72
Solutions: 8
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1315 | 08-15-2017 08:15 AM |
 | 6154 | 01-24-2017 06:58 AM |
 | 1613 | 08-03-2016 06:45 AM |
 | 2914 | 06-01-2016 10:08 PM |
 | 2502 | 04-07-2016 10:30 AM |
07-15-2024
01:53 AM
Hi team, I want to configure the following principal-to-user mappings:
yarn-user/hdp01-node.lab.contoso.com@LAB.CONTOSO.COM to "yarn-user"
yarn-user/hdp02-node.lab.contoso.com@LAB.CONTOSO.COM to "yarn-user"
yarn-user/hdp03-node.lab.contoso.com@LAB.CONTOSO.COM to "yarn-user"
Please advise on a rule. Thanks
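A minimal sketch, assuming these are Hadoop auth_to_local mappings configured via hadoop.security.auth_to_local in core-site.xml; one rule covers all three hosts because the hostname component is dropped before the match:

# map yarn-user/<any-host>@LAB.CONTOSO.COM to the local user "yarn-user"
RULE:[2:$1@$0](yarn-user@LAB.CONTOSO.COM)s/.*/yarn-user/
DEFAULT

# verify the mapping from any cluster node:
hadoop org.apache.hadoop.security.HadoopKerberosName yarn-user/hdp01-node.lab.contoso.com@LAB.CONTOSO.COM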
09-15-2023
02:06 AM
I use CDH 6.3.2, Hive 2.1, Hadoop 3.0, and Hive on Spark in YARN cluster mode, with:
hive.merge.sparkfiles=true;
hive.merge.orcfile.stripe.level=true;
This configuration merges the 1099 reducer output files into one file when the result is small, but the merged file then contains about 1099 stripes, and reading it is very slow. I tried hive.merge.orcfile.stripe.level=false and got the desirable result: one small file with one stripe that reads fast. Can anyone explain the difference between true and false, and why true is the default?
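For reference, a hedged per-query sketch of the two settings (the table names are hypothetical): with stripe.level=true the merge fast-concatenates the existing ORC stripes without rewriting them, so the 1099 tiny stripes survive into the merged file, while false decodes and rewrites the rows into fresh, larger stripes.

hive -e "
SET hive.merge.sparkfiles=true;
-- false = full rewrite into larger stripes (slower merge, faster reads)
-- true  = cheap stripe-level concatenation (fast merge, keeps the small stripes)
SET hive.merge.orcfile.stripe.level=false;
INSERT OVERWRITE TABLE target_tbl SELECT * FROM source_tbl;
"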
05-28-2022
11:27 AM
It seems it's without a space:

#HDP DEV cluster
kadmin.local: addprinc krbtgt/HDPDQA.QA.COM@HDPDEV.DEV.COM
kadmin.local: addprinc krbtgt/HDPDDEV.DEV.COM@HDPQA.QA.COM

#HDP QA cluster
kadmin.local: addprinc krbtgt/HDPDQA.QA.COM@HDPDEV.QA.COM
kadmin.local: addprinc krbtgt/HDPDDEV.DEV.COM@HDPQA.QA.COM
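For reference, the usual two-way cross-realm pattern, as a sketch with generic realm names (REALM_A/REALM_B are placeholders; each krbtgt principal must be created with the identical password and kvno on both KDCs for the trust keys to match):

# run on BOTH KDCs, using the same passwords on each side
kadmin.local -q "addprinc krbtgt/REALM_B@REALM_A"
kadmin.local -q "addprinc krbtgt/REALM_A@REALM_B"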
09-14-2020
12:16 AM
@RajaChintala You have to make the change below:

HADOOP_OPTS="-Dsun.security.krb5.debug=true"

For more details, see these docs:
https://docs.cloudera.com/documentation/enterprise/5-10-x/topics/cdh_sg_debug_sun_kerberos_enable.html
https://spark.apache.org/docs/2.0.0/running-on-yarn.html#troubleshooting-kerberos
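A minimal sketch of applying it without clobbering any existing options (hadoop-env.sh as the location is an assumption; it varies by distro and the variable can also just be exported in the shell):

# in hadoop-env.sh, or exported before running the client command
export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
# the JVM then prints the Kerberos handshake details to the console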
02-06-2020
09:12 AM
@josh_nicholson NOTE: For a Kerberized cluster, the value of "zookeeper.znode.parent" may be "/ams-hbase-secure", so we can connect to it as follows:

/usr/hdp/2.5.0.0-1245/phoenix/bin/sqlline.py c6403.ambari.apache.org:61181:/ams-hbase-secure
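If unsure which parent znode is in use, a quick check is to list the root znodes (the zookeeper-client path below is the usual HDP location, an assumption here):

# look for /ams-hbase-secure (Kerberized) or /ams-hbase-unsecure in the output
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server c6403.ambari.apache.org:61181 ls /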
10-22-2019
07:50 PM
Hi @Jonas Straub, following your article I created a collection with the curl command below and got a 401 error:

curl --negotiate -u : 'http://myhost:8983/solr/admin/collections?action=CREATE&name=col&numShards=1&replicationFactor=1&collection.configName=_default&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":31818},
  "failure":{
    "myhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://myhost:8983/solr: Expected mime type application/octet-stream but got text/html. <html> <head> <meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"/> <title>Error 401 Authentication required</title> </head> <body> <h2>HTTP ERROR 401</h2> <p>Problem accessing /solr/admin/cores. Reason: <pre>Authentication required</pre></p> </body> </html>"}}

When I debugged the Solr source code, I found this exception is returned by "coreContainer.getZKController().getOverseerCollectionQueue().offer(Utils.toJson(m), timeout)", so I suspect Solr is not authenticating its ZooKeeper connection. When I replaced the Kerberized ZooKeeper with a non-Kerberos one, the collection was created successfully. How can I solve this problem with a Kerberized ZooKeeper?
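One thing a Kerberized ZooKeeper typically requires is a JAAS client configuration for Solr's ZooKeeper connection; a hedged sketch (the keytab path, principal, and file locations are all assumptions):

# /etc/solr/conf/jaas-client.conf -- the "Client" section is what the ZooKeeper client library reads:
#   Client {
#     com.sun.security.auth.module.Krb5LoginModule required
#     useKeyTab=true
#     keyTab="/etc/security/keytabs/solr.service.keytab"
#     storeKey=true
#     principal="solr/myhost@EXAMPLE.COM";
#   };
# then point Solr's JVM at it, e.g. in solr.in.sh:
SOLR_OPTS="$SOLR_OPTS -Djava.security.auth.login.config=/etc/solr/conf/jaas-client.conf"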
03-26-2018
10:01 PM
Seems like it's required. I guess it was mostly used for bootstrapping. How that solrconfig.xml file changes is not that clear: we may already have the collection created, or we may just be doing an update. In the future I think uploading configs and creating/reloading collections should be moved to the Ranger side (as a background bootstrap process); then Ambari would just configure the files and not touch the Solr API from the agent side.
03-22-2018
03:18 AM
Thanks a lot!

java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
  -Dhdp.version=2.6.3.0-235 \
  -Ddruid.storage.storageDirectory=hdfs://`hostname -f`:8020/tmp/data/index/output \
  -Ddruid.storage.type=hdfs \
  -classpath /usr/hdp/current/druid-overlord/extensions/druid-hdfs-storage/*:/usr/hdp/current/druid-overlord/lib/*:/usr/hdp/current/druid-overlord/conf/_common:/etc/hadoop/conf/ \
  io.druid.cli.Main index hadoop ./hadoop_index_spec.json

The above worked. Mine is a sandbox, so I'm using `hostname -f`.
02-15-2018
06:48 AM
Hi @Slim, I'm seeing "Connected to Druid but could not retrieve datasource information" when I create a table. Do you have any idea where I should check?
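Two hedged checks worth trying (the property names come from Hive's Druid integration; the broker host and port below are assumptions):

# confirm HiveServer2 knows where the Druid broker is
hive -e "SET hive.druid.broker.address.default;"
# confirm the broker itself can list datasources
curl http://broker-host:8082/druid/v2/datasources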
01-24-2018
03:14 AM
The doc https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_spark-component-guide/content/config-sts-user-imp.html doesn't list Kerberos as required in its Prerequisites, but do you know whether Spark 1.6 impersonation requires Kerberos (unlike Hive)?
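For context, a hedged note on the switch that doc configures (the property name comes from Hive's standard doAs mechanism; that it goes into the Spark Thrift Server's hive-site.xml is an assumption based on the linked page):

# set for the Spark Thrift Server so queries run as the submitting user
hive.server2.enable.doAs=true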