Member since: 09-24-2015
Posts: 144
Kudos Received: 72
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
| 655 | 08-15-2017 08:15 AM
| 3387 | 01-24-2017 06:58 AM
| 722 | 08-03-2016 06:45 AM
| 1516 | 06-01-2016 10:08 PM
| 1271 | 04-07-2016 10:30 AM
04-09-2018
10:18 PM
Sorry for the layout and style; it seems AnswerHub doesn't parse the <pre> tag properly.
04-09-2018
10:16 PM
How about running "yum install druid_2_6_4_0_91 -y"? Just FYI, on my Sandbox 2.6.4, it seems to work:
[root@sandbox-hdp ~]# yum list available | grep ^druid
druid.noarch 0.10.1.2.6.4.0-91 HDP-2.6-repo-1
druid_2_6_4_0_91.noarch 0.10.1.2.6.4.0-91 HDP-2.6-repo-1
[root@sandbox-hdp ~]# yum install druid_2_6_4_0_91
Loaded plugins: fastestmirror, ovl, priorities
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: centos.mirror.ausnetservers.net.au
* epel: ucmirror.canterbury.ac.nz
* extras: centos.mirror.ausnetservers.net.au
* updates: mirror.colocity.com
Resolving Dependencies
--> Running transaction check
---> Package druid_2_6_4_0_91.noarch 0:0.10.1.2.6.4.0-91 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==================================================================================================================================================
Package Arch Version Repository Size
==================================================================================================================================================
Installing:
druid_2_6_4_0_91 noarch 0.10.1.2.6.4.0-91 HDP-2.6-repo-1 219 M
Transaction Summary
==================================================================================================================================================
Install 1 Package(s)
Total download size: 219 M
Installed size: 244 M
Is this ok [y/N]:
04-07-2018
12:57 AM
If you run "yum list available | grep ^druid", do you get any output? If there's no output, could you check the repo files under /etc/yum.repos.d/?
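For example, something like this should show whether an HDP repo file is present and enabled (paths are the stock CentOS ones; the HDP*.repo file name is my assumption, so adjust to whatever files you actually have):
ls -l /etc/yum.repos.d/                        # look for an HDP repo file
grep -i enabled /etc/yum.repos.d/HDP*.repo     # enabled=1 means the repo is active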
04-07-2018
12:55 AM
Could you also make sure, via Ambari, that your hive-site has the Atlas Hive hook in hive.exec.post.hooks?
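For reference, with the hook enabled the property usually contains the Atlas hook class, something like the below (verify against your own cluster):
hive.exec.post.hooks=org.apache.atlas.hive.hook.HiveHook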
04-04-2018
04:22 AM
Do you mean "$set" like https://docs.mongodb.com/manual/reference/operator/update/set/ ?
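For reference, a minimal $set example run from the shell (the 'items' collection and 'status' field are made up for illustration):
mongo --eval 'db.items.updateOne({ _id: 1 }, { $set: { status: "active" } })'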
03-26-2018
09:44 PM
Hi @Olivér Szabó, I see the new dates in solrconfig.xml in ZooKeeper, but newly added docs still used the previous number of days, which is why I posted this question. After unloading the ranger_audit core multiple times, it eventually worked. But I was wondering whether the unload/reload is required? If yes, why doesn't Ambari do that?
03-26-2018
05:17 AM
Looks like updating "Max Retention Days" from the Ambari Web UI does not affect existing docs (which is expected), but it does not apply to newly added docs either. I unloaded the Ranger audit core multiple times and restarted Ranger to recreate the collection, and then it eventually worked. Is this expected behaviour? If so, why doesn't Ambari unload (or reload, if that should work)? (A bug?)
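For anyone hitting the same thing, the unload/reload can also be done via the Solr Core Admin API; a sketch, assuming the Ambari Infra Solr default port 8886 and a core named ranger_audits_shard1_replica1 (yours may differ):
curl "http://localhost:8886/solr/admin/cores?action=RELOAD&core=ranger_audits_shard1_replica1"   # reload the core in place
curl "http://localhost:8886/solr/admin/cores?action=UNLOAD&core=ranger_audits_shard1_replica1"   # unload it; restarting Ranger recreates the collection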
Labels:
- Apache Ambari
- Apache Ranger
- Apache Solr
03-26-2018
04:49 AM
1 Kudo
As the fix is prop_value = prop_value.replace("/usr/lib/python2.6/site-packages", "/usr/lib/ambari-server/lib"), how about creating a symlink instead?
ln -s /usr/lib/ambari-server/lib /usr/lib/python2.6/site-packages
03-22-2018
09:23 AM
If the above hostname is not your AMS node, please check "hbase.zookeeper.quorum".
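A quick way to check it on the collector host; /etc/ams-hbase/conf is where AMS's embedded HBase config normally lives on HDP (adjust if yours differs):
grep -A1 hbase.zookeeper.quorum /etc/ams-hbase/conf/hbase-site.xml   # the <value> line that follows is the quorum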
03-22-2018
09:20 AM
BTW, is "hadoop.datalonga.com" your AMS node?
03-22-2018
09:20 AM
If it's embedded mode, could you try typing 61181 instead of {{zookeeper_clientPort}}?
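To verify something is actually listening on that port, ZooKeeper's four-letter commands work; replace the hostname below with your AMS node:
echo ruok | nc sandbox-hdp.hortonworks.com 61181   # a healthy ZooKeeper answers "imok"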
03-22-2018
03:18 AM
Thanks a lot! The following worked:
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Dhdp.version=2.6.3.0-235 -Ddruid.storage.storageDirectory=hdfs://`hostname -f`:8020/tmp/data/index/output -Ddruid.storage.type=hdfs -classpath /usr/hdp/current/druid-overlord/extensions/druid-hdfs-storage/*:/usr/hdp/current/druid-overlord/lib/*:/usr/hdp/current/druid-overlord/conf/_common:/etc/hadoop/conf/ io.druid.cli.Main index hadoop ./hadoop_index_spec.json
Mine is a sandbox, so I'm using `hostname -f`.
03-21-2018
11:14 PM
1 Kudo
What is the Ambari version? If Ambari was upgraded, did you do the post-upgrade tasks? A few things to check:
- the value of hbase.zookeeper.property.clientPort in Ambari;
- the "Metrics Service operation mode" in Ambari: embedded or distributed?
- could you stop AMS completely and make sure it's actually stopped by checking the process, for example with "ps aux | grep ^ams", then try starting it from Ambari?
- did you also give AMS enough heap?
03-19-2018
02:51 AM
The core-site.xml under /etc/hadoop/conf shows:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://sandbox-hdp.hortonworks.com:8020</value>
  <final>true</final>
</property>
So... I guess my config is OK? Do I need to add "druid.indexer.fork.property.druid.indexer.task.hadoopWorkingPath" in some property file and add it to the -cp?
03-15-2018
11:28 PM
Thank you, @Nishant Bangarwa. I sent those by email.
03-14-2018
08:05 AM
I read http://druid.io/docs/latest/ingestion/command-line-hadoop-indexer.html and tried the following command:
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Dhdp.version=2.6.3.0-235 -classpath /usr/hdp/current/druid-overlord/conf/_common:/usr/hdp/current/druid-overlord/lib/*:/etc/hadoop/conf io.druid.cli.Main index hadoop ./hadoop_index_spec.json
But this job fails with the following:
2018-03-14T07:37:06,132 INFO [main] io.druid.indexer.JobHelper - Deleting path[/tmp/druid/mmcellh/2018-03-14T071308.731Z_55fbb15cd4d4454885d909c870837f93]
2018-03-14T07:37:06,150 ERROR [main] io.druid.cli.CliHadoopIndexer - failure!!!!
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_151]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_151]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_151]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_151]
at io.druid.cli.CliHadoopIndexer.run(CliHadoopIndexer.java:117) [druid-services-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
at io.druid.cli.Main.main(Main.java:108) [druid-services-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
Caused by: io.druid.java.util.common.ISE: Job[class io.druid.indexer.IndexGeneratorJob] failed!
at io.druid.indexer.JobHelper.runJobs(JobHelper.java:389) ~[druid-indexing-hadoop-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
at io.druid.indexer.JobHelper.runJobs(JobHelper.java:369) ~[druid-indexing-hadoop-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
at io.druid.cli.CliInternalHadoopIndexer.run(CliInternalHadoopIndexer.java:131) ~[druid-services-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
at io.druid.cli.Main.main(Main.java:108) ~[druid-services-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
... 6 more
And the YARN application log shows "xxxx is not a valid DFS filename":
2018-03-14T07:31:41,369 ERROR [main] io.druid.indexer.JobHelper - Exception in retry loop
java.lang.IllegalArgumentException: Pathname /tmp/data/index/output/mmcellh/2014-02-11T10:00:00.000Z_2014-02-11T11:00:00.000Z/2018-03-14T07:13:08.731Z/0/index.zip.3 from hdfs://sandbox-hdp.hortonworks.com:8020/tmp/data/index/output/mmcellh/2014-02-11T10:00:00.000Z_2014-02-11T11:00:00.000Z/2018-03-14T07:13:08.731Z/0/index.zip.3 is not a valid DFS filename.
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:217) ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:480) ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:476) ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.3.2.6.3.0-235.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:491) ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:417) ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?]
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:930) ~[hadoop-common-2.7.3.2.6.3.0-235.jar:?]
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891) ~[hadoop-common-2.7.3.2.6.3.0-235.jar:?]
at io.druid.indexer.JobHelper$4.push(JobHelper.java:415) [druid-indexing-hadoop-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
...
https://github.com/druid-io/druid/pull/1121 looks similar, but that should have been fixed in HDP 2.6.3, so I'm wondering whether the classpath I'm using is correct.
Labels:
- Apache Hadoop
03-13-2018
04:20 AM
1 Kudo
This issue may happen when the Hive Metastore's 'DBS' table contains a location which doesn't have a port, for example 'hdfs://sandbox-hdp.hortonworks.com/apps/hive/warehouse/dummies.db'. I think the above is a valid location path, but when HS2 is restarted from Ambari, Ambari replaces not only this 'DBS' location but also all 'SDS' locations, for example like below:
old location: hdfs://sandbox-hdp.hortonworks.com:8020/apps/hive/warehouse/dummies.db/emp_part_bckt/department=A
new location: hdfs://sandbox-hdp.hortonworks.com:8020:8020/apps/hive/warehouse/dummies.db/emp_part_bckt/department=A
After that, the next time HiveServer2 is restarted you don't see this behaviour (because 'DBS' now has the port), but you still need to correct the 'SDS' locations.
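To see whether you are affected, you can query the Metastore DB directly; a sketch assuming a MySQL-backed Metastore whose database is named 'hive' (take a backup before running any UPDATE to fix rows):
mysql -u hive -p hive -e "SELECT DB_LOCATION_URI FROM DBS;"                              # any location missing the port?
mysql -u hive -p hive -e "SELECT LOCATION FROM SDS WHERE LOCATION LIKE '%:8020:8020%';"  # doubled-port rows that need correcting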
02-15-2018
06:48 AM
Hi @Slim, I'm seeing "Connected to Druid but could not retrieve datasource information" when I create a table. Would you have any idea where I should check?
01-25-2018
06:40 AM
Hi @Kuldeep Kulkarni, does this tutorial still work with Ambari 2.6?
01-24-2018
03:14 AM
The doc https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_spark-component-guide/content/config-sts-user-imp.html doesn't say Kerberos is required in Prerequisites, but do you know if Spark 1.6 impersonation requires Kerberos (unlike Hive)?
11-03-2017
01:33 AM
Shouldn't the below use the SPNEGO one?
hbase.thrift.keytab.file=/etc/security/keytabs/hbase.service.keytab
hbase.thrift.kerberos.principal=hbase/_HOST@HWX.COM
Otherwise, I couldn't make "hbase org.apache.hadoop.hbase.thrift.HttpDoAsClient" work from another node. Or am I missing something else?
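In other words, something like the below is what I would expect for the HTTP transport; the keytab path and HTTP/_HOST principal follow the usual SPNEGO convention on HDP, so treat this as my assumption:
hbase.thrift.keytab.file=/etc/security/keytabs/spnego.service.keytab
hbase.thrift.kerberos.principal=HTTP/_HOST@HWX.COM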
09-08-2017
02:49 AM
Figured it out: jar xvf 4811-custom-alerts.zip
09-08-2017
02:21 AM
Is it only me who can't extract the downloaded custom-alerts.zip?
08-15-2017
08:15 AM
It seems to work from my Mac...
hw11970:~ hosako$ curl -u admin:admin -X GET http://sandbox.hortonworks.com:6080/service/public/v2/api/servicedef/1
{"id":1,"guid":"0d047247-bafe-4cf8-8e9b-d5d377284b2d","isEnabled":true,"createTime":1473763848000,"updateTime":1473763848000,"version":1,"name":"hdfs","implClass":"org.apache.ranger.services.hdfs.RangerServiceHdfs","label":"HDFS...
08-11-2017
03:41 AM
If you are trying to update a specific config of an existing cluster without using the Web UI, Ambari has APIs and the configs.sh (or .py) script: https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations
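For example, a minimal configs.sh call; the Ambari host, cluster name, and property below are placeholders for illustration:
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari.example.com MyCluster core-site "fs.trash.interval" "360"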
08-10-2017
11:00 PM
It says "TEZ gets an information from HCat". Does this mean Tez uses the HCat client library to talk to the Hive Metastore to get that information?
08-10-2017
06:45 AM
1 Kudo
That would be a good enhancement... I don't think you can update an existing cluster by posting a new blueprint to the same Ambari; you would need to rebuild your cluster.
08-04-2017
01:23 AM
Thank you very much for the useful information. Could I ask whether autogather also updates column-level stats?
06-07-2017
02:03 AM
Hi @Kuldeep Kulkarni, thanks for this article. I had a customer who hit an issue because, when Maintenance Mode is ON, Ambari *silently* fails to change the config. Could you update this article to note that Maintenance Mode must be OFF, please?
06-05-2017
08:30 AM
Does this work with Zeppelin 0.7.0?