Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3560 | 05-03-2017 05:13 PM |
| | 2935 | 05-02-2017 08:38 AM |
| | 3184 | 05-02-2017 08:13 AM |
| | 3147 | 04-10-2017 10:51 PM |
| | 1624 | 03-28-2017 02:27 AM |
08-20-2016 02:14 PM
2 Kudos
You can do it, but it is not supported by Hortonworks; you'd need to follow the Spark Standalone instructions. It defeats the purpose, though, as you lose the benefit of Ambari provisioning and the security features (no Kerberos unless you run Spark on YARN), and you get no support from the vendor. You will also have to package and deploy everything yourself. All that said, here's the 2.0 guide: https://spark.apache.org/docs/latest/spark-standalone.html Our doc does not say it is possible; it says we do not support it.
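For reference, a minimal sketch of what the standalone route looks like per that guide; $SPARK_HOME and the host names are placeholders, and the sbin scripts come from the Apache Spark distribution itself, not from HDP:

```
# on the master host: start the standalone master (web UI on port 8080 by default)
$SPARK_HOME/sbin/start-master.sh

# on each worker host: register the worker with the master's spark:// URL
$SPARK_HOME/sbin/start-slave.sh spark://master-host:7077
```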
08-20-2016 01:45 PM
1 Kudo
HDP 2.5 will contain Hadoop 2.7.3, not 2.8. If you need to apply a patch, please involve Hortonworks support; if you are a customer, HWX can release a patch for you, where technically possible given the specifics of the JIRAs. If you don't have support, you can certainly do it yourself, but test it first. All that said, I would check the release notes for HDP 2.5; we may be backporting some 2.8 JIRAs to the 2.7.x branch.
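To confirm which Hadoop build your HDP release actually ships, you can check on any cluster node; hdp-select ships with HDP installs:

```
hadoop version        # prints the Hadoop release plus the HDP build suffix
hdp-select versions   # lists the HDP stack versions installed on this node
```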
08-17-2016 06:48 PM
@Shun Takebayashi please upgrade to HDP 2.3.6, as that release has fixes for MirrorMaker and is supported by HWX.
08-16-2016 11:02 PM
The path is /tmp/mahi_dev/data. Lowercase "data", my friend, and you ran it with uppercase.
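HDFS paths are case-sensitive, so a quick listing of the parent directory (the one from this thread) shows the actual casing:

```
hdfs dfs -ls /tmp/mahi_dev   # reveals whether the child directory is "data" or "Data"
```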
08-16-2016 10:24 PM
Well, there's your answer: you did not upload the file, or you placed it in the wrong directory.

```
# create the target directory, then upload the file into it
hdfs dfs -mkdir /tmp/mahi_dev
hdfs dfs -mkdir /tmp/mahi_dev/Data
hdfs dfs -put count.txt /tmp/mahi_dev/Data/
```
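A quick way to verify the upload landed where expected:

```
hdfs dfs -ls /tmp/mahi_dev/Data/   # count.txt should appear here after the -put
```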
08-16-2016 10:16 PM
1 Kudo
You need to click on each component, then in the top right-hand corner select Actions > Turn Off Maintenance Mode. Don't do it for the secondary NameNode, as that one is in maintenance mode on purpose; this is a single-node machine.
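If you'd rather script the toggle, Ambari exposes the same maintenance_state flag over its REST API; a sketch only, assuming an admin login and a cluster, host, and component named mycluster, node1, and DATANODE (all placeholders):

```
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Turn off maintenance mode"},"Body":{"HostRoles":{"maintenance_state":"OFF"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/hosts/node1/host_components/DATANODE
```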
08-16-2016 02:03 PM
1 Kudo
Try passing the --schema schemaname argument; alternatively, specify the database and schema in your JDBC URL.

```
sqoop import ... --table custom_table -- --schema custom_schema
```
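For the JDBC-URL route, the property name is driver-specific; a sketch for PostgreSQL, whose driver accepts currentSchema (host, port, database, and credentials here are placeholders):

```
sqoop import \
  --connect "jdbc:postgresql://db-host:5432/custom_db?currentSchema=custom_schema" \
  --username myuser -P \
  --table custom_table \
  --target-dir /tmp/custom_table
```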
08-16-2016 01:34 PM
1 Kudo
Please match the dependency versions in your pom.xml to your cluster; I noticed you're using a Storm 2.0 dependency.
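A quick way to see which Storm artifacts Maven actually resolves for your project; the -Dincludes filter narrows the tree to the org.apache.storm group:

```
mvn dependency:tree -Dincludes=org.apache.storm
```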
08-16-2016 11:40 AM
Can you try running the Pig service check? It seems it fails when running the service check via the Pig View. Once it passes, run a Pig script in grunt as well, and then try creating another view. After all those checks, make sure to review the documentation for setting up the Pig View: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_ambari_views_guide/content/ch_using_pig_view.html
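As a quick smoke test outside the view, you can run a tiny script through the pig client directly; the sample file and load path below are just illustrative:

```
hdfs dfs -put /etc/passwd /tmp/passwd
pig -e "A = LOAD '/tmp/passwd' USING PigStorage(':'); B = LIMIT A 5; DUMP B;"
```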