Posts: 1973
Kudos Received: 1225
Solutions: 124

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1842 | 04-03-2024 06:39 AM |
| | 2860 | 01-12-2024 08:19 AM |
| | 1581 | 12-07-2023 01:49 PM |
| | 2345 | 08-02-2023 07:30 AM |
| | 3233 | 03-29-2023 01:22 PM |
07-28-2016
07:41 AM
1 Kudo
Toad is a freeware tool from Dell, available for OS X and Windows. It supports Kerberos, Hive, an HDFS explorer, SQL, export to CSV/XLS, charting, and logging. The documentation only mentions support up to HDP 2.3, but most features worked well and fast for me against the HDP 2.4.2 Sandbox. The tool is smart about auto-detecting settings through Ambari; for the Sandbox, point it at the Ambari URL sandbox.hortonworks.com:8080 and log in with an Ambari user. Once configured, you can query Hive tables through Spark SQL or Hive, and the performance is quite good. The tool provides a nice view of charts and logs, and you can also browse HDFS files.
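For a sanity check outside Toad, the same Hive tables can be queried through Spark SQL on the Sandbox directly. A minimal sketch, assuming Spark 1.6 with a HiveContext and the sample_07 demo table that ships with the Sandbox (substitute your own table if it was removed):

```scala
// Run in spark-shell on the HDP 2.4.2 Sandbox (Spark 1.6).
// HiveContext reads the same metastore Toad connects to via Ambari discovery.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// sample_07 is one of the demo tables bundled with the Sandbox.
hiveContext.sql("SELECT code, description, salary FROM sample_07 LIMIT 10").show()
```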
07-27-2016
11:21 PM
2 Kudos
A few more GUI tools:
- https://marketplace.eclipse.org/content/hbase-plugin-eclipse
- https://github.com/NiceSystems/hrider
07-27-2016
07:04 PM
1 Kudo
I am wondering whether Cloudbreak or another tool can easily set up an HDF cluster on AWS?
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)
07-27-2016
04:10 PM
Seven months ago it only supported Spark 1.4, and we are running 1.6. http://hortonworks.com/blog/magellan-geospatial-analytics-in-spark/ The GitHub page says 1.4+, not 1.4 only: https://github.com/harsha2010/magellan
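A quick way to confirm which Spark version the Zeppelin interpreter is actually bound to before pulling in Magellan:

```scala
// Prints the running Spark version, e.g. "1.6.1" on an HDP 2.4.x cluster
// (the exact value depends on your install).
println(sc.version)
```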
07-27-2016
04:08 PM
I am running the Magellan: Geospatial Analytics on Spark Zeppelin notebook, and when I get to the Magellan context step I hit this exception:

```
val magellanContext = new MagellanContext(sc)
java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/DataSourceStrategy$
at org.apache.spark.sql.magellan.MagellanContext$anon$1.<init>(MagellanContext.scala:35)
at org.apache.spark.sql.magellan.MagellanContext.<init>(MagellanContext.scala:32)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:59)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:64)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:66)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:68)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:70)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:72)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:74)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:76)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:78)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:80)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:82)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:84)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:86)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:88)
at $iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:90)
at $iwC$iwC$iwC$iwC$iwC.<init>(<console>:92)
at $iwC$iwC$iwC$iwC.<init>(<console>:94)
at $iwC$iwC$iwC.<init>(<console>:96)
at $iwC$iwC.<init>(<console>:98)
at $iwC.<init>(<console>:100)
at <init>(<console>:102)
at .<init>(<console>:106)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:709)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:673)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:666)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:295)
at org.apache.zeppelin.scheduler.Job.run(Job.java:171)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```

The paragraph errors out after about 2 seconds. All the other actions before that, including the imports and the z.dep step, work fine.
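For reference, the dependency-loading step mentioned above looks roughly like this in a Zeppelin %dep paragraph. The artifact coordinate below is an assumption based on Magellan's spark-packages listing; the notebook's own z.load cell is authoritative:

```scala
// %dep paragraph, run before any Spark paragraphs.
z.reset()
// Hypothetical coordinate: a Magellan build published for Spark 1.4 / Scala 2.10.
// The NoClassDefFoundError above is consistent with such a build referencing
// DataSourceStrategy under org.apache.spark.sql.sources, which moved to a
// different package in later Spark releases.
z.load("harsha2010:magellan:1.0.3-s_2.10")
```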
Labels:
- Apache Zeppelin
07-25-2016
06:49 PM
Can you share the flow XML?
07-25-2016
06:46 PM
2 Kudos
Hadoop clusters are made up of NameNodes and DataNodes, and a number of servers and services directly use HDFS and those nodes, so they need to be bundled tightly. NiFi really is separate: it is edge software that can run near cars, sensors, or industrial devices. It makes more sense to keep it on its own separate cluster, since it has its own clustering and doesn't use the NameNode, ZooKeeper, or the rest of the Hadoop infrastructure. It works well writing to HDFS and other Hadoop services, but it also works well with Azure, AWS, JMS, MQTT, and other non-Hadoop sources.
07-25-2016
06:20 PM
1 Kudo
You can use Ambari to install NiFi (https://github.com/abajwa-hw/ambari-nifi-service), but it really needs to be its own cluster, as it's not part of the Apache Hadoop stack. I like to think of HDP and HDF as peanut butter and jelly: awesome together.
07-25-2016
01:35 PM
Here are some NiFi XSLT references:
- https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.TransformXml/
- http://stackoverflow.com/questions/6104698/use-xsl-to-convert-an-xml-file-to-name-value-pair
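At its core, TransformXml applies an XSLT stylesheet to each FlowFile, so you can prototype a stylesheet (e.g. XML to name-value pairs, per the Stack Overflow link) before wiring it into the processor. A minimal sketch using the JDK's built-in javax.xml.transform API; the file names are placeholders:

```scala
// Apply an XSLT stylesheet to an XML document: the same basic operation
// NiFi's TransformXml processor performs on FlowFile content.
import javax.xml.transform.TransformerFactory
import javax.xml.transform.stream.{StreamResult, StreamSource}

object XsltSketch {
  def main(args: Array[String]): Unit = {
    val transformer = TransformerFactory.newInstance()
      .newTransformer(new StreamSource("to-name-value.xsl")) // placeholder stylesheet
    transformer.transform(
      new StreamSource("input.xml"),    // placeholder XML input
      new StreamResult(System.out))     // write transformed output to stdout
  }
}
```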