Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 4022 | 08-20-2018 08:26 PM |
|  | 1927 | 08-15-2018 01:59 PM |
|  | 2357 | 08-13-2018 02:20 PM |
|  | 4070 | 07-23-2018 04:37 PM |
|  | 4987 | 07-19-2018 12:52 PM |
04-12-2016
02:06 PM
5 Kudos
In addition to the UnpackContent processor suggested by @Chris Gambino, the CompressContent[1] processor has a "decompress" option which works on these compression formats:
- use mime.type attribute
- gzip
- bzip2
- xz-lzma2
- lzma
- snappy
- snappy framed

[1] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.CompressContent/index.html
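For illustration, here is a minimal stand-alone Java sketch (not NiFi code) of roughly what the "decompress" mode amounts to for the gzip format, using only the JDK; the file names are placeholders.

```java
// Hypothetical stand-alone example: streaming gzip decompression with the JDK,
// roughly what CompressContent's "decompress" mode does for the gzip format.
// "flowfile.gz" and "flowfile" are placeholder paths, not NiFi APIs.
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.GZIPInputStream;

public class GzipDecompressExample {
    public static void main(String[] args) throws Exception {
        try (InputStream in = new GZIPInputStream(Files.newInputStream(Paths.get("flowfile.gz")));
             OutputStream out = Files.newOutputStream(Paths.get("flowfile"))) {
            byte[] buffer = new byte[8192];
            int read;
            // Stream decompressed bytes to the output instead of buffering the whole file.
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
```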
04-12-2016
08:19 AM
Hi @Sunile Manjee Cloudbreak does not support launching machines on vSphere, but you can implement a vSphere module against the SPI interfaces and then you will be able to launch machines. For more details, check the existing modules in the source. vSphere appears to be supported by OpenStack: http://docs.openstack.org/kilo/config-reference/content/vmware.html So if you install OpenStack on top of vSphere, you will be able to launch instances with Cloudbreak (this has not been tested). Br, R
04-09-2016
12:56 AM
1 Kudo
Use fsck; it's the tool of choice for managing HDFS. "Orphans" are corrupted files (files with missing blocks) in HDFS lingo. You can use the "-move" or "-delete" options to move corrupted files to /lost+found or to delete them. fsck will also report under-replicated blocks (blocks with at least one replica but fewer than the configured replication factor), but HDFS will repair those on its own over time by creating the missing replicas.
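If you want to do a similar check programmatically rather than via fsck, a rough sketch using the Hadoop FileSystem API might look like the following; the class name and the args[0] path are placeholders, and fsck remains the authoritative tool.

```java
// Illustrative only: walk a directory tree and report blocks that have fewer
// visible replicas than the file's configured replication factor.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class UnderReplicatedScan {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        RemoteIterator<LocatedFileStatus> files = fs.listFiles(new Path(args[0]), true);
        while (files.hasNext()) {
            LocatedFileStatus status = files.next();
            for (BlockLocation block : status.getBlockLocations()) {
                // Fewer hosts than the replication factor: under-replicated.
                // Zero hosts: the block is missing and fsck would report the file as corrupt.
                if (block.getHosts().length < status.getReplication()) {
                    System.out.printf("%s: block at offset %d has %d of %d replicas%n",
                            status.getPath(), block.getOffset(),
                            block.getHosts().length, (int) status.getReplication());
                }
            }
        }
    }
}
```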
04-08-2016
06:59 AM
Hi @Sunile Manjee, have you seen the answers? Please consider accepting/upvoting the helpful ones.
05-30-2016
09:36 AM
Thank you so much for your answer. I had the same problem this morning and thought it was a proxy issue, but it turned out I had an empty note in Zeppelin. I hope this small problem will be fixed in the next versions of Zeppelin Notebook. When I check the status with zeppelin-daemon.sh, it tells me "Zeppelin running but process is dead". Does anyone know the reason for this error?
05-27-2016
02:24 AM
Not supported and not possible are two different things. The docs have a clear example.
03-30-2016
08:59 PM
3 Kudos
I don't see any problem on the HDFS side. MapReduce uses UTF-8 when writing text. If the user needs a different encoding, she has to extend the input/output format.
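As an example of one common workaround for single-byte encodings, a hypothetical mapper could decode the raw bytes of each line with the source charset and re-encode them as UTF-8 by building a new Text; the class name and the choice of ISO-8859-1 are assumptions, not part of the original answer.

```java
// Hypothetical sketch: the default line reader copies raw bytes into Text
// without charset conversion, so the mapper can decode them with the source
// charset (ISO-8859-1 assumed here) and emit proper UTF-8.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Latin1ToUtf8Mapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable offset, Text value, Context context)
            throws IOException, InterruptedException {
        // Only the first getLength() bytes of the backing array are valid.
        String line = new String(value.getBytes(), 0, value.getLength(), StandardCharsets.ISO_8859_1);
        // Text always stores UTF-8, so the emitted key is UTF-8 encoded.
        context.write(new Text(line), offset);
    }
}
```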
03-30-2016
06:25 PM
@Mats Johansson Understood. However, the core of services like Hive/Pig uses MapReduce. Do those have the same constraints for node labeling? It seems node labeling is only applicable to Storm/Spark/Kafka/HBase/etc., i.e. services which do not use MapReduce as their engine.
07-07-2016
02:52 AM
@Frank Welsch Yes.
05-23-2016
03:09 PM
@Enis The main reason for opening this post was HBASE-10201. That feature allows flushing based on column-family size: if a CF's memstore exceeds DEFAULT_HREGION_MEMSTORE_PER_COLUMN_FAMILY_FLUSH, it flushes only those families. Your thoughts?
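For illustration only, here is a toy Java sketch of the per-family flush decision described above; this is not HBase source, and the threshold constant is made up rather than an HBase default.

```java
// Toy illustration of a per-column-family flush decision (not HBase code).
import java.util.Map;

public class PerColumnFamilyFlushSketch {
    // Made-up lower bound; the real HBase default should be checked in the docs.
    static final long FLUSH_LOWER_BOUND_BYTES = 16L * 1024 * 1024;

    // Flush only the families whose memstore crossed the bound, rather than
    // flushing every family in the region.
    static void decide(Map<String, Long> memstoreBytesByFamily) {
        memstoreBytesByFamily.forEach((family, bytes) -> {
            if (bytes > FLUSH_LOWER_BOUND_BYTES) {
                System.out.println("flush " + family + " (" + bytes + " bytes)");
            } else {
                System.out.println("skip  " + family + " (" + bytes + " bytes)");
            }
        });
    }

    public static void main(String[] args) {
        decide(Map.of("cf_large", 64L * 1024 * 1024, "cf_small", 512L * 1024));
    }
}
```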