Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3173 | 12-25-2018 10:42 PM |
| | 14192 | 10-09-2018 03:52 AM |
| | 4763 | 02-23-2018 11:46 PM |
| | 2481 | 09-02-2017 01:49 AM |
| | 2912 | 06-21-2017 12:06 AM |
06-20-2016
10:43 AM
If a Sqoop MapReduce job fails, there will be no output. Also, for incremental import jobs, the state for the "append" and "lastmodified" modes is not updated; it is updated only after a successful import.
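As a minimal sketch of that bookkeeping (requires a live cluster; the host, database, table, and column names here are all hypothetical):

```shell
# Hypothetical incremental import; connection details and names are placeholders.
sqoop import \
  --connect jdbc:mysql://db.example.com/shop \
  --username sqoop_user -P \
  --table orders \
  --target-dir /data/orders \
  --incremental lastmodified \
  --check-column updated_at \
  --last-value "2016-06-01 00:00:00"
# Only on success does Sqoop report the new --last-value to use next time;
# with a saved job (sqoop job --create ... -- import ...) that value is
# stored and advanced automatically after each successful run.
```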
06-20-2016
09:55 AM
Option 1, reformat: you would need not only to "copyFromLocal" the data back in, but also to recreate the file system; see for example this for details. Option 2, exit safe mode and find out where you are; I'd recommend this one. You can also find out what caused the trouble, maybe all corrupted blocks are on a bad disk or something like that. You can share the list of files if you are uncertain whether to restore them or not.
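For the second option, the usual sequence looks something like the following (these commands need a running cluster; the drill-down path is illustrative):

```shell
# Leave safe mode, then survey the damage before deciding what to restore.
hdfs dfsadmin -safemode leave
# List only the paths that have corrupt blocks:
hdfs fsck / -list-corruptfileblocks
# Drill into a specific path for per-block and per-location detail:
hdfs fsck /user/oozie -files -blocks -locations
```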
06-20-2016
09:01 AM
No, not recommended, because the new version might not interoperate well with other components. That's why HDP is released as "distributions": a line-up of components tuned and tested to work together. That said, some stand-alone components like Knox or Flume can be upgraded separately, but only manually; such upgrades are not supported by Ambari or by Hortonworks.
06-20-2016
08:28 AM
1 Kudo
Do you really need 3 masters on such a small cluster? How about 2 masters, putting the 3rd ZooKeeper on the Kafka node or on one worker node, and having 5 worker nodes? I always prefer more computing power over the "book-keeping" of master nodes. You can also put Knox and Flume on the edge node, if you use them, of course. Then distribute the other master services across the 2 masters: Ambari on one, AMS collector on the other; NameNode on one, ResourceManager on the other (or on both if you want HA). And yes, you can move most master services later using Ambari.
06-20-2016
07:57 AM
1 Kudo
No, I mean: after HDFS is back, you detect corrupted files in HDFS and restore the important ones, like those in /hdp and /user/oozie/share/lib. You can also reformat and start afresh, but wait to see which files, and how many of them, are damaged. Regarding your worries about machines rebooting: theoretically yes, lightning can strike from a blue sky, but usually it doesn't 🙂 and when machines reboot they do so one at a time. That's why you do NN HA, back up NN metadata, avoid cheap servers, use a UPS, etc.
06-19-2016
09:54 PM
2 Kudos
If you can afford it, then definitely on a separate server, to avoid potential bad influence from busy Hadoop master components. It is also recommended to have at least one slave KDC which can become master KDC if needed. You can find details here. KDCs can run on VMs.
06-19-2016
10:29 AM
Pig on Spark still appears to be under development, tracked as PIG-4059, with more than 80% of its sub-tasks completed. The source code is here. On the other hand, Spork appears to be abandoned.
06-19-2016
01:17 AM
Your issue is described here. Yes, you can stop your DataNode, delete the dncp_block_verification log files, and restart the DataNode. The issue was fixed in Hadoop 2.7, but I guess you are using an older version.
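A sketch of that cleanup follows; the data directory is an assumption (check your `dfs.datanode.data.dir` setting), and on an Ambari-managed cluster you would stop and start the DataNode from Ambari instead:

```shell
# Stop the DataNode, remove the block-scanner logs, start it again.
# /hadoop/hdfs/data is a hypothetical dfs.datanode.data.dir value.
su - hdfs -c "hadoop-daemon.sh stop datanode"
rm -f /hadoop/hdfs/data/current/dncp_block_verification.log.curr \
      /hadoop/hdfs/data/current/dncp_block_verification.log.prev
su - hdfs -c "hadoop-daemon.sh start datanode"
```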
06-19-2016
12:55 AM
1 Kudo
For troubleshooting this and other potential Kafka upgrade issues please see this.
06-18-2016
02:25 AM
Glad to hear it works! By the way, what type is "shape"? According to the docs, ST_AsGeoJson(geometry) returns the GeoJSON representation of a geometry.
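As a quick sanity check, assuming the ESRI spatial framework UDFs are registered in your Hive session (exact output formatting may differ between versions):

```shell
# Build a point and serialize it; needs Hive with the spatial UDFs on the classpath.
hive -e "SELECT ST_AsGeoJson(ST_Point(1.0, 2.0));"
# Should print a GeoJSON string along the lines of
# {"type":"Point","coordinates":[1.0,2.0]}
```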