Member since: 04-04-2018
Posts: 80
Kudos Received: 32
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 8468 | 10-28-2017 05:13 AM |
09-08-2016
04:25 PM
3 Kudos
@Nilesh You can use Linux-based encryption on your disks, which gives you encryption of any HDFS data stored on those encrypted filesystems. While the performance is good, it is not as flexible. You should read this: https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html When you use Hadoop Transparent Data Encryption, you have the ability to selectively encrypt data: you can use different keys for different encryption zones, which gives you finer-grained access controls. I also recommend you take a look at this: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hdfs_admin_tools/content/hdfs-encryption-overview.html
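To make that concrete, here is a minimal sketch of setting up an encryption zone, assuming a Hadoop KMS is already configured; the key name and path are placeholders:

```bash
# Create an encryption key in the configured KMS (key name is an example)
hadoop key create mykey

# Create an empty directory and turn it into an encryption zone keyed by 'mykey'
# (-createZone must be run as the HDFS superuser on an empty directory)
hdfs dfs -mkdir /data/secure
hdfs crypto -createZone -keyName mykey -path /data/secure

# Verify the zone exists
hdfs crypto -listZones
```

Files written under /data/secure are then encrypted transparently, and a different key can be used for each zone.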
07-24-2017
02:42 PM
For a comparison between compression formats take a look at this link: http://comphadoop.weebly.com/
09-04-2018
03:22 PM
Hi, this looks like FIFO scheduling, or capacity scheduling with only a single queue. Try switching to fair scheduling in YARN. Regards, Volker
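For reference, a sketch of the yarn-site.xml change that switches the ResourceManager to the Fair Scheduler (on an Ambari-managed cluster you would make this change through Ambari rather than editing the file directly):

```xml
<!-- yarn-site.xml: replace the default scheduler with the Fair Scheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```

The ResourceManager has to be restarted for the change to take effect.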
03-09-2016
03:08 PM
3 Kudos
Looks like others have reported the same problem before. See http://grokbase.com/t/cloudera/hue-user/137axc7mpm/upload-file-over-64mb-via-hue and https://issues.cloudera.org/browse/HUE-2782. I do agree with HUE-2782's "we need to intentionally limit upload file size": this is a web app, and it probably isn't the right tool once files reach a certain size. Glad to hear "hdfs dfs -put" is working fine. On the flip side, I did test this out a bit with the HDFS Files "Ambari View" that ships with the 2.4 Sandbox, and as the screenshot shows, user maria_dev was able to load an 80MB file to her home directory via the web interface, as well as a 500+MB file. I'm sure this Ambari View also has some upper limit. Maybe it is time to start thinking about moving from Hue to Ambari Views?
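For anyone landing here with the same problem, the command-line route is not subject to the web UI's limit (the local path below is a placeholder):

```bash
# Upload a large local file straight to maria_dev's HDFS home directory
hdfs dfs -put /tmp/large-file.bin /user/maria_dev/
```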
03-08-2016
02:49 PM
1 Kudo
Thanks Dave. The issue has been resolved after making the NameNode where Hue is installed the active NameNode.
03-03-2016
07:09 AM
1 Kudo
@Alan Gates This is continued from the previous post: I have made the required changes in hive-site.xml on the datanode, but when I restart the Hive service from Ambari the changes are not reflected in hive-site.xml; it reverts to the previous working configuration.
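This is expected Ambari behavior: Ambari regenerates hive-site.xml from its own configuration database on every service restart, so edits made directly on the node are overwritten. The change has to be made in Ambari itself, via the web UI or, as a sketch, the configs.sh helper that ships with Ambari Server (host, cluster name, property, and value below are all placeholders):

```bash
# Sketch: set a hive-site property through Ambari so it survives restarts
# (replace the host, cluster name, property, and value with your own)
/var/lib/ambari-server/resources/scripts/configs.sh set ambari.example.com MyCluster hive-site hive.exec.parallel true
```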
03-23-2017
03:54 PM
The command to get YARN logs is: yarn logs -applicationId <applicationId>
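A quick usage sketch (the application ID below is a placeholder; YARN log aggregation must be enabled for this to return anything):

```bash
# Find the application ID of a finished job
yarn application -list -appStates FINISHED

# Fetch its aggregated container logs (placeholder ID)
yarn logs -applicationId application_1490000000000_0001
```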
01-07-2016
09:40 AM
Thank you Deepesh and Neeraj. The Hive stuck issue has been resolved by providing more YARN resources. Thank you once again for your kind help. Regards, Nilesh
02-08-2016
01:06 PM
Our best practice is to have a dedicated Ambari Views server (standalone), separate from the Ambari Server node. You can add more Ambari Views servers, by the way. Hue is in the same situation: if it goes down, the impact is felt by everyone. @Saurabh Kumar please refer to the Ambari Views user guide: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_views_guide/content/ch_using_ambari_views.html
12-16-2015
07:06 PM
In HDFS, the NameNode metadata consists of fsimage files (checkpoints of the entire file system state) and edit logs (a sequence of transactions to be applied that alter the base file system state represented in the most recent checkpoint).

There are various consistency checks performed by the NameNode when it reads these metadata files. The error message indicates that one of these consistency checks has failed. Specifically, the NameNode separately tracks the last known transaction ID that was previously present in edit logs in another file named seen_txid. If the transaction ID recorded in this file is not available in the edit logs when the NameNode is trying to load metadata at startup, then it aborts.

It's difficult to say exactly how this could have happened in your environment without a deep review of configuration, logs and operations procedures. A potential explanation would be if the NameNode metadata was restored from a backup, and that backup contained the most recent fsimage (the checkpoint) but did not include the edit logs (the subsequent transactions).

You might be interested in these additional resources that give further explanation of the NameNode metadata and suggestions on a possible backup plan. http://hortonworks.com/blog/hdfs-metadata-director... https://community.hortonworks.com/questions/4694/p...
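If you do want to inspect the metadata yourself, Hadoop ships offline viewers for both file types; a sketch, using example file names from a typical NameNode current/ directory:

```bash
# seen_txid records the last transaction ID the NameNode expects to find in the edits
cat /hadoop/hdfs/namenode/current/seen_txid

# Dump a checkpoint to XML with the Offline Image Viewer (file name is an example)
hdfs oiv -p XML -i fsimage_0000000000000042000 -o fsimage.xml

# Dump an edit-log segment to XML with the Offline Edits Viewer (file name is an example)
hdfs oev -i edits_0000000000000042001-0000000000000043000 -o edits.xml
```

Comparing seen_txid with the transaction range covered by the fsimage and edits files is a quick way to confirm the kind of gap described above.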