Member since: 12-10-2015
Posts: 18
Kudos Received: 1
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2216 | 02-27-2018 07:07 AM
| 2274 | 02-13-2018 09:00 AM
| 1119 | 11-21-2016 02:53 PM
02-27-2018
07:07 AM
Hi csguna,

Navigator audit covers the Hadoop master roles only, and the hdfs shell commands act as a regular HDFS client from the NameNode's perspective. On the NameNode side, where the HDFS audit logs are generated, it is not possible to determine why a client wants to read a file. The only thing the NameNode knows, and can log, is that a client/user wants to open and read a file; there is no information about what the client will actually do with the data. The client could save the data to a local disk, send it to a network service, simply display the contents of the file, run an ordinary ETL job and write the results back to HDFS, and so on. That is why an "open" operation is logged for both 'hadoop fs -cat size.log' and 'hadoop fs -get size.log'.

Therefore this is not currently possible with Navigator Audit, because the knowledge of what the client does with the data read from HDFS is missing. There are usually ways on the OS level itself to audit what users and processes do (such as the Linux audit framework), and those can be used to audit file access at the OS level. It might be possible to combine audit data from the OS and from Navigator to pinpoint the operations you mentioned, but I do not know of any automated way to do that.
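As a minimal sketch of the OS-level approach mentioned above: the Linux audit framework can watch read access to a specific local file. The file path and the rule key below are hypothetical example values, and the commands assume auditd is installed and you are running as root.

```shell
# Watch read accesses to a local file (path and key name are example values;
# requires auditd installed and root privileges).
auditctl -w /data/local/size.log -p r -k local_file_read

# Later, search the audit log to see which users/processes actually read it:
ausearch -k local_file_read --interpret
```

A rule like this can also be made persistent by placing it in a rules file under /etc/audit/rules.d/. Note that this audits local filesystem access on one host, not HDFS reads, which is why correlating it with Navigator audit data would still be a manual exercise.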
02-13-2018
09:00 AM
Hi,

Support for device UUIDs in navencrypt was introduced in version 3.13.0: https://www.cloudera.com/documentation/enterprise/release-notes/topics/rg_navigator_encrypt_new_features.html#nav_encrypt_313

Please check the relevant documentation on how to use it here: https://www.cloudera.com/documentation/enterprise/latest/topics/navigator_encrypt_prepare.html#concept_device_uuids

Regards,
Gabor Zele
Customer Operations Engineer
11-21-2016
02:53 PM
1 Kudo
Hi keagles,

Manual failover can still be useful in some cases. For example, if you would like to do some HW/OS maintenance on the host of the active NameNode, you can fail over manually to the other NN without disrupting running processes on the cluster. You can also roll out configuration changes to the NNs the same way: change the config, restart the standby NN (it will come up with the new configuration), fail over, then restart the other one. This way you update your NNs' settings without needing a full cluster stop. Neither of these is possible with a non-HA configuration.

cheers,
zegab
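The manual failover step above can be sketched with the standard HDFS HA admin commands. The service IDs "nn1" and "nn2" are assumed example names taken from dfs.ha.namenodes.&lt;nameservice&gt; in your configuration; substitute your own.

```shell
# Check which NameNode is currently active ("nn1"/"nn2" are example
# service IDs from dfs.ha.namenodes.<nameservice>, assumed names).
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Manually fail over from nn1 to nn2 before doing maintenance
# or a rolling config restart on nn1's host:
hdfs haadmin -failover nn1 nn2
```

After the failover completes, nn2 is active and nn1 can be restarted (for example with a new configuration) without interrupting cluster clients.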
11-21-2016
02:36 PM
Hi WhiteWizard,

This is really unexpected. It seems that your supervisord processes are in a bad state for some reason. Have you tried restarting them? Or rebooting the host/OS, by any chance?

cheers,
zegab