
Namenode and Datanode Plugin

Explorer

Hi All,

 

I was going through the namenode and datanode configuration and found that there is a way to add plugins to both of them.

I did some searching but could not find any relevant information. Does anyone know what a namenode plugin and a datanode plugin are?

 

Thanks

Sebin


5 REPLIES

Could you share where you read this? It would give us some context.

Regards,
Gautam Gopalakrishnan

Explorer
If you go to the HDFS configuration and search for "plugin", two configuration properties come up:
- dfs.namenode.plugins.list
- dfs.datanode.plugins.list

What are these two?

Guru

I'm not aware of any examples of such plugins, but I found some details by looking through the code.

 

Plugins should implement the org.apache.hadoop.util.ServicePlugin Java interface, and you can find the code for that here: https://github.com/cloudera/hadoop-common/blob/cdh5-2.6.0_5.4.7/hadoop-common-project/hadoop-common/... It says "Service plug-ins may be used to expose functionality of datanodes or namenodes using arbitrary RPC protocols". Basically, the service reads in the configured class name(s), instantiates them, and calls each plugin's start() method, passing it a reference to the service. The plugin can then do whatever it wants, and the service later calls stop() on it when things are shutting down.
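
To make that concrete, here is a rough sketch of what such a plugin could look like. The package, class name and messages below are made up for illustration, and I haven't tried running this against a real NameNode, so treat it as a sketch rather than a tested example:

package com.example.hdfs;   // hypothetical package for this example

import org.apache.hadoop.util.ServicePlugin;

// Minimal plugin that just logs the lifecycle calls it receives.
public class LoggingServicePlugin implements ServicePlugin {

  @Override
  public void start(Object service) {
    // "service" is the NameNode or DataNode instance the plugin was attached to.
    System.out.println("Plugin started inside " + service.getClass().getName());
  }

  @Override
  public void stop() {
    // Called when the NameNode/DataNode is shutting down.
    System.out.println("Plugin stopped");
  }

  @Override
  public void close() {
    // ServicePlugin extends java.io.Closeable, so close() has to be implemented too.
    System.out.println("Plugin closed");
  }
}

You would then put the jar on the NameNode/DataNode classpath and list the fully qualified class name in the corresponding plugin property (I believe the underlying hdfs-site.xml keys are dfs.namenode.plugins and dfs.datanode.plugins, each taking a comma-separated list of class names).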

 

There are other ways to write plugins for Hadoop. Sentry's HDFS support is implemented as a plugin, for example, but it's a more specific authorization plugin rather than a class that's simply started and stopped along with the service.

 

Hope that helps!

 

Explorer
I think this is the answer to the question.
Can I ask whether this is already implemented by any classes? I would love to get some more info on this.

Thanks, Sean

Guru

I had a quick search around on GitHub for anything that implements this, and couldn't find anything, I'm afraid. I'm pretty sure Cloudera's product line doesn't include any such plugins...