Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7118 | 09-17-2018 06:33 AM |
| | 1672 | 08-29-2018 07:48 AM |
| | 2574 | 08-28-2018 12:38 PM |
| | 1970 | 08-03-2018 05:42 AM |
| | 1833 | 07-27-2018 04:00 PM |
08-03-2018
07:25 AM
2 Kudos
Thank you @Sindhu and @Rakesh S. I did a root cause analysis and found that our server is hosted in AWS, which is a public cloud, and we had not set up Kerberos or firewalls. On the nodes I can find the process running with w.conf:

yarn 21775 353 0.0 470060 12772 ? Ssl Aug02 5591:25 /var/tmp/java -c /var/tmp/w.conf

Within /var/tmp I can see a config.json which contains:

{
"algo": "cryptonight", // cryptonight (default) or cryptonight-lite
"av": 0, // algorithm variation, 0 auto select
"background": true, // true to run the miner in the background
"colors": true, // false to disable colored output
"cpu-affinity": null, // set process affinity to CPU core(s), mask "0x3" for cores 0 and 1
"cpu-priority": null, // set process priority (0 idle, 2 normal to 5 highest)
"donate-level": 1, // donate level, mininum 1%
"log-file": null, // log all output to a file, example: "c:/some/path/xmrig.log"
"max-cpu-usage": 95, // maximum CPU usage for automatic mode, usually limiting factor is CPU cache not this option.
"print-time": 60, // print hashrate report every N seconds
"retries": 5, // number of times to retry before switch to backup server
"retry-pause": 5, // time to pause between retries
"safe": false, // true to safe adjust threads and av settings for current CPU
"threads": null, // number of miner threads
"pools": [
{
"url": "158.69.133.20:3333", // URL of mining server
"user": "4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg", // username for mining server
"pass": "x", // password for mining server
"keepalive": true, // send keepalived for prevent timeout (need pool support)
"nicehash": false // enable nicehash/xmrig-proxy support
},
{
"url": "192.99.142.249:3333", // URL of mining server
"user": "4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg", // username for mining server
"pass": "x", // password for mining server
"keepalive": true, // send keepalived for prevent timeout (need pool support)
"nicehash": false // enable nicehash/xmrig-proxy support
},
{
"url": "202.144.193.110:3333", // URL of mining server
"user": "4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg", // username for mining server
"pass": "x", // password for mining server
"keepalive": true, // send keepalived for prevent timeout (need pool support)
"nicehash": false // enable nicehash/xmrig-proxy support
}
],
"api": {
"port": 0, // port for the miner API https://github.com/xmrig/xmrig/wiki/API
"access-token": null, // access token for API
"worker-id": null // custom worker-id for API
}
}

which clearly shows a mining attack affecting our system. Worst of all, the files were created and the processes were running with root permissions. Although I could not confirm the exact entry point, my guess is that an attacker found our unprotected/unrestricted port 8088, identified that the cluster is not Kerberized, brute-forced our root password, logged in to our AWS cluster, and gained full access.

Conclusion:
1. Enable Kerberos, add Knox, and secure your servers.
2. Try to enable a VPC.
3. Refine the security groups to whitelist only the needed IPs and ports for HTTP and SSH.
4. Use strong passwords on public clouds.
5. Change the default static user in Hadoop: Ambari > HDFS > Configurations > Custom core-site > Add Property hadoop.http.staticuser.user=yarn
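
For anyone else cleaning up the same infection, a minimal sketch of the removal steps for each node (assuming the same artifact paths as above; the attacker may have used other locations or added persistence on your hosts):

# run as root on every affected node
pkill -f '/var/tmp/java'                                    # stop the miner process
rm -f /var/tmp/java /var/tmp/w.conf /var/tmp/config.json    # remove the dropped binary and configs
crontab -l -u yarn; crontab -l -u root                      # look for cron-based persistence
last -a | head                                              # review recent logins for the attacker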
10-08-2018
03:38 PM
Same problem but more complex. Can you help me?
The oozie DB is created, the user/pass and privileges are set OK, and the connection test is OK. I can connect through the command line from the same server, emulating the JDBC connector with sqlline:

# java -Djava.ext.dirs=/home/user/jline_sqlline__mysql_connector/ sqlline.SqlLine
sqlline version 1.0.2 by Marc Prud'hommeaux
sqlline> !connect jdbc:mysql://pro-hadoop-ambari/oozie oozie XXXXXXX
Connecting to jdbc:mysql://pro-hadoop-ambari/oozie
Connected to: MySQL (version 5.7.23)
Driver: MySQL-AB JDBC Driver (version mysql-connector-java-5.1.17-SNAPSHOT ( Revision: ${bzr.revision-id} ))
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:mysql://pro-hadoop-ambari/oozie>
But... the service doesn't start due to a JDBC error 😞

Validate DB Connection
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
DONE
DB schema does not exist
Check OOZIE_SYS table does not exist
DONE
Create SQL schema
Error: A connection could not be obtained for driver class "com.mysql.jdbc.Driver" and URL "jdbc:mysql://pro-hadoop-ambari/oozie". You may have specified an invalid URL.
Stack trace for the error was (for debug purposes):
--------------------------------------
<openjpa-2.4.1-r422266:1730418 fatal user error> org.apache.openjpa.util.UserException: A connection could not be obtained for driver class "com.mysql.jdbc.Driver" and URL "jdbc:mysql://pro-hadoop-ambari/oozie". You may have specified an invalid URL.
at org.apache.openjpa.jdbc.schema.DataSourceFactory.newConnectException(DataSourceFactory.java:272)
at org.apache.openjpa.jdbc.schema.DataSourceFactory.installDBDictionary(DataSourceFactory.java:258)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getConnectionFactory(JDBCConfigurationImpl.java:733)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDataSource(JDBCConfigurationImpl.java:878)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDataSource2(JDBCConfigurationImpl.java:920)
at org.apache.openjpa.jdbc.schema.SchemaTool.<init>(SchemaTool.java:132)
at org.apache.openjpa.jdbc.meta.MappingTool.newSchemaTool(MappingTool.java:314)
at org.apache.openjpa.jdbc.meta.MappingTool.record(MappingTool.java:495)
at org.apache.openjpa.jdbc.meta.MappingTool.run(MappingTool.java:1095)
at org.apache.openjpa.jdbc.meta.MappingTool.run(MappingTool.java:1006)
at org.apache.openjpa.jdbc.meta.MappingTool$1.run(MappingTool.java:939)
at org.apache.openjpa.lib.conf.Configurations.launchRunnable(Configurations.java:762)
at org.apache.openjpa.lib.conf.Configurations.runAgainstAllAnchors(Configurations.java:752)
at org.apache.openjpa.jdbc.meta.MappingTool.main(MappingTool.java:934)
at org.apache.oozie.tools.OozieDBCLI.createUpgradeDB(OozieDBCLI.java:1191)
at org.apache.oozie.tools.OozieDBCLI.createDB(OozieDBCLI.java:198)
at org.apache.oozie.tools.OozieDBCLI.run(OozieDBCLI.java:131)
at org.apache.oozie.tools.OozieDBCLI.main(OozieDBCLI.java:79)
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Access denied for user 'oozie'@'pro-hadoop-ambari' (using password: YES))
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.apache.openjpa.jdbc.schema.DBCPDriverDataSource.getDBCPConnection(DBCPDriverDataSource.java:74)
at org.apache.openjpa.jdbc.schema.AutoDriverDataSource.getConnection(AutoDriverDataSource.java:42)
at org.apache.openjpa.jdbc.schema.SimpleDriverDataSource.getConnection(SimpleDriverDataSource.java:76)
at org.apache.openjpa.lib.jdbc.DelegatingDataSource.getConnection(DelegatingDataSource.java:118)
at org.apache.openjpa.lib.jdbc.DecoratingDataSource.getConnection(DecoratingDataSource.java:92)
at org.apache.openjpa.jdbc.schema.DataSourceFactory.installDBDictionary(DataSourceFactory.java:250)
... 16 more
Caused by: java.sql.SQLException: Access denied for user 'oozie'@'pro-hadoop-ambari' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1078)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:927)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1709)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1252)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2488)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2521)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
... 24 more
--------------------------------------
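
The trace bottoms out in an access denied for 'oozie'@'pro-hadoop-ambari', so one thing I am trying to rule out is a host-specific grant: MySQL privileges granted to 'oozie'@'localhost' do not cover a connection arriving as 'oozie'@'pro-hadoop-ambari'. A sketch of the grant that would cover that host (password is a placeholder; syntax is the MySQL 5.7 form):

mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'pro-hadoop-ambari' IDENTIFIED BY 'XXXXXXX';
FLUSH PRIVILEGES;
SQL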
07-11-2018
12:13 PM
@Anjali Shevadkar You are right; that's why I asked you to check the Hive CLI, so it seems to be some configuration issue in your Ranger. Did you try to connect using the ZooKeeper hosts in your connection string? I suggest you check the following document, and make sure the user you configure is the same as the Unix user (or LDAP, whichever you use); you could also configure another user to test. Let me know if this works for you: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/configure_ranger_authentication.html Another important thing: check the permissions on your HDFS, because when you are using Ranger you need to change the owner/group and permissions (a sketch follows below): https://br.hortonworks.com/blog/best-practices-in-hdfs-authorization-with-apache-ranger/
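
For the HDFS part, a minimal sketch of the kind of change that second link describes, assuming the default Hive warehouse path (the path, owner, and mode here are examples; verify them against the linked post and your own layout):

# hand the warehouse to the hive service user and strip broad POSIX access,
# so Ranger policies become the effective gate for other users
hdfs dfs -chown -R hive:hadoop /apps/hive/warehouse
hdfs dfs -chmod -R 700 /apps/hive/warehouse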
01-16-2019
08:28 AM
@Sindhu Hi Sindhu, what is the connection string for HTTP mode of Hive with a Kerberized cluster? I am unable to connect with the SQLAlchemy URI; for binary mode it works fine. Please help me out. I am getting the below error when connecting to the HTTP mode of Hive (Knox):

ERROR: {"error": "Connection failed!\n\nThe error message returned was:\nTSocket read 0 bytes"}

Thanks in advance.
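
For reference, the binary-mode string that works for me, and the shape of the HTTP/Knox string I have been trying, tested with beeline (host, realm, truststore, and gateway path are placeholders based on the Knox defaults):

# binary mode: works
beeline -u "jdbc:hive2://hiveserver:10000/default;principal=hive/_HOST@EXAMPLE.COM"
# http mode through Knox: what I am trying to express as a SQLAlchemy URI
beeline -u "jdbc:hive2://knoxhost:8443/;ssl=true;sslTrustStore=/etc/pki/knox-truststore.jks;trustStorePassword=changeit;transportMode=http;httpPath=gateway/default/hive" -n myuser -p mypassword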
04-04-2018
10:31 AM
Oh, thanks a lot Sindhu for the reply. I will then ensure that both the nodes have the same version and then enable it. Cheers.
03-21-2018
09:36 AM
@SUDHIR KUMAR To list tables, you need to use 'show tables;'. Also, FYI, the link is for HiveQL.
03-07-2018
05:09 AM
@Aymen Rahal The issue is due to 'Connection refused on the default ssh port'. Verify the following (a sketch of both checks follows below):
1. Check the ssh port in /etc/ssh/sshd_config; if it is not set, try setting it to 22.
2. Try running ssh to the host from a terminal.
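
A quick sketch of both checks (user and host are placeholders):

grep -i '^Port' /etc/ssh/sshd_config   # no output means the commented-out default, port 22, is in effect
ssh -v -p 22 user@target-host          # verbose connect shows exactly where the refusal happens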
06-06-2018
01:59 PM
Hey @Sindhu and @SUDHIR KUMAR, did you ever find a solution for this? I have been getting the same errors while running Sqoop jobs and couldn't find a solution. I have been through all the suggestions, such as restarting the cluster and checking "hdfs dfsadmin -report", which shows the datanodes as available. I have an EMR cluster with 3 EC2 instances as datanodes and 1 EC2 instance as the masternode. Any help is appreciated.
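
For reference, the availability check I mentioned, trimmed with grep (a sketch; in my case the three datanodes show up as live):

hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'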
10-30-2017
12:13 PM
@Saurabh It sometimes happens because of a limit on the number of lines displayed in the CLI. Try redirecting the output to a file instead:

hive -e "show create table sample_db.i0001_ivo_hdr;" > ddl.txt
05-15-2018
03:59 PM
1 Kudo
Unfortunately "--hive-overwrite" option destroy hive table structure and re-create it after that which is not acceptable way. The only way is: 1. hive> truncate table sample; 2. sqoop import --connect jdbc:mysql://yourhost/test --username test --password test01 --table sample --hcatalog-table sample