Member since: 10-01-2016
Posts: 156
Kudos Received: 8
Solutions: 6
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 8184 | 04-04-2019 09:41 PM |
| 3141 | 06-04-2018 08:34 AM |
| 1468 | 05-23-2018 01:03 PM |
| 2973 | 05-21-2018 07:12 AM |
| 1828 | 05-08-2018 10:48 AM |
12-06-2017
04:29 PM
Hi, please run the commands as the root user:
[root@namenode ~]# sudo su hdfs -l -c 'hdfs dfsadmin -safemode enter'
[root@namenode ~]# sudo su hdfs -l -c 'hdfs dfsadmin -saveNamespace'
12-05-2017
01:24 PM
Hi @Mudassar Hussain, did you put the NameNode into safe mode and save the namespace?
sudo su hdfs -l -c 'hdfs dfsadmin -safemode enter'
sudo su hdfs -l -c 'hdfs dfsadmin -saveNamespace'
Then restart the services and check hdfs-site.xml:
cat /etc/hadoop/conf/hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
You can also check in the Ambari dashboard that HDFS Disk Usage has decreased.
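For completeness, a minimal sketch of the full sequence as root (note that the NameNode must be taken out of safe mode again afterwards, otherwise HDFS stays read-only):

```shell
# Put the NameNode into safe mode so the namespace can be checkpointed consistently
sudo su hdfs -l -c 'hdfs dfsadmin -safemode enter'
# Persist the current in-memory namespace and edits to disk
sudo su hdfs -l -c 'hdfs dfsadmin -saveNamespace'
# Leave safe mode again, otherwise HDFS remains read-only
sudo su hdfs -l -c 'hdfs dfsadmin -safemode leave'
# Confirm the current safe mode state
sudo su hdfs -l -c 'hdfs dfsadmin -safemode get'
```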
12-04-2017
06:46 AM
Thank you very much @bkosaraju, it worked. I examined the mysql.user table and realized the user's Host was not '%', so I changed it as @bkosaraju recommended: GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' WITH GRANT OPTION; Alternatively, I realized it can be changed directly in the mysql database's user table with the following queries (FLUSH PRIVILEGES is needed so the change takes effect without a restart):
mysql> use mysql;
mysql> update user set Host='%' where User='user';
mysql> flush privileges;
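To verify which host entries exist for the account before and after the change, you can query the grant table directly (here 'user' is a placeholder for the actual account name):

```sql
-- list the (User, Host) pairs MySQL will match the connecting client against
SELECT User, Host FROM mysql.user WHERE User = 'user';
```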
12-01-2017
02:19 PM
I have a table in Hive named table_hive, and I want to export it to a MySQL database (mydatabase), table table_mysql, on server2. I submit the sqoop export job from server1, where the sqoop client is installed. After submitting, it tries to connect to server1's MySQL database, as if the database were there, which it is not. I don't understand why it tries to connect to server1 although the --connect parameter points at server2.
sqoop export --connect jdbc:mysql://server2:3306/mydatabase --username user --password password --table table_mysql --export-dir /apps/hive/warehouse/staging.db/table_hive
Error output:
/usr/hdp/2.6.3.0-235//sqoop/conf/sqoop-env.sh: line 21: HADOOP_CLASSPATH=${hcat -classpath}: bad substitution
Warning: /usr/hdp/2.6.3.0-235/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/12/01 15:50:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.3.0-235
17/12/01 15:50:13 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager) . Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
17/12/01 15:50:13 INFO manager.SqlManager: Using default fetchSize of 1000
17/12/01 15:50:13 INFO tool.CodeGenTool: Beginning code generation
17/12/01 15:50:14 ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: Access denied for user 'user'@'server1' (using password: NO)
java.sql.SQLException: Access denied for user 'user'@'server1' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1078)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:927)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1709)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1252)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2488)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2521)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:904)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:763)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:786)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:289)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:260)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:246)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:328)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1853)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1653)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:64)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
17/12/01 15:50:14 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: No columns to generate for ClassWriter
Labels:
- Apache Sqoop
11-30-2017
02:26 PM
Hi Milind Rao, you can use Ambari -> YARN -> Quick Links -> ResourceManager UI to find the ApplicationId.
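If you prefer the command line, the application IDs can also be listed with the YARN client (assuming it is installed on the node):

```shell
# List running applications together with their ApplicationIds
yarn application -list -appStates RUNNING
```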
11-14-2017
08:18 AM
On my cluster, NiFi and one of the three HBase Region Servers run on the same host. I modified NiFi's bootstrap.conf file and uncommented java.arg.13=-XX:+UseG1GC, and then that Region Server stopped. I tried many times to restart it; each time it started, it soon stopped again, until I commented the java.arg.13=-XX:+UseG1GC property back out. It now works. I think the property changes that JVM's garbage collection style.
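For reference, the relevant section of NiFi's conf/bootstrap.conf looks roughly like this (a sketch; the java.arg numbering and memory values vary by installation):

```
# JVM memory settings
java.arg.2=-Xms512m
java.arg.3=-Xmx512m

# Enable the G1 garbage collector; leaving this line commented
# keeps the JVM's default collector
#java.arg.13=-XX:+UseG1GC
```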
11-06-2017
01:19 PM
Hi Timothy, the ListDatabaseTables link is broken.
11-03-2017
09:01 AM
I had the same error. Based on this recommendation, I added a new property in the Zeppelin Spark2 interpreter: spark.deploy.maxExecutorRetries=10. It worked for me.
10-10-2017
08:43 AM
You can see more info in the YARN logs:
yarn logs -applicationId application_1484116726997_0144