Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 483 | 06-04-2025 11:36 PM |
| | 1011 | 03-23-2025 05:23 AM |
| | 536 | 03-17-2025 10:18 AM |
| | 1990 | 03-05-2025 01:34 PM |
| | 1257 | 03-03-2025 01:09 PM |
07-06-2016
09:37 PM
1 Kudo
@Angel Kafazov See the attached doc; it should help.
07-06-2016
10:30 AM
I copied the downloaded JDBC drivers for Sqoop into /usr/hdp/current/sqoop-client/lib and configured a Sqoop import against our networked Oracle E-Business Suite environment. But when I run the import I get an error, and I am racking my brain trying to resolve this seemingly simple problem. Any quick solution?

Permission denied: user=sqoop, access=WRITE, inode="/user/sqoop/.staging":hdfs:hdfs:drwxr-xr-x

[sqoop@sandbox lib]$ sqoop import --connect jdbc:oracle:thin:@192.168.0.15:1521/PROD --username sqoop -P --table DEPT_INFO
Warning: /usr/hdp/2.3.2.0-2950/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/07/06 09:39:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.3.2.0-2950
Enter password:
16/07/06 09:39:49 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
16/07/06 09:39:49 INFO manager.SqlManager: Using default fetchSize of 1000
16/07/06 09:39:49 INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/07/06 09:39:51 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM DEPT_INFO t WHERE 1=0
16/07/06 09:39:51 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.3.2.0-2950/hadoop-mapreduce
Note: /tmp/sqoop-sqoop/compile/c34f78f377ce385f2582badaf9bd81a8/DEPT_INFO.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/07/06 09:39:54 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-sqoop/compile/c34f78f377ce385f2582badaf9bd81a8/DEPT_INFO.jar
16/07/06 09:39:54 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:54 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:54 INFO mapreduce.ImportJobBase: Beginning import of DEPT_INFO
16/07/06 09:39:55 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:58 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
16/07/06 09:39:58 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/192.168.0.104:8050
16/07/06 09:39:59 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.security.AccessControlException: Permission denied: user=sqoop, access=WRITE, inode="/user/sqoop/.staging":hdfs:hdfs:drwxr-xr-x
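For reference, the standard remedy for this HDFS permission pattern (not from this thread, just the usual fix): the error shows /user/sqoop/.staging is owned by hdfs:hdfs with mode drwxr-xr-x, so the sqoop user cannot write there. Re-owning the home directory as the HDFS superuser normally clears it:

```
# Run as root (or any user allowed to sudo to hdfs) on the sandbox
sudo -u hdfs hdfs dfs -chown -R sqoop:hdfs /user/sqoop
```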
Labels:
- Apache Sqoop
06-17-2016
12:15 PM
@dnyanesh kulkarnni Apart from the answers above from Ashnee and Sunile, here are the questions that need to be answered to deploy the HDP components correctly:
1. What is the business case for this cluster build? DWH?
2. Is it a POC, or is it intended for production?
3. How many nodes are in this cluster? For HA you need to consider NameNode, DataNode, and ZooKeeper redundancy, etc.
4. With an RDBMS you will definitely need Sqoop, but what other components do you want deployed?
5. Remember to build your Hadoop architecture to map your business needs.
@Ashnee You forgot to mention NTP synchronization between the nodes in the cluster, which is very important for ZooKeeper, etc.
06-17-2016
09:45 AM
@chandramouli muthukumaran I once forgot this setting and encountered the same problem. Enable NTP on the cluster and on the browser host: the clocks of all the nodes in your cluster, and of the machine that runs the browser through which you access Ambari Web, must be able to synchronize with each other. How to enable NTP: see the sketch below.
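A minimal sketch for a RHEL/CentOS 6-era node (the HDP sandbox vintage); package and service names are the stock ones for that platform:

```
# Install and enable the NTP daemon on every cluster node
yum install -y ntp
chkconfig ntpd on      # start at boot (SysV init)
service ntpd start     # start now

# Verify the node is synchronizing against its peers
ntpq -p
```

Run the same on the host where the browser runs, then retry Ambari Web.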
06-13-2016
10:12 PM
@jbarnett I suspect yum is updating the Java packages. Try a simple trick: add a line to /etc/yum.conf on the clients. exclude=j* will exclude all packages starting with "j". (The wildcard ( * ) is usually a must.)
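For illustration, a client's /etc/yum.conf with the suggested line appended (everything above the exclude line is a typical stock CentOS config, shown only for context):

```
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
logfile=/var/log/yum.log
gpgcheck=1
plugins=1
# Skip any package whose name starts with "j" (java, jdk, ...)
exclude=j*
```

For a one-off run you can achieve the same with `yum update --exclude='j*'` (quote the glob so the shell does not expand it).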
06-09-2016
03:10 PM
@Aidan Condron You can change other elements of the default configuration by modifying spark-env.sh. You can change the following (a sample spark-env.sh follows the list):
- SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports
- SPARK_WORKER_CORES, to set the number of cores to use on this machine
- SPARK_WORKER_MEMORY, to set how much memory to use (for example 1000MB, 2GB)
- SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT
- SPARK_WORKER_INSTANCES, to set the number of worker processes per node
- SPARK_WORKER_DIR, to set the working directory of worker processes
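A minimal sketch of such a spark-env.sh for Spark standalone mode; the values are illustrative, not recommendations:

```
# conf/spark-env.sh -- sourced when the Spark daemons start
export SPARK_MASTER_PORT=7177          # default 7077
export SPARK_MASTER_WEBUI_PORT=8180    # default 8080
export SPARK_WORKER_CORES=4            # cores each worker may hand to executors
export SPARK_WORKER_MEMORY=2g          # total memory each worker may allocate
export SPARK_WORKER_PORT=7178
export SPARK_WORKER_WEBUI_PORT=8181
export SPARK_WORKER_INSTANCES=2        # worker processes per node
export SPARK_WORKER_DIR=/var/run/spark/work
```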
06-08-2016
07:46 AM
@Abdul Shihab Can you check the Ambari server port values in /etc/ambari-server/conf/ambari.properties? You can use this link to change the default port, then try restarting the Ambari server.
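The relevant property is client.api.port (the documented knob for the Ambari web/API port; 8080 is the default). A sketch of the change:

```
# /etc/ambari-server/conf/ambari.properties
# Port used by the Ambari web UI and REST API
client.api.port=8080
```

After editing, restart with `ambari-server restart`.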
06-08-2016
06:33 AM
1 Kudo
@Rajib Mandal Surely, all HDP and HDF products are open source; you only need to pay for support. For HDF, see the support matrix here.
05-30-2016
10:37 AM
What version of Linux is your server running, and which protocol, IPv4 or IPv6? Put in some value like net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30.
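For reference, how such a kernel parameter is usually applied (assuming the nf_conntrack module is loaded; the value 30 is the one suggested above):

```
# Apply immediately (lost on reboot)
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=30

# Persist across reboots
echo "net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30" >> /etc/sysctl.conf
sysctl -p
```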