Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 784 | 06-04-2025 11:36 PM |
| | 1363 | 03-23-2025 05:23 AM |
| | 675 | 03-17-2025 10:18 AM |
| | 2454 | 03-05-2025 01:34 PM |
| | 1596 | 03-03-2025 01:09 PM |
07-13-2016
04:45 AM
@Rajib Mandal The below services are NOT started: MapReduce2, Hive, HBase, Oozie, Falcon, Storm, Atlas, Kafka. Being able to ping the hosts doesn't tell you much. Questions:
- Did you configure passwordless logon from the Ambari host to the other 2 nodes?
- Did you disable the firewall (iptables or ip6tables)?
- Did you disable SELinux?
- Did you disable THP?
- Did you configure NTPD?
(A quick way to apply these host prerequisites is sketched after the log locations below.)
And lastly, did you configure the database for Hive or Oozie if you aren't using the default Derby? Can you post these logs here?
- Ambari Server logs are found at /var/log/ambari-server/ambari-server.log
- Ambari Agent logs are found at /var/log/ambari-agent/ambari-agent.log
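A minimal sketch of those host prerequisites, assuming CentOS/RHEL 6-style nodes and root access (the node2/node3 hostnames are placeholders):
# Passwordless SSH from the Ambari host to each of the other nodes
ssh-keygen -t rsa
ssh-copy-id root@node2.example.com
ssh-copy-id root@node3.example.com
# Disable the firewall
service iptables stop && chkconfig iptables off
service ip6tables stop && chkconfig ip6tables off
# Disable SELinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Disable Transparent Huge Pages until the next reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# Enable NTP
yum install -y ntp
chkconfig ntpd on && service ntpd start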
07-07-2016
05:37 AM
@Sindhu The code snippet didn't work, see:
[root@sandbox ~]# su - hdfs
[hdfs@sandbox ~]$ hdfs dfs -chown -R sqoop:hdfs /user/root
chown: `/user/root': No such file or directory
So what I did was just run the sqoop command as hdfs, and it ran successfully:
[hdfs@sandbox ~]$ sudo sqoop import --connect jdbc:oracle:thin:@192.168.0.15:1521/PROD --username sqoop -P --table DEPT_INFO
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for hdfs:
[hdfs@sandbox ~]$ sqoop import --connect jdbc:oracle:thin:@192.168.0.15:1521/PROD --username sqoop -P --table DEPT_INFO
Warning: /usr/hdp/2.3.2.0-2950/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/07/07 05:25:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.3.2.0-2950
Enter password:
16/07/07 05:25:39 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
16/07/07 05:25:39 INFO manager.SqlManager: Using default fetchSize of 1000
16/07/07 05:25:39 INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/07/07 05:25:52 INFO manager.OracleManager: Time zone has been set to GMT
16/07/07 05:25:53 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM DEPT_INFO t WHERE 1=0
16/07/07 05:25:53 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.3.2.0-2950/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/075dc9427b098234773ffaadf17b1b5f/DEPT_INFO.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/07/07 05:25:57 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/075dc9427b098234773ffaadf17b1b5f/DEPT_INFO.jar
16/07/07 05:25:57 INFO manager.OracleManager: Time zone has been set to GMT
16/07/07 05:25:57 INFO manager.OracleManager: Time zone has been set to GMT
16/07/07 05:25:57 INFO mapreduce.ImportJobBase: Beginning import of DEPT_INFO
16/07/07 05:25:58 INFO manager.OracleManager: Time zone has been set to GMT
16/07/07 05:26:00 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
16/07/07 05:26:01 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/192.168.0.104:8050
16/07/07 05:26:06 INFO db.DBInputFormat: Using read commited transaction isolation
16/07/07 05:26:06 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(DEPT_ID), MAX(DEPT_ID) FROM DEPT_INFO
16/07/07 05:26:06 INFO mapreduce.JobSubmitter: number of splits:4
16/07/07 05:26:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467711992449_0004
16/07/07 05:26:07 INFO impl.YarnClientImpl: Submitted application application_1467711992449_0004
16/07/07 05:26:07 INFO mapreduce.Job: The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1467711992449_0004/
16/07/07 05:26:07 INFO mapreduce.Job: Running job: job_1467711992449_0004
16/07/07 05:26:32 INFO mapreduce.Job: Job job_1467711992449_0004 running in uber mode : false
16/07/07 05:26:32 INFO mapreduce.Job: map 0% reduce 0%
16/07/07 05:26:55 INFO mapreduce.Job: map 25% reduce 0%
16/07/07 05:27:22 INFO mapreduce.Job: map 50% reduce 0%
16/07/07 05:27:25 INFO mapreduce.Job: map 100% reduce 0%
16/07/07 05:27:26 INFO mapreduce.Job: Job job_1467711992449_0004 completed successfully
16/07/07 05:27:27 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=584380
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=417
HDFS: Number of bytes written=95
HDFS: Number of read operations=16
HDFS: Number of large read operations=0
HDFS: Number of write operations=8
Job Counters
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=164533
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=164533
Total vcore-seconds taken by all map tasks=164533
Total megabyte-seconds taken by all map tasks=41133250
Map-Reduce Framework
Map input records=9
Map output records=9
Input split bytes=417
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=871
CPU time spent (ms)=10660
Physical memory (bytes) snapshot=659173376
Virtual memory (bytes) snapshot=3322916864
Total committed heap usage (bytes)=533200896
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=95
16/07/07 05:27:27 INFO mapreduce.ImportJobBase: Transferred 95 bytes in 87.9458 seconds (1.0802 bytes/sec)
16/07/07 05:27:27 INFO mapreduce.ImportJobBase: Retrieved 9 records.
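For what it's worth, the earlier chown failed only because /user/root did not exist yet; a minimal sketch of creating it first (the sqoop:hdfs ownership simply reuses the earlier suggestion and is an assumption):
[root@sandbox ~]# su - hdfs
[hdfs@sandbox ~]$ hdfs dfs -mkdir -p /user/root
[hdfs@sandbox ~]$ hdfs dfs -chown -R sqoop:hdfs /user/root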
07-07-2016
04:36 AM
@Angel Kafazov Did you take note of no.9 "zookeeper.znode.parent" and restart all the components?
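If it helps, a quick way to check the current value on the HBase hosts; the /etc/hbase/conf path assumes the usual HDP layout:
grep -A 1 "zookeeper.znode.parent" /etc/hbase/conf/hbase-site.xml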
07-06-2016
09:37 PM
1 Kudo
@Angel Kafazov See the attached doc; it should help.
07-06-2016
10:30 AM
I copied the downloaded JDBC drivers for Sqoop into /usr/hdp/current/sqoop-client/lib and configured a Sqoop import against our network Oracle E-Business Suite environment. But when I run the import I get an error, and I am racking my brain trying to resolve this seemingly easy problem. Any quick solution?
Permission denied: user=sqoop, access=WRITE, inode="/user/sqoop/.staging":hdfs:hdfs:drwxr-xr-x
[sqoop@sandbox lib]$ sqoop import --connect jdbc:oracle:thin:@192.168.0.15:1521/PROD --username sqoop -P --table DEPT_INFO
Warning: /usr/hdp/2.3.2.0-2950/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/07/06 09:39:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.3.2.0-2950
Enter password:
16/07/06 09:39:49 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
16/07/06 09:39:49 INFO manager.SqlManager: Using default fetchSize of 1000
16/07/06 09:39:49 INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/07/06 09:39:51 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM DEPT_INFO t WHERE 1=0
16/07/06 09:39:51 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.3.2.0-2950/hadoop-mapreduce
Note: /tmp/sqoop-sqoop/compile/c34f78f377ce385f2582badaf9bd81a8/DEPT_INFO.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/07/06 09:39:54 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-sqoop/compile/c34f78f377ce385f2582badaf9bd81a8/DEPT_INFO.jar
16/07/06 09:39:54 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:54 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:54 INFO mapreduce.ImportJobBase: Beginning import of DEPT_INFO
16/07/06 09:39:55 INFO manager.OracleManager: Time zone has been set to GMT
16/07/06 09:39:58 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
16/07/06 09:39:58 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/192.168.0.104:8050
16/07/06 09:39:59 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.security.AccessControlException: Permission denied: user=sqoop, access=WRITE, inode="/user/sqoop/.staging":hdfs:hdfs:drwxr-xr-x
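A common fix for this kind of .staging permission error is to create the user's HDFS home directory as the hdfs superuser and hand ownership to that user; a minimal sketch for this sandbox (directory name and ownership inferred from the error message):
[root@sandbox ~]# su - hdfs
[hdfs@sandbox ~]$ hdfs dfs -mkdir -p /user/sqoop
[hdfs@sandbox ~]$ hdfs dfs -chown -R sqoop:hdfs /user/sqoop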
Labels:
- Apache Sqoop
06-17-2016
12:15 PM
@dnyanesh kulkarnni Apart from the above answers from Ashnee and Sunile, here are a number of questions that need to be answered to deploy the HDP components correctly:
1. What is the business case for this cluster build? DWH?
2. Is it a POC or intended for production?
3. How many nodes are in this cluster? For HA you need to consider NN, DN, and ZooKeeper redundancy, etc.
4. With an RDBMS you will definitely need Sqoop, but what other components do you want deployed?
5. Remember to build your Hadoop architecture to map your business needs.
@Ashnee You forgot to mention NTP synchronization between the nodes in the cluster, which is very important for ZooKeeper, etc.
06-17-2016
09:45 AM
@chandramouli muthukumaran One time I forgot this setting and I encountered the same problem. Enable NTP on the cluster and on the browser host: the clocks of all the nodes in your cluster and of the machine that runs the browser through which you access Ambari Web must be able to synchronize with each other. How to enable NTP is outlined below.
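A minimal sketch on CentOS/RHEL 6-style hosts (run on every cluster node and on the browser host; the package and service names are assumptions for that OS family):
yum install -y ntp
chkconfig ntpd on
service ntpd start
# verify the host is synchronizing with its peers
ntpq -p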
06-13-2016
10:12 PM
@jbarnett I suspect yum is updating the Java. Try a simple trick: add a line to /etc/yum.conf on the clients, exclude=j* , which will exclude all packages starting with j. (The wildcard ( * ) is usually a must.)
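A minimal sketch of what /etc/yum.conf could look like with that line added (the surrounding entries are typical defaults, not taken from your system):
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
gpgcheck=1
# keep yum away from Java (and anything else starting with "j")
exclude=j*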
06-09-2016
03:10 PM
@Aidan Condron You can change other elements of the default configuration by modifying spark-env.sh. You can change the following (a sample spark-env.sh is sketched after this list):
- SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports
- SPARK_WORKER_CORES, to set the number of cores to use on this machine
- SPARK_WORKER_MEMORY, to set how much memory to use (for example 1000MB, 2GB)
- SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT
- SPARK_WORKER_INSTANCES, to set the number of worker processes per node
- SPARK_WORKER_DIR, to set the working directory of worker processes
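A minimal sketch of a spark-env.sh using some of these variables (all values are illustrative only, not recommendations):
# spark-env.sh - example values only
export SPARK_MASTER_PORT=7177           # default is 7077
export SPARK_MASTER_WEBUI_PORT=8180     # default is 8080
export SPARK_WORKER_CORES=4             # cores this worker offers
export SPARK_WORKER_MEMORY=2g           # total memory this worker offers
export SPARK_WORKER_INSTANCES=2         # worker processes per node
export SPARK_WORKER_DIR=/var/spark/work # working directory for worker processes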