07-09-2014 07:23 AM
Thank you Abe, it seems the problem was a missing partition column. I added one partition column and the job started. But it would be great to get this information from Hue as well.
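For anyone hitting the same "could not start job" error: in the Sqoop2 1.99.x shell the fix amounts to setting the partition column on the existing job. A minimal sketch of the interactive session (the job id is from this thread; the column name ID is a hypothetical example, any numeric/indexed column of the source table works):

```shell
# Open the Sqoop2 shell and edit the failing job in place.
sqoop2
update job --jid 4
# ...accept the existing values at each prompt until you reach:
# Partition column name: ID    <- fill in a column Sqoop can split ranges on
start job --jid 4
```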
07-08-2014 12:17 AM
Hi, I am trying to transfer data from Oracle into HDFS. I copied the JDBC driver (ojdbc6.jar) into /var/lib/sqoop2 and, to be sure, also into /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/sqoop2. But when I create a job and start it, the message "Error, could not start job" is shown. I also tried to run the job from the CLI:

start job --jid 4

But got this message:

Exception has occurred during processing command
Exception: org.apache.sqoop.common.SqoopException Message: CLIENT_0001:Server has returned exception

This error message doesn't help me; it is not very verbose. Do you know where the problem might be? The connection should be OK, and the table and schema exist in the Oracle database. Here is the connection and job info:

Job: Job with id 4 and name Test job (Enabled: true, Created by rtomsej at 7/8/14 8:42 AM, Updated by rtomsej at 7/8/14 8:42 AM)
Using Connection id 5 and Connector id 1
Database configuration
Schema name: ibp
Table name: open_nl_cnt
Table SQL statement:
Table column names:
Partition column name:
Nulls in partition column: true
Boundary query:
Output configuration
Storage type: HDFS
Output format: TEXT_FILE
Compression format: NONE
Output directory: /user/rtomsej/ibp
Throttling resources
Extractors:
Loaders: Connection: Connection with id 5 and name Oracle-Web (Enabled: true, Created by rtomsej at 7/8/14 8:41 AM, Updated by rtomsej at 7/8/14 8:53 AM)
Using Connector id 1
Connection configuration
JDBC Driver Class: oracle.jdbc.driver.OracleDriver
JDBC Connection String: jdbc:oracle:thin:@localhost:1521
Username: localhost
Password:
JDBC Connection Properties:
Security related configuration options
Max connections:

We are using CDH 5.0.2 with parcels. Thank you
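Since CLIENT_0001 only says the server threw an exception, the actual cause has to be dug out of the server side. A sketch of how to surface it, assuming the Sqoop2 1.99.x shell and the default CDH log location (the log path may differ on your cluster):

```shell
# In the Sqoop2 shell, turn on verbose mode so the server's stack trace
# is printed along with the CLIENT_0001 wrapper:
sqoop2
set option --name verbose --value true
start job --jid 4

# The full exception is also written to the Sqoop2 server log
# (path below is the CDH default; adjust for your installation):
tail -n 100 /var/log/sqoop2/sqoop.log
```

With verbose output enabled, a missing required job field (such as an empty partition column) shows up as a concrete server-side exception instead of the generic client message.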
06-25-2014 06:17 AM
It seems the problem was in the command:

hbase -Dhbase.import.version=0.90 org.apache.hadoop.hbase.mapreduce.Import ip /user/rtomsej/ip3

When I modified it like this, the whole job went OK:

hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import ip /user/rtomsej/ip3

I think import.version=0.90 is not supported.
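For anyone migrating the same way, a minimal sketch of the full Export/Import round trip with the version hint that worked here (table name and HDFS path taken from this thread; adjust for your cluster, and note the export/copy steps are the ones described in the original question):

```shell
# On the source (HBase 0.90.x) cluster: export table 'ip' to HDFS.
hbase org.apache.hadoop.hbase.mapreduce.Export ip /user/rtomsej/ip3

# Copy the exported files between clusters, e.g. with distcp, then on the
# destination (HBase 0.96.x) cluster: import, telling HBase which on-disk
# format the exporter wrote.
hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import ip /user/rtomsej/ip3
```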
06-25-2014 02:10 AM
Hi experts, we are currently migrating from CDH3u4 to CDH5. Everything went smoothly thanks to Cloudera Manager, but we have a problem migrating from HBase 0.90.6 to HBase 0.96.1.1. I tried to copy tables using the CopyTable command (we have 5 ZooKeeper servers in the quorum):

hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=zookeeper_server1,zookeeper_server2,zookeeper_server3,zookeeper_server4,zookeeper_server5:2181:/hbase --new.name=ip ip

The MapReduce job started and a connection to ZooKeeper was established, but nothing happened, and there is nothing in the log file:

2014-06-25 09:44:29,211 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server zookeeper_server1
2014-06-25 09:44:29,212 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to zookeeper_server1, initiating session
2014-06-25 09:44:29,259 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server zookeeper_server1, sessionid = 0x446cdf87bf005ec, negotiated timeout = 60000

The task is killed after 600 seconds:

Task attempt_201406121410_0411_m_000003_1 failed to report status for 600 seconds. Killing!

I also tried to migrate the data using HBase's Export/Import feature. I managed to export the data and copy it to the new cluster (distcp). When I used this command on the destination cluster:

hbase -Dhbase.import.version=0.90 org.apache.hadoop.hbase.mapreduce.Import ip /user/rtomsej/ip3

the job completed successfully, but no data was loaded:

14/06/25 09:04:58 INFO mapreduce.Job: Job job_1403615212297_0014 running in uber mode : false
14/06/25 09:04:58 INFO mapreduce.Job: map 0% reduce 0%
14/06/25 09:05:08 INFO mapreduce.Job: map 7% reduce 0%
14/06/25 09:05:11 INFO mapreduce.Job: map 43% reduce 0%
14/06/25 09:05:16 INFO mapreduce.Job: map 45% reduce 0%
14/06/25 09:05:18 INFO mapreduce.Job: map 50% reduce 0%
14/06/25 09:05:20 INFO mapreduce.Job: map 55% reduce 0%
14/06/25 09:05:21 INFO mapreduce.Job: map 57% reduce 0%
14/06/25 09:05:22 INFO mapreduce.Job: map 80% reduce 0%
14/06/25 09:05:23 INFO mapreduce.Job: map 86% reduce 0%
14/06/25 09:05:25 INFO mapreduce.Job: map 91% reduce 0%
14/06/25 09:05:26 INFO mapreduce.Job: map 98% reduce 0%
14/06/25 09:05:28 INFO mapreduce.Job: map 100% reduce 0%
14/06/25 09:05:28 INFO mapreduce.Job: Job job_1403615212297_0014 completed successfully
14/06/25 09:05:28 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=5172058
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=5452414893
HDFS: Number of bytes written=0
HDFS: Number of read operations=132
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Launched map tasks=44
Data-local map tasks=44
Total time spent by all maps in occupied slots (ms)=410004
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=410004
Total vcore-seconds taken by all map tasks=410004
Total megabyte-seconds taken by all map tasks=419844096
Map-Reduce Framework
Map input records=9964456
Map output records=0
Input split bytes=5720
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=7648
CPU time spent (ms)=117230
Physical memory (bytes) snapshot=17097363456
Virtual memory (bytes) snapshot=68115570688
Total committed heap usage (bytes)=26497384448
File Input Format Counters
Bytes Read=5452409173
File Output Format Counters
Bytes Written=0

When I look into the log, no error is there, but I could not find any data in the HBase table:

2014-06-25 09:05:07,492 INFO [main] org.apache.hadoop.hbase.mapreduce.TableOutputFormat: Created table instance for ip
2014-06-25 09:05:07,526 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2014-06-25 09:05:07,784 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://cz-dc1-v-197.mall.local:8020/user/rtomsej/ip3/part-m-00000:0+134217728
2014-06-25 09:05:07,833 INFO [main] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=cz-dc1-s-133.mall.local:2181,cz-dc1-s-135.mall.local:2181,cz-dc1-s-132.mall.local:2181,cz-dc1-s-134.mall.local:2181,cz-dc1-s-136.mall.local:2181 sessionTimeout=60000 watcher=attempt_1403615212297_0014_m_000000_0, quorum=cz-dc1-s-133.mall.local:2181,cz-dc1-s-135.mall.local:2181,cz-dc1-s-132.mall.local:2181,cz-dc1-s-134.mall.local:2181,cz-dc1-s-136.mall.local:2181, baseZNode=/hbase
2014-06-25 09:05:07,834 INFO [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=attempt_1403615212297_0014_m_000000_0 connecting to ZooKeeper ensemble=cz-dc1-s-133.mall.local:2181,cz-dc1-s-135.mall.local:2181,cz-dc1-s-132.mall.local:2181,cz-dc1-s-134.mall.local:2181,cz-dc1-s-136.mall.local:2181
2014-06-25 09:05:07,835 INFO [main-SendThread(cz-dc1-s-133.mall.local:2181)] org.apache.zookeeper.ClientCnxn: Opening socket connection to server cz-dc1-s-133.mall.local/10.1.16.133:2181. Will not attempt to authenticate using SASL (unknown error)
2014-06-25 09:05:07,835 INFO [main-SendThread(cz-dc1-s-133.mall.local:2181)] org.apache.zookeeper.ClientCnxn: Socket connection established to cz-dc1-s-133.mall.local/10.1.16.133:2181, initiating session
2014-06-25 09:05:07,849 INFO [main-SendThread(cz-dc1-s-133.mall.local:2181)] org.apache.zookeeper.ClientCnxn: Session establishment complete on server cz-dc1-s-133.mall.local/10.1.16.133:2181, sessionid = 0x446cdf87bf005ad, negotiated timeout = 60000
2014-06-25 09:05:07,871 INFO [main] org.apache.zookeeper.ZooKeeper: Session: 0x446cdf87bf005ad closed
2014-06-25 09:05:07,871 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn: EventThread shut down
2014-06-25 09:05:10,479 INFO [main] org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x346cdf87bc805a9
2014-06-25 09:05:10,494 INFO [main] org.apache.zookeeper.ZooKeeper: Session: 0x346cdf87bc805a9 closed
2014-06-25 09:05:10,494 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn: EventThread shut down
2014-06-25 09:05:10,595 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1403615212297_0014_m_000000_0 is done. And is in the process of committing
2014-06-25 09:05:10,666 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1403615212297_0014_m_000000_0' done.

I would appreciate any ideas, thank you so much!
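One detail worth reading out of the counters above: Map input records=9964456 but Map output records=0, i.e. every exported record was read and silently dropped rather than written to the table, which points at the import step misinterpreting the exported format rather than a connectivity problem. A quick, hedged way to confirm the destination table really is empty after such a run (table name 'ip' from this thread):

```shell
# Pipe a count command into the HBase shell; an empty table reports 0 row(s).
echo "count 'ip'" | hbase shell
```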
Labels: Apache HBase, Apache Zookeeper