Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1543 | 07-09-2019 12:53 AM |
| | 9297 | 06-23-2019 08:37 PM |
| | 8052 | 06-18-2019 11:28 PM |
| | 8677 | 05-23-2019 08:46 PM |
| | 3477 | 05-20-2019 01:14 AM |
03-07-2019
09:08 PM
It appears that you're trying to use Sqoop's internal handling of DATE/TIMESTAMP data types, instead of the Strings the Oracle connector converts them to. Have you tried the option specified at https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_java_sql_timestamp, i.e. -Doraoop.timestamp.string=false? With that approach you shouldn't need to map the column types manually.
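For reference, a minimal sketch of how that flag would be passed on a Sqoop command line (note that -D properties must precede the tool-specific arguments); the connection string, credentials, table, and target directory below are placeholders, not from this thread:

sqoop import \
  -Doraoop.timestamp.string=false \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username SCOTT -P \
  --table MYSCHEMA.MYTABLE \
  --target-dir /user/me/mytable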
03-07-2019
08:30 AM
Thanks a ton!!
03-07-2019
08:21 AM
Thank you for the confirmation. Yes, I'll make a feature request.
03-07-2019
07:46 AM
The issue was that the master node couldn't reach https://archive.cloudera.com/; it was resolved once IT allowed access to it.
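For anyone hitting the same symptom, reachability can be checked from the master node with a quick probe such as the following (just an illustrative check, not from the original post):

curl -sI https://archive.cloudera.com/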
03-07-2019
01:03 AM
Thank you very much Harsh
03-06-2019
11:42 PM
1 Kudo
MapReduce jobs can be submitted with ease, as mostly all they require is the correct config on the classpath (such as under src/main/resources for Maven projects). Spark/PySpark relies heavily on its script tooling to submit to a remote cluster, so this is a little more involved to achieve. IntelliJ IDEA has a remote execution option in its run targets that can be configured to copy over the built jar and invoke an arbitrary command on an edge host; this can perhaps be combined with remote debugging to get an experience equal to MR's (a sketch of what such a run target boils down to is below). Another option is to use a web-interface-based editor such as CDSW.
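As a rough illustration, here is what such an IDE remote run target would effectively execute; the host name, paths, and application class are hypothetical placeholders:

# Copy the freshly built jar to an edge host, then submit it to YARN there
scp target/myapp-1.0.jar user@edge-host:/tmp/
ssh user@edge-host "spark-submit --master yarn --deploy-mode client \
    --class com.example.MyApp /tmp/myapp-1.0.jar"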
03-06-2019
01:54 AM
We don't have any critical issues. We just saw in other systems (Cassandra, Kafka, etc.) that G1GC brought better performance and fewer problems, so we thought to use it for CDH as well, but I see from your answer that it is not a big change. Thanks!
02-26-2019
01:06 AM
Hi Harsh, and thank you for your suggestion. Do you have a sample or example? Could I use Python for that?
02-20-2019
06:26 AM
When the first attempt fails, it tries to run the app again, so the status changes from "running" back to "accepted". If you check the RM web UI, you can see that several attempts were run.
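The attempts can also be listed from the command line; the application ID below is a made-up placeholder:

yarn applicationattempt -list application_1551234567890_0042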
02-20-2019
01:58 AM
1 Kudo
The HBase shell currently only prints out the ASCII printable range of characters, not unicode, to make it easier to pass around values. In practice, HBase keys are often not designed to be readable and are binary forms (such as encoded integers, hashed values, etc.). That said, the HBase shell is a programmable JRuby console, so you can use the HBase Java APIs within it to get a desired output if you are going to rely on the HBase shell for your scripting work. Here's a simple example:

hbase(main):013:0> config = org.apache.hadoop.hbase.HBaseConfiguration.create
=> #<Java::OrgApacheHadoopConf::Configuration:0x4a864d4d>
hbase(main):014:0> table = org.apache.hadoop.hbase.client.HTable.new(config, 't')
=> #<Java::OrgApacheHadoopHbaseClient::HTable:0x5e85c21b>
hbase(main):015:0> scanner = table.getScanner(Scan.new())
=> #<Java::OrgApacheHadoopHbaseClient::ClientScanner:0x5aa76ad2>
hbase(main):030:0> scanner.each do |row|
hbase(main):031:1* key = String.from_java_bytes(row.getRow())
hbase(main):032:1> puts "'#{key}'"
hbase(main):033:1> end
'我'