Member since: 09-19-2020
Posts: 46
Kudos Received: 1
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2331 | 07-13-2021 12:09 AM
07-14-2021 05:52 AM

@roshanbi Is your issue resolved? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
07-08-2021 02:24 PM

Hi @roshanbi, You can simply put your sqoop command in a shell script and have cron run that script on your desired schedule (depending on how frequently you expect the data to be updated in the Oracle source table). Alternatively, you can use Oozie to orchestrate the sqoop job. Regards, Alex
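As a minimal sketch of the cron approach (the connection string, credentials file, table names, paths, and schedule below are all illustrative, not taken from your environment):

```shell
#!/bin/bash
# run_sqoop_import.sh -- illustrative wrapper around a sqoop import.
# All connection details and paths here are hypothetical placeholders.
sqoop import \
  --connect "jdbc:oracle:thin:@//dbhost:1521/ORCL" \
  --username myuser \
  --password-file /user/myuser/oracle.pwd \
  --table MY_SOURCE_TABLE \
  --target-dir /data/my_source_table \
  --incremental append --check-column ID --last-value 0

# Example crontab entry (added via `crontab -e`): run the script every hour
# at minute 15, appending output to a log file.
# 15 * * * * /home/myuser/run_sqoop_import.sh >> /var/log/sqoop_import.log 2>&1
```

For repeated runs you would typically use sqoop's incremental import options (or a saved sqoop job, which tracks the last imported value for you) so each run pulls only the new rows from the Oracle table.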
07-06-2021 11:54 PM

Hi @roshanbi, Have any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, kindly provide more information that can help the experts help you.
07-06-2021 05:37 AM

@roshanbi Have you resolved your issue? If so, would you mind sharing the solution and marking this thread as solved? If you are still experiencing the issue, can you provide the information @RangaReddy has requested?
07-04-2021 11:36 PM

Hi @roshanbi, We can confirm that the type of a Kudu column cannot be changed after it is created, for two reasons. First, the known limitations section of the Kudu documentation [1] states: "Non-alterable Column Types: Kudu does not allow the type of a column to be altered." Second, the Impala documentation contains no mention of changing the type of a Kudu column. [1] https://kudu.apache.org/docs/schema_design.html#known-limitations After your review, please accept this as a solution so it can help others who are looking for the same. Thank you, Chethan YM
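Since the type itself cannot be altered in place, a common workaround is to add a new column of the desired type, backfill it, and drop the old one. A hedged sketch via Impala (table and column names are hypothetical, and this assumes the table is a Kudu table managed through Impala):

```shell
# Hypothetical example: migrate column "amount" from INT to BIGINT on a Kudu
# table by adding a new column, copying the data, and dropping the old column.
impala-shell -q "ALTER TABLE my_kudu_table ADD COLUMNS (amount_big BIGINT);"
impala-shell -q "UPDATE my_kudu_table SET amount_big = CAST(amount AS BIGINT);"
impala-shell -q "ALTER TABLE my_kudu_table DROP COLUMN amount;"
```

Note that the new column keeps a different name; renaming it back afterwards (ALTER TABLE ... CHANGE) is possible but may affect downstream queries, so test on a copy first.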
07-04-2021 07:11 PM

Hi @roshanbi I think there are really two questions here:

1. For each row of my data set, can I mask the last 5 digits of each data element present in the pri_identity column using Ranger?
2. Is this possible to achieve while using Kudu?

I'll restrict myself to addressing the first question. Your second question is a good one, though, because most of the documentation I've read about this simply doesn't mention Kudu, so I'll leave that part of your question to another community member who has more experience with Apache Kudu as a storage option.

You didn't provide the version of Impala, Ranger, or Kudu you're using, or on what distribution, but I will attempt to point you in the right direction nonetheless. You can see a quick demonstration of why and how to use a mask in Ranger on CDP in the first two minutes of this video: How to use Column Masking and Row Filtering in CDP. You can see a slightly longer demonstration of how to do something similar on HDP 3.1.x in this video: How to mask Hive columns using Atlas tags and Ranger.

Neither quite shows how to establish the custom masking expression, though, which is what I think you'll need to satisfy your requirements. To suppress the display of the last 5 digits in the pri_identity column, you are likely to need a custom masking expression for use in Ranger. Ranger includes several "out of the box" masking types, but a cursory look at the documentation indicates that the masking policy you've described is not one of them. If that's true, you can always write a custom masking expression using the UDF syntax, which you can read about at the Apache.org site here: Hive Operators and User-Defined Functions (UDFs).

Hope this helps
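As a hedged illustration of what such a custom masking expression might look like (the table name and connection details are hypothetical, and the exact placeholder syntax for the column varies by Ranger version), one way to validate the expression in Hive before pasting it into Ranger's "Custom" mask type:

```shell
# Hypothetical check of a masking expression that replaces the last 5
# characters of pri_identity with asterisks. The JDBC URL and table name
# are illustrative only.
beeline -u "jdbc:hive2://hiveserver:10000/default" -e \
  "SELECT concat(substr(pri_identity, 1, length(pri_identity) - 5), '*****')
     FROM customer LIMIT 5;"
```

Once the expression returns the masked shape you want, the same expression goes into the Custom mask field of the Ranger masking policy for that column.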
06-30-2021 07:34 AM

Make sure that you are using an Oracle JDBC driver version that is compatible with the Oracle DB version you are connecting to.
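One quick way to check which driver you have (the jar path below is hypothetical, and not every driver build behaves this way) is to run the jar directly; Oracle JDBC driver jars typically print their version when executed:

```shell
# Many Oracle JDBC jars print their version when run directly.
# Adjust the path to wherever your driver actually lives.
java -jar /usr/share/java/ojdbc8.jar
```

You can then compare the reported driver version against Oracle's JDBC/database interoperability matrix for the database release you are connecting to.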
06-27-2021 09:04 AM

Hi @roshanbi Please find the difference:

```scala
// spark.read.textFile returns a Dataset[String]
val textFileDF: Dataset[String] = spark.read.textFile("/path")

// sparkContext.textFile returns an RDD[String]
val textFileRDD: RDD[String] = spark.sparkContext.textFile("/path")
```

If you are satisfied, please Accept as Solution.
06-27-2021 12:41 AM

Hi, I run the following command and I am getting the error below:

```
sqoop import --connect "jdbc:oracle:thin:@10.215.227.*:1521:cxx_stby" --username cbs_view --password ccccc --query "select TMP_ACCOUNT_CODE_N,decode(EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@STREET_DESC'),'.',null,EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@STREET_DESC'))||' '||EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@SUB_LOCALITY_DESC') ||' '||EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@CITY_DESC') "ADDRESS" from tmp_address_xml@cbsstandby where $CONDITIONS" -m 4 --split-by object_type --hive-import --target-dir '/devsh_loudacre' --hive-table test_oracle.address_tab --verbose
```

```
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
21/06/27 07:39:14 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7.3.0.1.0-187
21/06/27 07:39:14 DEBUG tool.BaseSqoopTool: Enabled debug logging.
21/06/27 07:39:14 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
21/06/27 07:39:14 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
21/06/27 07:39:14 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
21/06/27 07:39:14 DEBUG sqoop.ConnFactory: Loaded manager factory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
21/06/27 07:39:14 DEBUG sqoop.ConnFactory: Loaded manager factory: org.apache.sqoop.manager.DefaultManagerFactory
21/06/27 07:39:14 DEBUG sqoop.ConnFactory: Trying ManagerFactory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
21/06/27 07:39:14 DEBUG oracle.OraOopUtilities: Enabled OraOop debug logging.
21/06/27 07:39:14 DEBUG oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop can be called by Sqoop!
21/06/27 07:39:14 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
21/06/27 07:39:14 DEBUG sqoop.ConnFactory: Trying ManagerFactory: org.apache.sqoop.manager.DefaultManagerFactory
21/06/27 07:39:14 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@10.215.227.22:1521
21/06/27 07:39:14 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.
21/06/27 07:39:14 INFO manager.SqlManager: Using default fetchSize of 1000
21/06/27 07:39:14 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@3bf7ca37
21/06/27 07:39:14 INFO tool.CodeGenTool: Beginning code generation
21/06/27 07:39:14 ERROR tool.ImportTool: Import failed: java.io.IOException: Query [select TMP_ACCOUNT_CODE_N,decode(EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@STREET_DESC'),'.',null,EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@STREET_DESC'))||' '||EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@SUB_LOCALITY_DESC') ||' '||EXTRACTVALUE (address_x, '//ADDRESS_DTLS/@CITY_DESC') ADDRESS from tmp_address_xml@cbsstandby where ] must contain '$CONDITIONS' in WHERE clause.
	at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:333)
	at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1879)
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1672)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:106)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:516)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:656)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:150)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:186)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:240)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:249)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:258)
```

Kindly advise. Thanks, Roshan
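The error message itself points at the cause: because the --query value is wrapped in double quotes, the shell expands `$CONDITIONS` (an undefined variable) to an empty string before Sqoop ever sees the query, which is why the logged query ends at `where ]`. The embedded `"ADDRESS"` quotes around the alias are stripped by the shell for the same reason. A minimal demonstration of the quoting behavior:

```shell
# With double quotes, the shell expands $CONDITIONS (undefined) to nothing:
echo "select 1 from dual where $CONDITIONS"

# With single quotes (or by escaping it as \$CONDITIONS inside double quotes),
# the literal text survives and reaches the program intact:
echo 'select 1 from dual where $CONDITIONS'
```

In the sqoop command, escaping the variable as `\$CONDITIONS` and the inner quotes as `\"ADDRESS\"` (or simply dropping the quotes around the alias) should let the literal query reach Sqoop.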
06-24-2021 08:23 AM

Hi @roshanbi, If you are satisfied with my answer, please Accept it as a Solution.