Member since: 04-12-2016
Posts: 46
Kudos Received: 74
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7955 | 03-09-2017 12:27 PM
 | 2262 | 02-01-2017 09:54 AM
 | 8979 | 07-07-2016 08:44 PM
 | 9528 | 07-05-2016 10:18 AM
 | 3254 | 07-01-2016 06:31 AM
09-28-2017
06:20 PM
1 Kudo
@Sreejith Madhavan I see the comment "running a SELECT statement in LLAP", but you are using a Hive 1.2 environment. Try the Hive 2 code base instead: connect to the HSI (HiveServer2 Interactive) URL via Beeline, which will make use of the LLAP containers.
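For example, a minimal Beeline connection sketch, assuming HiveServer2 Interactive is listening on its default HDP port 10500 (the host name hsi-host and user hive are placeholders for your environment):

# connect to HiveServer2 Interactive instead of the regular HiveServer2
beeline -u "jdbc:hive2://hsi-host:10500/default" -n hive
# any SELECT issued in this session is then executed by the LLAP daemons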
03-27-2017
07:27 AM
1 Kudo
@zkfs I hope this article helps you with the correct settings: https://community.hortonworks.com/articles/591/using-hive-with-pam-authentication.html
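As a rough sketch of the kind of hive-site.xml settings that article covers (the PAM service names "login,sshd" are only illustrative; follow the article for your environment):

<property>
  <name>hive.server2.authentication</name>
  <value>PAM</value>
</property>
<property>
  <name>hive.server2.authentication.pam.services</name>
  <value>login,sshd</value>
</property>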
03-22-2017
07:13 PM
3 Kudos
@Harold Allen Badilla There is no disadvantage in importing a table from SQL Server directly to Hive. In fact, it is a single command which internally imports the data into an HDFS location (you can specify it via --warehouse-dir), creates the Hive table schema, and loads the data into the Hive table. This creates a Hive table name/schema similar to the source database table.

sqoop import --connect "jdbc:sqlserver://11.11.111.11;databaseName=dswFICO" \
  --username sqoop \
  --password sqoop \
  --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --table KNA1 \
  --warehouse-dir <HDFS path> \
  --hive-import

-> Additionally, you can specify --hive-overwrite if you want to overwrite any existing data in the Hive table (if it exists).
-> If you want to load data into a Hive table of your choice, you can use --create-hive-table --hive-table <table name>, as in the sketch below.
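A minimal sketch of that variant, assuming a hypothetical target Hive table sales.kna1_copy (the connection details are the same placeholders as above):

sqoop import --connect "jdbc:sqlserver://11.11.111.11;databaseName=dswFICO" \
  --username sqoop \
  --password sqoop \
  --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --table KNA1 \
  --hive-import \
  --create-hive-table \
  --hive-table sales.kna1_copy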
03-09-2017
01:17 PM
2 Kudos
@elliot gimple This doc might help you: https://forums.databricks.com/questions/7599/create-a-in-memory-table-in-spark-and-insert-data.html
03-09-2017
12:27 PM
8 Kudos
@Padmanabhan Vijendran Can you try with '\073' (the octal escape for a semicolon)?

select split(fieldname, '\073')[0] from tablename;

A similar issue was seen in the Beeswax connection used by Hue: https://issues.cloudera.org/browse/HUE-1332
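For instance, assuming a hypothetical table events whose fieldname column holds a value like 'a;b;c':

select split(fieldname, '\073')[0] from events;  -- returns 'a', since '\073' resolves to ';'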
02-06-2017
12:56 PM
2 Kudos
@Pavani N Check the following params.

core-site.xml:

<property>
  <name>hadoop.proxyuser.<loginuser>.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.<loginuser>.groups</name>
  <value>*</value>
</property>

hdfs-site.xml:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
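To verify, a quick WebHDFS check could look like the sketch below (the NameNode host and user names are placeholders; 50070 is the default NameNode HTTP port on HDP 2.x, and doas makes the request on behalf of the proxied end user):

# list /tmp over WebHDFS as <loginuser>, impersonating <enduser>
curl -i "http://<namenode-host>:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=<loginuser>&doas=<enduser>"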
02-01-2017
10:11 AM
3 Kudos
@Nilesh Shrimant Try creating the table in Parquet format, and set this config: set hive.fetch.task.conversion=more; (see https://issues.apache.org/jira/browse/HIVE-11785)

hive> create table repo (lvalue int, charstring string) stored as parquet;
OK
Time taken: 0.34 seconds
hive> load data inpath '/tmp/repo/test.parquet' overwrite into table repo;
Loading data to table default.repo
chgrp: changing ownership of 'hdfs://nameservice1/user/hive/warehouse/repo/test.parquet': User does not belong to hive
Table default.repo stats: [numFiles=1, numRows=0, totalSize=610, rawDataSize=0]
OK
Time taken: 0.732 seconds
hive> set hive.fetch.task.conversion=more;
hive> select * from repo;

Option 2: There is some info here: http://stackoverflow.com/questions/26339564/handling-newline-character-in-hive Records in Hive are hard-coded to be terminated by the newline character (even though there is a LINES TERMINATED BY clause, it is not implemented).
Either write a custom InputFormat that uses a RecordReader that understands non-newline-delimited records (look at the code for LineReader / LineRecordReader and TextInputFormat), or use a format other than text/ASCII, like Parquet. I would recommend the latter regardless, as text is probably the worst format you can store data in anyway.
02-01-2017
09:54 AM
1 Kudo
@Nic Hopper You can directly import the table to Hive with --hive-import:

sqoop import --connect "jdbc:sqlserver://ipaddress:port;database=dbname;user=username;password=userpassword" \
  --table policy \
  --warehouse-dir "/user/maria_dev/data/SQLImport" \
  --hive-import \
  --hive-overwrite
It creates the Hive table and writes the data into it (for a managed table, the data finally moves under hive.metastore.warehouse.dir).
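As a quick sanity check after the import (the HiveServer2 URL is a placeholder; policy is the table imported above):

beeline -u "jdbc:hive2://<hs2-host>:10000/default" -e "select count(*) from policy"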
08-22-2016
06:42 AM
2 Kudos
@Andrew A Cloudera Connector for Teradata 1.1.1 does not support imports from views, as documented in the limitations section of the user guide. The connector will try to create temporary tables in order to provide all-or-nothing semantics, which I expect is the reason for the exception. If you do not have such privileges on the main database, you can instruct the connector to create the staging tables in any other database where you have enough privileges. Please see this link, in particular the last answer, which has enough explanation: http://stackoverflow.com/questions/16855710/sqoop-teradata-connector-issue-error-the-user-does-not-have-create-table-acce
07-14-2016
08:53 AM
2 Kudos
@chennuri gouri shankar The main issue could be that the system was not able to create new processes because of the limits set for nproc in /etc/security/limits.conf. Increase the value of the "nproc" parameter for the user (or for all users) in /etc/security/limits.d/90-nproc.conf.

Example entry in /etc/security/limits.d/90-nproc.conf:

<user> - nproc 2048    (only for the "<user>" user)

Please use this link to debug on Red Hat Linux flavours: https://access.redhat.com/solutions/543503 You can also check the user limit on creating processes with ulimit -a (to see the limits in your shell); especially check 'ulimit -u', the max user processes.
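A short sketch of the checks and a limits.d entry (the user name "hive" and the value 4096 are only illustrative):

# show all limits for the current shell; -u is the max user processes
ulimit -a
ulimit -u

# example lines in /etc/security/limits.d/90-nproc.conf raising the soft and hard nproc limits for user "hive"
hive    soft    nproc    4096
hive    hard    nproc    4096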