Member since: 02-09-2016
Posts: 559
Kudos Received: 422
Solutions: 98

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2130 | 03-02-2018 01:19 AM |
| | 3468 | 03-02-2018 01:04 AM |
| | 2359 | 08-02-2017 05:40 PM |
| | 2340 | 07-17-2017 05:35 PM |
| | 1704 | 07-10-2017 02:49 PM |
08-31-2016
03:59 PM
@jk Try specifying the full SerDe definition (although what you tried should work): ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' WITH SERDEPROPERTIES ("serialization.encoding"='UTF-8');
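For context, a minimal sketch of what that full SerDe definition can look like in a complete CREATE TABLE statement; the table name, columns, and location below are placeholders, not taken from your setup:

```bash
# Hypothetical example: a text table read as UTF-8 via LazySimpleSerDe.
# Table name, columns, and location are made up for illustration.
hive -e "
CREATE EXTERNAL TABLE my_utf8_table (
  id INT,
  name STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES ('serialization.encoding'='UTF-8')
STORED AS TEXTFILE
LOCATION '/user/hive/warehouse/my_utf8_table';
"
```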
08-29-2016
07:25 PM
@Sami Ahmad When you say "local table", do you mean something like "default.tab1"? It's always a good idea to be as specific as possible about where you want Hive to store the data. If you created a new database called "test" and then ran your original command, how would Sqoop and Hive know where to write the table? By specifying "default.tab1" or "test.tab1", you remove the unknowns.
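To make that concrete, a small sketch on the Hive side; the database and table names here are placeholders:

```bash
# Hypothetical: create a second database so bare table names become ambiguous.
hive -e "CREATE DATABASE IF NOT EXISTS test;"

# A bare name like "tab1" resolves to whatever the current database is;
# "default.tab1" or "test.tab1" says exactly where the table lives.
hive -e "DESCRIBE default.tab1;"
hive -e "DESCRIBE test.tab1;"
```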
08-29-2016
07:01 PM
@Sami Ahmad You can use the "default" database in Hive without any issue; it is a fairly common approach. The error you are seeing could be related to Sqoop having access problems with the PATRON database in Oracle. You can pass the --hive-table parameter to specify the name of the Hive table into which you want to store the data: https://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_literal_sqoop_create_hive_table_literal Try adding "--hive-table default.tab1" to the command.
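For reference, a hedged sketch of what such a command might look like; the JDBC URL, credentials, and Oracle table name are placeholders, not your actual values:

```bash
# Hypothetical Sqoop import into a fully qualified Hive table.
# Connection details and the source table are placeholders.
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott \
  --password-file /user/scott/.oracle_password \
  --table SOME_ORACLE_TABLE \
  --hive-import \
  --hive-table default.tab1 \
  -m 1
```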
08-25-2016
02:03 PM
@Yukti Agrawal If you want to get a parameter in a bash script, you use the positional parameters: $1 is the first parameter, $2 is the second, and so on. So in the script you would have something like this: myFile=$1 Anywhere you had "myFile.txt", you can replace it with "$myFile". Naturally you would want to do some error checking to make sure the file exists; there are a number of ways to do that, one of which is sketched below.
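One possible version, assuming the script just needs the file name as its first argument (the wc command at the end is a stand-in for whatever your script actually does with the file):

```bash
#!/bin/bash
# Hypothetical sketch: take the file name as the first argument and
# check that it exists before using it.
myFile=$1

if [ -z "$myFile" ]; then
  echo "Usage: $0 <file>" >&2
  exit 1
fi

if [ ! -f "$myFile" ]; then
  echo "File not found: $myFile" >&2
  exit 1
fi

# Wherever the script previously hard-coded "myFile.txt", use "$myFile" instead.
wc -l "$myFile"
```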
08-24-2016
10:51 PM
@Ranjith Penupalli Installing in an Azure Linux VM is not significantly different from installing on a bare-metal Linux server. Ensure you have created an Azure VM that meets the recommendations/requirements documented for Ansible Tower: http://docs.ansible.com/ansible-tower/latest/html/quickinstall/prepare.html This link covers using Ansible playbooks to deploy on Azure: https://docs.ansible.com/ansible/guide_azure.html While this is not specifically what you asked, you may find it helpful. You may also find this link helpful: https://azure.microsoft.com/en-us/documentation/templates/ansible-advancedlinux/ Are there specific parts of the process you are having trouble with?
08-24-2016
07:52 PM
@Yukti Agrawal When you use sudo su to switch to the hdfs user, include the "-" option so that you get that user's environment settings, etc.: sudo su - hdfs -c "hdfs dfs -mkdir ${myOptions[0]}"
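A minimal sketch of the surrounding context, assuming myOptions is a bash array of HDFS paths built earlier in your script (the paths here are placeholders):

```bash
# Hypothetical: myOptions holds the HDFS paths the script should create.
myOptions=("/data/landing" "/data/staging")

# "su -" gives a full login shell for the hdfs user, so its environment is set up correctly.
sudo su - hdfs -c "hdfs dfs -mkdir ${myOptions[0]}"
```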
08-23-2016
04:38 PM
@Alex Br Good to know you solved it. Thank you for sharing the solution!
08-23-2016
02:08 PM
1 Kudo
@milind pandit Apache Falcon is specifically designed to replicate data between clusters. It is another tool in your toolbox in addition to the suggestions provided by @zkfs. http://hortonworks.com/apache/falcon/
08-23-2016
02:06 PM
1 Kudo
@Alex Br Can you provide the steps you took to create the directory?
08-23-2016
02:02 PM
@shashi cheppela
As @Eyad Garelnabi already mentioned, the Hortonworks Sandbox is a great way to experiment with HDP 2.5. If you are comfortable with Amazon AWS, you can use Cloudbreak to deploy the tech preview of HDP 2.5; this HCC article walks you through the process: https://community.hortonworks.com/articles/52380/how-to-test-hdp-25-tp-using-cloudbreak-14-on-amazo.html Another way to test HDP 2.5 is to use Hortonworks Cloud (HDP AWS), which is based on Cloudbreak. They seem very similar, but Hortonworks Cloud is meant for ephemeral clusters, while Cloudbreak is meant for much longer-running, "permanent" clusters: http://hortonworks.github.io/hdp-aws/