Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3173 | 12-25-2018 10:42 PM |
| | 14196 | 10-09-2018 03:52 AM |
| | 4764 | 02-23-2018 11:46 PM |
| | 2481 | 09-02-2017 01:49 AM |
| | 2914 | 06-21-2017 12:06 AM |
05-06-2016
11:33 AM
Hi @Chokroma Tusir, yes, it's a work-around in this version of HDP. As the docs say, "there is a community solution" - well, we are the community! 🙂 And thanks for the rep point! Can you also accept the answer, to help us manage answered questions? Tnx!
05-06-2016
11:27 AM
Keeping the imported table as-is and transforming it (to ORC, Parquet, etc.) is the preferred way. You keep your data, so there is no need to import again if something unplanned happens, and once you have decided how to handle it you can drop the imported table. (You can also import directly to Hive as ORC; there is a guide on how to do that on HCC.) Regarding the second question, my example was with a managed table (stored at /apps/hive/warehouse); if you wish to store tables elsewhere, you can create another external table, provide the location, and write into it.
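A minimal sketch of that flow; the table names, columns, and location here are hypothetical:
-- transform the imported table into ORC as a managed table
CREATE TABLE t1_orc STORED AS ORC AS SELECT * FROM t1_imported;
-- or keep it outside the warehouse: create an external table at an explicit location and write into it
CREATE EXTERNAL TABLE t1_ext (id INT, name STRING) STORED AS ORC LOCATION '/data/external/t1';
INSERT OVERWRITE TABLE t1_ext SELECT * FROM t1_imported;
-- once you are happy with the ORC copy, the imported table can be dropped
DROP TABLE t1_imported;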
05-06-2016
11:06 AM
1 Kudo
Just set "listeners=PLAINTEXT://localhost:6667", Ambari and Kafka will replace each localhost with a respective broker node FQDN. No change when creating new topics. When running a producer, list all brokers, you can define them separately as you will need them over and over: export BK="broker1.fqdn:6667,broker2.fqdn:6667,broker3.fqdn:6667"
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $BK --topic test3
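To verify from the consuming side, the console consumer in the same directory can read the topic back; a minimal sketch, assuming a hypothetical ZooKeeper quorum at zk1.fqdn:2181 (newer Kafka versions can take --bootstrap-server $BK instead):
# read everything published to test3 so far (zk1.fqdn:2181 is a placeholder for your ZooKeeper quorum)
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper zk1.fqdn:2181 --topic test3 --from-beginning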
05-06-2016
10:53 AM
1 Kudo
Your location is okay, but the format of the files is not. It looks like you imported your table as Avro files, so in your declaration you have to say "STORED AS AVRO", not "STORED AS ORC". Once that succeeds, you can first test your table by selecting some rows, SELECT COUNT(*), etc., and then create your ORC table in Hive, for example: "CREATE TABLE DimSampleDesc_orc STORED AS ORC AS SELECT * FROM DimSampleDesc".
You also have permission issues: the owner of the files is sqoop, but user hive needs write permission to create the external table. There are several ways to resolve this; the fastest is to give write permissions to everyone:
hdfs dfs -chmod -R a+w /dataload/tohdfs/reio/odpdw/may2016/DimSampleDesc
However, you should think about how to handle permissions in your system. For example, you can run sqoop etc. as an end user (like user1), add user1 to the hadoop group, and give write permissions to the group (g+w). That will work because user hive also belongs to the group hadoop. Or you can use Ranger to manage permissions.
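A sketch of those steps, with an illustrative column list (the real columns come from your Sqoop import) and the HDFS path from your question:
-- declare the external table over the Avro files Sqoop wrote
CREATE EXTERNAL TABLE DimSampleDesc (sample_id INT, description STRING)
STORED AS AVRO
LOCATION '/dataload/tohdfs/reio/odpdw/may2016/DimSampleDesc';
-- sanity-check it before converting
SELECT COUNT(*) FROM DimSampleDesc;
-- then build the ORC copy
CREATE TABLE DimSampleDesc_orc STORED AS ORC AS SELECT * FROM DimSampleDesc;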
05-06-2016
10:37 AM
Hi @Mike Vogt, thanks, and glad to hear it worked. Can you kindly accept the answer and thus help us manage answered questions? Tnx!
05-06-2016
06:24 AM
1 Kudo
Try replacing --target-dir with --warehouse-dir; table t1 will then be imported into the directory warehouse-dir/t1. Regarding Hive, add --hive-import; the very first time use --create-hive-table, and after that use --hive-overwrite. If the trouble continues, test your Oozie Sqoop action on a single-table import into HDFS, just to make sure you have the right syntax, and after that retry import-all-tables.
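A sketch of the Sqoop arguments described above; the JDBC URL, credentials, and warehouse path are hypothetical:
# first run: create the Hive tables
sqoop import-all-tables \
  --connect jdbc:mysql://db.fqdn:3306/mydb \
  --username user1 --password-file /user/user1/.pwd \
  --warehouse-dir /user/user1/warehouse \
  --hive-import --create-hive-table
# subsequent runs: refresh them in place
sqoop import-all-tables \
  --connect jdbc:mysql://db.fqdn:3306/mydb \
  --username user1 --password-file /user/user1/.pwd \
  --warehouse-dir /user/user1/warehouse \
  --hive-import --hive-overwrite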
05-05-2016
01:32 PM
Try this:
export http_proxy=http://proxy.tcs.com:8080
wget --proxy-user=1105949 --proxy-password='Bvs#...' http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.1/ambari.repo -O /etc/yum.repos.d/ambari.repo
Type your full password on the second line, and confirm the proxy URL and the proxy user. If it works you will get ambari.repo, and you should be good to go since your yum proxy looks good. Confirm with "yum repolist", and try to install ambari-agent.
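The confirmation step, as a sketch:
# the new repo should now appear in the list; if it does, install the agent
yum repolist
yum install -y ambari-agent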
05-05-2016
10:11 AM
Check /etc/yum.conf; the proxy is usually set there.
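For reference, the relevant lines in /etc/yum.conf usually look like this (host, port, and credentials below are placeholders):
[main]
proxy=http://proxy.example.com:8080
proxy_username=user1
proxy_password=secret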
05-05-2016
07:58 AM
No, Impala cannot read all versions either. Impala defines tables in HBase using Hive DDL because Impala doesn't support custom SerDes to define tables, and as we saw, Hive doesn't expose HBase timestamps (more detail about Impala and HBase here). So, if you want to access timestamps, you have to make them "first class citizens" and include them in your HBase table key, or among the values, if you can ensure unique keys by other means.
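As an illustration of the "first class citizen" approach, a hypothetical HBase table events with column family cf could carry an explicit timestamp column that Hive (and thus Impala) can see; the table and column names here are assumptions:
CREATE EXTERNAL TABLE events_hive (rowkey STRING, event_ts BIGINT, payload STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:event_ts,cf:payload")
TBLPROPERTIES ("hbase.table.name" = "events");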
05-05-2016
04:16 AM
1 Kudo
Do you still have the 4 *-site.xml files in your Oozie share/lib Spark directory in HDFS? If not, upload them to HDFS; or even better, to be sure, remove them and upload the latest versions. Then restart Oozie and retry your action.
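A sketch of the check and re-upload, assuming the default sharelib location under /user/oozie/share/lib and client configs under /etc/hadoop/conf and /etc/hive/conf (the lib_<timestamp> directory name and the exact file set are assumptions; adjust to your cluster):
# see whether the *-site.xml files are still there
hdfs dfs -ls /user/oozie/share/lib/lib_<timestamp>/spark/
# remove stale copies and upload the current ones
hdfs dfs -rm /user/oozie/share/lib/lib_<timestamp>/spark/*-site.xml
hdfs dfs -put /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/yarn-site.xml /etc/hive/conf/hive-site.xml /user/oozie/share/lib/lib_<timestamp>/spark/
# then restart Oozie and retry the action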