Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3373 | 05-03-2017 05:13 PM |
| | 2801 | 05-02-2017 08:38 AM |
| | 3082 | 05-02-2017 08:13 AM |
| | 3011 | 04-10-2017 10:51 PM |
| | 1527 | 03-28-2017 02:27 AM |
01-23-2016
07:28 PM
@Robin Dong take a look at our add-on for Teradata and HDP. Link. Read the docs on the connector. In general, you would use Sqoop to ingest data into and out of an EDW or RDBMS.
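To illustrate the general Sqoop pattern, here is a minimal sketch; the JDBC URL, credentials, table names, and directories are placeholders rather than anything from the original question, and the exact Teradata connect string should be taken from the connector docs.

```bash
# Hypothetical sketch: move data between an EDW/RDBMS and HDFS with Sqoop.
# Connection details, table names, and paths are placeholders.
sqoop import \
  --connect jdbc:teradata://edw-host/DATABASE=sales \
  --username etl_user -P \
  --table ORDERS \
  --target-dir /data/raw/orders \
  --num-mappers 4

# And back out of HDFS into the EDW/RDBMS.
sqoop export \
  --connect jdbc:teradata://edw-host/DATABASE=sales \
  --username etl_user -P \
  --table ORDERS_SUMMARY \
  --export-dir /data/processed/orders_summary
```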
01-23-2016
12:41 PM
1 Kudo
@Jonas Straub this needs to be converted to an article!
01-23-2016
12:34 PM
Our latest tutorial on Ranger walks you through importing a policy with REST @Neeraj Sabharwal
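The tutorial shows the exact call, but as a rough idea, creating a policy through Ranger's public REST API looks something like the sketch below; the host, credentials, service name, and JSON fields are placeholders and should be checked against the Ranger docs for your version.

```bash
# Hypothetical sketch: create an HDFS policy via the Ranger Admin REST API.
# ranger-host, admin:admin, the service name, and the policy body are placeholders.
curl -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST "http://ranger-host:6080/service/public/v2/api/policy" \
  -d '{
        "service": "sandbox_hdfs",
        "name": "demo-policy",
        "resources": { "path": { "values": ["/demo"], "isRecursive": true } },
        "policyItems": [{
          "users": ["guest"],
          "accesses": [
            { "type": "read",    "isAllowed": true },
            { "type": "execute", "isAllowed": true }
          ]
        }]
      }'
```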
01-23-2016
02:10 AM
1 Kudo
@Pramit Mitra at this point Ambari does not support multiple versions of Hive on the same cluster. Hive 0.13 and 0.14 are old; the latest stable release is 1.2.1. If you want to test ACID, you need to use 1.2.1, as it has important fixes for ACID features. That said, you need to be on HDP 2.3 to take advantage of that. If you download our latest Sandbox, it ships Hive 1.2.1 and you can take it for a test drive there.
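If you want a quick way to exercise ACID on the Sandbox, a rough beeline sketch follows; the JDBC URL, user, and table name are placeholders, and the compactor also has to be enabled on the metastore side, which this snippet does not cover.

```bash
# Hypothetical sketch: exercise ACID DML on Hive 1.2.1 via beeline.
# JDBC URL, user, and table name are placeholders.
cat > /tmp/acid_demo.sql <<'EOF'
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- ACID tables must be bucketed, stored as ORC, and flagged transactional.
CREATE TABLE acid_demo (id INT, name STRING)
CLUSTERED BY (id) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

INSERT INTO acid_demo VALUES (1, 'alice'), (2, 'bob');
UPDATE acid_demo SET name = 'carol' WHERE id = 2;
DELETE FROM acid_demo WHERE id = 1;
EOF

beeline -u jdbc:hive2://sandbox.hortonworks.com:10000/default -n hive -f /tmp/acid_demo.sql
```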
01-23-2016
02:04 AM
Google Dataflow is a programming model and SDK that can run on multiple engines such as Spark, Flink, and MapReduce. Hortonworks DataFlow is a data-in-motion processing tool with a visual editor.
01-23-2016
01:59 AM
2 Kudos
@rbalam Finalizing the upgrade means you can't go back: the finalize command removes the previous version of the NameNode and DataNode storage directories, and once the upgrade is finalized the system cannot be rolled back. Perform thorough testing of the upgraded cluster before finalizing. Note that the upgrade must be finalized before another upgrade can be performed. Directories used by Hadoop 1 services set in /etc/hadoop/conf/taskcontroller.cfg are not automatically deleted after the upgrade; administrators can choose to delete them afterwards. To finalize the upgrade, execute the following command once, on the primary NameNode host in your HDP cluster: sudo su -l <HDFS_USER> -c "hdfs dfsadmin -finalizeUpgrade" where <HDFS_USER> is the HDFS service user (for example, hdfs). Here's a link to the docs.
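In concrete terms, with hdfs as the service user, the finalize step looks like this; the dfsadmin -report check is just a sanity check I would run first, not part of the quoted docs.

```bash
# Optional sanity check of HDFS health before finalizing.
sudo su -l hdfs -c "hdfs dfsadmin -report"

# Finalize the upgrade; run once on the primary NameNode host.
# Replace 'hdfs' with your HDFS service user if it differs.
sudo su -l hdfs -c "hdfs dfsadmin -finalizeUpgrade"
```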
01-22-2016
08:51 PM
Did adding the client fix the problem? @Venkata Sridhar Gangavarapu
01-22-2016
08:38 PM
@bsaini @zblanco @Rafael Coss @Balu I was able to run through the tutorial on a machine I built myself with HDP 2.3.4; even though I got something wrong with the paths, it works. Granted, I was using the latest HDP 2.3 tutorial https://github.com/ZacBlanco/hwx-tutorials/blob/master/2-3/tutorials/define-and-process-data-pipelines-with-falcon/define-and-process-data-pipelines-with-apache-falcon.md where there are no CLI commands for Falcon.
01-22-2016
08:25 PM
@Venkata Sridhar Gangavarapu go to the problematic node and install the Hive client on it via Ambari.
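If you would rather script it than click through the UI, adding the client through the Ambari REST API looks roughly like the sketch below; the Ambari host, credentials, cluster name, and host name are placeholders, and the calls should be verified against the Ambari API docs for your version.

```bash
# Hypothetical sketch: add and install HIVE_CLIENT on a host via the Ambari REST API.
# admin:admin, ambari-host, MyCluster, and problem-node.example.com are placeholders.
AMBARI="http://ambari-host:8080/api/v1/clusters/MyCluster"

# Register the HIVE_CLIENT component on the target host.
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  "$AMBARI/hosts/problem-node.example.com/host_components/HIVE_CLIENT"

# Ask Ambari to install it.
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  "$AMBARI/hosts/problem-node.example.com/host_components/HIVE_CLIENT"
```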
01-22-2016
05:31 PM
@John J copy the file to /home/guest as root with "cp Batting.csv /home/guest", then "su guest" and you can upload it to /user/guest, or simply run "sudo -u guest hdfs dfs -put Batting.csv /user/guest".
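Spelled out, the two options look like this; paths and file names are the ones from the question, assuming the file starts out in root's current directory.

```bash
# Option 1: stage the file in guest's home directory as root, then upload as guest.
cp Batting.csv /home/guest
su guest
hdfs dfs -put /home/guest/Batting.csv /user/guest

# Option 2: skip the copy and upload directly as the guest user.
sudo -u guest hdfs dfs -put Batting.csv /user/guest
```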