Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3365 | 05-03-2017 05:13 PM |
| | 2797 | 05-02-2017 08:38 AM |
| | 3076 | 05-02-2017 08:13 AM |
| | 3006 | 04-10-2017 10:51 PM |
| | 1517 | 03-28-2017 02:27 AM |
01-12-2016
08:22 AM
You need to set the output directory in your job:
FileInputFormat.addInputPath(job, new Path(otherArgs.get(0)));
FileOutputFormat.setOutputPath(job, new Path(otherArgs.get(1)));
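For reference, a minimal sketch of how such a job is typically launched from the command line; the jar and driver class names are placeholders, and the two trailing arguments end up as otherArgs.get(0) and otherArgs.get(1):
# run the job, passing the input and output paths as arguments (names are illustrative)
hadoop jar my-job.jar com.example.MyDriver /user/test/input /user/test/output
# note: the output directory must not already exist, or the job will fail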
01-12-2016
07:55 AM
I should've clarified which JIRA I meant; I was suggesting filing a JIRA with the Apache project, @Bidyut B.
01-12-2016
07:47 AM
Here's a scenario that works on the Sandbox:

# Create Hive table
drop table if exists export_table;
create table export_table (key int, value string) row format delimited fields terminated by ",";

# Populate Hive with dummy data
insert into export_table values("1", "ExportedValue");

# Confirm the Hive table has data
select * from export_table;

# Display the Hive values as HDFS files
hdfs dfs -cat /apps/hive/warehouse/export_table/000000_0

# Export the table to MySQL; the MySQL table must exist first
su mysql
mysql -u root
create database export;
use export;
create table exported (rowkey int, value varchar(20));
exit;

# On the HDP 2.3.2 Sandbox, the SQOOP-1400 bug requires --driver com.mysql.jdbc.Driver to work around the problem
# Sqoop export from a Hive table into MySQL
sqoop export --connect jdbc:mysql://127.0.0.1/export --username hive --password hive --table exported --direct --export-dir /apps/hive/warehouse/export_table --driver com.mysql.jdbc.Driver

# Log in to MySQL and check the table
su mysql
mysql -u root
use export;
select * from exported;
exit;
01-12-2016
07:47 AM
You typically wouldn't use Sqoop to query a Hive table. Log in to MySQL and list the databases there. Once you know which database you'd like to import, you can use the command you used before. For exporting Hive data, you'd use Hive commands, Pig scripts, HDFS commands, etc.
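As a rough sketch (the connection string and credentials are placeholders), you can list the MySQL databases either from the MySQL client or via Sqoop itself:
# list databases from the MySQL client
mysql -u root -e "show databases;"
# or let Sqoop enumerate them over JDBC, prompting for the password
sqoop list-databases --connect jdbc:mysql://127.0.0.1/ --username root -P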
01-11-2016
07:30 PM
@Peter Coates I think it depends on the staleness settings in your hdfs-site configuration. If it takes a long time to reboot a server, the NameNode will mark the DataNode as stale. Take a look at the property dfs.namenode.stale.datanode.interval.
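To check what your cluster is currently using, a quick sketch, assuming you run it on a node with the HDFS client configuration in place:
# print the effective staleness interval in milliseconds (the Hadoop default is 30000)
hdfs getconf -confKey dfs.namenode.stale.datanode.interval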
01-11-2016
07:25 PM
@Mihai Morareanu Glad it worked for you; please accept one of the valid answers to close out the thread.
01-11-2016
07:19 PM
@Jade Liu that is old indeed. You have several options: stand up a separate cluster and distcp the data to it, install an earlier Spark version on your current cluster (not supported by Hortonworks, by the way), or upgrade the current cluster. With Ambari 2.2 we're dropping support for HDP 2.0, so we highly encourage you to upgrade to the latest HDP; you will benefit immensely from all the new features and fixes. Going directly from 2.0 to 2.3 is not supported: you have to upgrade to 2.1 first, and then you can use Express Upgrade to go to 2.3. We recommend you contact Hortonworks support for upgrades of this kind.
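If you go the separate-cluster route, the data copy is a single distcp per path; the NameNode hostnames and paths below are placeholders:
# copy a directory tree from the old cluster to the new one (hostnames and paths are illustrative)
hadoop distcp hdfs://old-nn.example.com:8020/data hdfs://new-nn.example.com:8020/data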
01-11-2016
07:14 PM
1 Kudo
@Revlin Abbi you're confusing your local filesystem with the Sandbox filesystem. You can create a MySQL database inside the Sandbox, since it already ships with a MySQL server; you can create a database in Postgres as well (the Ambari database is one such database). Unless you set up host-guest networking so that the guest can access the host's services, you won't be able to achieve what you're doing. It's a lot simpler to work with the MySQL instance inside the Sandbox guest machine.
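A minimal sketch of working with the MySQL instance inside the Sandbox, assuming the usual Sandbox SSH port forwarding on 2222 (adjust if your setup differs):
# SSH into the Sandbox guest from the host machine
ssh root@127.0.0.1 -p 2222
# then use the MySQL server that ships with the Sandbox
mysql -u root -e "show databases;"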
01-11-2016
06:01 PM
1 Kudo
@Kuldeep Kulkarni
The create cluster command is asynchronous; what specifically would you gain from having a timeout? You may need to file an enhancement JIRA with the Ambari project.
Step 5: Create Cluster
POST /api/v1/clusters/:clusterName
The request body includes the blueprint name, host mappings, and configurations from Step 3. The request is asynchronous and returns a /requests URL which can be used to monitor progress.
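A rough sketch of that call and the follow-up polling with curl; the host, credentials, cluster name, and template file below are placeholders:
# submit the cluster-creation template (the call returns immediately)
curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @cluster-template.json http://ambari-host:8080/api/v1/clusters/mycluster
# the response includes an href such as .../clusters/mycluster/requests/1; poll it to track progress
curl -u admin:admin -H "X-Requested-By: ambari" http://ambari-host:8080/api/v1/clusters/mycluster/requests/1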
01-11-2016
05:57 PM
@Kuldeep Kulkarni you can also poke around in the Ambari database for components in the installed state, but that's essentially the same as using the API, only much more intrusive.
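For comparison, the less intrusive API route looks roughly like this; the host, credentials, and cluster name are placeholders, and the exact predicate may need adjusting for your Ambari version:
# list host components currently in the INSTALLED state via the REST API
curl -u admin:admin "http://ambari-host:8080/api/v1/clusters/mycluster/host_components?HostRoles/state=INSTALLED"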