Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3574 | 05-03-2017 05:13 PM |
| | 2945 | 05-02-2017 08:38 AM |
| | 3196 | 05-02-2017 08:13 AM |
| | 3159 | 04-10-2017 10:51 PM |
| | 1632 | 03-28-2017 02:27 AM |
02-01-2016
02:31 PM
@Avraha Zilberman execute the job specifying only hbase-client and hadoop-client; if it still doesn't work, add hbase-server, but that's it.
02-01-2016
02:28 PM
@Ram D on the HWX platform, use our recommendation. Elsewhere it's up to you.
02-01-2016
02:23 PM
Take a look at job preemption, a new feature in YARN. @Ram D you can also raise and lower the priority of each job with the command to interact with MapReduce jobs:

Usage: hadoop job [GENERIC_OPTIONS] [-submit <job-file>] | [-status <job-id>] | [-counter <job-id> <group-name> <counter-name>] | [-kill <job-id>] | [-events <job-id> <from-event-#> <#-of-events>] | [-history [all] <jobOutputDir>] | [-list [all]] | [-kill-task <task-id>] | [-fail-task <task-id>] | [-set-priority <job-id> <priority>]

| COMMAND_OPTION | Description |
|---|---|
| -submit job-file | Submits the job. |
| -status job-id | Prints the map and reduce completion percentage and all job counters. |
| -counter job-id group-name counter-name | Prints the counter value. |
| -kill job-id | Kills the job. |
| -events job-id from-event-# #-of-events | Prints the events' details received by the JobTracker for the given range. |
| -history [all] jobOutputDir | Prints job details, plus failed and killed tip details. More details about the job, such as successful tasks and task attempts made for each task, can be viewed by specifying the [all] option. |
| -list [all] | Displays jobs which are yet to complete. -list all displays all jobs. |
| -kill-task task-id | Kills the task. Killed tasks are NOT counted against failed attempts. |
| -fail-task task-id | Fails the task. Failed tasks are counted against failed attempts. |
| -set-priority job-id priority | Changes the priority of the job. Allowed priority values are VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW. |
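As a rough sketch of the priority workflow above (the job id `job_1454340000000_0001` is hypothetical; you'd take a real id from `-list`, and the commands need a running cluster):

```shell
# List incomplete jobs to find the job id of the one hogging resources
hadoop job -list

# Check its map/reduce completion percentage and counters
hadoop job -status job_1454340000000_0001

# Lower its priority so other jobs get scheduled first
# (allowed values: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW)
hadoop job -set-priority job_1454340000000_0001 LOW
```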
02-01-2016
02:20 PM
127.0.0.1, or, if you add a 2nd network card, 192.168.56.101. @Bharathkumar B you can also comment out the entry in /etc/hosts and use 127.0.0.1 again.
02-01-2016
02:17 PM
@Kibrom Gebrehiwot you probably shut down the machine abruptly. SSH to the machine, issue "shutdown -r now", and hope everything comes back. If it doesn't, SSH again, issue "ambari-server start", log in to the web UI at 127.0.0.1:8080 (user: admin, pass: admin), and start the services manually. If it still doesn't work, import a new sandbox and start from scratch.
02-01-2016
01:58 PM
@Avraha Zilberman have you looked at these examples? Can you show your pom file as well? You'd usually call a built-in utility for HBase like so:

TableMapReduceUtil.initTableMapperJob(
    tableName,       // input HBase table name
    scan,            // Scan instance to control CF and attribute selection
    MyMapper.class,  // mapper
    null,            // mapper output key
    null,            // mapper output value
    job);
job.setOutputFormatClass(NullOutputFormat.class);
02-01-2016
12:44 PM
Look at this example Link for Spark Streaming, and this example for Kafka Link. @Krishna Srinivas
02-01-2016
12:41 PM
1 Kudo
@Krishna Srinivas take a look at NiFi: you can Sqoop into a spooling dir and have Kafka pick up from there on. Spark Streaming support in NiFi already exists, and Storm is going to be included soon. Rough idea for your last inquiry: Sqoop incremental into an HDFS directory > watch the HDFS dir with NiFi > PutKafka > Storm/Spark. You can also split one pipe into two in NiFi, and join two pipes into one.
02-01-2016
12:38 PM
@John Smith it's a better practice, so that if you do happen to get a null, at least it won't bomb. As for the JIRA: that's open source, individual contributors also need to earn a living, and if they have higher responsibilities they'll get to it when their queue is clear. I wouldn't get your hopes up; identify alternative ways. Shoot an email to the Avro mailing list. They may help faster.
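To illustrate the "won't bomb on null" point above with a minimal sketch (the class and values here are hypothetical, not from the original thread): instead of dereferencing a possibly-null value directly, fall back to a default.

```java
import java.util.Optional;

public class NullSafeDemo {
    // Hypothetical accessor standing in for a record field that may be unset
    static String rawValue(boolean present) {
        return present ? "hello" : null;
    }

    // Defensive version: never returns null, so callers can't NPE on it
    static String safeValue(boolean present) {
        return Optional.ofNullable(rawValue(present)).orElse("");
    }

    public static void main(String[] args) {
        System.out.println(safeValue(true).length());   // 5
        System.out.println(safeValue(false).length());  // 0, no NullPointerException
    }
}
```

Calling `rawValue(false).length()` directly would throw a NullPointerException; the defensive wrapper degrades to an empty string instead.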