Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2626 | 12-25-2018 10:42 PM |
| | 12058 | 10-09-2018 03:52 AM |
| | 4164 | 02-23-2018 11:46 PM |
| | 1838 | 09-02-2017 01:49 AM |
| | 2166 | 06-21-2017 12:06 AM |
01-21-2016
02:39 AM
Something is possibly wrong with your connection to the Ambari server node, or with the Ambari DB. Can you see "Settings" for other Hadoop components? What about the "Advanced" tab and the Hive summary page, can you see those?
01-20-2016
02:16 PM
3 Kudos
@sivasaravanakumar k Sorry, but if you want Sqoop to support the described functionality, a time-stamp column is required. You can easily add it to your existing table by running this in MySQL:
ALTER TABLE student_info ADD ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
UPDATE student_info SET ts=now();
That's all! Whenever you update values in your table, for example with "update student_info set ...", ts will be updated automatically, and Sqoop will use ts to import only the updated rows. Please give it a try.
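If you want to confirm that behavior before involving Sqoop, a quick check from the mysql client looks roughly like this (the school database and the id and name columns are just placeholders for your own schema):
mysql school -e "UPDATE student_info SET name='test' WHERE id=1;"
mysql school -e "SELECT id, name, ts FROM student_info WHERE id=1;"   # ts should now show the time of the update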
01-20-2016
07:20 AM
5 Kudos
Hi @Pardeep, with Support's help we got rid of those alerts by adding 'misfire_grace_time': 10 to APS_CONFIG in /usr/lib/python2.6/site-packages/ambari_agent/AlertSchedulerHandler.py on every node. After the update that section should read:
APS_CONFIG = {
  'threadpool.core_threads': 3,
  'coalesce': True,
  'standalone': False,
  'misfire_grace_time': 10
}
This allows up to 10 seconds for all alert checks to complete. After that, restart all ambari-agents. We tried it on one cluster and it worked. This is most likely fixed in Ambari 2.2, but it happens in 2.1.2.
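As a rough sketch, the per-node steps could look like this (back up the file before editing; ambari-agent restart is the standard agent restart command):
# run on every node
cp /usr/lib/python2.6/site-packages/ambari_agent/AlertSchedulerHandler.py{,.bak}
# edit the file and add 'misfire_grace_time': 10 to APS_CONFIG as shown above, then:
ambari-agent restart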
01-20-2016
07:04 AM
The above Sqoop job will do that. Just add a new column to your MySQL table like the one below. Whenever you update your table, ts will be set automatically to the current time, and Sqoop will use ts to import only the updated rows.
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
01-20-2016
05:24 AM
2 Kudos
Use --incremental lastmodified. You need to add an extra time-stamp column to your MySQL table, and whenever you update a row in MySQL the time-stamp column has to be updated as well. Let's call that new column ts; then you can create a new Sqoop job like this:
$ sqoop job --create student_info2 -- import --connect ... --incremental lastmodified --check-column ts
and run student_info2. If you run from the command line you can also specify "--last-value last-ts", telling Sqoop to import only rows where ts > last-ts. When you use saved jobs, Sqoop tracks that value for you.
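A fuller sketch of such a saved job, with a hypothetical JDBC URL, credentials, target directory, and merge key standing in for your own values:
# all connection details below are placeholders -- substitute your own
sqoop job --create student_info2 -- import \
  --connect jdbc:mysql://dbhost:3306/school \
  --username dbuser -P \
  --table student_info \
  --target-dir /user/me/student_info \
  --incremental lastmodified \
  --check-column ts \
  --merge-key id          # hypothetical primary key, used to merge updated rows into existing data

# each run imports only rows with ts newer than the stored last-value,
# and Sqoop updates that last-value for you
sqoop job --exec student_info2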
01-20-2016
04:11 AM
Check your second command: you omitted "dfs".
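For illustration only, a hypothetical example of the kind of slip meant here:
hdfs -ls /tmp       # "dfs" is missing, so this fails
hdfs dfs -ls /tmp   # correct form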
01-19-2016
02:46 PM
@narasimha meruva I checked the details of the shell and java Oozie actions and found that both are executed as a 1-mapper, 0-reducer MapReduce job. I'm not sure how exactly "hadoop jar" is executed inside a single mapper, but I'm afraid this approach will not easily scale to 100 mappers, if at all. On the other hand, as we know, it will definitely work as a map-reduce action, so, to avoid further trouble, my suggestion is to identify the mapper and reducer classes and run this as an Oozie map-reduce action.
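If you go that route, a minimal sketch of such a map-reduce action could look like this, assuming hypothetical mapper and reducer class names and input/output parameters (abc.jar would sit in the workflow's lib/ directory):
<action name="my-mr">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
      <property><name>mapred.mapper.class</name><value>com.example.MyMapper</value></property>
      <property><name>mapred.reducer.class</name><value>com.example.MyReducer</value></property>
      <property><name>mapred.input.dir</name><value>${inputDir}</value></property>
      <property><name>mapred.output.dir</name><value>${outputDir}</value></property>
    </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
</action>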
01-19-2016
11:54 AM
All right, any idea where that 100 is coming from? Can you change it to 50? How did you "install" abc.jar, just by copying it to your system, or was there another config file included? We have to find that out and supply that config to Oozie. Or you can try to set the number of mappers directly, like below. If it still runs only 1 mapper, try "-D mapreduce.job.maps"; it's the new name for the same property. [By the way, I think that even if we set the mapper and reducer classes it will still run only 1 mapper.] Or ask the people who made abc.jar.
hadoop jar abc.jar DriverProg -D mapred.map.tasks=100 ip op
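And the same command with the newer property name, in case the old one is ignored (ip and op are the input/output paths from your example):
hadoop jar abc.jar DriverProg -D mapreduce.job.maps=100 ip op   # -D is only honored if DriverProg parses generic options via ToolRunner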
01-19-2016
09:12 AM
1 Kudo
Do your service checks (Spark, HDFS, YARN, MapReduce, etc.) work? If they do, have you acquired a ticket? What does "klist" say? If klist lists nothing, you have to acquire a ticket using kinit, either as an end user or as the spark or hdfs service user. First try to list HDFS: does "hdfs dfs -ls /" work?
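A rough sketch of that check; the principal and realm below are placeholders, and the keytab path is just the usual HDP location:
klist                                   # if nothing is listed, acquire a ticket first
kinit myuser@EXAMPLE.COM                # end-user principal and realm are placeholders
hdfs dfs -ls /                          # should list the HDFS root once the ticket is valid
# alternatively, switch to the hdfs service user and kinit with its keytab, e.g.
# su - hdfs -c 'kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@EXAMPLE.COM'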
01-19-2016
12:10 AM
You mean you don't know the mapper and reducer classes? You can unzip abc.jar and find out. Otherwise, what's your required number of mappers? Is it a fixed number? If so, where is it defined? If there are additional, non-default settings, you need to pass them to Oozie, because Oozie is aware only of what's available in its workflow directory.