- Member since: 01-18-2016
- Posts: 169
- Kudos Received: 32
- Solutions: 21
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1532 | 06-27-2025 06:00 AM |
|  | 1288 | 01-14-2025 06:30 PM |
|  | 1846 | 04-06-2018 09:24 PM |
|  | 1980 | 05-02-2017 10:43 PM |
|  | 5149 | 01-24-2017 08:21 PM |
11-28-2016
06:26 PM
Note that the above deletes older files based on file modification time, not on the timestamp in the filename. I happened to use filenames containing a timestamp, which probably makes the example confusing; the same command can be used with any kind of file, such as keeping the last 5 copies of your backup files. Also, if you use logrotate (e.g. where log4j rolling files is not an option), you can use the maxage option, which also goes by modification time. From the logrotate man page: maxage count
Remove rotated logs older than <count> days. The age is only checked if the logfile is to be rotated. The files are mailed to the configured address if maillast and mail are configured.
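As an illustration (the original command isn't repeated here), below is a sketch of keeping the five most recently modified backups, plus a minimal logrotate stanza using maxage; the file patterns, paths, and rotation settings are hypothetical:

```bash
# Hypothetical backup pattern: keep the 5 most recently *modified* files, delete the rest.
# Ordering comes from ls -t (modification time), not from any timestamp in the filename.
ls -1t /backups/db-backup-*.tar.gz | tail -n +6 | xargs -r rm -f

# Hypothetical /etc/logrotate.d/myapp entry: remove rotated logs older than 30 days.
# maxage is only evaluated when the log file is actually rotated.
cat <<'EOF' > /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    weekly
    rotate 8
    maxage 30
    compress
    missingok
}
EOF
```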
12-30-2016
03:04 AM
Good job, thanks!
11-21-2016
01:29 AM
The --split-by option can be used on a text column after adding the relevant property to sqoop-site.xml in Ambari, or by passing it on the command line (see the sketch below). I don't think the Oracle record count determines the split file sizes, because the actual file size depends on the number of columns, the column types, and the value size of each record. Here are my sqoop import results for a total output of 2.2 GB: with sqoop import ... --direct --fetch-size 1000 --num-mappers 10 --split-by EMP_NO (a TEXT column), 3 mappers wrote 0 bytes each and 1 mapper wrote 1.1 GB. I then re-tested with the same settings except --split-by REEDER_ID (a NUMBER column). In my opinion, Sqoop mappers just parallelize the query against the Oracle records without regard to output file size, so the files are not evenly sized; also, --split-by on a NUMBER column works better than on a TEXT column, which does not produce accurate split sizes.
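For reference, here is a sketch of the style of command used in the tests above. The connection string, username, table, and target directory are placeholders; the org.apache.sqoop.splitter.allow_text_splitter property is what permits --split-by on a text column (it can be set in sqoop-site.xml through Ambari or passed with -D as shown):

```bash
# Hypothetical connection details, table, and paths; the options mirror the test above.
sqoop import \
  -Dorg.apache.sqoop.splitter.allow_text_splitter=true \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott -P \
  --table EMP \
  --target-dir /user/examples/emp_import \
  --direct \
  --fetch-size 1000 \
  --num-mappers 10 \
  --split-by REEDER_ID   # NUMBER column; splitting on EMP_NO (TEXT) skewed the data to one mapper
```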
02-04-2017
08:06 PM
This solved my issue. In my case, the Ambari database is a PostgreSQL database.
07-28-2016
07:55 PM
Someone had added two entries to spark-defaults.conf, spark.yarn.keytab and spark.yarn.principal, which caused spark-shell and pyspark to run as "spark" on YARN. Removing them fixed it.
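For anyone hitting the same behavior, a quick way to check (the config path and the example values below are assumptions, not from my cluster):

```bash
# /etc/spark/conf is an assumed location; adjust for your install.
grep -nE '^spark\.yarn\.(keytab|principal)' /etc/spark/conf/spark-defaults.conf

# Hypothetical examples of the offending entries that had to be removed:
#   spark.yarn.keytab      /etc/security/keytabs/spark.headless.keytab
#   spark.yarn.principal   spark-cluster@EXAMPLE.COM
```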
10-24-2018
10:19 AM
Ah! James Jones, I have a recommendation: never believe recommendations from Ambari 😉
06-29-2016
05:12 PM
Awesome, thanks. I wasn't sure whether, under the covers, Ranger was just doing SQL grants.
05-19-2016
12:41 AM
I want to correct what I said above: kafka.consumer does exist on the client; I had been looking on the broker. The answer below is correct (the newly named metric/MBean is on the new consumer).
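A quick way to confirm this (a sketch; the port, broker, and topic are placeholders, and depending on your Kafka version you may also need the --new-consumer flag):

```bash
# Expose JMX on the consumer *client* JVM; the Kafka run scripts honor JMX_PORT.
# The kafka.consumer MBean domain shows up here, not on the broker.
JMX_PORT=9999 bin/kafka-console-consumer.sh \
  --bootstrap-server broker1:9092 \
  --topic test-topic \
  --from-beginning

# In another terminal, browse the MBeans and look for the kafka.consumer domain:
jconsole localhost:9999
```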