Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2666 | 12-25-2018 10:42 PM
 | 12217 | 10-09-2018 03:52 AM
 | 4208 | 02-23-2018 11:46 PM
 | 1891 | 09-02-2017 01:49 AM
 | 2214 | 06-21-2017 12:06 AM
04-08-2016
03:49 AM
1 Kudo
Yes, for MySQL 5.5 and higher you need the latest version of mysql-connector, and it cannot be installed using yum; download it from here. After installing it, retry Sqoop. Also, on the HDP Sandbox, add "--driver com.mysql.jdbc.Driver" to your sqoop command.
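For reference, a full command might look like the sketch below; the host, database, credentials, and table name are placeholders, not from the original thread — only the --driver flag is the part from the answer.

```shell
# Hypothetical Sqoop import on the HDP Sandbox; host, database, user,
# and table are placeholders.
sqoop import \
  --connect jdbc:mysql://localhost/mydb \
  --driver com.mysql.jdbc.Driver \
  --username myuser -P \
  --table mytable \
  --target-dir /user/myuser/mytable
```

The downloaded mysql-connector JAR also needs to be on Sqoop's classpath, typically in the Sqoop client's lib directory (the exact path varies by HDP version).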
04-08-2016
12:06 AM
3 Kudos
Hi @Wes Floyd, support for HS2 HA ZooKeeper discovery is also available in Knox 0.7, but the version of Knox packaged in the latest HDP-2.4 is still 0.6, so it's not yet available in HDP. You could consider installing it yourself.
04-07-2016
11:34 AM
So, I assume "ssh -p2222 root@127.0.0.1" works? Can you check the permissions with "ls -ld /data", and try to scp to /tmp instead?
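Concretely, the two checks might look like this (the file name is a placeholder; note that scp takes the port as a capital -P, while ssh uses lowercase -p):

```shell
# Check who owns /data and whether it's writable for root:
ls -ld /data
# Try copying to /tmp instead, which is world-writable:
scp -P 2222 myfile.txt root@127.0.0.1:/tmp/
```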
04-07-2016
11:09 AM
2 Kudos
There is no option in Ambari to upgrade only individual components, like HBase, and that's intentional: each HDP distribution is built and tested so that its components are compatible with each other. So, if you want to upgrade only your custom service, I think you need to follow the manual procedure you suggested.
04-06-2016
07:33 AM
2 Kudos
It can be done by extending the OutputFormat class and overriding the checkOutputSpecs method so that it doesn't throw an exception when the output path already exists. After that, register the new class on your job using the setOutputFormatClass method [some more details here].
04-06-2016
02:03 AM
Hi @JOSE GUILLEN, this happens because Flume writes to a temporary .tmp file, and when the file rolls it is renamed to drop the .tmp suffix. So, the file may be there when you start your Hive script, but by the time Hive actually reads it, it may have already been renamed. There is a Flume JIRA, FLUME-2458, created to move tmp files to a separate directory, but it's not resolved yet. In the meantime you can try the workaround described here by setting hdfs.filePrefix and hdfs.inUsePrefix in your Flume conf file, for example:
hdfs.path = /user/flume/twitter/landing
hdfs.filePrefix = bod/
hdfs.inUsePrefix = tmp/
and pre-create /user/flume/twitter/landing/tmp/bod in HDFS, where the .tmp files will be stored (please test this, since I don't have a Flume setup handy to try). Edit: It might be enough to just set "hdfs.inUsePrefix=.", so that tmp files are named like .file101.tmp and hidden from Hive. Please try this first, and if you still have issues, then try the workaround above.
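Assembled into a full sink definition, the workaround might look like the fragment below; "agent1" and "sink1" are placeholder names, and only the three hdfs.* property values come from the answer above:

```properties
# Hypothetical flume.conf fragment; agent1/sink1 are placeholder names.
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /user/flume/twitter/landing
agent1.sinks.sink1.hdfs.filePrefix = bod/
agent1.sinks.sink1.hdfs.inUsePrefix = tmp/
```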
04-05-2016
10:53 PM
1 Kudo
It failed after 11 minutes, so there may be a permission issue on the HFiles (see the details on the Tool page). Can you add this to your command and retry: -Dfs.permissions.umask-mode=000
or, if possible, run the command as the hbase user.
04-05-2016
10:46 PM
1 Kudo
@Madhavi Amirneni, if you like the answer, please consider accepting and/or upvoting it. This is how HCC works: users who ask questions are "rewarded" with right answers, and users who provide right answers are "rewarded" with upvotes/accepts. Thanks!
04-05-2016
09:13 PM
1 Kudo
Hi @Madhavi Amirneni, yes, it is possible to store audit records only in HDFS, but they cannot be viewed through the Ranger UI. The main reason is that search is not supported on files in HDFS. To view records in the UI, a DB or Solr has to be configured and ranger.audit.source.type set to either db or solr. By the way, audit records in HDFS are stored in text files as JSON objects, one per line, and can be explored using other tools. The directories are organized by day, for example: /ranger/audit/hdfs/20160404. A sample record (an audit of HDFS access) looks like this:
{"repoType":1,"repo":"Sandbox_hadoop","reqUser":"oozie","evtTime":"2016-04-04 01:27:05.123","access":"READ_EXECUTE","resource":"/user/oozie/share/lib","resType":"path","result":1,"policy":7,"reason":"/user/oozie/share/lib","enforcer":"ranger-acl","cliIP":"10.0.2.15","agentHost":"sandbox.hortonworks.com","logType":"RangerAudit","id":"49abe678-ffa7-46cd-ba1f-de85368dd88c","seq_num":81811,"event_count":1,"event_dur_ms":0}
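Since each record is one JSON object per line, any standard JSON tool can parse them once copied out of HDFS; for example, using a trimmed-down version of the sample record above:

```shell
# Parse a Ranger audit record (fields trimmed from the sample above)
# and print who accessed what:
echo '{"repoType":1,"repo":"Sandbox_hadoop","reqUser":"oozie","access":"READ_EXECUTE","result":1}' \
  | python3 -c 'import json,sys; r=json.load(sys.stdin); print(r["reqUser"], r["access"])'
# prints: oozie READ_EXECUTE
```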
04-05-2016
02:01 PM
1 Kudo
Hi @Ram Veer, CsvBulkLoadTool already supports a custom delimiter via the '-d' option. To set Ctrl-A, add this at the end of your command: -d '^A' (inside the quotes press Ctrl-v followed by Ctrl-a; as a result, '^A' will appear).
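Ctrl-A is the ASCII SOH character (byte 0x01), which you can verify, and pass without the special keystrokes, using shell escapes; the CsvBulkLoadTool invocation in the comment below is abbreviated and hypothetical:

```shell
# Ctrl-A is byte 0x01 (octal 001); od shows it explicitly:
printf 'a\001b\n' | od -An -c     # the 0x01 byte appears as 001
# In bash, $'\001' expands to that same byte, so the delimiter can also
# be passed as (hypothetical, abbreviated command):
#   ... CsvBulkLoadTool ... -d $'\001'
```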