Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2565 | 11-01-2016 05:43 PM |
| | 8495 | 11-01-2016 05:36 PM |
| | 4856 | 07-01-2016 03:20 PM |
| | 8176 | 05-25-2016 11:36 AM |
| | 4321 | 05-24-2016 05:27 PM |
03-03-2016
05:24 PM
2 Kudos
@Smart Solutions When you add Spark through Ambari, you will first be asked to choose where to deploy the master service (the Spark History Server), then where to deploy the client services. Finally you will be presented with several property screens. (See attached: screen-shot-2016-03-03-at-61725-pm.png)
02-25-2016
01:01 PM
3 Kudos
Hi @prakash pal, there are some differences between these data types. Basically, STRING allows a variable length of characters (max 32K chars), while CHAR is a fixed-length string (max 255 chars). Usually (I doubt this is different with Impala) CHAR is more efficient, can speed up operations, and is better regarding memory allocation. (This does not mean you should always use CHAR.) See this => "All data in CHAR and VARCHAR columns must be in a character encoding that is compatible with UTF-8. If you have binary data from another database system (that is, a BLOB type), use a STRING column to hold it." There are a lot of use cases where it makes sense to use CHAR instead of STRING, e.g. let's say you want a column that stores the two-letter country code (ISO_3166-1_alpha-2; e.g. US, ES, UK, ...); here it makes more sense to use CHAR.
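As an illustration, the country-code case above could look like this in Impala DDL (the table and column names are hypothetical, not from the original question):

```sql
-- Hypothetical table: a two-letter ISO 3166-1 alpha-2 country code fits
-- CHAR(2) exactly, while free-form text stays STRING.
CREATE TABLE customers (
  id BIGINT,
  name STRING,          -- variable length, up to 32K characters
  country_code CHAR(2)  -- always exactly two characters, e.g. 'US', 'ES'
);
```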
09-21-2016
02:29 PM
How do I increase it for a non-VM setup, please @Neeraj Sabharwal?
02-24-2016
11:06 AM
@amira chaari Try this http://hortonworks.com/hadoop-tutorial/birt_reporting_tutorial/
02-24-2016
09:00 AM
Not really. You mean as a persisted storage layer under Hibernate and EJBs, correct? Hive wouldn't work well for this since it's not an OLTP database; it is a warehouse. So that would leave HBase, most likely with Apache Phoenix. I Googled a bit, focusing on Hibernate because that seems to be the most popular recently, and did not find a connector for Phoenix. That doesn't mean it's not possible to write one. Googling a bit more, there is Hibernate OGM for NoSQL stores as well; unfortunately it currently does not support HBase. http://hibernate.org/ogm/ So the two possibilities would be to write an extension for OGM for HBase, or to write a connector for Apache Phoenix. I wrote one for Netezza a while back and it should not be terribly difficult, although the Phoenix syntax has some differences from standard SQL (UPSERT instead of INSERT, ...).
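To illustrate the syntax difference mentioned above (table and column names are hypothetical): where standard SQL uses INSERT, Phoenix uses UPSERT, which inserts the row or overwrites it if the row key already exists — one reason a connector would need its own dialect:

```sql
-- Standard SQL, what Hibernate would normally generate:
-- INSERT INTO users (id, name) VALUES (1, 'alice');

-- Apache Phoenix equivalent; writes or overwrites the row with key 1:
UPSERT INTO users (id, name) VALUES (1, 'alice');
```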
02-24-2016
06:03 AM
@Prakash Punj If you are using MySQL then you definitely followed the steps below. The password in my case was "rangerdba". On the UI, if you key in the password you used during the setup, all should be fine; otherwise you strictly need to do the initial MySQL setup correctly.
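For reference, a sketch of the usual HDP Ranger database-user setup in MySQL, assuming the "rangerdba" user/password mentioned above — substitute your own values and restrict the grants as your environment requires:

```sql
-- Run as the MySQL root user; 'rangerdba' is the user/password from above.
CREATE USER 'rangerdba'@'localhost' IDENTIFIED BY 'rangerdba';
GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'localhost';
CREATE USER 'rangerdba'@'%' IDENTIFIED BY 'rangerdba';
GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'%';
FLUSH PRIVILEGES;
```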
07-20-2016
09:34 PM
@Artem Ervits @Mehrdad Niasari I believe we can close this question. I have opened a new one on the default namespace here.
09-05-2017
01:37 PM
@Shishir Saxena, yes, the GetFile processor also worked for the shared drive. My doubt is how to pass the credentials to access the network drive. For example, my shared drive prompts for credentials to access the folders inside it. If my shared folder has permission for Everyone then I am able to access it, but if the shared drive prompts for credentials then ListFile doesn't seem to work. So can you suggest a way to access a shared drive with username and password in the NiFi processors?
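One common workaround, since ListFile/GetFile themselves have no credential properties, is to mount the share on the NiFi host with the credentials and point the processor at the mount point. A minimal sketch for a CIFS/SMB share on Linux — the server, share, paths, and credentials here are all hypothetical placeholders:

```
# /etc/fstab entry; a credentials file keeps username/password out of the
# mount options and out of `mount` output:
//fileserver/share  /mnt/nifi-share  cifs  credentials=/etc/samba/creds,ro  0  0

# /etc/samba/creds (make it readable by root only):
# username=myuser
# password=mypassword
```

ListFile would then be configured with Input Directory `/mnt/nifi-share`.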
02-15-2019
01:13 PM
Hi, I am struggling with the same problem. Main server: hadoopmain.hadoop.local, Host1: node1.hadoop.local, Host2: node2.hadoop.local. I configured everything as set out in the manual, and when I try to connect via SSH it works without a password. So I am logged in as root on the main server hadoopmain.hadoop.local, and I then enter: ssh root@node1.hadoop.local. I am then logged on to node1 without being asked for a password. Nevertheless I am getting the failure: ==========================
Creating target directory...
==========================
Command start time 2019-02-15 12:12:13
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
SSH command execution finished
host=node1.hadoop.local, exitcode=255
Command end time 2019-02-15 12:12:13
ERROR: Bootstrap of host node1.hadoop.local fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
03-11-2018
08:32 PM
For me, adding the line below to spark-defaults.conf helped, based on the packages installed on my test cluster: spark.executor.extraLibraryPath /usr/hdp/current/hadoop-client/lib/native/:/usr/hdp/current/share/lzo/0.6.0/lib/native/Linux-amd64-64/
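If the native libraries also need to be visible on the driver side (e.g. in client or local mode), the analogous driver property may be needed as well. A sketch of the resulting spark-defaults.conf fragment — the paths are the ones from the post above and depend on your installed HDP/LZO versions, so verify they exist on your cluster:

```
# spark-defaults.conf -- verify these paths exist on your hosts
spark.executor.extraLibraryPath /usr/hdp/current/hadoop-client/lib/native/:/usr/hdp/current/share/lzo/0.6.0/lib/native/Linux-amd64-64/
spark.driver.extraLibraryPath   /usr/hdp/current/hadoop-client/lib/native/:/usr/hdp/current/share/lzo/0.6.0/lib/native/Linux-amd64-64/
```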