Member since: 10-06-2015
Posts: 273
Kudos Received: 202
Solutions: 81

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4043 | 10-11-2017 09:33 PM
 | 3564 | 10-11-2017 07:46 PM
 | 2570 | 08-04-2017 01:37 PM
 | 2210 | 08-03-2017 03:36 PM
 | 2238 | 08-03-2017 12:52 PM
05-25-2017
02:22 PM
Please take a look at @Matt Clarke's response above on how to extract CSV files only. It is the most straightforward way.
05-24-2017
12:06 PM
@regie canada The ExtractText processor creates FlowFile attributes from the extracted text. NiFi has an AttributesToJSON processor you can use to generate JSON from these created attributes. For follow-up questions, please open a new question; it makes it easier for community users to search for answers. Thanks, Matt
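This is not NiFi code, but a minimal Python sketch of the same semantics: the regex with named groups plays the role of ExtractText's dynamic properties, and the resulting dict is roughly what AttributesToJSON would serialize (the sample content and pattern are invented for illustration):

import json
import re

# Illustrative only: mimic ExtractText pulling named regex groups out of
# FlowFile content, then AttributesToJSON serializing them as JSON.
content = "user=alice level=WARN msg=disk_low"
pattern = re.compile(r"user=(?P<user>\S+) level=(?P<level>\S+) msg=(?P<msg>\S+)")

match = pattern.search(content)
attributes = match.groupdict() if match else {}

# AttributesToJSON would emit something like this as the new content:
print(json.dumps(attributes))  # {"user": "alice", "level": "WARN", "msg": "disk_low"}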
05-19-2017
06:32 PM
@Farzaneh Poorjabar The easiest way to grant access only to a specific directory (say /home/farzaneh) is:

Resource path: /home/farzaneh
isRecursive: false

If you need the access granted recursively to a directory and all directories under it, then:

Resource path: /home/farzaneh
isRecursive: true

But there is a side effect: access will be granted to all paths starting with /home/farzaneh. There is no explicit way in a Ranger policy to specify whether the given resource is a file or a directory, which leads to these corner cases. You can still get the effect you want by specifying two policies: one with resource '/home/farzaneh/*' and isRecursive = true, and another with the two resources ['/home/farzaneh', '/home/farzaneh/'] and isRecursive = false.
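For illustration only, here is roughly what those two policies could look like as Ranger policy JSON (the service and policy names are placeholders; field names follow Ranger's public REST API, but verify against your Ranger version):

{
  "service": "cluster_hadoop",
  "name": "farzaneh-home-children",
  "resources": {
    "path": { "values": ["/home/farzaneh/*"], "isRecursive": true }
  },
  "policyItems": [
    { "users": ["farzaneh"],
      "accesses": [ { "type": "read", "isAllowed": true },
                    { "type": "execute", "isAllowed": true } ] }
  ]
}

{
  "service": "cluster_hadoop",
  "name": "farzaneh-home-dir",
  "resources": {
    "path": { "values": ["/home/farzaneh", "/home/farzaneh/"], "isRecursive": false }
  },
  "policyItems": [
    { "users": ["farzaneh"],
      "accesses": [ { "type": "read", "isAllowed": true },
                    { "type": "execute", "isAllowed": true } ] }
  ]
}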
05-18-2017
04:29 PM
1 Kudo
Here is a gross simplification that might be helpful:
Exactly-once usually requires that the source and destination systems can somehow agree on a method that combines at-least-once delivery with data de-duplication. NiFi can be the transport layer providing at-least-once delivery between the systems, but Kafka-to-NiFi alone, without those semantics or some additional approach, will not satisfy the requirement.
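As a toy illustration of the de-duplication half (this is not NiFi or Kafka code, just the idea that at-least-once delivery plus an idempotent sink yields an exactly-once effect; the record/ID scheme is invented):

# Toy sketch: at-least-once delivery + de-duplication at the sink.
# The source may deliver the same record twice; the sink keys each
# record by a stable ID and ignores repeats, so each record is
# applied exactly once.
seen_ids = set()

def deliver_at_least_once(records):
    # Simulate a redelivery after a lost acknowledgement.
    for record in records:
        yield record
        yield record  # duplicate delivery

def idempotent_sink(record):
    if record["id"] in seen_ids:
        return  # duplicate: already applied
    seen_ids.add(record["id"])
    print("applied", record["id"], record["value"])

for rec in deliver_at_least_once([{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]):
    idempotent_sink(rec)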
05-17-2017
08:46 PM
Thanks @mqureshi. I didn't realize ExecuteSQL used a connection pool.
06-19-2017
03:00 PM
Could you please clarify what you mean by "(exclusive)" in this paragraph: "Make sure that the blocksize ('dfs.blocksize' in 'hdfs-site.xml') is within the recommended range of 134217728 to 1073741824 (exclusive)"? Do you mean the maximum value used should be 1073741823, and that we shouldn't use 1073741824, which is exactly 1 GiB?
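For anyone skimming, the two bounds in that recommendation are round binary sizes; a quick Python check:

low = 128 * 1024 ** 2    # 134217728  bytes = 128 MiB
high = 1024 ** 3         # 1073741824 bytes = 1 GiB
print(low, high)         # 134217728 1073741824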
12-21-2016
12:28 AM
4 Kudos
This error:

Insufficient permissions (user=atlas, scope=default, params=[namespace=default, table=default:atlas_titan, family=s], action=CREATE)

simply means that your Atlas service user no longer has access to the backend HBase tables, so the Atlas REST API service can't serve the records, which in turn causes the empty HTTP response.

First, stop the Atlas service in the Ambari Admin UI, then connect to the container as root and follow these steps:

# su hbase
$ hbase shell

Execute the following command in the HBase shell to grant global permissions to the 'atlas' user, so it can create the tables it needs:

hbase(main):001:0> grant 'atlas', 'RWXCA'

Start the Atlas service in the Ambari Admin UI so it creates its tables, then execute the following command in the HBase shell to revoke the global permissions granted to the 'atlas' user:

hbase(main):001:0> revoke 'atlas'

Execute the following commands in the HBase shell to let Atlas access the HBase tables it needs:

hbase(main):001:0> grant 'atlas', 'RWXCA', 'atlas_titan'
hbase(main):001:0> grant 'atlas', 'RWXCA', 'ATLAS_ENTITY_AUDIT_EVENTS'
hbase(main):001:0> exit

Return to the Ambari Admin UI and start the Atlas service. You should be able to verify by connecting back to the Atlas dashboard, or simply watch the log:

tail -f /var/log/atlas/application.log

For details please refer to the following: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_data-governance/content/ch_hdp_data_governance_install_atlas_ambari.html
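If you want to double-check the grants before restarting, the HBase shell's user_permission command lists them (the output format varies by HBase version):

hbase(main):001:0> user_permission 'atlas_titan'
hbase(main):001:0> user_permission 'ATLAS_ENTITY_AUDIT_EVENTS'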
12-12-2016
11:50 PM
@anand maurya It's best to use the binaries from the repo. Install Ambari first, following the steps in the link below:

https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Installing_Ambari.html

Once the Ambari Server is up, use it to download and install the HDP binaries as shown in the following link:

https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Deploy_and_Configure_a_HDP_Cluster.html
10-24-2017
11:24 AM
Hi @Amit Kumar Agarwal, I am looking to run Hive HQL from Spark SQL. Could you please provide guidance on how to do this?
Thanks, Deepesh deepeshnema@gmail.com
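In case it helps others landing here, a minimal PySpark sketch of the usual approach: enable Hive support on the SparkSession and issue HQL through spark.sql() (the database and table names below are placeholders):

from pyspark.sql import SparkSession

# Build a session with Hive support so spark.sql() can read Hive's
# metastore and run HQL statements against existing Hive tables.
spark = (SparkSession.builder
         .appName("hive-from-spark-sql")
         .enableHiveSupport()
         .getOrCreate())

# Any HQL that Spark's SQL parser supports can be issued directly;
# 'my_db.my_table' is a placeholder.
spark.sql("SHOW DATABASES").show()
spark.sql("SELECT COUNT(*) FROM my_db.my_table").show()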