Member since
04-27-2016
218
Posts
133
Kudos Received
25
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3251 | 08-31-2017 03:34 PM
 | 6791 | 02-08-2017 03:17 AM
 | 2837 | 01-24-2017 03:37 AM
 | 9899 | 01-19-2017 03:57 AM
 | 5405 | 01-17-2017 09:51 PM
04-19-2024
05:49 PM
@SamarApple Hello! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post. Thanks.
02-23-2021
02:01 AM
I'm new to NiFi, and I'm not sure your data flow has the same conditions as mine, but I ran into the same exception that you mentioned. I'm using Oracle 11g XE, and there was no invalid query or invalid data. In addition, I had another problem where the Oracle session used by PutSQL was locked when I let a lot of flowfiles reach the PutSQL processor, e.g., 5,000 flowfiles in 0.5 seconds. I spent all day today trying to fix this, modifying almost every property of every processor connected to the flow, and even of the DBCP controller service... and finally found the cause. The PutSQL processor has a property named 'Support Fragmented Transactions'. I don't know much about how it works yet, but when I set it to false, the problem was solved, although the flow took somewhat longer than before. I'm not a NiFi expert, but I hope this might be helpful for you.
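If you prefer to flip the setting programmatically rather than in the UI, something like the sketch below should work against NiFi's REST API. It is only a rough illustration: the host, processor ID, and threshold are placeholders, it assumes an unsecured endpoint, and the property key is assumed to match the display name 'Support Fragmented Transactions'.

```python
import requests

NIFI = "http://localhost:8080/nifi-api"        # placeholder NiFi host
PROCESSOR_ID = "your-putsql-processor-uuid"    # placeholder processor ID

# Fetch the current processor entity to get its revision
# (the processor must be stopped before its properties can be changed).
entity = requests.get(f"{NIFI}/processors/{PROCESSOR_ID}").json()

# Flip the property and PUT the update back with the same revision.
update = {
    "revision": entity["revision"],
    "component": {
        "id": PROCESSOR_ID,
        "config": {
            # Property key assumed to match the display name shown in the UI.
            "properties": {"Support Fragmented Transactions": "false"}
        },
    },
}
resp = requests.put(f"{NIFI}/processors/{PROCESSOR_ID}", json=update)
resp.raise_for_status()
print("Updated properties:", resp.json()["component"]["config"]["properties"])
```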
04-06-2020
07:40 AM
I'm glad you were able to find a workaround. But if I understand the solution correctly, we must use jQuery and abandon any hope of using the Fetch API? I would like to use CORS for the added security, and this does not feel like a holistic solution. The root issue seems to be that NiFi admins cannot add allowed origins. Is this really the only option?
02-01-2020
08:36 AM
Hello, did you make this work? Is it possible to run NiFi as a service on Windows in some way? If you succeeded, please give us more detailed information about it... Can't apps like AlwaysUp or JSL work? Or Java service wrappers... Thanks.
01-22-2019
04:45 PM
Ananya, the script was updated long ago to take care of this. You should be able to use an existing VPC and subnet. The only issue you might face is if an internet gateway is already attached to the VPC, since the script prefers to add a new internet gateway.
04-09-2018
08:52 PM
5 Kudos
This is the continuation of the Part-1 article on provisioning an HDP/HDF cluster on Google Cloud. Now that we have the Google credential created, we can provision the HDP/HDF cluster. Let's first start with an HDP cluster.

Log in to the Cloudbreak UI and click Create Cluster, which opens the create cluster wizard with both basic and advanced options. On the General Configuration page, select the previously created Google credential, enter a name for the cluster, select a region as shown below, and select either an HDP or HDF version. For the cluster type, select the appropriate cluster blueprint based on your requirements. The blueprint options available in the Cloudbreak 2.5 tech preview are shown below.

Next is configuring the Hardware and Storage section. Select the Google VM instance type from the dropdown and enter the number of instances for each group. One of the host groups must run the Ambari server, and its Group Size should be set to "1".

Next, set up the Network group. You can select an existing network, or you have the option to create a new one. On the Security config page, provide the cluster admin username and password, then select either the new SSH public key option or the existing SSH public key option. You will use the matching private key to access your nodes via SSH.

Finally, hit Create Cluster, which redirects you to the Cloudbreak dashboard. The left image below shows cluster creation in progress, and the right image shows the successful creation of the HDP cluster on Google Cloud.

Once the HDP cluster is deployed successfully, you can log in to the HDP nodes using your SSH private key with the tool of your choice. The following image shows a node login using the Google Cloud browser SSH option.

Similarly, you can provision an HDF (NiFi: Flow Management) cluster using Cloudbreak, which is included as part of the 2.5 tech preview. Following are some key screenshots for reference. The Network, Storage, and Security configuration is similar to what we saw in the HDP section earlier. Due to a limitation of my Google Cloud account subscription, I ran into an exception while creating the HDF cluster, which was correctly surfaced in Cloudbreak; I had to select a different region to resolve it. The NiFi cluster was then created successfully as shown below.

Conclusion: Cloudbreak gives you an easy button to provision and monitor the connected data platform (HDP and HDF) on the cloud vendor of your choice to build modern data applications.
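As a small aside, if you prefer a script over the Google Cloud console for finding the nodes to SSH into, a sketch along these lines can list the VMs Cloudbreak created and their external IPs. It is only illustrative: it assumes the google-api-python-client library, application default credentials, and placeholder project, zone, and cluster-name values.

```python
from googleapiclient import discovery

PROJECT = "my-gcp-project"      # placeholder: your GCP project ID
ZONE = "us-central1-a"          # placeholder: the zone you deployed into
CLUSTER_PREFIX = "hdptest"      # placeholder: the cluster name used in Cloudbreak

# Uses application default credentials (GOOGLE_APPLICATION_CREDENTIALS).
compute = discovery.build("compute", "v1")

result = compute.instances().list(project=PROJECT, zone=ZONE).execute()
for inst in result.get("items", []):
    if not inst["name"].startswith(CLUSTER_PREFIX):
        continue
    nic = inst["networkInterfaces"][0]
    external_ip = nic.get("accessConfigs", [{}])[0].get("natIP", "none")
    print(f"{inst['name']}: internal={nic['networkIP']} external={external_ip}")
```

From there you can SSH to the external IP with the private key that matches the public key supplied in the Security config step.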
04-09-2018
05:42 PM
5 Kudos
Cloudbreak Overview

Cloudbreak enables enterprises to provision Hortonworks platforms in public (AWS, GCP, Azure) and private (OpenStack) cloud environments. It simplifies the provisioning, management, and monitoring of on-demand HDP and HDF clusters in virtual and cloud environments. The primary use cases for Cloudbreak are:

- Dynamically configure and manage clusters on public or private clouds.
- Seamlessly manage elasticity requirements as cluster workloads change.
- Define network boundaries and configure security groups.

This article focuses on deploying HDP and HDF clusters on Google Cloud.

Cloudbreak Benefits

You can spin up the connected data platform (HDP and HDF clusters) on the cloud vendor of your choice using open source Cloudbreak 2.0, which addresses the following scenarios:

- Defining a comprehensive data strategy irrespective of deployment architecture (cloud or on-premises).
- Addressing hybrid (on-premises and cloud) requirements.
- Supporting key multi-cloud approach requirements.
- Consistent and familiar security and governance across on-premises and cloud environments.

Cloudbreak 2 Enhancements

Hortonworks recently announced the general availability of the Cloudbreak 2.4 release. Some of the major enhancements in Cloudbreak 2.4 are:

- New UX/UI: a greatly simplified and streamlined user experience.
- New CLI: a new CLI that eases automation, an important capability for cloud DevOps.
- Custom Images: advanced support for "bring your own image", a critical feature to meet enterprise infrastructure requirements.
- Kerberos: the ability to enable Kerberos security on your clusters, a must for any enterprise deployment.

You can check the following HCC article for a detailed overview of Cloudbreak 2.4: https://community.hortonworks.com/articles/174532/overview-of-cloudbreak-240.html Also check the following article for the Cloudbreak 2.5 tech preview details: https://community.hortonworks.com/content/kbentry/182293/whats-new-in-cloudbreak-250-tp.html

Prerequisites for Google Cloud Platform

This article assumes that you have already installed and launched the Cloudbreak instance, either on your own custom VM image or on Google Cloud Platform. You can follow the Cloudbreak documentation, which describes both options: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.5.0/content/index.html and https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.5.0/content/gcp-launch/index.html

In order to launch Cloudbreak and provision the clusters, make sure you have a Google Cloud account. You can create one at https://console.cloud.google.com Create a new project in GCP (e.g., the GCPIntegration project shown below). In order to launch clusters on GCP you must have a service account that Cloudbreak can use. Assign the admin roles for Compute Engine and Storage; you can check the required service account admin roles at Admin Roles. Make sure you create the P12 key and store it safely.
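For reference, the service account and P12 key can also be created from a script rather than in the console. The sketch below is only illustrative: it assumes the google-api-python-client library, application default credentials with permission to manage IAM, and a placeholder project ID; the Compute Engine and Storage admin role bindings still need to be granted separately (e.g., in the console as described above).

```python
import base64
from googleapiclient import discovery

PROJECT_ID = "gcpintegration-123456"   # placeholder: your GCP project ID
ACCOUNT_ID = "cloudbreak-sa"           # placeholder: short name for the service account

iam = discovery.build("iam", "v1")     # uses application default credentials

# Create the service account that Cloudbreak will use.
sa = iam.projects().serviceAccounts().create(
    name=f"projects/{PROJECT_ID}",
    body={"accountId": ACCOUNT_ID,
          "serviceAccount": {"displayName": "Cloudbreak service account"}},
).execute()
print("Service account email:", sa["email"])

# Create a P12 key for it; the key material comes back base64-encoded.
key = iam.projects().serviceAccounts().keys().create(
    name=sa["name"],
    body={"privateKeyType": "TYPE_PKCS12_FILE"},
).execute()
with open("cloudbreak-key.p12", "wb") as fh:
    fh.write(base64.b64decode(key["privateKeyData"]))
print("Wrote cloudbreak-key.p12")
```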
This article assumes that you have successfully met the prerequisites and are able to launch the Cloudbreak UI, shown on the left below, by visiting https://<IP_Addr or HostName> Upon successful login you are redirected to the dashboard, which looks like the image on the right.

Create Cloudbreak Credential for GCP

The first step before provisioning a cluster is to create the Cloudbreak credential for GCP. Cloudbreak uses this GCP credential to create the required resources on GCP. Following are the steps to create the GCP credential:

- In the Cloudbreak UI, select Credentials from the navigation pane and click Create Credential. Under cloud provider, select Google Cloud Platform.
- As shown below, provide the Google project ID and the service account email ID from your Google project, and upload the P12 key that you created in the section above.
- Once you provide all the right details, Cloudbreak will create the GCP credential, and it should be displayed in the Credentials pane.

The next article, Part 2, covers in detail how to provision HDP and HDF clusters using this GCP credential.
12-21-2017
08:20 PM
1 Kudo
@sally sally Check the back pressure object threshold value in the connection feeding the MergeContent processor to make sure it has been changed to a value that will allow enough files to queue up.
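If it helps, the same check can also be done programmatically against the connection. The sketch below is only illustrative, with a placeholder host and connection ID, and it assumes an unsecured NiFi REST endpoint.

```python
import requests

NIFI = "http://localhost:8080/nifi-api"   # placeholder NiFi host
CONNECTION_ID = "your-connection-uuid"    # placeholder: connection feeding MergeContent

# Read the connection and report its current back pressure object threshold.
entity = requests.get(f"{NIFI}/connections/{CONNECTION_ID}").json()
print("Back pressure object threshold:",
      entity["component"]["backPressureObjectThreshold"])

# Raise the threshold so enough flowfiles can queue up for MergeContent.
update = {
    "revision": entity["revision"],
    "component": {"id": CONNECTION_ID, "backPressureObjectThreshold": 20000},
}
requests.put(f"{NIFI}/connections/{CONNECTION_ID}", json=update).raise_for_status()
```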
08-31-2017
03:34 PM
It started working after I disabled the vectorized execution mode:
set hive.vectorized.execution.enabled = false;
02-20-2019
07:50 PM
I believe the ask was: can we add "Custom Fields", e.g., new fields other than the defined "valid" fields? How can I add a path or timestamp, etc.?