Member since: 03-28-2016
Posts: 49
Kudos Received: 4
Solutions: 0
06-24-2019
07:04 PM
Thank you very much for this answer.
06-24-2019
06:59 PM
I was looking into OpenID Connect for user authentication in NiFi Registry. I see that we can implement OpenID Connect for NiFi, but it does not seem to be available for NiFi Registry. Is there a way to implement OpenID authentication for NiFi Registry as well?
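For reference, later NiFi Registry releases did ship OpenID Connect support; a minimal sketch of the relevant `nifi-registry.properties` entries, assuming such a version (all values are placeholders):

```properties
# Sketch: OIDC settings in nifi-registry.properties (assumes a Registry
# version with OIDC support; URL, client id, and secret are placeholders)
nifi.registry.security.user.oidc.discovery.url=https://idp.example.com/.well-known/openid-configuration
nifi.registry.security.user.oidc.client.id=nifi-registry
nifi.registry.security.user.oidc.client.secret=<secret>
```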
Labels:
- Apache NiFi
05-22-2019
04:18 AM
Hi, we are in the process of creating an HDP 2.6 cluster in which the RHEL OS will be integrated with AD for authentication. We will be using AD as the KDC. My question: if we create a local UNIX user called HIVEUSER and use a BI tool to connect to Hive as this user, will the local user be able to authenticate and access Hive tables in the kerberized cluster, or does HIVEUSER have to exist in AD?
Labels:
- Apache Hive
12-27-2017
02:46 PM
In one of our data nodes the root partition is almost 98% full, and there is not much left to clean up. I found the below tar.gz files in the root partition:

-rw-r--r-- 1 root root 196M /usr/hdp/2.5.0.0-1245/hadoop/mapreduce.tar.gz
-rw-r--r-- 1 root root 378M /usr/hdp/2.5.0.0-1245/oozie/oozie-sharelib.tar.gz
-rw-r--r-- 1 root root 117M /usr/hdp/2.5.0.0-1245/pig/pig.tar.gz
-rwxr-xr-x 1 root root 121M /usr/hdp/share/hst/smartsense-activity-explorer-1.3.0.0-22.tar.gz

Will there be any impact if I move these tar.gz files to a different location so that we can free up space in the root partition?
11-09-2017
11:34 AM
I am also seeing this error in bootstrap.log: org.apache.nifi.bootstrap.RunNiFi Status File no longer exists. Will not restart NiFi
11-09-2017
11:12 AM
Hi All, I am facing a weird problem today. We noticed that the root partition was full; /var/log/nifi is configured on the same partition. I cleared almost all of the NiFi logs, but within a minute they reappear and fill the root partition again. What might be the cause, and how do I get it resolved? I have a standalone NiFi instance.
Labels:
- Apache NiFi
10-30-2017
03:02 PM
I don't want to add one more column to the actual indexed columns, but to the INCLUDE() section. Will that have the same effect?
10-24-2017
02:42 PM
I have an existing covered index which I created with a statement like: CREATE INDEX my_index ON my_table (v1, v2) INCLUDE (v3).
Now I want to include one more column, v4, as in INCLUDE (v3, v4). The v4 column will not be part of the actual indexed columns, but it will be in the covered part.
I am not sure whether there is an ALTER statement to do this. Requesting assistance.
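A common workaround can be sketched as follows (a sketch, assuming this Phoenix version has no ALTER INDEX ... ADD for covered columns; my_index_v2 is a hypothetical name):

```sql
-- Sketch: create a replacement index that also covers v4,
-- then drop the old one once the new index is active.
CREATE INDEX my_index_v2 ON my_table (v1, v2) INCLUDE (v3, v4);
DROP INDEX my_index ON my_table;
```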
Labels:
- Apache HBase
- Apache Phoenix
09-11-2017
02:48 PM
Thank you Geo; in the article he mentions creating a new blueprint. I wanted to know: if I already have an existing blueprint from a different cluster, how do I set up the new cluster using that blueprint? @Kuldeep Kulkarni, requesting your comments.
09-11-2017
01:40 PM
Hi All, I have a requirement to set up a new HDP cluster which will be a replica of another of our clusters. What I am planning is to export the Ambari blueprint and use the same blueprint to set up the other cluster. What are the right steps, and what else do I need to take care of to get this done?
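The export/import flow can be sketched with the Ambari REST API (a sketch: host names, credentials, and the blueprint/cluster names are placeholder assumptions; cluster_template.json is a hypothetical file mapping blueprint host groups to real hosts):

```shell
# 1. Export the blueprint from the source cluster
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://ambari-host:8080/api/v1/clusters/src_cluster?format=blueprint' \
  -o blueprint.json

# 2. Register it on the new Ambari server
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @blueprint.json \
  'http://new-ambari-host:8080/api/v1/blueprints/replica_blueprint'

# 3. Create the cluster from the blueprint plus a host-mapping template
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @cluster_template.json \
  'http://new-ambari-host:8080/api/v1/clusters/replica_cluster'
```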
Labels:
- Apache Ambari
- Apache Atlas
08-29-2017
03:34 PM
I have a requirement to maintain a cache of about 200,000 records, and I want to use the H2 database as the cache. If anybody has used H2 as a cache, please let me know the steps.
Labels:
- Apache NiFi
07-16-2017
06:36 PM
Thank you Matt, ListHDFS was a good hint. I was able to accomplish my task with your inputs.
07-16-2017
06:33 PM
Hi All, I have taken a snapshot in my Dev environment and exported it to the PreProd environment so that I can either clone a new table from the snapshot or restore it. I exported the snapshot from Dev to PreProd using the command below:

/usr/bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snapshot_table_name -copy-to hdfs://NAME_NODE:8020/hbase -mappers 5

My problem is that when I try to clone the table from this exported snapshot I get an "Unknown Snapshot" error, and the exported snapshot does not show up when I run the list_snapshots command in the HBase shell. I can see the exported snapshot in HDFS (since I gave the HDFS path as hdfs://NAME_NODE:8020/hbase) with:

hadoop fs -ls /hbase

However, all the other snapshots which I created in PreProd are present in this location:

hadoop fs -ls /apps/hbase/data

How can I resolve this issue? Is there a problem in the way I exported the snapshot? Do I have to export the snapshot with the path hdfs://NAME_NODE:8020/apps/hbase/data instead? Requesting assistance.
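A sketch of the re-export with -copy-to pointed at the destination cluster's hbase.rootdir, assuming it is /apps/hbase/data as the location of the other PreProd snapshots suggests (NAME_NODE remains a placeholder):

```shell
# Sketch: ExportSnapshot writes under the path given to -copy-to, and the
# HBase shell only sees snapshots under the cluster's hbase.rootdir
# (/apps/hbase/data on a typical HDP install, not /hbase).
/usr/bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot snapshot_table_name \
  -copy-to hdfs://NAME_NODE:8020/apps/hbase/data \
  -mappers 5
```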
Labels:
- Apache HBase
- Apache Phoenix
07-13-2017
08:38 AM
Hi All, I want to fetch data stored in HDFS using the FetchHDFS processor. The folder structure for our data is /MajorData/Location/Year/Month/Day/file1.txt (e.g. /MajorData/Location/2017/01/01/file1.txt). As the day changes, the path changes accordingly, e.g. to /MajorData/Location/2017/01/02/file2.txt. How can I write a NiFi expression which will traverse all of these folders and fetch the data into NiFi?
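As an illustration, a current-day path can be built with Expression Language (a sketch; such a value would typically go in the directory property of a ListHDFS/GetHDFS processor, and ListHDFS with Recursive Subdirectories set to true can also walk the whole tree):

```
/MajorData/Location/${now():format('yyyy/MM/dd')}
```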
Labels:
- Apache Hadoop
- Apache NiFi
07-11-2017
03:52 PM
Hi All, I have a requirement involving a table with about 100 million records on which we want to enable salting. In order to enable salting I would have to recreate the table. Can anyone suggest the best way to migrate these 100 million records to a different place, recreate the table with salting, and put the data back into the salted table? Also, the official documentation says: "There are some cautions and difference in behavior you should be aware about when using a salted table." Can anyone explain what differences in behavior we might see?
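One commonly described migration path can be sketched as follows (a sketch: the table names, column list, and SALT_BUCKETS value are illustrative assumptions):

```sql
-- Sketch: create the salted copy with the same schema, then copy the
-- rows server-side and swap the tables over once verified.
CREATE TABLE my_table_salted (
    id BIGINT NOT NULL PRIMARY KEY,
    payload VARCHAR
) SALT_BUCKETS = 8;

UPSERT INTO my_table_salted SELECT * FROM my_table;
```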
Labels:
- Apache HBase
- Apache Phoenix
07-04-2017
04:18 PM
I have created an index on my table using the DDL below: CREATE INDEX my_index ON my_table (col1, col2, col3, EVENT DESC) INCLUDE (col4, col5); However, the EVENT column is not sorted in descending order. Is there anything I am missing? Have I created the index correctly? One thing I wanted to highlight: my EVENT column's datatype is VARCHAR. Is that preventing the EVENT column from sorting?
Labels:
- Apache HBase
- Apache Phoenix
07-04-2017
01:30 PM
Thank you @Toshihiro Suzuki. I created the index like this: CREATE INDEX my_index ON my_table (col1, col2, col3, EVENT DESC) INCLUDE (col4, col5); However, the EVENT column is not sorted in descending order. Is there anything I am missing? Have I created the index correctly? One thing I wanted to highlight: my EVENT column's datatype is VARCHAR. Is that preventing the EVENT column from sorting?
07-03-2017
03:29 PM
Hi, I have a requirement in which I want the EVENT column to be indexed. I also wanted to know whether it is possible to order the EVENT column descending, something like: CREATE INDEX my_index ON my_table (EVENT ORDER BY DESC) INCLUDE (v2);
My base table has an EVENT column but it is not ordered. I am thinking that indexing the EVENT column in descending order would let me avoid the ORDER BY clause and reduce query time.
Requesting comments.
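For what it's worth, Phoenix's grammar puts the sort order directly on the indexed column rather than using ORDER BY, so the intended statement can be sketched as (names taken from the question):

```sql
-- Sketch: DESC on the indexed column stores EVENT in descending order,
-- letting queries that sort by EVENT DESC read the index directly.
CREATE INDEX my_index ON my_table (EVENT DESC) INCLUDE (v2);
```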
Labels:
- Apache HBase
- Apache Phoenix
06-27-2017
11:40 AM
Hi @Wynner, I have about 300 processors and we always have about 20 MB of streaming data flowing. What are the optimum values for Maximum Timer Driven Thread Count and Maximum Event Driven Thread Count? How can I decide what values to set for these two parameters?
06-20-2017
03:02 PM
Hi, I am trying to configure the NiFi ExecuteProcess processor to connect to MSSQL and execute a stored procedure. I followed this link: https://community.hortonworks.com/questions/26170/does-executesql-processor-allow-to-execute-stored.html where @M. Mashayekhi provided the steps for how he connected to MSSQL and executed the stored procedure. @Mashayekhi, I wanted to know what your ExecuteProcess configuration screen looks like, and also whether any additional client tools need to be installed so that I can execute the stored procedure. Thank you.
Labels:
- Apache NiFi
- Apache Spark
06-08-2017
02:03 PM
Hi, I have a flow file like: server|list|number|3|abc|xyz|pqr|2015-06-06 13:00:00, where fields are separated by the pipe character. In the record above, the count field 3 is followed by the three values abc, xyz and pqr. My requirement is to split this flow file into individual records based on that count, so my output should look like:
server|list|number|abc|2015-06-06 13:00:00
server|list|number|xyz|2015-06-06 13:00:00
server|list|number|pqr|2015-06-06 13:00:00
I have reached a stage where I have converted the flow file to JSON, split the JSON, and captured abc|xyz|pqr in one attribute. I request help on how to split them further into individual records in NiFi so that I can insert them into HBase.
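The splitting logic itself can be sketched in Python, e.g. for use inside an ExecuteScript processor (a sketch; the field layout is assumed from the example record, with the 4th field as a count of the variable values that follow and the last field as the timestamp):

```python
def split_record(line: str) -> list[str]:
    """Split one pipe-delimited record into one output record per
    variable value, keeping the fixed head fields and the timestamp."""
    parts = line.split("|")
    head = parts[:3]                 # server, list, number
    count = int(parts[3])            # how many variable values follow
    values = parts[4 : 4 + count]    # abc, xyz, pqr in the example
    timestamp = parts[-1]
    return ["|".join(head + [value, timestamp]) for value in values]

record = "server|list|number|3|abc|xyz|pqr|2015-06-06 13:00:00"
for row in split_record(record):
    print(row)
```

Each emitted row could then become its own flow file for the HBase insert.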
Labels:
- Apache HBase
- Apache NiFi
- Apache Phoenix
06-06-2017
07:57 AM
Hi All, we have an HBase/Phoenix table on which we created a covered index to improve read performance. There is a requirement to create one more covered index on the same table. Will having two indexes on the same table cause any performance issues?
Labels:
- Apache HBase
- Apache Phoenix
06-02-2017
12:51 PM
@Rajeshbabu Chintaguntla Requesting assistance.
05-29-2017
04:07 PM
We have a Phoenix table into which records are inserted by a NiFi workflow. The table has columns COL1 through COL5, with a composite primary key of COL1 | COL2 | COL3. We faced an issue in NiFi, and |COL2|COL3 was inserted as the primary key for all rows, without the COL1 value, so our primary key now looks like ( |COL2|COL3). Is there an UPSERT statement to update the existing primary key with the COL1 value so that the key becomes COL1|COL2|COL3? @Sergey Soldatov @Josh Elser
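A hedged sketch of the usual copy-then-delete approach (my_table and 'fixed_value' are placeholders; since the Phoenix primary key is the HBase row key, it cannot be updated in place, so the fix is to write corrected rows and delete the broken ones):

```sql
-- Sketch: rewrite rows whose COL1 is empty with the correct COL1 value,
-- then remove the originals.
UPSERT INTO my_table (COL1, COL2, COL3, COL4, COL5)
    SELECT 'fixed_value', COL2, COL3, COL4, COL5
    FROM my_table WHERE COL1 = '';

DELETE FROM my_table WHERE COL1 = '';
```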
Labels:
- Apache HBase
- Apache Phoenix
05-18-2017
04:04 PM
Some of the users who access our cluster are trying to access HDFS; however, they get a permission denied error on the production cluster. Below are the steps I am planning to implement in Ranger:
1. Since the Ranger plugin is enabled, I will add a policy.
2. Mention the path to which I need to provide access.
3. Add their user IDs and the type of access, and save.
The problem is that the users I want to add are not showing up in the list of users. Can we go ahead and manually add the users in Ranger and then provide access? Requesting help to validate this, or is there anything else I have to take care of before doing these steps?
Labels:
- Apache Hadoop
- Apache Ranger
05-04-2017
09:40 AM
1 Kudo
A couple of days ago I observed that NiFi was not processing any records; it appeared hung. In bootstrap.log I saw this message: "Apache nifi is running at PID () but not responding to ping requests". The issue was fixed by restarting the NiFi service, but what could be the possible reason for it? It has happened twice in our cluster. What can be done so that the issue does not reoccur? Requesting comments.
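For the next occurrence, a thread dump captured before the restart usually shows what the instance is blocked on; a sketch, assuming a standard install path:

```shell
# Sketch: write a thread dump of the hung NiFi JVM to a file before
# restarting (install and output paths are placeholder assumptions).
/opt/nifi/bin/nifi.sh dump /tmp/nifi-thread-dump.txt
```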
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)
05-04-2017
09:00 AM
Hi All, I want to set dfs.datanode.failed.volumes.tolerated to 1 so that the DataNode service will be fault tolerant. Once I set this parameter in Ambari it will ask for an HDFS service restart. My question: if I restart the HDFS service, which other services will be restarted? Will HBase also be restarted? Since I don't have an environment in which to check before performing this activity, I am posting this question.
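For context, the Ambari setting corresponds to this hdfs-site.xml entry (a sketch of the rendered config; a value of 1 lets a DataNode keep running after one data volume fails):

```xml
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
```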
Labels:
- Apache Hadoop
- Apache HBase
05-02-2017
04:43 PM
Thank you for the comments, mqureshi. Will check on this and update.
05-02-2017
04:41 PM
Hi Sindhu, datanucleus.connectionPoolingType is dbcp. If there are inputs based on this, please let me know.
04-27-2017
01:42 PM
Hi All, I have Oracle as my Hive metastore database and I am getting the below error in the Oracle DB: "ORA-00060: Deadlock detected. See Note 60.1 at My Oracle Support for Troubleshooting ORA-60 Errors. More info in file /apps/oracle/diag/rdbms/hive". What could be the possible reasons for this, and how do I resolve it? It is creating very large trace files and sending alerts.