Member since: 09-29-2015
Posts: 63
Kudos Received: 19
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1123 | 09-30-2017 06:13 AM |
| | 848 | 06-09-2017 02:31 AM |
| | 4316 | 03-15-2017 04:04 PM |
| | 2872 | 03-15-2017 08:37 AM |
| | 739 | 12-11-2016 01:15 PM |
04-04-2018
03:13 AM
Does Spark need Python 3 and its dependencies installed on all cluster nodes to work with Python 3?
Labels: Apache Spark, Apache Zeppelin
10-09-2017
04:35 PM
@kgautam thanks for the quick response. It has the below statements suggesting the frequency can't be changed: "Update command doesn't allow update of coordinator name, frequency, start time, end time and timezone and will fail on an attempt to change any of them. To change end time of coordinator use the -change command"
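For the end time, which the quoted documentation does allow changing, a sketch of the `-change` invocation (the Oozie server URL and coordinator job ID below are placeholders):

```shell
# Hedged sketch: change the end time of a running coordinator.
# Server URL and coordinator ID are placeholders to replace.
oozie job -oozie http://oozie-host:11000/oozie \
  -change 0000001-170101000000000-oozie-oozi-C \
  -value endtime=2018-12-31T00:00Z
```

To change the frequency itself, the usual approach is to kill the coordinator and resubmit it with an updated coordinator.xml.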
10-09-2017
04:07 PM
@kgautam can you please point to a quick example? It would be of much help.
10-09-2017
02:06 PM
Can an Oozie coordinator's frequency be updated on the fly?
Labels: Apache Oozie
10-05-2017
02:09 PM
Can Oozie invoke a RESTful service for notification? Is there an example which can be used ...
Tags: Oozie
Labels: Apache Oozie
09-30-2017
06:13 AM
1 Kudo
@Sreelakshmi Lingala Although it depends on the requirements of the project or implementation, the following link provides details of log size management: https://community.hortonworks.com/articles/8882/how-to-control-size-of-log-files-for-various-hdp-c.html OS-level tools are also important for managing disk space and compressing the log files. Also check the logging level (WARN, DEBUG, ...) enabled for the various HDP components, which would avoid unexpected results. Along with this, one needs to make sure the Ambari logs (usually under /var/log) also do not spiral out of control.
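As an example of the OS-level side, a minimal logrotate sketch for one component's log directory (the path, rotation cadence, and retention count are assumptions to adapt, not HDP defaults):

```
# Hypothetical /etc/logrotate.d/hdp-hdfs-example
/var/log/hadoop/hdfs/*.log {
    daily          # rotate once a day
    rotate 7       # keep one week of rotated logs
    compress       # gzip rotated files to save disk
    missingok      # don't error if a log is absent
    notifempty     # skip empty logs
}
```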
09-27-2017
12:32 PM
Can SPNEGO be used as the authentication mechanism for NiFi InvokeHTTP/GetHTTP, and are there any pointers on how it can be implemented?
Labels: Apache NiFi
09-23-2017
04:21 AM
@Vinay Sikka Can you please check the YARN RM UI pointed out in the tutorial and see if those jobs are running on YARN? http://hadooptutorial.info/yarn-web-ui/
09-22-2017
03:55 AM
@Vinay Sikka Can you please check the status on YARN and whether those jobs are running? A plain SELECT * and creating Hive tables don't need YARN.
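Besides the RM UI, the same check can be done from the command line with the standard YARN CLI (the application ID below is a placeholder):

```shell
# List applications currently running on YARN
yarn application -list -appStates RUNNING

# Show the status of one specific application (placeholder ID)
yarn application -status application_1500000000000_0001
```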
09-22-2017
03:09 AM
1 Kudo
@Pratap Champati I assume you might have used the following to connect to Hive: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-IntegrationwithSQuirrelSQLClient What version of Hive are you using? Can you please make sure you have the hive-service*.jar file also in the classpath?
09-22-2017
02:51 AM
1 Kudo
@Harsha C I assume the issue is accessing the URL (http://<web.server>/hdp/HDP/<OS>/2.x/updates/<latest.version>), but you are able to download the tar.gz? Browsing is not allowed in the repo, and that is the reason you are not able to see the directory. Let me know if that answers your query ...
09-22-2017
02:44 AM
@Bala Kolla can you please update the question with the JDBC connection string? The issue might be with the connection string parameters; the second possibility could be a version mismatch. Can you use the hive-jdbc jar provided by the HDP install rather than downloading one from the web?
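For reference, the typical HiveServer2 JDBC URL shapes (host, port, database, and principal below are placeholders; the principal form applies only to Kerberized clusters):

```
jdbc:hive2://hs2-host:10000/default
jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM
```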
06-18-2017
06:16 PM
Can Ambari Hive View queries be shared between users?
Tags: ambari-views, Security
Labels: Apache Ambari
06-12-2017
05:23 AM
@Jay SenSharma The above URL provides details pertaining to Ambari; it doesn't provide the HDP version.
06-12-2017
04:06 AM
Is there an Ambari REST API to get the full version, including the minor version, e.g. 2.4.x.x? Currently I am able to get the high-level stack version as 2.4, but not with the minor versions included.
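A hedged sketch of one way to query this over the Ambari REST API (the host, credentials, and cluster name are placeholders, and the exact `fields` path may vary by Ambari version, so verify against your install):

```shell
# Placeholder host/credentials/cluster; adjust the fields path for your Ambari version
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://ambari-host:8080/api/v1/clusters/MyCluster/stack_versions?fields=repository_versions/RepositoryVersions/repository_version'
```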
Labels: Apache Ambari
06-09-2017
10:57 AM
@Fabien VIROT I don't see an issue with the configuration; it could be because of an incorrect username/password while running the script for the DB ...
06-09-2017
03:19 AM
@Sundara Palanki The Ranger Hive plugin only applies to HS2; the Hive CLI would not honor policies defined in Ranger. "The best way to protect Hive CLI would be to enable permissions for HDFS files/folders mapped to the Hive database and tables. In order to secure metastore, it is also recommended to turn on storage-based authorization." You should define User/Group permissions for Hive resources via Ranger when connecting from HS2.
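The storage-based authorization mentioned above is typically switched on in the metastore's hive-site.xml along these lines (a sketch; verify the property names and classes against your HDP version's docs):

```xml
<!-- Sketch: enable storage-based authorization on the Hive metastore -->
<property>
  <name>hive.metastore.pre.event.listeners</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
</property>
<property>
  <name>hive.security.metastore.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
</property>
```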
06-09-2017
02:34 AM
@Fabien VIROT Can you please provide the detailed settings, which might help identify the issue, and also, if you can, a snapshot of the JDBC test with alias and password?
06-09-2017
02:31 AM
@Prakhar Agrawal One of the reasons could be that the custom processor is not stopped; your custom code is blocking and does not react to interrupts, and hence you are not able to restart it. One solution could be a restart of Apache NiFi.
06-09-2017
02:05 AM
@Benjamin Hopp The POST response might not be in a valid format, which could be causing the issue.
06-08-2017
12:29 PM
@Rishit shah Can you please check the Ambari logs for any errors and also test connectivity via Beeline?
06-08-2017
08:56 AM
@Gaurav Jain If you know the number of files upfront in SFTP, you can set the "Minimum Number of Entries" (files) required before a bin is marked complete and forwarded to the next processor.
05-05-2017
04:51 PM
1 Kudo
We have a use case of using "InvokeHTTP" for a RESTful service, getting all the details (URL/username/password) from the DB; we need to pass basic authentication (username and password) via attributes retrieved from the DB. Is it possible to pass authentication using attributes?
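One commonly used workaround sketch, assuming the credentials land in flow file attributes named db.user and db.pass (hypothetical names): build the Authorization header value yourself with UpdateAttribute using Expression Language, then have InvokeHTTP forward that attribute as an HTTP header via its "Attributes to Send" property.

```
# UpdateAttribute — add an attribute (NiFi Expression Language):
Authorization = Basic ${db.user:append(':'):append(${db.pass}):base64Encode()}

# InvokeHTTP property — regex of attributes to send as request headers:
Attributes to Send = Authorization
```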
Labels: Apache NiFi
03-15-2017
04:04 PM
Deleting the journal folder and restarting NiFi did solve the issue. Not sure what caused the issue in the first place.
03-15-2017
08:51 AM
Is there a way to clean up flow files and keep only the attributes after event processing is completed? We need to remove the flow files for security reasons. Any suggestions on whether we can delete them, or any other way of handling the above use case?
Labels: Apache NiFi
03-15-2017
08:47 AM
Hello, Apache NiFi doesn't show Data Provenance and shows the status as "Searching provenance events". I checked the logs and they show the following error. Any suggestions to resolve the same?

2017-03-15 08:40:06,603 ERROR [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Failed to merge journals. Will try again. journalsToMerge: [./provenance_repository/journals/2642.journal.0, ./provenance_repository/journals/2642.journal.1, ./provenance_repository/journals/2642.journal.2, ./provenance_repository/journals/2642.journal.3, ./provenance_repository/journals/2642.journal.4, ./provenance_repository/journals/2642.journal.5, ./provenance_repository/journals/2642.journal.6, ./provenance_repository/journals/2642.journal.7, ./provenance_repository/journals/2642.journal.8, ./provenance_repository/journals/2642.journal.9, ./provenance_repository/journals/2642.journal.10, ./provenance_repository/journals/2642.journal.11, ./provenance_repository/journals/2642.journal.12, ./provenance_repository/journals/2642.journal.13, ./provenance_repository/journals/2642.journal.14, ./provenance_repository/journals/2642.journal.15], storageDir: ./provenance_repository, cause: java.lang.RuntimeException: java.io.FileNotFoundException: _1g.fdt

2017-03-15 08:40:06,607 ERROR [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository
java.lang.RuntimeException: java.io.FileNotFoundException: _1g.fdt
    at org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:258) ~[na:na]
    at org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:238) ~[na:na]
    at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355) ~[na:1.8.0_101]
    at java.util.TimSort.sort(TimSort.java:234) ~[na:1.8.0_101]
    at java.util.Arrays.sort(Arrays.java:1512) ~[na:1.8.0_101]
    at java.util.ArrayList.sort(ArrayList.java:1454) ~[na:1.8.0_101]
    at java.util.Collections.sort(Collections.java:175) ~[na:1.8.0_101]
    at org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:292) ~[na:na]
    at org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2020) ~[na:na]
    at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1984) ~[na:na]
    at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3029) ~[na:na]
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3134) ~[na:na]
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3101) ~[na:na]
    at org.apache.nifi.provenance.lucene.SimpleIndexManager.returnIndexWriter(SimpleIndexManager.java:162) ~[na:na]
    at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1864) ~[na:na]
    at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1332) ~[na:na]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.FileNotFoundException: _1g.fdt
    at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:255) ~[na:na]
    at org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:219) ~[na:na]
    at org.apache.lucene.index.MergePolicy.size(MergePolicy.java:478) ~[na:na]
    at org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:248) ~[na:na]
    ... 22 common frames omitted

Thanks, Nagesh
Labels: Apache NiFi
03-15-2017
08:37 AM
1 Kudo
@Prakhar Agrawal There doesn't seem to be out-of-the-box support for xls files, although you can use something custom for parsing them. One example: https://community.hortonworks.com/questions/36875/where-to-convert-xls-file-to-csv-file-inside-nifi.html
03-06-2017
06:14 AM
Hello, We are trying to read data from Oracle tables, and "DATE" data types are converted into "timestamp" data types. E.g., the table in Oracle:

desc hr.employees;
Name            Null?    Type
--------------- -------- ------------
EMPLOYEE_ID     NOT NULL NUMBER(6)
FIRST_NAME               VARCHAR2(20)
LAST_NAME       NOT NULL VARCHAR2(25)
EMAIL           NOT NULL VARCHAR2(25)
PHONE_NUMBER             VARCHAR2(20)
HIRE_DATE       NOT NULL DATE
JOB_ID          NOT NULL VARCHAR2(10)
SALARY                   NUMBER(8,2)
COMMISSION_PCT           NUMBER(2,2)
MANAGER_ID               NUMBER(6)
DEPARTMENT_ID            NUMBER(4)
SSN                      VARCHAR2(55)

and the schema read into the DataFrame in Scala:

|-- EMPLOYEE_ID: decimal(6,0) (nullable = false)
|-- FIRST_NAME: string (nullable = true)
|-- LAST_NAME: string (nullable = false)
|-- EMAIL: string (nullable = false)
|-- PHONE_NUMBER: string (nullable = true)
|-- HIRE_DATE: timestamp (nullable = false)   <-- incorrect data type read here
|-- JOB_ID: string (nullable = false)
|-- SALARY: decimal(8,2) (nullable = true)
|-- COMMISSION_PCT: decimal(2,2) (nullable = true)
|-- MANAGER_ID: decimal(6,0) (nullable = true)
|-- DEPARTMENT_ID: decimal(4,0) (nullable = true)
|-- SSN: string (nullable = true)

HIRE_DATE is read incorrectly as timestamp; is there a way to correct this? Data is being read from Oracle on the fly, and the application has no upfront knowledge of the data types, so it can't convert them after being read. Thanks in advance. Nagesh
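One option worth trying, as a sketch: Oracle's JDBC driver maps DATE columns to TIMESTAMP by default, and the driver exposes a connection property, oracle.jdbc.mapDateToTimestamp, to turn that off; Spark's JDBC source passes unrecognized options through to the driver as connection properties. The host, service name, and credentials below are placeholders, and behavior should be verified against your driver version:

```scala
// Sketch: ask the Oracle driver to surface DATE as java.sql.Date,
// which Spark should then read as DateType instead of TimestampType.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCL") // placeholder host/service
  .option("dbtable", "hr.employees")
  .option("user", "hr")                                   // placeholder credentials
  .option("password", "...")
  .option("oracle.jdbc.mapDateToTimestamp", "false")      // driver property passed through
  .load()
```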
Labels: Apache Spark
03-01-2017
03:41 AM
@Artem Ervits thanks for the quick response. We are using these, but we would like to restrict users from creating new notebooks, which is not covered in the documentation.
03-01-2017
03:24 AM
The Ranger HDFS plugin stops syncing HDFS policies, and we don't have any log statement in the NN logs pointing to an error; I also can't see the HDFS NN in the Ranger plugin audit trail. Is there a log setting which would enable Ranger-related logs in the NN?
Labels: Apache Ranger