Member since
10-20-2016
28
Posts
9
Kudos Received
7
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2279 | 07-13-2017 12:47 PM |
| | 3099 | 06-30-2017 01:37 PM |
| | 3433 | 06-30-2017 05:18 AM |
| | 1409 | 06-29-2017 03:15 PM |
| | 2698 | 06-23-2017 01:51 PM |
06-30-2017
05:18 AM
1 Kudo
@mliem Did you copy the flow.xml.gz from your old installation to this one after you wiped everything 2.x related and installed 3.x? All the sensitive properties inside the flow.xml.gz file are encrypted using the sensitive properties key defined in the nifi.properties file (if it is blank, NiFi uses an internal default value). If you move your flow.xml.gz file to another NiFi instance, the sensitive properties key must be the same, or NiFi will fail to start because it cannot decrypt the sensitive properties in the file.
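As a quick check, you can compare the `nifi.sensitive.props.key` entries of the two installations before copying flow.xml.gz over. The paths below are hypothetical; this sketch uses throwaway files standing in for the two `conf/nifi.properties` files so it is runnable as-is:

```shell
# In practice, "$old" and "$new" would be <old-install>/conf/nifi.properties
# and <new-install>/conf/nifi.properties; throwaway files stand in for them here.
old=$(mktemp); new=$(mktemp)
echo 'nifi.sensitive.props.key=myOldKey' > "$old"
echo 'nifi.sensitive.props.key=' > "$new"   # blank => NiFi falls back to its internal default
k_old=$(grep '^nifi.sensitive.props.key=' "$old" | cut -d= -f2-)
k_new=$(grep '^nifi.sensitive.props.key=' "$new" | cut -d= -f2-)
if [ "$k_old" != "$k_new" ]; then
  echo "MISMATCH: new NiFi cannot decrypt sensitive properties in flow.xml.gz"
fi
rm -f "$old" "$new"
```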
06-29-2017
07:12 PM
@Sandeep Nemuri Awesome. That worked. Thanks.
06-29-2017
07:03 PM
@Sandeep Nemuri Here are the properties from my cluster:

```properties
# Generated by Apache Ambari. Mon Jun 12 12:33:51 2017
atlas.authentication.method.kerberos=False
atlas.cluster.name=mycluster
atlas.jaas.KafkaClient.option.renewTicket=true
atlas.jaas.KafkaClient.option.useTicketCache=true
atlas.kafka.bootstrap.servers=lab1.hwx.com:6667
atlas.kafka.hook.group.id=atlas
atlas.kafka.zookeeper.connect=lab1.hwx.com:2181
atlas.kafka.zookeeper.connection.timeout.ms=30000
atlas.kafka.zookeeper.session.timeout.ms=60000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.notification.create.topics=True
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.rest.address=http://lab1.hwx.com:21000
```
06-29-2017
06:53 PM
While running the sqoop command I get the following error message:

ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration for client [KafkaClient] as it is missing param [atlas.jaas.KafkaClient.loginModuleName]. Skipping JAAS config for [KafkaClient]

Any pointers are appreciated.
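For reference, a hedged sketch of the parameter the error names, as it would appear in atlas-application.properties. This is not the confirmed resolution from this thread; com.sun.security.auth.module.Krb5LoginModule is the standard JAAS Kerberos login module, and on a non-kerberized cluster the atlas.jaas.KafkaClient.* options may instead simply need to be removed:

```properties
# Hypothetical fix sketch, not the confirmed resolution from this thread:
atlas.jaas.KafkaClient.loginModuleName=com.sun.security.auth.module.Krb5LoginModule
atlas.jaas.KafkaClient.loginModuleControlFlag=required
```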
Labels:
- Apache Sqoop
06-29-2017
03:15 PM
1 Kudo
The error message indicates that the permissions on the worker-launcher binary are not correct. Please check the ownership of the worker-launcher binary: it should be owned by root:hadoop, or by root and the group parameter configured in worker-launcher.cfg. From the log message it appears that the group ownership is currently root.
Also note that the permissions on the binary should be 6550 (setuid + setgid), otherwise it will fail again after you change the group ownership. Here is the output from my test system:

```shell
# stat /usr/hdp/2.5.0.0-1133/storm/bin/worker-launcher
  File: `/usr/hdp/2.5.0.0-1133/storm/bin/worker-launcher'
  Size: 56848   Blocks: 112   IO Block: 4096   regular file
Device: fc01h/64513d   Inode: 1444319   Links: 1
Access: (6550/-r-sr-s---)   Uid: (0/root)   Gid: (501/hadoop)
Access: 2016-08-03 13:25:37.000000000 +0000
Modify: 2016-08-03 13:25:37.000000000 +0000
Change: 2016-11-19 13:23:02.764000118 +0000
```
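The corrective commands can be sketched as follows. The path is the one from the stat output above and may differ per HDP version; the chown/chmod on the real binary require root, so the runnable part below demonstrates the 6550 permission bits on a scratch file instead:

```shell
# On the real system (requires root; path may differ per HDP version):
#   chown root:hadoop /usr/hdp/2.5.0.0-1133/storm/bin/worker-launcher
#   chmod 6550 /usr/hdp/2.5.0.0-1133/storm/bin/worker-launcher
# Demonstrating the permission bits on a scratch file:
f=$(mktemp)
chmod 6550 "$f"        # setuid + setgid, r-x for owner and group, nothing for others
stat -c '%a %A' "$f"   # prints: 6550 -r-sr-s---
rm -f "$f"
```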
06-26-2017
02:36 PM
@Raj B Please check the timestamps of the files remaining in the directory: whether they are being added while the processor is running, or whether their timestamps are older than the cron run time of the processor.
06-26-2017
10:08 AM
1 Kudo
@Raj B This looks similar to NIFI-4069. As a workaround, please try changing the cron schedule to `0,30 30 0 * *` so that it runs twice in the same minute. Let us know if that helps.
06-23-2017
01:51 PM
1 Kudo
@AViradia I suspect that the Windows file-locking mechanism prevents the original file from being renamed (see http://dev.eclipse.org/mhonarc/lists/jetty-users/msg03222.html for details). You can omit the file property in logback.xml; the active log file will then be computed anew for each period, based on the value of fileNamePattern.
A working rollingPolicy for a NiFi node on Windows is as follows (the standard logback class attributes, which the snippet needs to be valid, are included):

```xml
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file> -->
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!--
            For daily rollover, use 'app_%d.log'.
            For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
            To GZIP rolled files, replace '.log' with '.log.gz'.
            To ZIP rolled files, replace '.log' with '.log.zip'.
        -->
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
        <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>100MB</maxFileSize>
        </timeBasedFileNamingAndTriggeringPolicy>
        <!-- keep 30 log files' worth of history -->
        <maxHistory>30</maxHistory>
    </rollingPolicy>
</appender>
```
Ref: https://logback.qos.ch/manual/appenders.html
06-23-2017
10:59 AM
1 Kudo
@subash sharma The error messages you are seeing point to issues with the SSH configuration.
Error: "host key verification failed". This means the SSH client is unable to verify 'edge-node-hostname'. It is likely that there is no entry for 'edge-node-hostname' in the known_hosts file in the home directory of the user NiFi runs as. If NiFi runs as root, check /root/.ssh/known_hosts; if NiFi runs as a non-root user, check '.ssh/known_hosts' in that user's home directory. You can also run the following command, which will prompt for a yes/no confirmation to add the entry for 'edge-node-hostname' to the known_hosts file: sudo -u <NiFi_user> ssh username@edge-node-hostname
Error: "Pseudo-terminal will not be allocated because stdin is not a terminal". This appears because the ExecuteProcess processor is only trying to obtain a remote shell, which is not possible there. You can either append a command to be run after ssh connects, or add the -t -t arguments to the ssh command: ssh username@edge-node-hostname date
OR ssh -t -t username@edge-node-hostname
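As a non-interactive alternative to the yes/no prompt, ssh-keyscan can pre-populate the known_hosts file. The hostname below is the placeholder used in this thread; run this as the user NiFi runs as:

```shell
# Fetch the host key of 'edge-node-hostname' (placeholder) and append it to
# the current user's known_hosts, avoiding the interactive confirmation:
ssh-keyscan edge-node-hostname >> ~/.ssh/known_hosts
```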
06-08-2017
07:00 PM
@Karan Alang If you are using the new Kafka Consumer, try bootstrap.servers and see if that helps. Check https://community.hortonworks.com/articles/24599/kafka-mirrormaker.html for more details.
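A minimal sketch of a MirrorMaker consumer.config using the new Kafka consumer; the broker hostnames and group id below are placeholders, with port 6667 being the usual HDP Kafka listener port:

```properties
# consumer.config sketch for MirrorMaker with the new Kafka consumer
# (hostnames and group id are placeholders):
bootstrap.servers=broker1:6667,broker2:6667
group.id=mirrormaker-consumer
```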