Member since: 06-25-2019
Posts: 12
Kudos Received: 0
Solutions: 0
04-30-2019
09:55 AM
Working fine now, thanks.
04-16-2019
01:44 PM
Hi Geoffrey, thanks. I have configured it exactly as you suggested (PFB snapshot), but the data is still not moving from local to HDFS.

Error log:

2019-04-16 02:43:14,417 ERROR [Timer-Driven Process Thread-7] org.apache.nifi.util.ReflectionUtils Failed while invoking annotated method 'public final void org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.abstractOnStopped()' with arguments '[]'.
java.lang.reflect.InvocationTargetException: null
    at sun.reflect.GeneratedMethodAccessor379.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:142)
    at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:130)
    at org.apache.nifi.util.ReflectionUtils.quietlyInvokeMethodsWithAnnotations(ReflectionUtils.java:268)
    at org.apache.nifi.util.ReflectionUtils.quietlyInvokeMethodsWithAnnotation(ReflectionUtils.java:90)
    at org.apache.nifi.controller.StandardProcessorNode.lambda$initiateStart$4(StandardProcessorNode.java:1547)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
    at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.abstractOnStopped(AbstractHadoopProcessor.java:286)
    ... 14 common frames omitted
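[Editor's note] A NullPointerException inside abstractOnStopped() is consistent with the HDFS client never having been initialized, which often traces back to the files listed in the PutHDFS "Hadoop Configuration Resources" property being missing or unreadable on the NiFi host. A minimal sanity-check sketch, assuming the configs live under /etc/hadoop/conf (a placeholder path — substitute the value you actually entered in the processor):

```shell
# Verify the files referenced by PutHDFS's "Hadoop Configuration Resources"
# exist and are readable by the user running NiFi.
CONF_DIR=/etc/hadoop/conf   # hypothetical path; use your actual property value
for f in core-site.xml hdfs-site.xml; do
  if [ -r "$CONF_DIR/$f" ]; then
    echo "OK: $CONF_DIR/$f is readable"
  else
    echo "MISSING: $CONF_DIR/$f" >&2
  fi
done
# If both files are present, confirm the NiFi host can actually reach HDFS:
# hdfs dfs -ls / 2>&1 | head -5
```

If either file is missing, copy it from a cluster node and restart the processor before re-testing the flow.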
04-16-2019
06:25 AM
Can we put data into HDFS from a NiFi instance that is not part of any cluster? I have a 4-node HDP cluster managed by Ambari, and I have installed NiFi as a standalone server on the master node machine. Can I get data from my local machine and store it in HDFS?
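[Editor's note] Yes — a standalone NiFi can write to a remote HDFS as long as it has the cluster's client configuration files and network access to the NameNode and DataNodes. A minimal sketch of the setup, with hypothetical hostnames and paths (substitute your own):

```shell
# Copy the cluster's HDFS client configs to the standalone NiFi host.
# "masternode" and the target directory are placeholders.
scp masternode:/etc/hadoop/conf/core-site.xml /opt/nifi/conf/
scp masternode:/etc/hadoop/conf/hdfs-site.xml /opt/nifi/conf/

# Then, in the PutHDFS processor, set the property:
#   Hadoop Configuration Resources =
#     /opt/nifi/conf/core-site.xml,/opt/nifi/conf/hdfs-site.xml
```

The property takes a comma-separated list of both files; NiFi reads the NameNode address from core-site.xml, so no extra host configuration is needed in the processor itself.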
Labels: Apache Hadoop, Apache NiFi
03-25-2019
11:11 AM
WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

When I try to send messages from the producer console, I get the above warning. Kafka 1.0.1 and ZooKeeper are both in running mode.
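[Editor's note] "Connection to node -1 could not be established" usually means the producer is pointed at the wrong port or the broker process is down. On HDP the Kafka broker typically listens on 6667 rather than the upstream default 9092. A quick reachability sketch, with a placeholder hostname:

```shell
# Probe both candidate broker ports; requires bash (/dev/tcp).
BROKER_HOST=mn.example.com   # hypothetical; substitute your broker host
for port in 6667 9092; do
  if timeout 2 bash -c "echo > /dev/tcp/$BROKER_HOST/$port" 2>/dev/null; then
    echo "broker reachable on $port"
  else
    echo "no listener on $port"
  fi
done
```

If neither port answers, check the broker process and the `listeners` setting in the broker's server.properties via Ambari.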
Labels: Apache Kafka
03-22-2019
10:24 PM
@Chiran Ravani After doing what you mentioned, I am still unable to create the table. Please find the error message below.

create table patient(
  Patient_Id int,
  Full_Name string,
  SSN string,
  Email string,
  Phone_no string,
  Gender string,
  Addr_line1 string,
  Addr_line2 string,
  Addr_line3 string,
  City string,
  Country string,
  Race string,
  Drug1 string,
  Drug2 string,
  ICD_Code string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

INFO : Compiling command(queryId=hive_20190322085132_34c73f65-38bf-4445-8580-44a263066a55): create table patient(Patient_Id int, Full_Name string, SSN string, Email string, Phone_no string, Gender string, Addr_line1 string, Addr_line2 string, Addr_line3 string, City string, Country string, Race string, Drug1 string, Drug2 string, ICD_Code string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' STORED AS TEXTFILE
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20190322085132_34c73f65-38bf-4445-8580-44a263066a55); Time taken: 0.064 seconds
INFO : Executing command(queryId=hive_20190322085132_34c73f65-38bf-4445-8580-44a263066a55): create table patient(...)
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=dm_user, access=EXECUTE, inode="/warehouse/tablespace/managed/hive/dm_dev.db":hive:hadoop:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:606)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1799)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1817)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:674)
    at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:114)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3091)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1154)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678) )
INFO : Completed executing command(queryId=hive_20190322085132_34c73f65-38bf-4445-8580-44a263066a55); Time taken: 0.102 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException (same AccessControlException and stack trace as above) (state=08S01,code=1)
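[Editor's note] The exception itself names the problem: the database directory is owned by hive:hadoop with mode drwxrwx---, and dm_user is in neither the owner nor the group, so the "other" bits (---) deny EXECUTE. A hedged sketch of two possible fixes, to be run by an administrator (on HDP 3 the recommended route is a Ranger policy; the commands below are the plain HDFS alternatives):

```shell
# Option 1: add dm_user to the "hadoop" group named in the error
# (run as root on the relevant hosts):
#   usermod -aG hadoop dm_user

# Option 2: grant dm_user explicit access with an HDFS ACL
# (run as the hdfs superuser):
#   hdfs dfs -setfacl -m user:dm_user:rwx \
#     /warehouse/tablespace/managed/hive/dm_dev.db

# Either way, verify afterwards:
#   hdfs dfs -ls /warehouse/tablespace/managed/hive/ | grep dm_dev.db
```

On a Ranger-enabled HDP 3 cluster, prefer adding a Hive/HDFS policy for dm_user over changing POSIX permissions directly, since Hive manages the warehouse directories itself.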
03-22-2019
10:19 PM
Using the command below, I created the producer session:

./kafka-console-producer.sh --broker-list mn.xxxxxxxx.com:6667 --topic dm_sample1
<entered some text>

Then I opened another session and typed the consumer commands below:

./kafka-console-consumer.sh --bootstrap-server mn.xxxxxx.com:9092 --zookeeper localhost:2181 --topic dm_sample1 --from-beginning

and

./kafka-console-consumer.sh --bootstrap-server mn.xxxx.com:6667 --zookeeper localhost:2181 --topic dm_sample1 --from-beginning

After trying both commands, I still get the same error:

Option [bootstrap-server] is not valid with [zookeeper].

Versions: Kafka 1.0.1, ZooKeeper 3.4.6, HDP 3.
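[Editor's note] The error message is literal: --bootstrap-server selects the new consumer API and --zookeeper the old one, and kafka-console-consumer.sh refuses to accept both at once. Since the producer worked against the broker on port 6667, a consumer sketch using only the new API (hostname mirrors the elided one in the post):

```shell
# New-consumer form: talk to the broker directly, no --zookeeper flag.
./kafka-console-consumer.sh \
  --bootstrap-server mn.xxxxxxxx.com:6667 \
  --topic dm_sample1 \
  --from-beginning
```

The port must match the broker's listener (6667 on HDP by default); the --zookeeper form is deprecated in Kafka 1.0.x and only needed for the old consumer.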
Labels: Apache Kafka