Member since: 02-08-2018
Posts: 17
Kudos Received: 1
Solutions: 0
09-24-2019
11:41 AM
Hello Sir,
After installing Kafka in Cloudera Manager, I tested the code below, but the consumer is not able to consume the messages. While installing Kafka in Cloudera Manager, I set the following two configurations:
Destination Broker List (bootstrap.servers) ===> quickstart.cloudera:9092
Source Broker List (source.bootstrap.servers) ===> quickstart.cloudera:9092
Java Heap Size of Broker (broker_max_heap_size) = "256"
Advertised Host (advertised.host.name) = "quickstart.cloudera"
Inter Broker Protocol = "PLAINTEXT"
kafka-topics --zookeeper quickstart.cloudera:2181 --create --topic smoke --partitions 1 --replication-factor 1
created topic smoke
[cloudera@quickstart ~]$ kafka-topics --zookeeper quickstart.cloudera:2181 --list
anbu
hello-kafka
kafka-sanity
smoke
test
xx1
[cloudera@quickstart ~]$ kafka-console-producer --broker-list quickstart.cloudera:9092 --topic smoke
>smoke testing for kafka
>checking
>connection
[cloudera@quickstart ~]$ kafka-console-consumer --bootstrap-server quickstart.cloudera:9092 --topic smoke --from-beginning
Could someone help me figure out what needs to be fixed, and why the consumer is not able to consume those messages?
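Before the ZooKeeper checks below, one sanity check I could run (a sketch, reusing the same host and port as above) is to describe the topic and confirm partition 0 has a leader:

kafka-topics --zookeeper quickstart.cloudera:2181 --describe --topic smoke
# For a healthy single-broker topic this should report Partition: 0 with a
# Leader assigned (not -1); a missing leader would explain a silent consumer.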
Zookeeper checks
--------------------------
[cloudera@quickstart ~]$ zookeeper-client
Connecting to localhost:2181
2019-09-24 11:40:48,109 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh5.13.0--1, built on 10/04/2017 18:04 GMT
2019-09-24 11:40:48,123 [myid:] - INFO [main:Environment@100] - Client environment:host.name=quickstart.cloudera
2019-09-24 11:40:48,123 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_221
2019-09-24 11:40:48,125 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2019-09-24 11:40:48,125 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8/jre
2019-09-24 11:40:48,125 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/usr/lib/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/lib/zookeeper/bin/../lib/jline-2.11.jar:/usr/lib/zookeeper/bin/../zookeeper-3.4.5-cdh5.13.0.jar:/usr/lib/zookeeper/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/etc/zookeeper/conf:/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.13.0.jar:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/lib/log4j-1.2.16.jar:/usr/lib/zookeeper/lib/netty-3.10.5.Final.jar:/usr/lib/zookeeper/lib/slf4j-log4j12.jar:/usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/lib/jline-2.11.jar
2019-09-24 11:40:48,125 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-573.el6.x86_64
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:user.name=cloudera
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/home/cloudera
2019-09-24 11:40:48,126 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/home/cloudera
2019-09-24 11:40:48,127 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@306a30c7
Welcome to ZooKeeper!
2019-09-24 11:40:48,151 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2019-09-24 11:40:48,353 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:48990, server: localhost/127.0.0.1:2181
2019-09-24 11:40:48,381 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x16d647f9f970060, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]
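From this shell, one more check worth trying (a sketch; it assumes Kafka registers under the default ZooKeeper root, so the path may differ if a chroot such as /kafka is configured):

ls /brokers/ids
# Should list at least one broker id if the broker registered with ZooKeeper.
# 'get /brokers/ids/<id>' (using an id from the list above) would then show the
# advertised host and port the broker actually registered.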
Thanks
Anbu
Labels:
- Apache Kafka
09-23-2019
09:36 PM
Hello,
While installing Kafka through the Cloudera Manager Add Service wizard, what needs to be populated in the following Kafka configuration fields?
Destination Broker List (bootstrap.servers)
Source Broker List (source.bootstrap.servers)
Note:
host name = quickstart.cloudera
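In case it helps anyone answering: going by the single broker on this host, my working guess for both fields (only a sketch, not yet confirmed) would be:

Destination Broker List (bootstrap.servers) = quickstart.cloudera:9092
Source Broker List (source.bootstrap.servers) = quickstart.cloudera:9092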
Thanks
Anbu
Labels:
- Apache Kafka
- Cloudera Manager
09-20-2019
08:48 AM
Hi sir,
I have a question on installing Kafka in the Cloudera quickstart VM.
1) I installed the Kafka parcel, then distributed and activated it on the single-node cluster in the quickstart VM.
2) In Cloudera Manager, after selecting Kafka and clicking Continue, the service setup shows the two text boxes below asking me to populate them. What needs to be populated?
Please help me with what I need to enter in the two fields below before starting the Kafka service.
Destination Broker List (bootstrap.servers)
Source Broker List (source.bootstrap.servers)
Note:
The system hostname is quickstart.cloudera.
In the Kafka service in Cloudera Manager, TCP Port ---> Kafka Broker Default Group is 9092.
In the Kafka service in Cloudera Manager, TLS/SSL Port ---> Kafka Broker Default Group is 9093.
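Once the service starts, a quick way to verify the broker is actually listening on the plaintext port (a sketch; run from the VM shell, using the port from the note above):

nc -vz quickstart.cloudera 9092
# A "succeeded" result means the broker is up on 9092; a refused connection
# would mean the service did not start or is bound to a different listener.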
Thanks
Anbu
Labels:
- Apache Kafka
- Cloudera Manager
07-17-2019
10:30 AM
Hi All,
I have a Sqoop export to a Postgres DB with the following data.csv:
1,2,abc,\N,xyz
2,3,aaa,\N,xz,
3,3,bbb,\N,\N
Currently all the null columns in my CSV data contain "\N", in both string and non-string columns. While exporting the data to Postgres, how can I get this "\N" handled as NULL? Please help me with this issue. (A sketch of what I am considering follows below.)
Thanks
Anbu
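P.S. What I am considering trying, based on Sqoop's null-handling export options (only a sketch; the connection string, user, table name, and export directory are placeholders for my real ones):

sqoop export \
  --connect jdbc:postgresql://dbhost:5432/mydb \
  --username dbuser -P \
  --table mytable \
  --export-dir /user/cloudera/data \
  --input-fields-terminated-by ',' \
  --input-null-string '\\N' \
  --input-null-non-string '\\N'
# --input-null-string / --input-null-non-string tell Sqoop to interpret the
# given token ("\N" here) as SQL NULL for string and non-string columns.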
Labels:
- Apache Sqoop
08-05-2018
10:19 AM
1 Kudo
Hi All,
Could someone please help me with how to install the Apache Airflow parcel in the Cloudera VM? Our project is moving towards Airflow orchestration.
Is there any way we can do this in the Cloudera quickstart VM 5.13? Please share your thoughts on this.
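If no parcel turns out to be available, the fallback I am considering is installing Airflow directly inside the VM with pip, outside Cloudera Manager (only a sketch; untested on the quickstart image, and Airflow may require a newer Python than the VM ships):

pip install apache-airflow
airflow initdb              # initialize Airflow's metadata DB (Airflow 1.x command)
airflow webserver -p 8080   # then browse to the host on port 8080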
Thanks
Anbu
Labels:
- Manual Installation
- Quickstart VM
06-01-2018
11:04 PM
Hello Pal,
Could you please share any docs/study materials on "Developer Training for Spark and Hadoop" or "Cloudera Data Analyst"? It would help me understand the data analyst and developer basics.
I would be grateful to you.
Thanks
Anbu
06-01-2018
12:15 PM
Hello sir,
I'm using CDH 5.13. I'm testing the following Sqoop import command to import a table into HDFS, but I'm getting an error. Kindly help me with this error; I'm struggling to fix it.
[cloudera@quickstart ~]$ hadoop fs -ls /
Found 3 items
drwxr-xr-x - hbase supergroup 0 2018-06-01 10:33 /hbase
drwxr-xr-x - cloudera supergroup 0 2018-06-01 10:53 /sqoop-import
drwxrwx--- - mapred supergroup 0 2018-05-31 22:06 /tmp
[cloudera@quickstart ~]$
[cloudera@quickstart ~]$ sqoop import --connect jdbc:mysql://quickstart.cloudera:3306/retail_db --username root --password cloudera --table order_items --warehouse-dir /user/cloudera/sqoop-import/retail_db
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/06/01 12:12:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
18/06/01 12:12:07 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/06/01 12:12:09 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/06/01 12:12:09 INFO tool.CodeGenTool: Beginning code generation
18/06/01 12:12:12 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `order_items` AS t LIMIT 1
18/06/01 12:12:12 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `order_items` AS t LIMIT 1
18/06/01 12:12:12 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/c07954209ddd4b6006811a567a01dc89/order_items.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/06/01 12:12:25 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/c07954209ddd4b6006811a567a01dc89/order_items.jar
18/06/01 12:12:25 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/06/01 12:12:25 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/06/01 12:12:25 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/06/01 12:12:25 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/06/01 12:12:25 INFO mapreduce.ImportJobBase: Beginning import of order_items
18/06/01 12:12:25 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
18/06/01 12:12:28 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/06/01 12:12:32 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/06/01 12:12:33 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/06/01 12:12:34 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/cloudera/.staging/job_1527874221735_0008
18/06/01 12:12:34 WARN security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=EXECUTE, inode="/tmp":mapred:supergroup:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:201)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:154)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3770)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3753)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:3718)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6690)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1892)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1872)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:657)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setPermission(AuthorizationProviderProxyClientProtocol.java:177)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:465)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)
18/06/01 12:12:34 ERROR tool.ImportTool: Import failed: org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=EXECUTE, inode="/tmp":mapred:supergroup:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:201)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:154)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3770)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3753)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:3718)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6690)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1892)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1872)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:657)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setPermission(AuthorizationProviderProxyClientProtocol.java:177)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:465)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2487)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1440)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1436)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1436)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:616)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:203)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:176)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:273)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:127)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:513)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:621)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=cloudera, access=EXECUTE, inode="/tmp":mapred:supergroup:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:201)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:154)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3770)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3753)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:3718)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6690)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1892)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1872)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:657)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setPermission(AuthorizationProviderProxyClientProtocol.java:177)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:465)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)
    at org.apache.hadoop.ipc.Client.call(Client.java:1504)
    at org.apache.hadoop.ipc.Client.call(Client.java:1441)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy17.setPermission(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:365)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:260)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2485)
    ... 28 more
[cloudera@quickstart ~]$
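From the trace, the job fails because user 'cloudera' lacks EXECUTE on HDFS /tmp (owned by mapred:supergroup, mode drwxrwx---). The commonly suggested fix is to restore the world-writable sticky-bit permissions on /tmp as the HDFS superuser (a sketch; adjust if your cluster locks down /tmp deliberately):

sudo -u hdfs hadoop fs -chmod 1777 /tmp
hadoop fs -ls /    # /tmp should now show drwxrwxrwt; then retry the import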
Labels:
- Apache Sqoop
05-03-2018
11:06 AM
Hello sir, thanks for your reply. I have tried your advice but I am still not able to hear any sound. Please find the screenshot below: the input shows nothing, and the output shows "Dummy Output (Stereo)". I have unchecked mute, but I still cannot hear any sound.
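One more data point that might help diagnose this (a sketch; assumes PulseAudio, which CentOS 6 uses): if PulseAudio only sees a dummy sink, the guest has no sound card at all, which would point to the hypervisor's audio controller being disabled rather than a mute/volume problem.

pactl list short sinks
# If the only sink listed is the auto_null/dummy one, enable the audio
# controller in the VirtualBox/VMware settings for the VM (while it is
# powered off) and reboot the guest.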
05-02-2018
08:28 PM
Hi All, could someone help me troubleshoot the audio issue in Cloudera VM 5.13 (CentOS 6.5)? Please help me with this issue; I need audio to watch tutorial videos. Thanks
Labels:
- Manual Installation