Member since: 04-18-2017
Posts: 39
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1083 | 03-06-2018 05:40 PM |
06-27-2018
10:24 AM
Hi Team, I have a table called 'test' whose owner is 'ben'. I need to change the ownership of this table from 'ben' to 'sam', but I don't have login access to 'sam'. How can I achieve this? Is there any way or command to change the ownership? Regards, Mathivanan
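For reference, a minimal sketch of the direct route, assuming a Hive release that supports changing a table owner via DDL (Hive 3.0+); on older releases the owner generally has to be updated in the metastore by an admin instead. The JDBC URL is a placeholder:
beeline -u "jdbc:hive2://hiveserver2-host:10000" \
  -e "ALTER TABLE test SET OWNER USER sam;"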
... View more
Labels:
- Apache Hive
06-08-2018
09:43 AM
Hi all, I need to commission a node in my secured cluster; I'm using MIT Kerberos in my environment. Once the node is added, I need to make that newly added node Kerberized as well. Kindly let me know the steps to follow or any links to refer to. Regards, Mathivanan
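For reference, a minimal sketch of the manual route on an MIT KDC (principal names, realm and keytab paths are placeholders); if the cluster uses Ambari's automated Kerberos setup, adding the host through Ambari normally creates and distributes these keytabs for you:
kadmin -p admin/admin -q "addprinc -randkey dn/newnode.example.com@EXAMPLE.COM"
kadmin -p admin/admin -q "xst -k /etc/security/keytabs/dn.service.keytab dn/newnode.example.com@EXAMPLE.COM"
# copy the keytab(s) to the new node, then fix ownership and permissions, e.g.:
# chown hdfs:hadoop /etc/security/keytabs/dn.service.keytab && chmod 400 /etc/security/keytabs/dn.service.keytab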
... View more
Labels:
- Apache Hadoop
04-24-2018
10:09 AM
Hi All, I need to replicate the Hive Metastore to another cluster, i.e. a DR cluster. Is there any tool or functionality to achieve this scenario?
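For reference, a minimal sketch assuming a MySQL-backed metastore database named 'hive' (database name and hosts are placeholders); the warehouse data itself would still need to be copied separately, e.g. with DistCp:
mysqldump -u hive -p hive > hive_metastore_backup.sql    # on the source metastore DB host
mysql -u hive -p hive < hive_metastore_backup.sql        # restore on the DR metastore DB host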
... View more
Labels:
- Apache Hive
04-02-2018
10:49 AM
Hi all, is there any Sqoop streaming mechanism to check the connectivity between two servers, i.e. the source server and the target server? If so, please explain, and do share any docs you have to refer to.
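For what it's worth, a quick (non-streaming) probe is to run a trivial query through sqoop eval against the source database; the JDBC URL and credentials below are placeholders:
sqoop eval \
  --connect "jdbc:mysql://source-db-host:3306/testdb" \
  --username dbuser -P \
  --query "SELECT 1"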
... View more
Labels:
- Apache Sqoop
04-02-2018
09:26 AM
Hi All, I need to check the connection validity between Sqoop and SAP HANA for several nodes using a shell script. Can anyone guide me on how to write this shell script for connectivity validation? Thank you.
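A minimal sketch of such a script, assuming Sqoop and the SAP HANA JDBC driver (ngdbc.jar) are available on every node; hostnames, port and credentials are placeholders:
#!/bin/bash
# Loop over the nodes and run a trivial query against SAP HANA via sqoop eval.
NODES="node1 node2 node3"
for node in $NODES; do
  echo "Checking Sqoop -> SAP HANA connectivity from $node ..."
  ssh "$node" "sqoop eval \
      --connect 'jdbc:sap://hana-host:30015/' \
      --driver com.sap.db.jdbc.Driver \
      --username SAPUSER --password 'secret' \
      --query 'SELECT 1 FROM DUMMY'" \
    && echo "$node: OK" || echo "$node: FAILED"
done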
... View more
Labels:
- Apache Hadoop
- Apache Sqoop
03-06-2018
05:40 PM
@Aymen Rahal Hi, when registering the host, go with Manual Installation, then edit /etc/ambari-agent/conf/ambari-agent.ini and change hostname= to the Ambari server name. Then click Retry Failed.
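For reference, the relevant section of the agent config looks like this (the FQDN is a placeholder):
# /etc/ambari-agent/conf/ambari-agent.ini
[server]
hostname=ambari-server.example.com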
... View more
02-09-2018
04:40 AM
@Josh Elser Hi, I have set up Kerberos to authenticate the HBase REST API. Below are the steps I followed: I configured Kerberos in our HDP environment using the automated method, and I made some configuration changes in hbase-site.xml and core-site.xml to secure the REST API, following "https://community.hortonworks.com/articles/91425/howto-start-and-test-hbase-rest-server-in-a-kerber.html" and "https://developer.ibm.com/hadoop/2016/05/12/hbase-rest-gateway-security/". A kinit from my local machine is accepted, but when accessing the endpoint over HTTP I get an "Authentication required" 401 error. I have made the configuration changes in Firefox about:config and followed this link for HTTP authentication:
"https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/_configuring_http_authentication_for_HDFS_YARN_MapReduce2_HBase_Oozie_Falcon_and_Storm.html" Please find the attached screenshot for reference, and please guide me on how to proceed further. Regards, Mathivanan
... View more
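One way to take the browser out of the picture is to test SPNEGO from the command line after a fresh kinit; the host and port below are placeholders for the REST server and its hbase.rest.port setting:
kinit user@EXAMPLE.COM
curl --negotiate -u : -i "http://rest-server-host:8080/version/cluster"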
Labels:
- Apache HBase
12-14-2017
06:16 AM
Hi All, I have upgraded from HDP 2.4.3 to 2.6.1. My Hive jobs are running, but I cannot get the full log information for those jobs, i.e. the Map and Reduce logs are missing and I can only see the query execution time, the application job id, and so on. The logs show "log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender", and I get the same warning while executing the hive command as well. I have tried setting maxFileSize to 256 in the HDFS log4j configuration, the YARN log4j configuration, and the Hive log4j configuration, but the warning remains. Please guide me on how to get rid of this warning and bring back all my log information. Regards, Mathivanan
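For what it's worth, the warning just means org.apache.log4j.DailyRollingFileAppender has no maxFileSize property; a sketch of one workaround in hive-log4j.properties is to either remove that setting or switch the appender to size-based rolling (the appender name DRFA and the values below are assumptions, so match them to your file):
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.MaxFileSize=256MB
log4j.appender.DRFA.MaxBackupIndex=10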
... View more
Labels:
11-01-2017
10:56 AM
Hi Geoffrey Shelton Okot, as you said, I have changed the setting in /etc/hosts, but you mentioned 192.166.66.66 as your local host, so what do I do in my case? Highlighted is my failed region server node; kindly suggest the IP which needs to be added for it. PFA. Regards, Mathivanan
... View more
11-01-2017
09:51 AM
@Geoffrey Shelton Okot As you mentioned, all the permissions under /var/lib/hadoop-hdfs/ are correct by default, and in HDFS --> Configs --> Advanced hdfs-site the parameter dfs.domain.socket.path is pointing to /var/lib/hadoop-hdfs/dn_socket. Are there any other settings I need to change, or any other way to fix it? Regards, Mathivanan
... View more
11-01-2017
06:31 AM
new-text-document.txt Hi All, my HBase region server is going down due to a "Connection refused" error. The log ends with the error "Connection refused when trying to connect to '/var/lib/hadoop-hdfs/dn_socket'"; please find the attached logs for reference. What kind of error is this, and how do I resolve it? Kindly suggest. Regards, Mathivanan
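Two quick checks on the affected host, assuming the default short-circuit-read socket path from the error message:
ls -l /var/lib/hadoop-hdfs/dn_socket   # the socket file is created by the local DataNode
ps -ef | grep -i datanode              # the DataNode must be running on this host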
... View more
Labels:
- Apache HBase
10-25-2017
09:34 AM
Hi All, we are planning to upgrade our Hadoop cluster, and before that we need to upgrade the OS from RHEL 6.8 to RHEL 7.x. Will this OS-level upgrade have any impact at the HDP and Ambari level, or will upgrading the OS lead to re-clustering of Hadoop? Kindly suggest the better way to upgrade the OS and the Hadoop cluster, and whether the OS upgrade impacts the Hadoop cluster. Regards, Mathi
... View more
Labels:
- Apache Hadoop
10-25-2017
09:21 AM
Hi All, I'm planning to upgrade Ambari from 2.2.2 to 2.5.1 and HDP from 2.4.3 to 2.6.1. Is it recommended to upgrade directly like this, or is it better to go through the intermediate versions, for example Ambari 2.2.2 to 2.3.x to 2.4.x, and HDP 2.4.x to 2.5.x to 2.6.x? Since I'm planning this upgrade in a production environment, I need this clarification. Kindly suggest. Regards, Mathi
... View more
Labels:
10-20-2017
09:15 AM
Hi All, I have a requirement to retrieve a particular rowkey from HBase tables and store it in HDFS as a backup. Is there any option to achieve this scenario? I tried scan 'TABLENAME',{FILTER =>"(PrefixFilter ('ROWKEY'))"} for retrieving it, but I don't know how to store that rowkey information in HDFS. What is the best way to do this?
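A minimal sketch, assuming the filtered scan output itself is what needs to land in HDFS (table name, filter value and paths are placeholders): run the scan non-interactively, capture it to a local file, then push the file to HDFS.
echo "scan 'TABLENAME', {FILTER => \"PrefixFilter('ROWKEY')\"}" | hbase shell > /tmp/rowkey_backup.txt
hdfs dfs -put /tmp/rowkey_backup.txt /backup/hbase/rowkey_backup.txt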
... View more
Labels:
- Apache Hadoop
- Apache HBase
08-08-2017
11:56 AM
Hi Ankit, all my DataNodes are healthy and there is no error logged for this on HDFS. Is there any other suggestion? Regards, Mathi
... View more
08-07-2017
06:32 AM
hbase.txt Hi Team, my HBase regions are failing frequently. We are running a 7-node cluster in which we have 5 region servers, and with the same error one or another of my regions goes down on a regular basis. Please find the attached log. Kindly suggest. Regards, Mathi
... View more
- Tags:
- Data Processing
- HBase
Labels:
- Apache HBase
07-14-2017
10:30 AM
Hi msumbul, yes, I checked and my table exists. I also tried to read the table from the list, but it failed with the error mentioned below. I have also entered my host details in /etc/hosts, and importantly my Kafka system is on a separate host.
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Jul 13 23:32:28 CDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68643: row 'Tenants,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=aps-hadoop7,16020,1497499074661, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:203)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:821)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableAvailable(ConnectionManager.java:992)
at org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:1486)
at org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:1494)
at example$.main(example.scala:45)
at example.main(example.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorI
Regards, Mathi
... View more
07-13-2017
11:50 AM
Hi all, I'm using Kafka, Spark (Scala) and HBase. JSON data is fed into Kafka and streamed via Spark using Scala code, but I am not able to write it into HBase tables. I have attached the logs for reference.
17/07/13 05:57:00 ERROR AsyncProcess: Failed to get region location
org.apache.hadoop.hbase.TableNotFoundException: sample
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1264)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:158)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:120)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12$anonfun$apply$5.apply$mcV$sp(PairRDDFunctions.scala:1131)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1359)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1131)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1102)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
17/07/13 05:57:00 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
org.apache.hadoop.hbase.client.RetriesExhaustedWi
Kindly suggest.
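Since the trace reports TableNotFoundException for 'sample', a quick sanity check from the HBase shell on the cluster confirms whether that exact table (in the expected namespace) really exists:
echo "list" | hbase shell
echo "exists 'sample'" | hbase shell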
... View more
Labels:
- Apache HBase
- Apache Kafka
- Apache Spark
07-07-2017
02:30 PM
Hi, in my case I need to get a feed of JSON that is then streamed via Spark (Scala), so I need to know how to convert JSON data into a case class using Scala. For example: {"Id":"1","Name":"Babu","Address":"dsfjskkjfs"} to (1,Babu,dsfjskkjfs)
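A minimal sketch using json4s (which ships with Spark), assuming the field names from the sample JSON; the case class and object names are illustrative:
import org.json4s._
import org.json4s.jackson.JsonMethods._

// Field names mirror the keys in the sample JSON.
case class Person(Id: String, Name: String, Address: String)

object JsonToCaseClass {
  implicit val formats: Formats = DefaultFormats

  def main(args: Array[String]): Unit = {
    val json = """{"Id":"1","Name":"Babu","Address":"dsfjskkjfs"}"""
    val person = parse(json).extract[Person]   // Person(1,Babu,dsfjskkjfs)
    println(person)
  }
}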
... View more
Labels:
- Apache Spark
07-05-2017
11:29 AM
Hi Sindhu, setting the properties hive.metastore.try.direct.sql=false and hive.metastore.try.direct.sql.ddl=false does not work. It failed with the error I have attached below, so I rolled back those configs. Is there any other suggestion for dropping Hive tables? I'm using MySQL 5.6.33 and Hive 1.2.1.2.4. hive-error.txt
... View more
07-02-2017
01:39 PM
I cannot drop Hive tables; if I try, it brings the Hive server down, and only after restarting the Hive server am I able to access the Hive shell again. The same issue is posted in the link below, but no suggestion was given there. https://community.hortonworks.com/questions/23690/dropping-hive-table-crashes-hiveserver2.html Kindly guide.
... View more
Labels:
- Apache Hive
06-15-2017
10:53 AM
Hi Team,
In my use case we are going to implement the Apache Phoenix Kafka plugin. I have gone through the site describing how to deploy this scenario (https://phoenix.apache.org/kafka.html), but when trying the command below, I need to know what should be specified for phoenix-kafka-<version>-minimal.jar in order to run it.
Command :-
HADOOP_CLASSPATH=$(hbase classpath):/path/to/hbase/conf hadoop jar phoenix-kafka-<version>-minimal.jar org.apache.phoenix.kafka.consumer.PhoenixConsumerTool --file /data/kafka-consumer.properties
Please suggest what should be specified for this jar: is it a path, or do I need to install the jar? If so, please provide a link to the jar file.
... View more
Labels:
- Apache Kafka
- Apache Phoenix
06-14-2017
04:56 AM
hmas.txt regionserver.txt Hi nshelke, I have set the properties below as you mentioned, but the HBase Master and region server are still going down; there is also a backup process running behind this, and it too fails.
dfs.client.block.write.replace-datanode-on-failure.policy=ALWAYS
dfs.client.block.write.replace-datanode-on-failure.best-effort=true
Please find the attached logs of the HBase Master and region server. Kindly suggest.
... View more
05-26-2017
10:15 AM
I am trying to set up HBase HA; I set it up by adding an HBase Master through the service actions. After that, I brought my active master down for testing and my standby HBase master became active. When I brought the downed master back up, both masters went to the standby state. I then removed one standby HBase master, but the remaining HBase master stays in the standby state. HDP version is 2.4.2 and Ambari version is 2.2.2.0. How do I make it the active HBase master? Kindly suggest.
... View more
- Tags:
- Data Processing
- HBase
Labels:
- Apache HBase
05-25-2017
06:49 AM
Hi all, I'm moving HDFS data to the local file system for backup purposes. The total size is 779 GB, and when the copy reaches 769 GB it fails with a warning. I used copyToLocal to transfer this data. I googled and found that we can set the properties below; kindly suggest whether this is recommended to avoid the problem, or kindly give a solution to fix this error.
<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>3000000</value>
</property>
<property>
<name>dfs.client.socket-timeout</name>
<value>3000000</value>
</property>
####### LOG ##########
[hdfs@aps-hadoop5 FullBackup]$ hdfs dfs -copyToLocal hdfs://aps-hadoop2:8020/backup/hbase/FullBackup/20170523 /backup/hbase/FullBackup
17/05/24 20:31:53 WARN hdfs.BlockReaderFactory: BlockReaderFactory(fileName=/backup/hbase/FullBackup/20170523/Recipients/part-m-00000, block=BP-1810172115-hadoop2-1478343078462:blk_1080766518_7050974): I/O error requesting file descriptors. Disabling domain socket DomainSocket(fd=359,path=/var/lib/hadoop-hdfs/dn_socket)
java.net.SocketTimeoutException: read(2) error: Resource temporarily unavailable
at org.apache.hadoop.net.unix.DomainSocket.readArray0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.access$000(DomainSocket.java:45)
at org.apache.hadoop.net.unix.DomainSocket$DomainInputStream.read(DomainSocket.java:532)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291)
at org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:539)
at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:488)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:784)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:718)
at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:467)
at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:392)
at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:329)
at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:264)
at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:249)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:292)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:292)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:244)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:221)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:350)
17/05/24 20:31:53 WARN shortcircuit.ShortCircuitCache: ShortCircuitCache(0x62ec095e): failed to load 1080766518_BP-1810172115-hadoop2-1478343078462
... View more
Labels:
- Apache Hadoop
05-24-2017
09:48 AM
All of my DataNodes are healthy and have enough space. My replication factor is the default of 3. Will setting dfs.client.block.write.replace-datanode-on-failure.best-effort=true not result in data loss? Kindly suggest.
... View more
05-24-2017
09:47 AM
@nshelke All of my DataNodes are healthy and have enough space. My replication factor is the default of 3. Will setting dfs.client.block.write.replace-datanode-on-failure.best-effort=true not result in data loss? Kindly suggest.
... View more
05-24-2017
06:50 AM
1 Kudo
hbase-error.txt We have a 7-node cluster, in which 5 nodes run region servers. On a daily basis one or another of the region server nodes goes down, and I'm getting the same error from the respective nodes. Please find the attached log.
... View more
Labels:
- Apache HBase
05-05-2017
05:58 AM
Hi nshelke, yes, I have checked; everything is healthy and there are no DataNode failures, but I did find missing replicas.
... View more
05-05-2017
05:55 AM
Hi Josh, I have 5 DataNodes, all are healthy, and there are no DataNode volume failures. Is there any other alternative way to fix this issue? If you look at the log you will find repeated ERROR and FATAL messages:
ERROR [RS_CLOSE_REGION-aps-hadoop5:16020-0] regionserver.HRegion: Memstore size is 147136.
FATAL [regionserver/aps-hadoop5/1..1..1..:16020.logRoller] regionserver.HRegionServer: ABORTING region server aps-hadoop5,16020,1493618413009: Failed log close in log roller.
Will this impact anything? Kindly suggest.
... View more