Member since: 06-06-2016
Posts: 185
Kudos Received: 12
Solutions: 2

My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 450 | 07-20-2016 07:47 AM
 | 447 | 07-12-2016 12:59 PM
11-06-2016
03:15 AM
Thank you @jss. I could not find the dfs.namenode.http-address property either. I have shared a screenshot of my hdfs-site configuration; please help me locate the property. advanced-hdfs.png
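For what it's worth, the effective value of that property can usually be read from the command line with hdfs getconf, regardless of where Ambari displays it; a minimal sketch (run on any node with the HDFS client configuration installed):

# Print the effective NameNode HTTP address from the loaded HDFS configuration
hdfs getconf -confKey dfs.namenode.http-address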
11-06-2016
03:02 AM
Sorry @Artem Ervits, I am still unable to find the webhdfs.url property in hdfs-site.xml under the HDFS service. I have attached screenshots of the HDFS service configuration; please see hdfs-site1.png, hdfs-site2.png, hdfs-site3.png, hdfs-site4.png.
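One hedged note: webhdfs.url is typically not an hdfs-site.xml property at all; in Ambari it is usually a setting on the Hive View instance itself (Ambari admin > Views > Hive View > instance configuration). A sketch of the value format, assuming a non-HA NameNode on a hypothetical host namenode-host:

# Hive View instance setting (not hdfs-site.xml); host name is an assumption
webhdfs.url=webhdfs://namenode-host:50070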
11-05-2016
05:54 PM
Thank you so much @Artem Ervits. Can you help me find the webhdfs.url property? Where can I locate it?
11-05-2016
02:45 PM
I am new to HDInsight Hadoop clusters. When I open Ambari -> Hive View -> Query and execute any Hive query (show tables, etc.), I get the error: java.net.UnknownHostException: namenode. Note: I am able to run the same Hive queries in the Hive CLI. HDP 2.7, Hive 1.2. (Screenshot attached: hd-hive.png)
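Since the exception is an UnknownHostException, a quick hedged check is whether the literal hostname "namenode" actually resolves from the Ambari server host; a minimal sketch:

# Check whether the hostname in the error resolves on this machine
getent hosts namenode || echo "namenode does not resolve; check /etc/hosts or the Hive View webhdfs.url setting"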
10-21-2016
01:52 PM
Thank you @Constantin Stanca. I did it with the steps below and it works well:
# I used distcp to copy the table from Prod to Dev (but the table metadata is not visible in the Dev cluster).
# I created the same table schema in Dev that we had created in Prod.
Then I got the table with its data. Is this process sound, or will I face data-loss problems?
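A minimal sketch of those two steps, under stated assumptions (hostnames and table names are hypothetical; distcp copies only files, so the schema must be recreated for the metastore to see the copied data):

# Step 1: copy the table's warehouse directory from Prod to Dev
hadoop distcp hdfs://prod-nn:8020/apps/hive/warehouse/db.db/mytable \
              hdfs://dev-nn:8020/apps/hive/warehouse/db.db/mytable

# Step 2: capture the DDL on Prod, then replay it on Dev so the metastore registers the table
hive -e "SHOW CREATE TABLE db.mytable" > mytable_ddl.sql   # run on Prod
hive -f mytable_ddl.sql                                    # run on Dev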
10-20-2016
07:24 AM
Thank you so much @grajagopal. Can you please expand step 2 with an example? "2) You can just distcp the /user/hive/warehouse from PROD to DEV and generate the 'create table' DDL statement from Hive, change the NN info on the table, and recreate them in DEV. You also need to generate the manual ALTER TABLE ADD PARTITION statements to get the partitions recognized."
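A hedged illustration of what that step usually looks like in practice (all names, paths, and the partition column are hypothetical):

# Generate the DDL on PROD, then replay it on DEV after fixing any NameNode URIs it contains
hive -e "SHOW CREATE TABLE db.mytable" > mytable_ddl.sql   # on PROD
# edit mytable_ddl.sql: replace hdfs://prod-nn:8020 with hdfs://dev-nn:8020 if a LOCATION is present
hive -f mytable_ddl.sql                                    # on DEV

# Register the copied partitions on DEV, either one by one...
hive -e "ALTER TABLE db.mytable ADD PARTITION (day='2016-06-17')"
# ...or, if the directory layout follows Hive's partition naming, in one pass
hive -e "MSCK REPAIR TABLE db.mytable"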
10-19-2016
02:00 PM
1 Kudo
Hi all, I have a 4 TB table in the Prod cluster and I want to move it to Dev using distcp, but I have little space for the export in Prod, so I want to split the table into chunks. Can anyone help me here? I have tried the following:
export table tablename where count(customer_id) > 100000 to 'hdfs_exports_location';
export table tablename having count(customer_id) > 100000 to 'hdfs_exports_location';
export table db.tablename partition (count(sname)) > "2") to 'apps/hive/warehouse';
Finally I also tried this, but it is not working either:
export table db.tablename partition (count(sname)) = "2") to 'apps/hive/warehouse';
None of these worked; please advise.
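For reference, EXPORT TABLE does not accept WHERE/HAVING clauses or expressions; it only takes an optional static partition spec. A minimal sketch of exporting one partition at a time to keep each chunk small, assuming the table is partitioned (the partition column, value, and paths below are hypothetical):

# Export a single partition per run so each export stays small
hive -e "EXPORT TABLE db.tablename PARTITION (day='2016-06-17') TO '/tmp/exports/day_2016-06-17'"

# Copy that chunk to Dev, then delete it from Prod before exporting the next one
hadoop distcp hdfs://prod-nn:8020/tmp/exports/day_2016-06-17 \
              hdfs://dev-nn:8020/tmp/exports/day_2016-06-17
hadoop fs -rm -r /tmp/exports/day_2016-06-17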
10-13-2016
06:20 AM
@Ayub Pathan Both clusters are using the same version, HDP 2.1.2. I forgot to mention the port, but it is 8020 on both clusters.
export table db_c720_dcm.network_matchtables_act_ad to 'apps/hive/warehouse/sankar7_dir';
and I can see sankar7_dir at /user/hdfs/apps/hive/warehouse/sankar7_dir in the source cluster.
hadoop distcp hdfs://xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir hdfs://yy.yy.yy.yy:8020/apps/hive/warehouse/sankar7_dir
16/10/13 01:01:05 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[hdfs:///xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir], targetPath=hdfs://yy.yy.yy.yy:8020/apps/hive/warehouse/sankar7_dir}
16/10/13 01:01:05 INFO client.RMProxy: Connecting to ResourceManager at stlts8711/39.0.8.13:8050
16/10/13 01:01:06 ERROR tools.DistCp: Invalid input:
org.apache.hadoop.tools.CopyListing$InvalidInputException: hdfs:///xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir doesn't exist
at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:84)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:80)
at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:327)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:151)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:375)
hdfs:///xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir doesn't exist
I see this error when running distcp without creating sankar7_dir first, but I did export the table to that directory:
export table db_c720_dcm.network_matchtables_act_ad to 'apps/hive/warehouse/sankar7_dir';
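Two things stand out in the log, both hedged observations: the source URI has three slashes (hdfs:///xx.xx.xx.xx:8020/...), which makes the address part of the path instead of the URI authority, and the export path 'apps/hive/warehouse/sankar7_dir' is relative, so the export landed under /user/hdfs. A minimal corrected sketch:

# Use two slashes after hdfs: and point at where the export actually landed (/user/hdfs/...)
hadoop distcp hdfs://xx.xx.xx.xx:8020/user/hdfs/apps/hive/warehouse/sankar7_dir \
              hdfs://yy.yy.yy.yy:8020/user/hdfs/apps/hive/warehouse/sankar7_dir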
10-13-2016
05:42 AM
@Ayub Pathan No, I can't see this directory. May I know the reason for this? Please help me get out of this issue.
10-13-2016
04:15 AM
Thank you so much @Ayub Pathan. I have the following in the user directory:
hdfs@HADOOP:/root> hadoop fs -ls /user/hdfs/apps/hive/warehouse/sankar5_dir
Found 2 items
-rw-r--r--   3 hdfs hdfs       1882 2016-10-12 17:34 /user/hdfs/apps/hive/warehouse/sankar5_dir/_metadata
drwxr-xr-x   - hdfs hdfs          0 2016-10-12 17:34 /user/hdfs/apps/hive/warehouse/sankar5_dir/data
I am able to import in the source cluster, but I cannot import in the destination cluster after distcp.
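For reference, the layout above (a _metadata file plus a data/ subdirectory) is exactly what IMPORT expects, so the import on the destination should point at the copied directory itself; a minimal sketch, run on the destination cluster (the table name is hypothetical):

# Import the copied export directory into the destination metastore
hive -e "IMPORT TABLE sankar5_table FROM '/user/hdfs/apps/hive/warehouse/sankar5_dir'"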
10-13-2016
03:16 AM
Hi, I want to migrate some Hive tables from the Prod cluster to the Dev cluster, so I am doing the following:
# export the Hive table to a temp directory
# distcp the temp directory to a temp directory in the target cluster
# import the temp directory into the Hive database
#01 hdfs@HADOOP:/root> hadoop fs -mkdir /apps/hive/warehouse/sankar5_dir
#02 export table db_c720_dcm.network_matchtables_act_creative to 'apps/hive/warehouse/sankar5_dir';
#03 hadoop distcp hdfs://xx.xx.xx.xx:8020/apps/hive/warehouse/sankar5_dir hdfs://xx.xx.xx.xx//apps/hive/warehouse/sankar5_dir
I get FAILED: SemanticException [Error 10027]: Invalid path at step 3. I can import in the source cluster, but after distcp I cannot import in the destination cluster.
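One hedged observation: the export path in step #02 is relative ('apps/hive/warehouse/sankar5_dir'), so the export lands under /user/hdfs rather than in the directory created in step #01, and the distcp target URI is missing its port. A minimal corrected sketch of the three steps (hostnames hypothetical):

# 1: export to an absolute path on the source cluster
hive -e "EXPORT TABLE db_c720_dcm.network_matchtables_act_creative TO '/apps/hive/warehouse/sankar5_dir'"
# 2: copy the export directory across, with explicit ports on both URIs
hadoop distcp hdfs://src-nn:8020/apps/hive/warehouse/sankar5_dir \
              hdfs://dst-nn:8020/apps/hive/warehouse/sankar5_dir
# 3: import on the destination cluster
hive -e "IMPORT FROM '/apps/hive/warehouse/sankar5_dir'"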
10-12-2016
01:07 PM
Hi, I am trying to copy Hive data from one cluster to another using the distcp command:
hadoop distcp hdfs://xx.xx.xx.xx:8020/apps/hive/warehouse/db_database.db/cars hdfs://xx.xx.xx.xx:8020/apps/hive/warehouse/db_database.db
The table data is migrated and I can see it in the Hue file browser, but I cannot see the table in Hive.
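A hedged explanation: distcp copies only the HDFS files; the destination Hive metastore still has no entry for the table, so it will not appear in Hive until the schema is registered there. A minimal sketch:

# On the source cluster, capture the table's DDL
hive -e "SHOW CREATE TABLE db_database.cars" > cars_ddl.sql
# On the destination cluster, replay it so the metastore sees the copied files
hive -f cars_ddl.sql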
10-05-2016
10:04 AM
Thank you for the reply @Predrag Minovic. Yes, I checked that both commands work:
hdfs dfs -ls hdfs://172.Y.Y.Y:8020/user
hdfs dfs -put <afile> hdfs://172.Y.Y.Y:8020/tmp
Yes, 172.Y.Y.Y is the active NN.
I think we have a firewall, but it is disabled. I don't know the exact situation regarding "distcp requires connection between both NNs, and among all DNs on both clusters."
10-05-2016
05:47 AM
Thank you @Predrag Minovic. Yes, I have tried different files/directories, but with the same result. At the end of the command I get the following:
16/10/05 01:11:29 INFO mapreduce.Job: map 0% reduce 0%
16/10/05 01:11:38 INFO mapreduce.Job: map 50% reduce 0%
16/10/05 01:11:40 INFO mapreduce.Job: map 100% reduce 0%
16/10/05 01:11:40 INFO mapreduce.Job: Job job_1441104026398_46847 failed with state FAILED due to: Task failed task_1441104026398_46847_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/10/05 01:11:40 INFO mapreduce.Job: Counters: 12
    Job Counters
        Failed map tasks=7
        Killed map tasks=1
        Launched map tasks=8
        Other local map tasks=8
        Total time spent by all maps in occupied slots (ms)=80261
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=80261
        Total vcore-seconds taken by all map tasks=80261
        Total megabyte-seconds taken by all map tasks=82187264
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
16/10/05 01:11:40 ERROR tools.DistCp: Exception encountered
java.io.IOException: DistCp failure: Job job_1441104026398_46847 has failed: Task failed task_1441104026398_46847_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:166)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:375)
I have attached the log file: newdistcplog.txt
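Since the driver output only says a task failed, the aggregated task logs are usually the next place to look; a hedged sketch, using the application id derived from the job id above (and assuming YARN log aggregation is enabled):

# Fetch the aggregated container logs for the failed DistCp job
yarn logs -applicationId application_1441104026398_46847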
10-04-2016
07:00 AM
@Kuldeep Kulkarni Any update on my query, sir?
10-04-2016
06:27 AM
Thank you for the quick response @Sagar Shimpi. Can you please explain what a "full stack trace" is?
10-04-2016
06:19 AM
1 Kudo
I just want to migrate some Hive data from Prod to Dev. I am using HDP 2.1.2 and running the command below on the destination NN (Dev). I don't have any security on either cluster. I have NN HA on Prod; Dev has only a single NN (no HA).
hdfs@HADOOP:/root> hadoop distcp hdfs://172.X.X.X.:8020/apps/hive/warehouse/testdb1.db/abitest1 hdfs://Y.Y.Y:8020/apps/hive/warehouse/sankardb.db
Now I am trying with the active NN in Prod and it almost completes, but at the end it shows an error:
hdfs@HADOOP:/root> hadoop distcp hdfs://172.X.X.X.:8020/apps/hive/warehouse/testdb1.db/abitest1 hdfs://Y.Y.Y:8020/apps/hive/warehouse/sankardb.db
16/10/03 07:50:07 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[hdfs://172.21.12.21:8020/apps/hive/warehouse/testdb1.db/abitest1], targetPath=hdfs://172.21.11.24:8020/apps/hive/warehouse/sankardb.db}
16/10/03 07:50:08 INFO client.RMProxy: Connecting to ResourceManager at stlts8711/39.0.8.13:8050
16/10/03 07:50:08 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
16/10/03 07:50:08 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
16/10/03 07:50:09 INFO client.RMProxy: Connecting to ResourceManager at stlts8711/39.0.8.13:8050
16/10/03 07:50:09 INFO mapreduce.JobSubmitter: number of splits:1
16/10/03 07:50:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1441104026398_46496
16/10/03 07:50:09 INFO impl.YarnClientImpl: Submitted application application_1441104026398_46496
16/10/03 07:50:09 INFO mapreduce.Job: The url to track the job: http://HADOOP:0000/proxy/application_1441104028_46496/
16/10/03 07:50:09 INFO tools.DistCp: DistCp job-id: job_1441104026398_46496
16/10/03 07:50:09 INFO mapreduce.Job: Running job: job_1441104026398_46496
16/10/03 07:50:15 INFO mapreduce.Job: Job job_1441104026398_46496 running in uber mode : false
16/10/03 07:50:15 INFO mapreduce.Job: map 0% reduce 0%
16/10/03 07:50:26 INFO mapreduce.Job: Task Id : attempt_1441104026398_46496_m_000000_0, Status : FAILED
Error: java.io.IOException: File copy failed: hdfs://172.X.X.X:8020/apps/hive/warehouse/testdb1.db/abitest1/file1.txt --> hdfs://172.Y.Y.Y:8020/apps/hive/warehouse/sankardb.db/abitest1/file1.txt
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://172.X.X.X:8020/apps/hive/warehouse/testdb1.db/abitest1/file1.txt to hdfs://172.Y.Y.Y:8020/apps/hive/warehouse/sankardb.db/abitest1/file1.txt
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
    ... 10 more
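A hedged reading of this failure: the job submits fine (so the NameNodes talk to each other), but the individual file copy fails, which often means the DistCp map tasks running on this cluster's worker nodes cannot reach the remote cluster's DataNodes. A minimal connectivity probe (hostname hypothetical; 50010 is the default DataNode data-transfer port in Hadoop 2):

# From a worker node on the cluster running distcp, probe a remote DataNode's transfer port
nc -zv remote-datanode-host 50010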
10-03-2016
01:23 PM
Yes, I am using the same version on both: HDP 2.1.2. Now I am trying with the active NN in Prod and it almost completes, but at the end it shows an error:
hdfs@HADOOP:/root> hadoop distcp hdfs://172.X.X.X.:8020/apps/hive/warehouse/testdb1.db/abitest1 hdfs://Y.Y.Y:8020/apps/hive/warehouse/sankardb.db
16/10/03 07:50:07 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[hdfs://172.21.12.21:8020/apps/hive/warehouse/testdb1.db/abitest1], targetPath=hdfs://172.21.11.24:8020/apps/hive/warehouse/sankardb.db}
16/10/03 07:50:08 INFO client.RMProxy: Connecting to ResourceManager at stlts8711/39.0.8.13:8050
16/10/03 07:50:08 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
16/10/03 07:50:08 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
16/10/03 07:50:09 INFO client.RMProxy: Connecting to ResourceManager at stlts8711/39.0.8.13:8050
16/10/03 07:50:09 INFO mapreduce.JobSubmitter: number of splits:1
16/10/03 07:50:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1441104026398_46496
16/10/03 07:50:09 INFO impl.YarnClientImpl: Submitted application application_1441104026398_46496
16/10/03 07:50:09 INFO mapreduce.Job: The url to track the job: http://HADOOP:0000/proxy/application_1441104028_46496/
16/10/03 07:50:09 INFO tools.DistCp: DistCp job-id: job_1441104026398_46496
16/10/03 07:50:09 INFO mapreduce.Job: Running job: job_1441104026398_46496
16/10/03 07:50:15 INFO mapreduce.Job: Job job_1441104026398_46496 running in uber mode : false
16/10/03 07:50:15 INFO mapreduce.Job: map 0% reduce 0%
16/10/03 07:50:26 INFO mapreduce.Job: Task Id : attempt_1441104026398_46496_m_000000_0, Status : FAILED
Error: java.io.IOException: File copy failed: hdfs://172.X.X.X:8020/apps/hive/warehouse/testdb1.db/abitest1/file1.txt --> hdfs://172.Y.Y.Y:8020/apps/hive/warehouse/sankardb.db/abitest1/file1.txt
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://172.X.X.X:8020/apps/hive/warehouse/testdb1.db/abitest1/file1.txt to hdfs://172.Y.Y.Y:8020/apps/hive/warehouse/sankardb.db/abitest1/file1.txt
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
... 10 more
10-03-2016
09:41 AM
Thank you @Kuldeep Kulkarni. # I am not using any Kerberos security on either cluster, both running HDP 2.1.2. I get the error below:
hadoop distcp hdfs://172.x.x.x:8020/apps/hive/warehouse/testdb1.db/abitest1 hdfs://172.y.y.y:8020/apps/hive/warehouse/sankardb.db
16/10/03 04:35:22 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[hdfs://172.X.X.X:8020/apps/hive/warehouse/testdb1.db/abitest1], targetPath=hdfs://172.Y.Y.Y:8020/apps/hive/warehouse/sankardb.db}
16/10/03 04:35:23 INFO client.RMProxy: Connecting to ResourceManager at stlpr8712/39.6.64.3:8050
16/10/03 04:35:23 ERROR tools.DistCp: Exception encountered
java.net.ConnectException: Call From SAMHADOOP1-230-8/39.6.64.8 to stlpr8711.corp.sami-musih.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:80)
at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:327)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:151)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:375)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 26 more
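Since this is a plain "Connection refused" on port 8020, a hedged first check is whether the NameNode RPC port named in the error is reachable from the node running distcp; a minimal sketch (substitute the host from the ConnectException message):

# Test reachability of the remote NameNode RPC port from the distcp client
nc -zv namenode-host 8020   # hypothetical hostname; use the one from the error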
10-03-2016
09:18 AM
Hi, I want to move some Hive data from the Prod cluster to Dev. In Prod I have 4 data nodes, 1 edge node, and 2 name nodes; in the Dev cluster I have 1 name node and 3 data nodes. I tried the following on the destination cluster (the Dev name node), <source cluster> to <destination cluster>:
hadoop distcp hdfs://172.x:x:x:50070/apps/hive/warehouse/testdb1.db/abitest1 hdfs://172.y:y:y:50070/apps/hive/warehouse/mydb.db
172.x:x:x : Prod edge node
172.y:y:y : Dev name node
Error: "connection refused". Please advise.
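Two hedged observations about that command: 50070 is the NameNode's HTTP (web UI) port, while hdfs:// URIs need the RPC port (8020 by default), and the source URI should point at a NameNode rather than an edge node. A minimal corrected sketch (hostnames hypothetical):

# Use the NameNode hosts and the RPC port (8020) in both hdfs:// URIs
hadoop distcp hdfs://prod-namenode:8020/apps/hive/warehouse/testdb1.db/abitest1 \
              hdfs://dev-namenode:8020/apps/hive/warehouse/mydb.db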
09-28-2016
10:55 AM
Hi, I am using HDP 2.1.2. I have 3 data nodes in the Dev cluster, and one of the disks in a data node failed, so I want to replace it. I received the instructions below from the team:
1. Stop the Hadoop services on the node through Ambari (stop all services).
2. On the node, run "hcli support configureDisk".
3. You will be required to type "yes" at the question about configuring the disk.
4. Restart the Hadoop services through Ambari.
But while doing step 2 I got this error:
$ hcli support configureDisk
Error: Unable to connect to Ambari on stlts8711:8081: HTTP Error 403: Bad credentials
Can you please help with this?
09-23-2016
09:20 AM
Thanks @Mats Johansson. I have another basic question: what are these directories? Each of my nodes has 12 directories. Can I increase this, and how are the logs distributed among these 4 nodes / 12 directories?
09-23-2016
08:07 AM
I have container logs configured as below:
yarn.nodemanager.log-dirs: /data1/hadoop/yarn/log,/data2/hadoop/yarn/log,/data3/hadoop/yarn/log,/data4/hadoop/yarn/log,/data5/hadoop/yarn/log,/data6/hadoop/yarn/log,/data7/hadoop/yarn/log,/data8/hadoop/yarn/log,/data9/hadoop/yarn/log,/data10/hadoop/yarn/log,/data11/hadoop/yarn/log,/data12/hadoop/yarn/log
The /data9/hadoop/yarn/log file system on one of the data nodes is full, and all the logs there are older than 1 year. Can I delete these logs?
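If the applications that produced them are long finished, deleting year-old container logs from a local log dir is generally safe; a hedged sketch for clearing them out (run as a user with rights on the directory, and review the dry run first):

# Dry run: list container-log entries not modified in the last 365 days
find /data9/hadoop/yarn/log -mindepth 1 -maxdepth 1 -mtime +365
# Then remove them
find /data9/hadoop/yarn/log -mindepth 1 -maxdepth 1 -mtime +365 -exec rm -rf {} +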
09-16-2016
09:48 AM
I am using HDP 2.1.2. I can't see this property; I have attached a screenshot: hiveheap.png
09-16-2016
06:02 AM
Thank you so much @Balkrishna Yadav. But how can I find this property? I am looking in Ambari > Hive > Advanced properties but cannot find it. Please advise.
09-14-2016
10:05 AM
Hi, I am using HDP 2.1.2 with Hive (Beeswax) on Hue. When I try to run simple queries like "show databases", they take a long time and then say "time out". I can run the same query in the Hive CLI. The error log shows:
INFO tez.TezSessionState: User of session id baacb86a-e54d-4fae-8c16-fff9834d4d8a is y919122
16/08/02 01:11:03 INFO tez.DagUtils: Jar dir is null/directory doesn't exist. Choosing HIVE_INSTALL_DIR - hdfs:/user/y919122/.hiveJars
16/08/02 01:12:52 ERROR thrift.ProcessFunction: Internal error processing query
java.lang.OutOfMemoryError: Java heap space
I checked beeswax_server.sh, trying to find the HADOOP_HEAPSIZE property, but I could not find it. Please help; this has been pending for a long time, and your help would be highly appreciated.
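One hedged note: HADOOP_HEAPSIZE is usually an environment variable honored by the Hadoop launch scripts rather than a line already present in beeswax_server.sh, so not finding it in the script is expected. A sketch of raising it, where the value and placement are assumptions to adapt to your install:

# Export a larger heap (in MB) in the environment that launches Beeswax
export HADOOP_HEAPSIZE=2048   # hypothetical value; adjust to available memory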
09-06-2016
04:45 PM
1 Kudo
Thank you @Constantin Stanca. Actually, I am new to the HDP distribution. A few points:
# We loaded the data into Hive as textfile (uncompressed format).
# Can you expand on this answer: "I would start moving files out of that folder (HDFS) in reverse chronological order and repeat the query until successful"?
# Here are my query and the existing Hive file system (issue4.png):
select count(*)
from db_c720_krux.events as a
where site_name like ('Ka2XfElb')
and day = '2016-06-17';
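A hedged reading of that suggestion: move the most recently added files out of the partition directory a batch at a time, re-running the count after each move, until the query succeeds; whatever was moved last contains the problem file. A minimal sketch (paths and file name hypothetical):

# Quarantine one suspect file out of the partition, then retry the query
hadoop fs -mkdir -p /tmp/quarantine
hadoop fs -mv /apps/hive/warehouse/db_c720_krux.db/events/day=2016-06-17/somefile /tmp/quarantine/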
09-02-2016
02:15 PM
1 Kudo
When I trigger a simple select count(*) query on "database.table", it throws the error "Invalid distance too far back." I have attached the error log (6854-errorlog.png); please help me, this is a very high-priority error for me, and your help would be really appreciated. Here is some additional info, in case it is useful:
mapreduce.am.max-attempts = 2
yarn.resourcemanager.am.max-attempts = 2
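"Invalid distance too far back" is a zlib/gzip decompression error, which usually points at a corrupt compressed input file rather than a Hive bug; a hedged way to find the bad file is to test each compressed file's integrity outside Hive (the warehouse path below is hypothetical):

# Stream each gzip file through gzip's integrity test and report failures
for f in $(hadoop fs -ls /apps/hive/warehouse/database.db/table/ | awk '{print $NF}' | grep '\.gz$'); do
  hadoop fs -cat "$f" | gzip -t 2>/dev/null || echo "corrupt: $f"
done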
09-01-2016
12:13 PM
@mqureshi Am I in luck to get a new answer?