Member since: 06-06-2016
Posts: 185
Kudos Received: 12
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1044 | 07-20-2016 07:47 AM
 | 968 | 07-12-2016 12:59 PM
09-10-2018
11:51 AM
@Naga Yamini Polepeddi I am not sure what happens here, but I resolved my issue by adding an if/else check: if the partition count is zero, the last file is loaded again automatically. I know this is not a proper solution, but I have enabled it temporarily while I keep working on the export.
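For illustration, a minimal sketch of the kind of guard I mean, assuming the load is driven from a shell script; the table name and load_last_file.sh are placeholders:

```bash
#!/usr/bin/env bash
# Hypothetical names: my_db.my_table and ./load_last_file.sh are placeholders.
TABLE="my_db.my_table"

# Count the partitions Hive can currently see for the table.
PART_COUNT=$(hive -e "SHOW PARTITIONS ${TABLE};" 2>/dev/null | wc -l)

if [ "${PART_COUNT}" -eq 0 ]; then
  # Temporary workaround only: reload the last file when no partitions are visible.
  echo "No partitions visible for ${TABLE}; reloading the last file."
  ./load_last_file.sh
fi
```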
07-14-2018
06:07 AM
I am trying to remove the Hive external data in blob storage using the command below. The data does get deleted, but I am getting an error. Can you help me understand what the error means and how to resolve it?
hdfs dfs -rm -R wasbs://data@datadev.blob.core.windows.net/backup_dwh/
-rm: Fatal internal error
java.lang.NullPointerException
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.execute(NativeAzureFileSystem.java:448)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:2707)
at org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1340)
at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:166)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:153)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:118)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:356)
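Since the NullPointerException is thrown while the shell tries to move the folder to the trash (TrashPolicyDefault.moveToTrash), a workaround that is sometimes suggested is to bypass the trash entirely; note this deletes the data permanently, so only use it if you do not need the trash safety net:

```bash
# Bypass the trash move that throws the NPE; this deletes the data permanently.
hdfs dfs -rm -r -skipTrash wasbs://data@datadev.blob.core.windows.net/backup_dwh/
```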
06-19-2018
07:26 PM
Thanks @Vinicius Higa Murakami. On my cluster hive.exec.max.dynamic.partitions = 5000. I have checked hivemetastore.log but could not find any locks in the Hive log. No, I am not running MSCK REPAIR TABLE after my batch load. This is the second time in a row I am facing this issue; I resolved it by reloading the files, and once I reloaded them all partitions were added properly, but I am worried about it recurring. One point I forgot to mention: after a file is loaded, the process checks whether the partitions already exist; if they do, it drops the old ones and adds the new partitions. But here the complete set of partitions is not visible. Example: I have partitions based on key and month, with keys A, B, C and months 01, 02, 03. If I load a new file with key=A and month=03, it drops only key=A/month=03 and adds the new partition, but after the process completes I cannot see data for the entire key=A across all months in the Hive table. If I reload the file some time later, I can see the whole data set. I also have the error message below in hivemetastore.log; could this be causing the issue, or be related to it? IOException Scope named api_alter_partitions is not closed, cannot be opened. java.io.IOException: Scope named api_alter_partitions is not closed, cannot be opened.
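For reference, a minimal sketch of a post-load check, assuming the Hive CLI is available; my_table is a placeholder and the key/month values follow the example above:

```bash
# my_table is a placeholder; key/month follow the example above.
hive -e "SHOW PARTITIONS my_table;" | grep 'key=A'    # partitions the metastore knows about
hive -e "SELECT \`key\`, \`month\`, COUNT(*)
         FROM my_table
         WHERE \`key\` = 'A'
         GROUP BY \`key\`, \`month\`;"                # rows that are actually queryable
```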
06-18-2018
02:04 PM
Hi @Paul Hernandez, I am not triggering MSCK REPAIR TABLE <tablename>. I hope that may work here and I will try it, but my quick question is: we have never used MSCK REPAIR until now, so why would we need it only now? I am using an external table on Azure blob storage.
06-18-2018
09:09 AM
Hi Team, I am facing this issue repeatedly. I am currently working with Hive tables and having a problem with partitions. We have a script that drops partitions if they exist, based on dynamic values, and adds new partitions as new data arrives. I uploaded two files with the same partitions but different data, about one hour apart. Both files were processed successfully and the logs show the partitions being dropped and added properly, but when I checked the table, the data was not there. Some time later I reloaded the same files I had uploaded last, the partitions were placed properly, and I could see the data in Hive. Please help us resolve this issue. What might be the root cause? We are facing it repeatedly.
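For illustration, a minimal sketch of the drop/re-add pattern with IF EXISTS / IF NOT EXISTS guards, assuming explicit partition management; the table name and location are placeholders:

```bash
# my_table and the LOCATION path are placeholders for the real table/layout.
hive -e "
  ALTER TABLE my_table DROP IF EXISTS PARTITION (key='A', month='03');
  ALTER TABLE my_table ADD IF NOT EXISTS PARTITION (key='A', month='03')
    LOCATION 'wasbs://data@datadev.blob.core.windows.net/my_table/key=A/month=03';
"
```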
04-09-2018
12:03 PM
Hi Team, I am using HDI 3.4. Today I am facing a Hive metastore issue and am unable to open the Hive CLI. When I start the metastore manually it does not start, and it gives the error below:
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType azuredb -userName fsfewrwffsfsfgwerwregdfgw -passWord [PROTECTED] -verbose' returned 1.
WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL: jdbc:sqlserver://zbbprodsqlserver.database.secure.windows.net;database=zbbprodHiveOozie;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=300
Metastore Connection Driver : com.microsoft.sqlserver.jdbc.SQLServerDriver
Metastore connection User: fsfewrwffsfsfgwerwregdfgw
Exception in thread "main" java.lang.ExceptionInInitializerError
at javax.crypto.KeyAgreement.getInstance(KeyAgreement.java:179)
at sun.security.ssl.JsseJce.getKeyAgreement(JsseJce.java:287)
at sun.security.ssl.JsseJce.isEcAvailable(JsseJce.java:199)
at sun.security.ssl.CipherSuite$KeyExchange.isAvailable(CipherSuite.java:378)
at sun.security.ssl.CipherSuite.isAvailable(CipherSuite.java:194)
at sun.security.ssl.SSLContextImpl.getApplicableCipherSuiteList(SSLContextImpl.java:340)
at sun.security.ssl.SSLContextImpl.getDefaultCipherSuiteList(SSLContextImpl.java:298)
at sun.security.ssl.SSLSocketImpl.init(SSLSocketImpl.java:593)
at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:557)
at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:109)
at com.microsoft.sqlserver.jdbc.TDSChannel.enableSSL(IOBuffer.java:1616)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1401)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1068)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:904)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:451)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:1014)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:77)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:121)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:169)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:272)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:258)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:508)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.SecurityException: Can not initialize cryptographic mechanism
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:94)
... 30 more
Caused by: java.lang.SecurityException: Cannot locate policy or framework files!
at javax.crypto.JceSecurity.setupJurisdictionPolicies(JceSecurity.java:317)
at javax.crypto.JceSecurity.access$000(JceSecurity.java:50)
at javax.crypto.JceSecurity$1.run(JceSecurity.java:86)
at java.security.AccessController.doPrivileged(Native Method)
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:83)
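The root cause at the bottom of the trace is "Cannot locate policy or framework files!", which usually points at missing JCE policy jars or a wrong JAVA_HOME. A quick check, assuming a standard Oracle/OpenJDK 7 or 8 layout:

```bash
# Check which JDK the metastore uses and that the JCE policy jars are present.
echo "$JAVA_HOME"
ls -l "$JAVA_HOME"/jre/lib/security/local_policy.jar \
      "$JAVA_HOME"/jre/lib/security/US_export_policy.jar
```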
03-01-2018
08:29 AM
Thanks @djaiswal for your response. I could not find any logs for the query; it failed at the initial stage, so I don't think it produced a job/application ID. When I execute it from the Ambari Hive View I get the error below, and I have attached the complete error message as meta-execption.txt:
org.apache.ambari.view.hive.client.HiveInvalidQueryException: Error while compiling statement: FAILED: SemanticException MetaException(message:Exception thrown when executing query) [ERROR_STATUS]
at org.apache.ambari.view.hive.client.Utils.verifySuccess(Utils.java:46)
at org.apache.ambari.view.hive.client.Connection.execute(Connection.java:614)
at org.apache.ambari.view.hive.client.Connection.executeAsync(Connection.java:625)
at org.apache.ambari.view.hive.resources.jobs.ConnectionController.executeQuery(ConnectionController.java:67)
at org.apache.ambari.view.hive.resources.jobs.viewJobs.JobControllerImpl.submit(JobControllerImpl.java:109)
at org.apache.ambari.view.hive.resources.jobs.JobService.create(JobService.java:414)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at
02-07-2018
09:01 AM
Hi, I have data for the years 2015 to 2018, and I am using a Hive query with the condition below, but I am getting a MetaException.
Query: select ct_code,date_code from dim_table where substr(date_code,1,4)>=2015 and substr(date_code,1,4)<=2018;
FAILED: SemanticException MetaException(message:Exception thrown when executing query)
If I select without a WHERE condition (select * from dim_table) it works fine, and it also works when I split the query into parts by year:
select ct_code,date_code from dim_table where substr(date_code,1,4)>=2015 and substr(date_code,1,4)<=2017;
select ct_code,date_code from dim_table where substr(date_code,1,4)=2018;
But with only where substr(date_code,1,4)>=2015 it does not work either. Can you please help me understand why I am getting this issue?
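One thing that may be worth trying, assuming date_code is a string column: make the year comparison explicitly numeric (or compare against string literals) so the predicate does not mix string and integer types. A sketch:

```bash
# Make the year comparison explicitly numeric instead of mixing types.
hive -e "
  SELECT ct_code, date_code
  FROM dim_table
  WHERE CAST(SUBSTR(date_code, 1, 4) AS INT) BETWEEN 2015 AND 2018;
"
```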
08-01-2017
01:53 PM
Hi team, I have an HQL query for data insertion: INSERT OVERWRITE TABLE ${hiveconf:TB_MASTER}. I am aware that hiveconf:TB_MASTER is a variable, but I do not know where to find the value assigned to TB_MASTER. Can you help me with the possible ways to find the value?
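For context, a hiveconf variable gets its value from whatever invokes the script, so TB_MASTER is defined by the wrapper (shell script, Oozie action, etc.) that runs the HQL. A minimal sketch, with placeholder script and table names:

```bash
# The wrapper that launches the HQL supplies the value; names are placeholders.
hive --hiveconf TB_MASTER=my_db.master_table -f insert_master.hql

# To see what value a session is using, print the variable from within it:
hive --hiveconf TB_MASTER=my_db.master_table -e "set hiveconf:TB_MASTER;"
```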
Labels: Apache Hive
07-06-2017
06:52 PM
Thanks @Sindhu. It works fine with hive --hiveconf hive.execution.engine=mr, but why should we use this instead of plain hive, which I had been using from the start?
07-06-2017
05:59 PM
Hi Team, I am using HDP 2.4 on HDInsight 3.6 (Azure) with Hive 1.2.1.2.4. I am unable to connect to the Hive CLI through PuTTY, although I can connect to the Hive View in Ambari. The CLI gets stuck at the stage below:
ssh-hadoop:~$ hive
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.4.2.0-258/0/hive-log4j.properties
Can you please help me get past this issue as soon as possible?
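One way to see where the CLI is hanging, assuming a standard Hive CLI, is to start it with console logging turned up:

```bash
# Start the CLI with verbose console logging to see where startup stalls
# (often the metastore connection or the Tez/YARN session setup).
hive --hiveconf hive.root.logger=DEBUG,console
```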
06-13-2017
12:21 PM
Thank you @Sagar Morakhia. I have tried the above query but no luck; it has still been running for 4 hours.
06-10-2017
10:45 AM
Hi @Sagar Morakhia, currently no other jobs are running except this one. It gets stuck at 97.22%, and when I kill the job and rerun it, it gets stuck at the same position with the last reducer running forever. Everything up to the creation of, and insert into, dropme_master_5 is fine, but the insert into dropme_master_6 gets stuck. Below is the step where the query hangs:
insert into table dropme_master_6 select * from dropme_master_5 where dropme_master_5.consumer_sequence_id not in (select consumer_sequence_id from dropme_master_6);
While executing this step it shows the warning:
Warning: Map Join MAPJOIN[30][bigTable=dropme_master_5] in task 'Map 4' is a cross product
When I split the query and ran the parts separately, they returned results very quickly:
select count(consumer_sequence_id) from dropme_master_6;  -- result: 10352059
select count(*) from dropme_master_5;  -- result: 21287539
I added the parameters below to the job, but it did not help:
set hive.execution.engine=mr;
set hive.vectorized.execution.enabled=true;
set hive.vectorized.execution.reduce.enabled=true;
set hive.tez.container.size=10240;
set hive.tez.java.opts=-Xmx9216m;
set mapreduce.map.memory.mb=8192;
set mapreduce.map.java.opts=-Xmx7372m;
set mapreduce.reduce.memory.mb=9216;
set mapreduce.reduce.java.opts=-Xmx8294m;
set yarn.scheduler.minimum-allocation-mb=1024;
set yarn.scheduler.maximum-allocation-mb=11264;
set hive.cbo.enable=true;
Attachments: cluster-status.png, job-status.png
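Since the warning points at the NOT IN subquery being planned as a cross product, a common rewrite is a left outer join that keeps only the unmatched rows (note the semantics differ from NOT IN if consumer_sequence_id can be NULL). A sketch using the table names above:

```bash
hive -e "
  INSERT INTO TABLE dropme_master_6
  SELECT a.*
  FROM dropme_master_5 a
  LEFT OUTER JOIN dropme_master_6 b
    ON a.consumer_sequence_id = b.consumer_sequence_id
  WHERE b.consumer_sequence_id IS NULL;
"
```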
06-08-2017
07:46 PM
Hi team, I am using Hive with Tez on HDP and running a Hive query. It reached 99.27% and is now stuck in the reducer phase; screenshots attached (hive-log.png, job-status.png, mem-space.png). The same job finished in 30 minutes in its last run. Please suggest why this happens and how to resolve it. Memory on the Linux host (MB):
             total     used    free  shared  buffers   cached
Mem:        258041   254721    3320       0     3449   237158
-/+ buffers/cache:    14114  243927
Swap:        49151      660   48491
Please suggest.
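To see what the last reducer is doing, it may help to pull the application ID and its logs from YARN (the application ID below is a placeholder; yarn logs needs log aggregation enabled):

```bash
# List running applications, then pull logs for the stuck one
# (application ID is a placeholder; yarn logs requires log aggregation).
yarn application -list -appStates RUNNING
yarn logs -applicationId application_1498000000000_0001
```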
Labels: Apache Hadoop, Apache Hive, Apache Tez
04-10-2017
12:27 PM
Thank you so much @mqureshi. I could not find mapreduce.task.files.preserve.failedtasks; I am using MRv2 on HDP 2.1.3 and currently I do not have any running jobs.
04-10-2017
12:05 PM
I have noticed that the /user/username/.staging directory in HDFS has grown to 4 TB and contains old files, for example (dated 2016-02-09):
/user/userprod/.staging/job_1443521267046_99999/job.jar (15160634 bytes)
/user/userprod/.staging/job_1443521267046_99999/job.split
/user/userprod/.staging/job_1443521267046_99999/job.splitme
/user/userprod/.staging/job_1443521267046_99999/job.xml
/user/userprod/.staging/job_1443521267046_99999/libjars
/user/userprod/.staging/job_1443521267046_99999/tez-conf.pb
/user/userprod/.staging/job_1443521267046_99999/tez-dag.pb
/user/userprod/.staging/job_1443521267046_99999/tez.session
Can I remove this data?
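Staging sub-directories of jobs that are no longer running can generally be removed; a cautious sketch using the job ID from the listing above:

```bash
# Confirm the job is no longer running before touching its staging directory.
mapred job -status job_1443521267046_99999

# Remove the leftover staging data, bypassing the trash so the space is freed.
hdfs dfs -rm -r -skipTrash /user/userprod/.staging/job_1443521267046_99999
```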
Tags: HDFS, hdp-2.3.2, Hive
03-22-2017
04:31 AM
@Jay SenSharma Thanks, it's working fine; the warning messages are no longer appearing.
03-22-2017
03:51 AM
Hi, I want to install Ambari 2.4 on Ubuntu 14.2 on EC2. I got a warning that transparent_hugepage is enabled, so I did the following steps:
sudo vi /etc/default/grub
(added GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=never")
sudo update-grub
sudo reboot
cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
But no luck, I still get the same warning message. Please help me get rid of this warning.
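Since /sys/kernel/mm/transparent_hugepage/enabled already shows [never], it may also be worth disabling the defrag setting and re-running the Ambari host checks; THP can be switched off at runtime without touching GRUB (re-apply it at boot, e.g. from /etc/rc.local):

```bash
# Disable THP at runtime (run as root), then verify the value shown in brackets.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
cat /sys/kernel/mm/transparent_hugepage/enabled
```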
03-14-2017
02:16 PM
Thank you so much @Jay SenSharma. What steps should I follow after restarting Ambari? Can you suggest any document that would help me step by step?
03-14-2017
02:10 PM
Thanks @Jay SenSharma. Yes, I do not find a files-2.2.2.xxx.jar:
-rw-r--r-- 1 root root 571873 Jul 12 2016 ambari-admin-2.2.1.12.4.jar
-rw-r--r-- 1 root root 45379969 Jul 12 2016 capacity-scheduler-2.2.1.12.4.jar
-rw-r--r-- 1 root root 98659271 Jul 12 2016 hive-2.2.1.12.4.jar
-rw-r--r-- 1 root root 46912006 Jul 12 2016 pig-2.2.1.12.4.jar
-rw-r--r-- 1 root root 52241298 Jul 12 2016 slider-2.2.1.12.4.jar
-rw-r--r-- 1 root root 48027227 Jul 12 2016 tez-view-2.2.1.12.4.jar
drwxr-xr-x 8 root root 4096 Aug 5 2016 work
How can I add the jar file here?
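A sketch of how a missing view jar is usually added, assuming the directory listed above is /var/lib/ambari-server/resources/views and that you can obtain a Files View jar matching this Ambari version (the jar path below is a placeholder):

```bash
# Placeholder: path to a Files View jar matching this Ambari version.
FILES_VIEW_JAR=/tmp/files-view.jar

# Drop it into the views directory and restart Ambari so the server
# extracts and registers the view.
cp "$FILES_VIEW_JAR" /var/lib/ambari-server/resources/views/
ambari-server restart
```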
03-14-2017
01:59 PM
Thanks for the quick response @Ravi Mutyala. But my concern is that I cannot see any Files View instance to expand. How do I enable this?
Browse to the Ambari Administration interface. Click Views, expand the Files View, and click Create Instance.
03-14-2017
01:27 PM
Hi, I am using HDP 2.4 and Ambari 2.2 and want to create a Files View in Ambari, but when I go to Manage Ambari --> Views I cannot see any "Files" instance. How can I create a Files View here? Please help me.
Labels: Apache Ambari
03-14-2017
04:48 AM
Thanks @Artem Ervits. /var/log/hive has large log files, so how can we search for the timestamp of the service failure? Is there any keyword or error text I should grep for?
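A sketch of the kind of grep that usually finds the failure window, assuming the standard HDP log names under /var/log/hive:

```bash
# Standard HDP log names assumed; narrow down to the failure window first.
grep -inE "error|exception|shutting down|shutdown" /var/log/hive/hiveserver2.log | tail -n 50
grep -inE "error|exception|shutting down|shutdown" /var/log/hive/hivemetastore.log | tail -n 50
```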
03-13-2017
05:22 PM
Hi, I am using HDP 2.4. Yesterday I noticed that the Hive service went down, and after some time it was up and running again. How can I find the logs for the Hive service failure/downtime?
Tags: Hadoop Core, hdp-2.4.0, Hive
03-01-2017
02:42 PM
Hi, I am new to HDInsight clusters. I want to test connectivity to the Hive database using the Microsoft ODBC driver. I have been given the details below:
Data source name: ABC HIVE
Host: current namenode hostname
Port: 443
Database: default
Hive server type: Hive Server 2
Mechanism: Windows Azure HDInsight Service
Username:
Password:
Can you please help me with what the username and password should be here, and whether any other details need to change, such as the port number (1000)?
Labels: Apache Hive
02-06-2017
09:16 AM
@Jay SenSharma I am using the root user to install Ambari, and I am using a single system for both the Ambari server and the agent.
02-06-2017
09:13 AM
Hi @Ashnee Sharma, no, it is not resolved yet; I am stuck here. I have tried all the possibilities: 1) I deleted the .ssh directory and created it again, 2) I disabled iptables, 3) SELinux is disabled (not installed). Is there any other option to install with SSH, or any other suggestion?
02-01-2017
09:58 AM
@Ashnee Sharma I wanted to install the Ambari server and agent on a single (standalone) machine. I have tried ssh root@localhost and it works fine.
01-31-2017
07:09 PM
Hi All, I got the error message below while trying to register a host in Ambari 2.2. I have installed SSH and pasted the SSH private key into Host Registration Information, and I copied the public key into authorized_keys:
cat /root/.ssh/id_rsa.pub >> authorized_keys
chmod 700 -R /root/.ssh
chmod 600 /root/.ssh/authorized_keys
Creating target directory...
==========================
Command start time 2017-01-31 11:03:04
Permission denied (publickey,password).
SSH command execution finished
host=ubuntu, exitcode=255
Command end time 2017-01-31 11:03:04
ERROR: Bootstrap of host ubuntu fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,password).
STDOUT:
Permission denied (publickey,password).
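A few checks that usually narrow down a "Permission denied (publickey,password)" bootstrap failure; the first two run on the target host (standard sshd_config path assumed), the last one from the Ambari server:

```bash
# On the target host: permissions must be strict and owned by root.
ls -ld /root /root/.ssh /root/.ssh/authorized_keys

# sshd must allow root logins and public-key authentication.
grep -Ei '^(PermitRootLogin|PubkeyAuthentication)' /etc/ssh/sshd_config

# From the Ambari server: test the same key by hand with verbose output.
ssh -i /root/.ssh/id_rsa -v root@ubuntu 'echo ok'
```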