Member since: 06-06-2016
Posts: 185
Kudos Received: 12
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 450 | 07-20-2016 07:47 AM
 | 447 | 07-12-2016 12:59 PM
05-02-2020
09:35 AM
@ssubhas This did not work either. Can you help me out? I am unable to connect to the Hive service from PuTTY.
07-18-2018
06:43 PM
@rama, thanks for reporting the issue. This is a bug, since the FileSystem client should not surface a NullPointerException to users. Would you like to file a JIRA with Apache ( https://issues.apache.org/jira/projects/HDFS )? I am happy to help.
07-09-2018
02:00 PM
Hi Team, while deleting HDFS data in blob storage that holds Hive-partitioned data, I get the error below, although the data was deleted successfully. Can you please help us understand why I am receiving this error?
hadoop fs -rm -r wasbs://dev001data@datadev.blob.core.windows.net/prodcp/fact_tm_all/
18/07/09 13:43:34 WARN azure.AzureFileSystemThreadPoolExecutor: Disabling threads for Rename operation as thread count 0 is <= 1
18/07/09 13:47:04 INFO azure.AzureFileSystemThreadPoolExecutor: Time taken for Rename operation is: 210770 ms with threads: 0
-rm: Fatal internal error
java.lang.NullPointerException
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.execute(NativeAzureFileSystem.java:448)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:2707)
at org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1340)
at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:166)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:153)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:118)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:356)
09-10-2018
01:55 PM
Hello @rama. Hmm, that's strange. By any chance, did the ADD PARTITION and the INSERT happen within a short period of each other? I'm asking because I suspect two things:
- The partition was added but the table somehow got locked. You can check this by running SHOW LOCKS;
- Your Hive Metastore database (MySQL, Derby, etc.) may not be healthy.
So next time, try the following:
- Enable DEBUG logging for the Hive Metastore and check whether you find anything.
- Log into the metastore database and check whether your partitions were added properly.
- Log into Hive with verbose output and run SHOW LOCKS;
- Just to confirm, make sure you run msck repair table <TABLE>; after the whole process has ended.
Hope this helps!
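A minimal sketch of those checks from the shell, assuming "my_table" stands in for your actual table name:
# Check whether anything is holding a lock on the table
hive -e "SHOW LOCKS my_table;"
# Re-sync the metastore with the partitions that actually exist on storage
hive -e "MSCK REPAIR TABLE my_table;"
# Confirm which partitions the metastore now knows about
hive -e "SHOW PARTITIONS my_table;"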
04-23-2018
08:09 AM
Hi Team, I am using HDI 3.6 (HDP 2.6). I have a bash script like the one below; after execution it returns a value to the next process. Even though the script runs successfully and all of its stages complete, it returns 1, and because of this the next process (an HQL script with some business-logic steps) does not run. If I remove the return statement, the next process proceeds smoothly. This problem started after the cluster upgrade (HDI 3.4 to 3.6); the old cluster had Ubuntu 14 and the new cluster has Ubuntu 16.
rm ~/sqoop/"$TABLE"/*
rmdir ~/sqoop/"$TABLE"
return $?
Can you please help us understand what the issue is, what happens if we remove the return command, and what the impact in production would be if we remove the return statement?
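For what it's worth, here is a hedged sketch of how the same cleanup could propagate its status explicitly; $TABLE is assumed to be set earlier in your script, and exit is used rather than return because return is only valid inside a function or a sourced script:
rm ~/sqoop/"$TABLE"/*          # remove the exported files
rm_status=$?
rmdir ~/sqoop/"$TABLE"         # remove the now-empty directory
rmdir_status=$?
# fail only if either step actually failed
if [ "$rm_status" -ne 0 ] || [ "$rmdir_status" -ne 0 ]; then
  exit 1
fi
exit 0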
10-21-2018
07:10 PM
Did you install the Java JCE Unlimited Strength policy JARs?
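One quick way to check, as a hedged sketch (works on any JDK with jrunscript on the PATH); it prints 2147483647 when the unlimited-strength policy is in effect:
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"));'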
03-01-2018
09:02 AM
@rama I guess the issue here is due to an incorrect mapping between the substr return type and the value it is being compared against. I verified a similar scenario; the details are below:
hive> desc flight_details;
OK
flightnum string
tailnum string
uniquecarrier string
origin string
dest string
Time taken: 0.295 seconds, Fetched: 5 row(s)
hive> select * from flight_details where substr(tailnum,2,3)>=500 limit 10;
OK
1018 N828UA UA OAK ORD
1020 N567UA UA IAD BOS
1020 N561UA UA IAD BOS
1020 N554UA UA IAD BOS
1020 N535UA UA IAD BOS
1020 N571UA UA IAD BOS
1020 N530UA UA IAD BOS
1020 N553UA UA IAD BOS
1020 N525UA UA IAD BOS
1020 N585UA UA IAD BOS
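Since substr() returns a string, comparing it against the number 500 relies on an implicit conversion. A hedged sketch of the same query against the flight_details table above, with the cast made explicit:
hive -e "SELECT * FROM flight_details WHERE CAST(substr(tailnum, 2, 3) AS INT) >= 500 LIMIT 10;"
Note that rows where the extracted substring is not numeric become NULL after the cast and are filtered out, which makes the intended comparison explicit rather than relying on implicit type coercion.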
01-24-2018
10:41 AM
Hi, I am using a Pig script on Tez, and in the Pig script I store the output into an HDFS directory. Previously the Pig script generated output files named like part-r-00000, but now it generates files named like part-v000-o000-r-00000. Can you please help me understand the difference between these two, and why it has changed?
09-01-2017
11:49 AM
Hi Team, I am currently using HDI 3.4 clusters for PROD and DEV. I got a request to copy all of the prod Hive tables from PROD Hive to DEV, and I am using blob storage to store the data. Can you help me with how to do this with blob storage?
08-01-2017
03:38 PM
Hi @rama. Typically, the variable would be defined somewhere earlier in the script that contains the query, or in the CLI, with a SET statement. Something like:
SET TB_MASTER=table_name;
If you don't see where it was defined, then you would most likely be guessing from the total databases and tables within your system. If the system is very small, that might be feasible. You can use the "show databases" command to list the databases. If you see one that makes sense, you can issue a "use <database_name>" command and then issue a "show tables" to view the list of table names that exist within that Hive database.
Here are some links on Hive variables and how they work:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+VariableSubstitution#LanguageManualVariableSubstitution-UsingVariables
https://community.hortonworks.com/articles/60309/working-with-variables-in-hive-hive-shell-and-beel.html
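As a small hedged sketch of both styles (the table name and query file are placeholders):
# Define and use the variable inside one session
hive -e "SET hivevar:TB_MASTER=my_table; SELECT COUNT(*) FROM \${hivevar:TB_MASTER};"
# Or pass the variable in from the command line when running a script
hive --hivevar TB_MASTER=my_table -f my_query.hql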
06-13-2017
12:21 PM
Thank you @Sagar Morakhia. I have tried the above query, but no luck; it is still running after 4 hours.
04-10-2017
12:27 PM
Thank you so much @mqureshi. I could not find mapreduce.task.files.preserve.failedtasks. I am using MRv2 on HDP 2.1.3, and currently I don't have any running jobs.
03-22-2017
04:31 AM
@Jay SenSharma Thanks, it's working fine and the warning messages are no longer appearing.
03-14-2017
02:16 PM
Thank you so much @Jay SenSharma. What steps should I follow after restarting Ambari? Can you suggest a document that walks through this step by step?
03-14-2017
01:29 PM
If you use the Log Search utility, it automatically parses logs for you by severity level. If you intend to do it manually, you can search for ERROR entries.
03-13-2017
10:19 PM
@rama
Host: <HiveServer2 machine FQDN>
Port: 10000
Hive Server Type: Hive Server 2
Mechanism: User Name and Password
User Name: your username (ensure that user has access on that edge node)
Password: XXXXX
Refer: http://hortonworks.com/wp-content/uploads/2014/05/Product-Guide-HDP-2.1-v1.01.pdf
If it helps, don't forget to upvote.
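The equivalent connection from an edge node with beeline, as a hedged sketch (hostname and credentials below are placeholders):
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default" -n your_username -p your_password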
02-08-2017
10:19 AM
Please check your /etc/hosts file, as well as NTP and the permissions of the .ssh folder and all of the files under .ssh. One question: are you installing HDP as the root user?
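A hedged sketch of the checks I mean (service and file names can differ between OS versions):
# Every host's FQDN should resolve consistently
cat /etc/hosts
hostname -f
# NTP should be running and in sync on all nodes
ntpstat
# Passwordless SSH usually needs these permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub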
01-10-2017
07:27 AM
@rama
If you really want to add JavaScript to an Ambari View, then I suggest you use a custom Ambari View, which gives you various options to manage and monitor Ambari resources as well. Views are easily deployable as a JAR. https://cwiki.apache.org/confluence/display/AMBARI/Views There is a variety of views available, including examples; you can refer to them and then customize accordingly. Existing views: https://github.com/apache/ambari/tree/trunk/contrib/views Examples: https://github.com/apache/ambari/tree/trunk/ambari-views/examples
It would also be good to go through the Ambari development and build process so that your developers can rebuild Ambari with their code modifications: https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Development
01-31-2017
09:37 AM
When you enter the host in Ambari, are you entering 'localhost'?
12-23-2016
09:49 AM
Hi @Rama. I believe that Qlik caches data. You will want to check that the Qlik cache is cleared - or that your Qlik results are refreshed/rebuilt. I'm not sure of the specific way to do that... but ensure that your Qlik data is "fresh."
12-26-2016
01:43 PM
@Rajkumar Singh Thank you so much. In my case I don't want to display some data, so I dropped some records from the table by dropping partitions. Now, if I want to display those partitioned records again, is there a way to do so? Note: I can still see the records in the warehouse directory; I only dropped the partitions and haven't touched the data, so I can still browse it.
12-10-2016
02:25 PM
Please select the best answer to close out the thread
11-08-2016
08:58 AM
Hi @jss, I found a configuration difference here. In the PROD cluster, Ambari -> Views -> Hive -> Hive View -> Cluster Configuration is set to "Local Ambari Managed Cluster", and it works fine. In the DEV cluster, the same setting is set to "Custom", and there it does not work. Should I change it to "Local Ambari Managed Cluster"? If you want me to do that, do I need to restart the Ambari server after saving? Please advise.
12-20-2016
03:45 PM
This is a standard approach. You could add a data count check on source and target, just to give you peace of mind.
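For example, a hedged sketch of such a check (database, table, and paths below are placeholders):
# Row counts on source and target
hive -e "SELECT COUNT(*) FROM source_db.my_table;"
hive -e "SELECT COUNT(*) FROM target_db.my_table;"
# Or compare directory/file/byte counts of the underlying HDFS paths
hdfs dfs -count /apps/hive/warehouse/source_db.db/my_table
hdfs dfs -count /apps/hive/warehouse/target_db.db/my_table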
10-13-2016
06:20 AM
@Ayub Pathan Both clusters are on the same version, HDP 2.1.2. I forgot to mention the port, but it is 8020 on both clusters. I exported the table with:
export table db_c720_dcm.network_matchtables_act_ad to 'apps/hive/warehouse/sankar7_dir';
and I can see sankar7_dir at /user/hdfs/apps/hive/warehouse/sankar7_dir in the source cluster. Then I ran:
hadoop distcp hdfs://xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir hdfs://yy.yy.yy.yy:8020/apps/hive/warehouse/sankar7_dir
16/10/13 01:01:05 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[hdfs:///xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir], targetPath=hdfs://yy.yy.yy.yy:8020/apps/hive/warehouse/sankar7_dir}
16/10/13 01:01:05 INFO client.RMProxy: Connecting to ResourceManager at stlts8711/39.0.8.13:8050
16/10/13 01:01:06 ERROR tools.DistCp: Invalid input:
org.apache.hadoop.tools.CopyListing$InvalidInputException: hdfs:///xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir doesn't exist
at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:84)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:80)
at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:327)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:151)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:375)
hdfs:///xx.xx.xx.xx:8020/apps/hive/warehouse/sankar7_dir doesn't exist
This is the error I see when running DistCp without creating sankar7_dir first, even though I exported the table to that directory with:
export table db_c720_dcm.network_matchtables_act_ad to 'apps/hive/warehouse/sankar7_dir';
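Two things stand out in the log: the source URI is being read as hdfs:/// with three slashes, so the NameNode address is treated as part of the path rather than the authority, and the export went to a path relative to the user's home directory. A hedged sketch of the corrected command, assuming the exported data really sits under /user/hdfs as described above:
hadoop distcp \
  hdfs://xx.xx.xx.xx:8020/user/hdfs/apps/hive/warehouse/sankar7_dir \
  hdfs://yy.yy.yy.yy:8020/user/hdfs/apps/hive/warehouse/sankar7_dir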
10-12-2016
01:07 PM
Hi, I am trying to copy Hive data from one cluster to another using the distcp command:
hadoop distcp hdfs://xx.xx.xx.xx:8020/apps/hive/warehouse/db_database.db/cars hdfs://xx.xx.xx.xx:8020/apps/hive/warehouse/db_database.db
The table data is migrated and I can see it in the Hue file browser, but I cannot see the table in Hive.
10-06-2016
06:09 PM
You can try using webhdfs. hadoop distcp webhdfs://<nn>/user
10-04-2016
07:00 AM
@Kuldeep Kulkarni Any update on my query, sir?
10-24-2016
06:38 AM
@rama "hcli support configureDisk" seems to be custom command used in your company/environment. Below are generic steps to add disk - 1. Add disk to the datanode and make sure its reflected in "df -h" and "fdisk -l" 2. Login to Ambari -> Click on "Services"->HDFS->Configs 3. Over here you can see "DataNode directories". Please add with comma separated list of disk here or replace existing value. Pls find screenshot below - Or Once added - Save and Restart HDFS.