Member since: 12-14-2016
Posts: 58
Kudos Received: 1
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1369 | 04-19-2017 05:49 PM |
| | 1183 | 04-19-2017 11:43 AM |
| | 1791 | 04-19-2017 09:07 AM |
| | 2885 | 03-26-2017 04:20 PM |
| | 4551 | 02-03-2017 04:44 AM |
03-26-2017
04:12 PM
@pbarna Thanks a lot for the reply! We have an on-prem environment where the tables are external in HDFS, and they run extremely fast there. We move this table data to S3, and later we query it from the other environment I mentioned above. Once a team confirms the data, we move it from S3 to AWS Redshift. Anyway, I have found the cause of the timeout errors, which I'll post below.
03-23-2017
06:47 AM
We have external tables on AWS S3 buckets in CSV format, not compressed. When we query a table with a simple SELECT * FROM example_table LIMIT 10, or with WHERE serial = "SomeID", it takes a minimum of 30 seconds and consumes all the resources available in the ResourceManager before the final output is displayed, even though the table data is small, approximately 500 to 1000 records. There are also very large tables with 3 million records that display even faster. Also, one table with just 8331 records and 19 columns takes 5-6 minutes to complete a COUNT query: initiating alone takes 2-3 minutes, and once initiated it completes quickly. This happens only with this table! I changed the execution engine for this table to MR, which initiated quickly and completed in 80 seconds. I do not understand the Tez execution plan; if someone could help me out, it would be appreciated! We have a 3-node cluster built on EC2 with HDP 2.5.2 and Hive 1.2.100 installed, of which 2 are DataNodes. RM resources are 24 vcores and 108 GB RAM.
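For anyone who wants to reproduce this, here is a minimal sketch of the queries and the engine switch described above, using the table and column names from the post as placeholders:

```bash
# The simple queries that take 30+ seconds against the S3-backed external table
hive -e "SELECT * FROM example_table LIMIT 10;"
hive -e "SELECT * FROM example_table WHERE serial = 'SomeID';"

# Switch the execution engine from Tez to MapReduce for the session
# and rerun the count (this completed in ~80 seconds in my case)
hive -e "SET hive.execution.engine=mr; SELECT COUNT(*) FROM example_table;"
```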
Labels:
- Apache Hive
- Apache Tez
03-20-2017
10:24 AM
Thanks Namit, this worked for me in my Dev environment. Will try it with the next change on Prod too. Thanks.
03-16-2017
10:10 AM
Hi All, our HDFS audit logs have grown huge over the past few days; over the weekend, /root was completely full. Checking the largest files in /root, they were hdfs-audit.log files sized between 300 MB and 9.9 GB. When I checked file sizes from oldest to newest, the oldest were 300-500 MB, while the newest, from the last 6 days, were between 1.9 GB and 9.9 GB! Is there any particular reason for this sudden huge space consumption? PS: We have not installed Ranger, Knox, Solr, or Atlas.
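For reference, a minimal sketch of how I checked the sizes, assuming the default HDP log directory /var/log/hadoop/hdfs (your paths may differ):

```bash
# Find the biggest space consumers on the root filesystem
du -ahx / 2>/dev/null | sort -rh | head -20

# List the rotated audit logs oldest to newest to see the growth
ls -ltrh /var/log/hadoop/hdfs/hdfs-audit.log*
```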
02-03-2017
12:54 PM
Please try copying the link from the datasheet and pasting it into a new window; it works! Clicking on the link picks up only part of the first line of the URL, whereas copy and paste works!! hdpca-exam-datasheet.png
02-03-2017
04:44 AM
All, I've checked with Hortonworks regarding symlinking /usr/hdp to a different location. It was confirmed that we can symlink it to another mount point without any issues, and the same has to be done across all cluster nodes to avoid problems. Normal debugging and upgrades will work without issues going forward. Regards, Ram
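For anyone following the same route, a minimal sketch of the symlink setup, assuming a fresh node where /usr/hdp does not exist yet (the /opt/usr-hdp target name is just my choice):

```bash
# Run on every cluster node before installing HDP.
# Create the real directory on the /opt mount point.
mkdir -p /opt/usr-hdp

# Symlink /usr/hdp to it so hdp-select and the packages land on /opt
ln -s /opt/usr-hdp /usr/hdp

# Verify the link before running the install
ls -ld /usr/hdp
```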
02-02-2017
06:39 AM
Thanks for your time! I have a separate mount point at /opt. I'm going to symlink /usr/hdp to /opt and install now. Will let you know once I complete the setup.
02-02-2017
06:37 AM
Thanks for taking the time to reply to my question. What was the problem with the Ranger log directories, and what was the resolution? How about performance: will it be the same as a standard install, or might we run into issues down the line? Have you faced any such problems?
02-01-2017
02:36 PM
I wanted to install HDP 2.5 under /opt, but hdp-select usually installs under /usr/hdp. Can we symlink /usr/hdp to /opt? I want to know if there are any disadvantages to setting it up with a symlink. Due to IT policies, our Linux team is asking me to install under /opt, which is our standard procedure for any 3rd-party software, especially on production boxes. Why can't we install under /opt or any other user-specified location? Is there a particular reason?
01-27-2017
06:08 PM
You cannot cd into HDFS folders. Try creating an NFS Gateway for your HDFS; then you can cd into those directories! After creating the NFS Gateway, run this command:
mount -t nfs -o vers=3,proto=tcp,nolock <ip addr>:/ /data/hadoop/hdfs/
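To verify it afterwards, a quick sketch (same <ip addr> placeholder and mount point as above):

```bash
# Check that the gateway is exporting the HDFS root (replace <ip addr>)
showmount -e <ip addr>

# Browse HDFS through the local mount point
ls -l /data/hadoop/hdfs/
cd /data/hadoop/hdfs/
```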