Member since
01-13-2020
6 Posts
3 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2492 | 03-30-2021 01:30 AM
03-30-2021 12:22 PM
1 Kudo
Hi @Naush007 When you say remote server, do you mean a different host on the same network but in a different cluster? -copyToLocal only copies to the local filesystem of the host on which you run it; it will not copy to a different host. To copy data between clusters, use DistCp (the Distributed Copy tool). Refer to the links below for more information and usage: https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_admin_distcp_data_cluster_migrate.html - If this is not what you are looking for, please let us know exactly what needs to be achieved here. Thank you. If this helps, don't forget to click on accepted solution.
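As a sketch, a between-cluster DistCp invocation looks like the following. The NameNode URIs and paths are placeholders for illustration, not values from this thread:

```shell
# Hypothetical NameNode URIs -- replace with your own source and target clusters.
SRC="hdfs://nn1.cluster-a:8020/user/alice/data"
DST="hdfs://nn2.cluster-b:8020/user/alice/data"

# DistCp runs as a MapReduce job and copies in parallel across the cluster.
# -update copies only files that are missing or differ at the target;
# -p preserves file status (permissions, ownership, etc.).
CMD="hadoop distcp -update -p $SRC $DST"
echo "$CMD"   # shown as a dry run here; execute it on a cluster edge node
```

Run it from a node that can reach both clusters' NameNodes; for large copies you can also cap the number of map tasks with -m.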
03-30-2021 08:43 AM
@abagal / @PabitraDas Appreciate all your assistance and inputs on this. Thanks, Wert
03-30-2021 04:23 AM
1 Kudo
Hello @Amn_468 Please note that you get the block count alert after crossing the warning/critical threshold set in the HDFS configuration. It is a monitoring alert and does not impact HDFS operations as such. You can increase the monitoring threshold in CM (CM > HDFS > Configuration > DataNode Block Count Thresholds).

However, CM monitors block counts on the DataNodes to warn you when many small files are being written to HDFS: a growing block count on the DNs is an early sign of small-file accumulation. The simplest way to check whether you are hitting the small-files issue is to look at the average block size of HDFS files, which fsck reports. If it is very low (e.g. ~1 MB), you are likely hitting the small-files problem and it is worth investigating; otherwise there is no need to review the number of blocks.

$ hdfs fsck /
...
Total blocks (validated): 2899 (avg. block size 11475601 B) <<<<<

Similarly, you can get the average file size in HDFS by running:

$ hdfs dfs -ls -R / | grep -v "^d" | awk '{OFMT="%f"; sum+=$5} END {print "AVG File Size =",sum/NR/1024/1024 " MB"}'

The file size reported by Reports Manager under "HDFS Reports" in Cloudera Manager can differ, because that report is extracted from an FSImage that is more than an hour old, not the latest one.

Hope this helps. If you have further questions, feel free to update the thread; otherwise please mark it solved. Regards, Pabitra Das
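To turn that fsck line into a number you can compare against the ~1 MB rule of thumb, here is a small sketch. The sample line is copied from the fsck output above; on a live cluster you would capture it from the command instead:

```shell
# Sample line from the fsck output above; on a real cluster you would use:
#   LINE=$(hdfs fsck / 2>/dev/null | grep 'avg. block size')
LINE=" Total blocks (validated):      2899 (avg. block size 11475601 B)"

# Pull out the byte count and convert it to MB.
BYTES=$(printf '%s\n' "$LINE" | sed -n 's/.*avg\. block size \([0-9]*\) B.*/\1/p')
AVG_MB=$(awk -v b="$BYTES" 'BEGIN {printf "%.1f", b/1024/1024}')
echo "Average block size: ${AVG_MB} MB"
```

Here the average works out to roughly 11 MB, well above the ~1 MB danger zone, so this example cluster would not have a small-files problem.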
11-22-2020 09:28 PM
Hello @Manoj690 RegionServer is a service, and your team can add it interactively via Ambari (HDP) or Cloudera Manager (CDH or CDP). - Smarak
10-20-2020 01:00 AM
Hi @bingyu628 You will need to raise a support case to get the credentials; the internal license management team will provide you with the credentials for downloading files behind the paywall. Let me know if you have any further questions.
10-12-2020 09:53 AM
Hi @jeroenr Once you have connected to sqlline, you can run !tables to list the tables created by Phoenix. By default, SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.LOG, SYSTEM.MUTEX, SYSTEM.SEQUENCE, and SYSTEM.STATS are created.
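For reference, a quick sketch of the default SYSTEM tables listed above. The sqlline launcher name in the comment varies by distribution and is an assumption, not taken from this thread:

```shell
# The six SYSTEM tables Phoenix creates on first connection, per the post above.
SYSTEM_TABLES="SYSTEM.CATALOG SYSTEM.FUNCTION SYSTEM.LOG SYSTEM.MUTEX SYSTEM.SEQUENCE SYSTEM.STATS"

# To see them interactively, connect and run !tables, e.g. (launcher name varies
# by distribution -- phoenix-sqlline here is an assumption):
#   $ phoenix-sqlline zk-host:2181
#   0: jdbc:phoenix:zk-host:2181> !tables
COUNT=$(printf '%s\n' "$SYSTEM_TABLES" | wc -w | tr -d ' ')
echo "default SYSTEM tables: $COUNT"
```

If !tables shows user tables missing that you created through the HBase shell directly, note that Phoenix only lists tables it created or that have been mapped into Phoenix.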