Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 946 | 06-04-2025 11:36 PM |
|  | 1552 | 03-23-2025 05:23 AM |
|  | 772 | 03-17-2025 10:18 AM |
|  | 2784 | 03-05-2025 01:34 PM |
|  | 1833 | 03-03-2025 01:09 PM |
10-27-2020
12:58 AM
@Amn_468 The NameNode is solely responsible for the cluster metadata, so please increase the NameNode heap size and restart the services. Please revert
10-26-2020
11:55 PM
@Amn_468 Increase the Java heap size for the NameNode and Secondary NameNode services; you could be using the default 1 GB heap setting. As a general rule of thumb, your heap should have at least 1 GB for every 1 million blocks in your cluster:

2 million blocks: 2 GB heap
3 million blocks: 3 GB heap
.....
n million blocks: n GB heap

After increasing the Java heap size, restart the HDFS services; that should resolve the issue. Please revert
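The rule of thumb above can be sketched as a small helper. This is a minimal illustration, not a Hadoop API: the function name and the rounding-up choice are my own.

```python
import math

def recommended_nn_heap_gb(block_count: int) -> int:
    """Suggested NameNode heap in GB: ~1 GB per 1 million blocks, 1 GB floor."""
    return max(1, math.ceil(block_count / 1_000_000))

print(recommended_nn_heap_gb(2_000_000))  # 2 GB for 2 million blocks
print(recommended_nn_heap_gb(3_000_000))  # 3 GB for 3 million blocks
```

Partial millions round up, so 2.5 million blocks would get 3 GB rather than 2.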
10-26-2020
02:36 PM
@sriram72 Can you share screenshots? I just posted a response to a similar query; please have a look at that Sandbox issue. Hope that helps
10-26-2020
02:32 PM
@ParthiCyberPunk Unfortunately, you didn't share the connect string. Below is an example you could use:

jdbc:hive2://host:10000/DB_name;ssl=true;sslTrustStore=$JAVA_HOME/jre/lib/security/certs_name;trustStorePassword=$password

Substitute the host, port, truststore location, certificate name, and password accordingly. Keep me posted
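To make the substitution explicit, here is a hypothetical helper that assembles the same URL shape. All argument values in the example are illustrative placeholders, not real endpoints.

```python
def hive2_ssl_url(host: str, port: int, db: str, truststore: str, password: str) -> str:
    """Build a Hive JDBC URL with SSL and an explicit truststore."""
    return (
        f"jdbc:hive2://{host}:{port}/{db};ssl=true;"
        f"sslTrustStore={truststore};trustStorePassword={password}"
    )

# Illustrative values only; substitute your own host, truststore and password.
print(hive2_ssl_url("hive-host.example.com", 10000, "default",
                    "/etc/pki/truststore.jks", "changeit"))
```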
10-26-2020
11:47 AM
@anhthu You will need to fire up your cluster. Start by scrolling to the bottom (see the attached screenshot) and start Cloudera Manager [CM]: the blue triangular shape gives you a drop-down menu; choose Start. You will see some startup logs, and if all goes well everything turns green. I can see exactly the same error on my QuickStart sandbox because my services are not started; again, see the attached screenshot. From the drop-down list on the blue inverted triangle, choose Start to launch the services. This starts all the services in the right order. Once everything is GREEN, you are good to go. Happy Hadooping
10-26-2020
10:32 AM
1 Kudo
@sgovi Can you confirm you have only one network card enabled? Please share the output of the below command from the web-shell CLI:

$ ifconfig

Please revert
10-18-2020
03:26 AM
@cbfr Before you decide whether the cluster has corrupt files, can you check the replication factor? If it's set to 2, then that's normal. The help option will give you a full list of subcommands:

$ hdfs fsck / ?
fsck: can only operate on one path at a time '?'

The list of subcommand options:

$ fsck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks | -replicaDetails | -upgradedomains]]]] [-includeSnapshots] [-showprogress] [-storagepolicies] [-maintenance] [-blockId <blk_Id>]
<path>  start checking from this path
-move  move corrupted files to /lost+found
-delete  delete corrupted files
-files  print out files being checked
-openforwrite  print out files opened for write
-includeSnapshots  include snapshot data of a snapshottable directory
-list-corruptfileblocks  print out list of missing blocks and files they belong to
-files -blocks  print out block report
-files -blocks -locations  print out locations for every block
-files -blocks -racks  print out network topology for data-node locations
-files -blocks -replicaDetails  print out each replica details
-files -blocks -upgradedomains  print out upgrade domains for every block
-storagepolicies  print out storage policy summary for the blocks
-maintenance  print out maintenance state node details
-showprogress  show progress in output. Default is OFF (no progress)
-blockId  print out which file this blockId belongs to, locations (nodes, racks)

It would be good to first check for corrupt files and only then run the delete:

$ hdfs fsck / -list-corruptfileblocks
Connecting to namenode via http://mackenzie.test.com:50070/fsck?ugi=hdfs&listcorruptfileblocks=1&path=%2F
---output--
The filesystem under path '/' has 0 CORRUPT files

A simple demo: my replication factor is 1 (see the above screenshot). When I create a new file in HDFS, the default replication factor is set to 1:

$ hdfs dfs -touch /user/tester.txt

Now check the replication factor; note the number 1 before the owner:group hdfs:hdfs:

$ hdfs dfs -ls /user/tester.txt
-rw-r--r-- 1 hdfs hdfs 0 2020-10-18 10:21 /user/tester.txt

Hope that helps
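If you want to script the check before deciding on a delete, the summary line that fsck prints can be parsed. A minimal sketch, assuming the summary format shown above; the helper name is mine, not part of Hadoop.

```python
import re

def corrupt_file_count(fsck_output: str) -> int:
    """Extract N from the fsck summary line "... has N CORRUPT files"."""
    match = re.search(r"has (\d+) CORRUPT files", fsck_output)
    return int(match.group(1)) if match else 0

sample = "The filesystem under path '/' has 0 CORRUPT files"
print(corrupt_file_count(sample))  # 0
```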
10-17-2020
02:55 PM
1 Kudo
@sgovi I have just downloaded a VirtualBox sandbox image and imported it into VirtualBox successfully. In my configuration, I enabled only one network card (Bridged Adapter), so it picks its IP from my LAN of 192.168.0.x. After uncompressing the Docker image, the initial screen shows it picked my local LAN IP, which I used to access the browser CLI as shown below. Make sure you update your Windows hosts file in C:\Windows\System32\drivers\etc. Using the above URL, change the initial root and Ambari passwords; see the steps below. I completed those steps, changing the initial root password ("hadoop") and then resetting the Ambari admin password. Once that succeeds, the Ambari server starts.

sandbox-hdp login: root
root@sandbox-hdp.hortonworks.com's password:
You are required to change your password immediately (root enforced)
Last login: Sat Oct 17 20:21:47 2020
Changing password for root.
(current) UNIX password:
New password:

Ambari user password reset steps:

ambari-admin-password-reset
Please set the password for admin:
Please retype the password for admin:
The admin password has been set.
Restarting ambari-server to make the password change effective...
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start................................
Server started listening on 8080
DB configs consistency check: no errors and warnings were found.

Using the local IP given to the VirtualBox VM from my LAN, I could access Ambari with the new password I reset above and restarted all services, though some were already running. Can you confirm you followed those steps and still failed? Happy Hadooping
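For the Windows hosts file edit mentioned above, the entry maps the sandbox hostname to the VM's IP. A hypothetical example (the IP is purely illustrative; use the address your VM actually received on your LAN):

```
192.168.0.50    sandbox-hdp.hortonworks.com    sandbox-hdp
```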
10-16-2020
01:47 PM
@bhoken In a kerberized cluster, Kafka ACLs are managed through Ranger if the Kafka plugin is enabled, so don't look further than Ranger. Please share your Ranger Kafka policy
10-10-2020
02:50 PM
@kumarkeshav The parameter you are looking for, hbase.regionserver.global.memstore.size, is found in hbase-site.xml. Depending on your setup, this value can be edited separately in that file or centrally through Ambari
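For reference, a minimal hbase-site.xml fragment setting this property might look like the following; 0.4 (40% of the RegionServer heap) is the usual shipped default, so adjust the value to your workload:

```xml
<property>
  <name>hbase.regionserver.global.memstore.size</name>
  <value>0.4</value>
</property>
```

If the cluster is Ambari-managed, make the change in Ambari instead, since manual edits to managed config files can be overwritten on restart.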