Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 609 | 06-04-2025 11:36 PM |
| | 1173 | 03-23-2025 05:23 AM |
| | 579 | 03-17-2025 10:18 AM |
| | 2182 | 03-05-2025 01:34 PM |
| | 1373 | 03-03-2025 01:09 PM |
10-28-2020
11:29 PM
Thanks for the reply @smdas.
10-28-2020
02:23 AM
I am with you too. I can get the Hortonworks 2.5 version working without any problem, but the versions after 2.5 all consistently behave the same way. The VM itself comes up fine, with the splash screen. When I try connecting to this VM using the browser, the connection is unsuccessful; I have tried various URL combinations (localhost, the specific IP address, ports 4200, 1000, 1080, etc.).

I can get to the command prompt using the Alt+Fn_F5 screen and can log in successfully with the root/hadoop combination. It does not ask me to change the password the first time I log in. From within this shell, almost no command works. I have tried 1. ifconfig 2. hadoop version 3. hive and a host of others. A "command not found" message is what greets me.

From going through many articles, I understand that the later versions of the VM (after 2.5) run these daemons inside a Docker container, and we have to connect to it? I tried running a few "docker" commands and, based on my little research, found that there is no Docker container running inside this VM, at least by default. I am stuck. Any help is appreciated. Thanks, Sriram
10-27-2020
06:06 AM
Hello Sheldon, Thanks for your message. You can find the screenshots attached. The top left displays the splash screen of the Hortonworks sandbox. The top right is the "Page Not Found" error when using the localhost:4200 URL; even if I use the IP address, the result is no different. The bottom screen is the "command not found" message I get when I run ifconfig from within the image.
10-26-2020
02:32 PM
@ParthiCyberPunk Unfortunately, you didn't share the connect string. Below is an example you could use:

jdbc:hive2://host:10000/DB_name;ssl=true;sslTrustStore=$JAVA_HOME/jre/lib/security/certs_name;trustStorePassword=$password

Substitute the host, port, truststore location, certificate name, and password accordingly. Keep me posted.
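As a minimal sketch of putting that URL together, the snippet below assembles it from shell variables before handing it to Beeline. Every value (host, database, truststore path, password) is a hypothetical placeholder to substitute with your own cluster details.

```shell
#!/bin/sh
# All values below are hypothetical placeholders -- substitute your own.
HS2_HOST="hs2.example.com"
HS2_PORT="10000"
DB_NAME="sales"
TRUSTSTORE="${JAVA_HOME:-/usr/java/default}/jre/lib/security/jssecacerts"
TRUSTSTORE_PW="changeit"

# Assemble the SSL-enabled HiveServer2 JDBC URL
JDBC_URL="jdbc:hive2://${HS2_HOST}:${HS2_PORT}/${DB_NAME};ssl=true;sslTrustStore=${TRUSTSTORE};trustStorePassword=${TRUSTSTORE_PW}"
echo "$JDBC_URL"

# Then connect with, for example:
#   beeline -u "$JDBC_URL"
```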
10-26-2020
11:47 AM
@anhthu You will need to fire up your cluster. Start by scrolling to the bottom (see the attached screenshot) and starting Cloudera Manager [CM]: the blue inverted triangle will give you a drop-down menu; choose Start. You will see some startup logs, and if all goes well it will turn green. You can see exactly the same error on my Quickstart sandbox because my services are not started; again, see the attached screenshot. On the blue inverted triangle you will see a drop-down list; choose Start to bring up the services. This will start all the services in the right order. Once everything is GREEN, you are good to go. Happy Hadooping
10-19-2020
05:00 AM
Try the steps below. Sometimes the Ambari cluster environment variable security_enabled still holds the value true, and hence all services expect keytabs.

To validate the value of the environment variable:

/var/lib/ambari-server/resources/scripts/configs.py -a get -l <ambari-server host> -t 8080 -n <cluster-name> -u <admin-user> -p <admin-password> -c cluster-env | grep security

"security_enabled": "true",
"smokeuser_keytab": "/etc/security/keytabs/smokeuser.headless.keytab"

Try setting that variable to false:

/var/lib/ambari-server/resources/scripts/configs.py -a set -k security_enabled -v false -l <ambari-server host> -t 8080 -n <cluster name> -u <admin user> -p <admin password> -c cluster-env
10-18-2020
03:26 AM
@cbfr Before you decide whether the cluster has corrupt files, can you check the replication factor? If it's set to 2, then that's normal.

The help option will give you a full list of sub-commands:

$ hdfs fsck / ?
fsck: can only operate on one path at a time '?'

The list of sub-command options:

$ fsck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks | -replicaDetails | -upgradedomains]]]] [-includeSnapshots] [-showprogress] [-storagepolicies] [-maintenance] [-blockId <blk_Id>]
<path>                  start checking from this path
-move                   move corrupted files to /lost+found
-delete                 delete corrupted files
-files                  print out files being checked
-openforwrite           print out files opened for write
-includeSnapshots       include snapshot data of a snapshottable directory
-list-corruptfileblocks print out list of missing blocks and files they belong to
-files -blocks          print out block report
-files -blocks -locations       print out locations for every block
-files -blocks -racks           print out network topology for data-node locations
-files -blocks -replicaDetails  print out each replica details
-files -blocks -upgradedomains  print out upgrade domains for every block
-storagepolicies        print out storage policy summary for the blocks
-maintenance            print out maintenance state node details
-showprogress           show progress in output. Default is OFF (no progress)
-blockId                print out which file this blockId belongs to, locations (nodes, racks)

It would be good to first check for corrupt files and then run the delete:

$ hdfs fsck / -list-corruptfileblocks
Connecting to namenode via http://mackenzie.test.com:50070/fsck?ugi=hdfs&listcorruptfileblocks=1&path=%2F
---output---
The filesystem under path '/' has 0 CORRUPT files

A simple demo: here my replication factor is 1 (see the screenshot above). When I create a new file in HDFS, the default replication factor is set to 1:

$ hdfs dfs -touch /user/tester.txt

Now to check the replication factor, see the number 1 before the owner and group (hdfs hdfs):

$ hdfs dfs -ls /user/tester.txt
-rw-r--r-- 1 hdfs hdfs 0 2020-10-18 10:21 /user/tester.txt

Hope that helps
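To make the "number before the owner and group" point concrete, here is a small sketch that pulls the replication factor out of an `hdfs dfs -ls` line. The sample line simply mirrors the listing above; on a live cluster you would pipe in the real command output instead.

```shell
#!/bin/sh
# Sample line mirroring the `hdfs dfs -ls /user/tester.txt` output above;
# on a live cluster you would use the real command output instead.
ls_line='-rw-r--r--   1 hdfs hdfs          0 2020-10-18 10:21 /user/tester.txt'

# Field 2 of an HDFS listing is the replication factor
repl=$(echo "$ls_line" | awk '{print $2}')
echo "replication factor: $repl"
```

With replication set to 1 (or 2), fsck reporting under-replicated blocks is expected and is not the same thing as CORRUPT blocks.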
10-16-2020
02:26 PM
Hi and thank you for your reply! Reading the Kafka logs, I found that the controller was null, due to an error from enabling and then disabling Kerberos. After deleting the znode and restarting Kafka, I am now able to start Kafka Connect correctly. I also configured the Debezium plugin, and I have all the data of the MySQL server in a Kafka topic. However, I am not able to configure the HDFS sink connector correctly to convert the Kafka topic into a Hive table. Can you please help me? @Shelton
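For reference, a minimal sketch of what a Confluent HDFS sink connector config with Hive integration typically looks like. The property names come from Confluent's HDFS 2 sink connector; every hostname, topic name, and database below is a placeholder, so verify the exact properties against the connector version you have installed.

```properties
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
# Debezium topic name is a placeholder -- use your own server.database.table topic
topics=dbserver1.inventory.customers
hdfs.url=hdfs://namenode.example.com:8020
flush.size=1000
# Hive integration: the connector creates/updates an external Hive table
hive.integration=true
hive.metastore.uris=thrift://metastore.example.com:9083
hive.database=default
schema.compatibility=BACKWARD
```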
10-12-2020
02:47 AM
Little question: why not just stop the HDFS service on each new DataNode and set it to maintenance mode?
10-10-2020
11:35 AM
1 Kudo
@mike_bronson7 Always stick to the Cloudera documentation. Yes, there is no risk in running that command, but I can understand your reservation.