Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 929 | 06-04-2025 11:36 PM |
| | 1535 | 03-23-2025 05:23 AM |
| | 761 | 03-17-2025 10:18 AM |
| | 2745 | 03-05-2025 01:34 PM |
| | 1811 | 03-03-2025 01:09 PM |
02-05-2019
09:58 AM
@Ruslan Fialkovsky How did you install Airflow? I would think you need the Ambari integration for it to work. Have a look at the Apache Airflow management pack for Ambari on GitHub; after the integration and configuration are done, that's when I think you will see the lineage in Atlas. If you use Ambari, it will also generate the Kerberos principal and keytab automatically.
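As a rough sketch of how a management pack is usually registered with Ambari (the mpack archive path below is a placeholder for whatever you build or download from that repository):

```bash
# Run on the Ambari server host as root.
# /tmp/airflow-service-mpack.tar.gz is a placeholder name -- use the archive
# produced by / downloaded from the Airflow management pack repository.
ambari-server install-mpack \
  --mpack=/tmp/airflow-service-mpack.tar.gz \
  --verbose

# Restart Ambari so the new service definition shows up under "Add Service".
ambari-server restart
```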
02-05-2019
09:37 AM
@Tushar Bhoyar The following mount points already exist on servers 50 and 52: /datadrv1/hadoop/hdfs/data and /data1/hadoop/hdfs/data. Create the new ones on these servers:

datadrv2 on servers 50 and 51:
# mkdir -p /datadrv2/hadoop/hdfs/data
# chown -R hdfs:hadoop /datadrv2/hadoop/hdfs/data

datadrv3 on server 50:
# mkdir -p /datadrv3/hadoop/hdfs/data
# chown -R hdfs:hadoop /datadrv3/hadoop/hdfs/data

data on server 54:
# mkdir -p /data/hadoop/hdfs/data
# chown -R hdfs:hadoop /data/hadoop/hdfs/data

Can you include the full output of the below command, please (tokenize your sensitive data)?
$ hdfs dfsadmin -report

Please revert.
02-05-2019
09:20 AM
@Chris Jenkins Yes, for sure I understood you were running the wget from the guest; that proves Ambari is responding on port 8080. As for the firewall (FW), there could be a firewall running on your host. Have you tried the port forwarding (see the attached screenshots)? If you can access 8080 from the guest, then the issue is almost certainly the port forwarding between your guest and host.
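If the sandbox is running under VirtualBox with NAT networking, a forwarding rule along these lines exposes the guest's port 8080 on the host; the VM name here is just a placeholder:

```bash
# Find the exact VM name registered with VirtualBox.
VBoxManage list vms

# Forward host port 8080 to guest port 8080 on the NAT adapter.
# "Hortonworks Sandbox" is a placeholder -- use the name shown by "list vms".
# Run this while the VM is powered off.
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "ambari,tcp,,8080,,8080"
```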
02-04-2019
10:07 PM
1 Kudo
@Sampath Kumar Have a look at this GitHub repo for the ZooKeeper REST APIs. HTH
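As a rough illustration, assuming the ZooKeeper REST contrib gateway from that repository is built and started on its default port 9998, znodes can then be read over plain HTTP, for example:

```bash
# List the children of the root znode via the REST gateway
# (port 9998 is assumed to be the contrib server's default -- adjust if needed).
curl "http://localhost:9998/znodes/v1/?view=children"

# Fetch the data stored in a specific znode; /brokers is a hypothetical path.
curl "http://localhost:9998/znodes/v1/brokers"
```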
02-04-2019
10:05 PM
@Chris Jenkins We are almost there 🙂 By now your Ambari should have been accessible from localhost:8080. Since you can run a wget successfully, the remaining suspects are DNS, the firewall, or a popup blocker; can you try Chrome incognito? Can you also try PuTTY local port forwarding? Please revert.
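For reference, the OpenSSH equivalent of a PuTTY local port forward (in PuTTY: Connection > SSH > Tunnels, source port 8080, destination localhost:8080) looks like the sketch below; the sandbox SSH port 2222 and the root user are assumptions, adjust to your setup:

```bash
# Forward local port 8080 to port 8080 inside the sandbox over SSH.
# Port 2222 is the usual sandbox SSH port -- change it if yours differs.
ssh -L 8080:localhost:8080 -p 2222 root@127.0.0.1

# While the tunnel is open, browse to http://localhost:8080 on the host.
```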
02-04-2019
09:42 PM
@Tushar Bhoyar From your configuration and the errors you have encountered, it seems the disks mounted at /datadrv1 and /data1 are completely full, yet /datadrv2, /datadrv3 and /data are not used. HDFS gets the valid data directory locations from dfs.datanode.data.dir, which should contain the comma-separated values below:

/datadrv1/hadoop/hdfs/data,/data1/hadoop/hdfs/data,/data/hadoop/hdfs/data,/datadrv2/hadoop/hdfs/data,/datadrv3/hadoop/hdfs/data

instead of only /datadrv1/hadoop/hdfs/data and /data1/hadoop/hdfs/data.

Solution: First, run the below command as the hdfs user; you should see all 6 data nodes reported here with their DFS Remaining, DFS Used and DFS Used% values (sample output shown below):

$ hdfs dfsadmin -report
Configured Capacity: 8152940544 (7.59 GB)
Present Capacity: 6912588663 (6.44 GB)
DFS Remaining: 3826170743 (3.56 GB)
DFS Used: 3086417920 (2.87 GB)
DFS Used%: 44.65%
Under replicated blocks: 1389
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (1):
Name: 192.xxx.0.31:50010 (beta.frog.cr)
Hostname: beta.frog.cr
Decommission Status : Normal
Configured Capacity: 8152940544 (7.59 GB)
DFS Used: 3086417920 (2.87 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 3826170743 (3.56 GB)
DFS Used%: 37.86%
DFS Remaining%: 46.93%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 12
Last contact: Mon Feb 04 22:13:13 CET 2019
Last Block Report: Mon Feb 04 21:26:27 CET 2019

Create the below mount points:

datadrv2:
# mkdir -p /datadrv2/hadoop/hdfs/data
# chown -R hdfs:hadoop /datadrv2/hadoop/hdfs/data

datadrv3:
# mkdir -p /datadrv3/hadoop/hdfs/data
# chown -R hdfs:hadoop /datadrv3/hadoop/hdfs/data

data:
# mkdir -p /data/hadoop/hdfs/data
# chown -R hdfs:hadoop /data/hadoop/hdfs/data

Now update dfs.datanode.data.dir: paste the below comma-separated mount points into the configuration window and save:

/datadrv1/hadoop/hdfs/data,/data1/hadoop/hdfs/data,/data/hadoop/hdfs/data,/datadrv2/hadoop/hdfs/data,/datadrv3/hadoop/hdfs/data

You will be requested to restart HDFS; after the above change, restart the data nodes. At this point run the HDFS rebalancer (how long it takes depends on the amount of data to copy), and your mount points should then all be in use.

Rebalancing HDFS
HDFS provides a "balancer" utility to help balance the blocks across DataNodes in the cluster. To initiate a balancing process, follow these steps:
1. In Ambari Web, browse to Services > HDFS > Summary.
2. Click Service Actions, and then click Rebalance HDFS.
3. Enter the Balance Threshold value as a percentage of disk capacity.
4. Click Start.

The balancer uses a default threshold of 10 per cent. This means the balancer will move blocks from over-utilized to under-utilized nodes until each DataNode's disk usage differs by no more than plus or minus 10 per cent of the average disk usage in the cluster. Sometimes you may wish to set the threshold to a different level, for example when free space in the cluster is getting low and you want to keep the used storage on the individual DataNodes within a smaller range than the default of plus or minus 10 per cent, e.g. 5 (see the sketch below).

This should resolve your problem, please let me know.
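If you prefer the command line over the Ambari UI, the same rebalance can be kicked off with the hdfs balancer tool; the 5 per cent threshold here simply mirrors the example above:

```bash
# Run after the new data directories are in place and the DataNodes are restarted.
# -threshold 5 keeps every DataNode within +/-5% of the cluster-average usage;
# omit it to use the default of 10%.
sudo -u hdfs hdfs balancer -threshold 5
```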
02-04-2019
05:39 PM
@João DC Have a look at this document on quick links in Ambari; it might give you the idea.
02-04-2019
03:30 PM
@Chris Jenkins I didn't see you run the below command as root; it resets the Ambari admin password that is shipped with the image and starts Ambari on port 8080.
# ambari-admin-password-reset
Could you please try that and let me know?
02-04-2019
08:30 AM
@Chris Jenkins I just downloaded and successfully deployed the sandbox. Please see the short document attached and check whether there is something you did wrong. Please let me know.
02-03-2019
01:52 PM
@Shraddha Singh This is a database connection issue; it seems you haven't set up the database for rangerkms. If your other databases are running on MySQL or MariaDB, do the following as the root user; if not, use the appropriate syntax for your database. Usually all the databases (Hive, Oozie, Ambari, etc.) are co-hosted on the same node.

mysql -uroot -p{root_password}
create database rangerkms;
create user 'rangerkms'@'localhost' identified by '{rangerkms_password}';
grant all privileges on rangerkms.* to 'rangerkms'@'localhost';
grant all privileges on rangerkms.* to 'rangerkms'@'%';
grant all privileges on rangerkms.* to 'rangerkms'@'{DB_HOST}' identified by '{rangerkms_password}';
grant all privileges on rangerkms.* to 'rangerkms'@'{DB_HOST}' with grant option;
grant all privileges on rangerkms.* to 'rangerkms'@'%' with grant option;
flush privileges;
quit;

After the above statements have run successfully, use the above user/password to reconfigure your rangerkms; it should then start up. HTH