Member since: 10-04-2018
Posts: 39
Kudos Received: 0
Solutions: 0
12-10-2019
08:59 AM
Hi @Shelton, when I run that command it returns the error below. How do I find the exact path, or what other commands can I try?
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/ranger_audits/update. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
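A 404 on /solr/ranger_audits/update usually means the collection name or port in the URL does not match what this Solr instance actually hosts. One hedged way to discover the exact name is the Collections API; the host below stays a placeholder, and the JSON is a fabricated sample used only to demonstrate the extraction:

```shell
# Live query (needs the cluster and a Kerberos ticket; host is a placeholder):
#   curl --negotiate -u : 'http://<host>:8886/solr/admin/collections?action=LIST&wt=json'
# Fabricated sample response with hypothetical collection names:
response='{"responseHeader":{"status":0},"collections":["ranger_audits_shard1","hadoop_logs"]}'
# Pull the collection names out; whichever matches the Ranger audits is the
# path segment to use in /solr/<collection>/update
names=$(echo "$response" | sed -n 's/.*"collections":\[\([^]]*\)\].*/\1/p' | tr -d '"' | tr ',' '\n')
echo "$names"
```

If the Collections API itself 404s, the instance may be running standalone cores rather than SolrCloud, in which case the core admin page in the Solr UI lists the names.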
12-09-2019
05:56 PM
@Shelton, I am using the cURL command below:
#!/bin/sh
curl -v --negotiate -u : 'http://<host>:8886/solr/ranger_audits/update?commit=true' -H 'Content-Type: text/xml' --data-binary '<delete><query>evtTime:[* TO NOW-30DAYS]</query></delete>'
You mentioned launching a commit after that; note the command already includes commit=true. How should I launch that? @Shelton
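If a standalone commit is what is wanted after the delete, one hedged option (same host placeholder as the command above) is posting a bare commit document to the same update handler:

```shell
# <host> stays a placeholder for the Ambari Infra Solr host
host='<host>'
url="http://$host:8886/solr/ranger_audits/update?commit=true"
echo "$url"
# Issue it with (not run here; needs the live cluster and a Kerberos ticket):
#   curl -v --negotiate -u : "$url" -H 'Content-Type: text/xml' --data-binary '<commit/>'
```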
12-09-2019
03:05 PM
Hi,
My Solr data directory is full:
/opt/ambari_infra_solr/data/ranger_audits_shard1_replica1/data/index
Can I delete any data from this? I tried a cURL command, but it failed with a "no space left on device" error.
Can you please help?
@jsensharma @Shelton
Labels:
- Apache Solr
07-29-2019
03:34 PM
@Jay Kumar SenSharma @Geoffrey Shelton Okot Can you please help?
07-29-2019
03:34 PM
Hi, I want to rebalance the cluster, but I am not sure how this threshold value is calculated. What threshold should I use to finish quickly? I am not looking for a perfect balance right now, but one of our datanodes is now over 80% used and we want to reduce it to at least 60-70%. Can you please tell me what threshold to use, and how I can monitor in Ambari whether HDFS balancing has finished?
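As a hedged sketch of what the threshold means: the balancer tries to bring every DataNode's utilization to within <threshold> percentage points of the cluster-wide average, so a smaller threshold means a tighter balance and a longer run, and a larger one finishes sooner. The utilization numbers below are hypothetical, with 80 standing in for the hot node:

```shell
# Hypothetical per-node disk usage (percent used)
nodes="45 50 80 38"
threshold=10
avg=$(echo "$nodes" | tr ' ' '\n' | awk '{s+=$1; n++} END {printf "%d", s/n}')
for u in $nodes; do
  diff=$(( u - avg ))
  [ "$diff" -lt 0 ] && diff=$(( 0 - diff ))
  if [ "$diff" -gt "$threshold" ]; then
    echo "node at ${u}% is outside avg ${avg}% +/- ${threshold}%; balancer will move its blocks"
  fi
done
# On the cluster (not run here):  hdfs balancer -threshold 10
# The balancer log prints "The cluster is balanced. Exiting..." when done;
# Ambari's background operations window shows the same run.
```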
Labels:
- Apache Hadoop
07-26-2019
04:19 PM
How do I calculate the total number of containers in a cluster? Does it depend on memory or on vCores? How can I get the exact number of containers available in my cluster?
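As a rough, hedged sketch of the arithmetic: YARN fits containers on each NodeManager by both memory and vCores, and the scarcer resource wins; the cluster total is then the per-node capacity times the number of nodes. The property names in the comments are the usual YARN ones, but the sizes are hypothetical:

```shell
node_mem_mb=65536       # yarn.nodemanager.resource.memory-mb (hypothetical)
node_vcores=16          # yarn.nodemanager.resource.cpu-vcores (hypothetical)
container_mem_mb=4096   # what each container asks for, e.g. a 4 GB mapper
container_vcores=1
num_nodes=12

by_mem=$(( node_mem_mb / container_mem_mb ))
by_cpu=$(( node_vcores / container_vcores ))
# the limiting resource determines containers per node
if [ "$by_mem" -lt "$by_cpu" ]; then per_node=$by_mem; else per_node=$by_cpu; fi
total=$(( per_node * num_nodes ))
echo "per node: $per_node, cluster total: $total"
# The live number (resources free right now) comes from the ResourceManager UI
# or:  yarn node -list -all   (not run here)
```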
Labels:
- Apache YARN
07-11-2019
01:46 PM
@Jay Kumar SenSharma @Geoffrey Shelton Okot ... Can you please help in this?
07-08-2019
06:22 PM
Hi, we received the alert "NameNode RPC Latency". I checked the hadoop-hdfs/namenode logs but couldn't find anything wrong. Why does this alert get triggered, and are there any other logs I can check for it?
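For context, a hedged sketch: Ambari's "NameNode RPC Latency" alert reads the RPC queue/processing time averages from the NameNode's JMX endpoint, so the same numbers can be polled directly. The live curl is commented out (host is a placeholder; 50070 is the default NameNode HTTP port), and the JSON is a fabricated sample used only to show the extraction:

```shell
# Live (not run here):
#   curl -s 'http://<namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020'
# Fabricated sample of the relevant bean:
sample='{"beans":[{"name":"Hadoop:service=NameNode,name=RpcActivityForPort8020","RpcQueueTimeAvgTime":0.42,"RpcProcessingTimeAvgTime":1.3}]}'
queue=$(echo "$sample" | sed -n 's/.*"RpcQueueTimeAvgTime":\([0-9.]*\).*/\1/p')
proc=$(echo "$sample" | sed -n 's/.*"RpcProcessingTimeAvgTime":\([0-9.]*\).*/\1/p')
echo "RpcQueueTimeAvgTime=$queue ms, RpcProcessingTimeAvgTime=$proc ms"
# High queue time with normal processing time usually points at request load
# on the NameNode rather than anything that shows up as an ERROR in its log.
```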
Labels:
- Apache Hadoop
05-14-2019
07:02 PM
Hi @Geoffrey Shelton Okot, thank you for your response. Yes, the ibdata file currently sits on a disk whose size we believe we can increase. If we resize that same disk, do we need to update the file location anywhere? We are not changing the location of the ibdata file, so I assume not. Is that correct? Please advise.
05-14-2019
05:53 PM
@Jay Kumar SenSharma - Can you please suggest based on the requested information?
05-13-2019
01:47 PM
@Jay Kumar SenSharma ... Can you please suggest based on the requested information?
05-12-2019
10:59 PM
@Jay Kumar SenSharma Thank you for your response. Here is the requested output:
+--------------------+---------------+
| DB Name            | DB Size in MB |
+--------------------+---------------+
| ambaridb           | 651.8         |
| hivedb             | 30274.0       |
| information_schema | 0.1           |
| mysql              | 0.6           |
| ooziedb            | 7041.9       |
| performance_schema | 0.0           |
| rangerdb           | 29.4          |
+--------------------+---------------+
I have a few follow-up questions:
1. If we delete data from the tables, will the ibdata1 file shrink?
2. Can we increase the disk size for the mount point where ibdata1 resides? What steps would we need to take?
I appreciate your help!
05-12-2019
10:19 PM
Hi, we are using MySQL for our cluster database, and its size is increasing day by day:
-rw-rw----. 1 mysql mysql 37G May 1 18:55 ibdata1
This instance holds ambaridb, rangerdb, hivedb, etc., and we don't have enough space left on the disk. Is there an easy way to reduce the size of this MySQL database? Alternatively, can we move it to another disk, or increase the size of this mount point? Suggestions would be appreciated.
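For what it's worth, a hedged sketch of the usual approach: ibdata1 never shrinks in place, even after deleting rows, so reclaiming the space generally means dumping everything, recreating the InnoDB files with innodb_file_per_table enabled, and reimporting; after that, each table lives in its own .ibd file that can be shrunk with OPTIMIZE TABLE. The paths and commands below are illustrative, not tailored to this cluster, and all dependent services (Ambari, Hive, Oozie, Ranger) should be stopped and backups taken first:

```shell
# Illustrative command sequence (not run here):
#   mysqldump -u root -p --all-databases --routines --events > /backup/all.sql
#   systemctl stop mysqld
#   rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
#   systemctl start mysqld
#   mysql -u root -p < /backup/all.sql
# The config change itself goes under [mysqld] in /etc/my.cnf;
# demonstrated against a temp file here:
cnf=$(mktemp)
printf '[mysqld]\ninnodb_file_per_table=1\n' > "$cnf"
cat "$cnf"
```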
04-26-2019
01:26 AM
@Geoffrey Shelton Okot Thank you. I think decommissioning is the best way to do this. Do I need to turn on maintenance mode while decommissioning the datanode? Also, do I need to stop services such as NodeManager and Ambari Metrics on that datanode after decommissioning?
04-25-2019
02:48 AM
@Geoffrey Shelton Okot Thank you for your response. Instead of decommissioning the node, can I simply stop it? What is the difference, and which is the better way: stopping or decommissioning?
04-24-2019
03:55 PM
Hi, we have 12 datanodes in our cluster with a replication factor of 3. We are currently doing cost optimization, and our cluster is only 35% used. We want to remove one datanode, but first we want to see how the cluster performs on 11 datanodes; if it doesn't perform well, we will need to bring that datanode back. For this experiment, should I stop the datanode or decommission it? Suggestions, please.
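As a hedged sketch of the difference: stopping a DataNode simply makes its replicas unavailable until HDFS re-replicates them elsewhere, while decommissioning first drains all blocks off the node gracefully. Outside Ambari (whose Decommission action automates this), the manual steps look roughly like the following; the hostname is hypothetical and a temp file stands in for the real exclude file:

```shell
# The real file is whatever dfs.hosts.exclude points at on the NameNode;
# a temp file stands in for it here:
excludes=$(mktemp)
echo "datanode11.example.com" >> "$excludes"   # hypothetical host to drain
cat "$excludes"
# Then, on the cluster (not run here):
#   hdfs dfsadmin -refreshNodes
#   hdfs dfsadmin -report   # the node moves Decommission In Progress -> Decommissioned
```

For a reversible experiment, decommissioning is the safer of the two: the node can be recommissioned by removing it from the exclude file and refreshing again.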
Labels:
- Apache Hadoop
03-22-2019
01:56 AM
Our HDFS directories are filling up, and we have to remove data at a faster rate. We currently have 12 datanodes, 4 masternodes, and 1 edgenode. Can I delete files from HDFS from the masternodes and the edgenode at the same time? I created a script on the edgenode that deletes HDFS files, but it is really slow. How can I delete multiple files at a time? Can I place that script on multiple servers and delete the files in parallel?
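One hedged pattern for the speed problem: feed the paths to xargs -P so several deletions run in flight from a single host, instead of looping one hdfs dfs -rm per file. The demo below runs against a local scratch directory with plain rm; for HDFS, swap the rm -r at the end for hdfs dfs -rm -r -skipTrash (skipping trash matters, otherwise space is not freed until trash is purged):

```shell
# Local stand-in for a list of HDFS paths to delete
scratch=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8; do mkdir -p "$scratch/part-$i"; done

# 4 parallel workers, 2 paths per invocation; for HDFS replace 'rm -r' with:
#   hdfs dfs -rm -r -skipTrash
ls -d "$scratch"/part-* | xargs -n 2 -P 4 rm -r
echo "remaining: $(ls "$scratch" | wc -l | tr -d ' ')"
```

Batching also helps on its own: each hdfs invocation pays JVM startup cost, so one `hdfs dfs -rm -r` on a whole directory is far faster than deleting its files one by one.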
Labels:
- Apache Hadoop
02-12-2019
04:15 PM
@Geoffrey Shelton Okot Even after running Rebalance HDFS with a 25% threshold, that disk is still 100% full. Is HDFS unable to read from the disk because it is full? I also had to set "DataNode failed disk tolerance" to 1, since the HDFS service was not coming up on that node. Can we delete the data manually from that particular disk? Is there any way?
02-11-2019
10:59 PM
Hi, we currently have 8 datanodes, each with two HDFS disks mounted. One disk on one of the datanodes is full, and the HDFS (DataNode) service would not come up on that node with the error below. From various articles I found that we can set "DataNode failed disk tolerance" to 1 and ignore this volume since it is 100% full, but I would like to understand how I can clean up the data on this disk. I tried Rebalance HDFS, but which threshold value should I use?
StorageLocation [DISK]file:/hdfs/data1/hadoop/hdfs/data/
org.apache.hadoop.util.DiskChecker$DiskErrorException: Error checking directory /hdfs/data1/hadoop/hdfs/data
/dev/xvdl 50G 49G 0 100% /hdfs/data1
/dev/xvdk 50G 27G 21G 56% /hdfs/data0
Labels:
- Apache Hadoop
- Cloudera DataFlow (CDF)
02-08-2019
07:42 PM
Hey @Geoffrey Shelton Okot, I tried that command, but it does not work because my current disk is 2.5 TB and the disk I am copying to is 1.7 TB, and the command requires both disks to be the same size. Is there any alternative?
02-08-2019
03:37 AM
Hi, we currently have 13 EBS volumes on each datanode, and around 10 such datanodes, with about 1.5 TB used on each disk. We want to copy a 1.5 TB volume to a new disk, but copying 100 GB takes about an hour or more, so 1.5 TB will take many hours. Is there a faster way to copy the data? I am currently using rsync to copy to the new disk.
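A hedged alternative to one long rsync: since these are EBS volumes, the fastest route is often to snapshot the volume in AWS and create the new volume from the snapshot, with no file-level copy at all. If it has to be a file copy, running one copy per top-level subdirectory in parallel overlaps the I/O; the demo below uses local temp dirs and cp -r as a stand-in, with rsync -a shown for the real run:

```shell
src=$(mktemp -d); dst=$(mktemp -d)   # stand-ins for /hdfs/dataN and the new mount
mkdir -p "$src/a" "$src/b"
echo one > "$src/a/f1"; echo two > "$src/b/f2"

# 4 parallel copies, one subdirectory each; for the real disks use
#   ls -d /hdfs/data1/* | xargs -n 1 -P 4 -I{} rsync -a {} /mnt/newdisk/
ls -d "$src"/* | xargs -n 1 -P 4 -I{} cp -r {} "$dst/"
ls "$dst"
```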
Labels:
- Apache Hadoop
10-26-2018
07:08 PM
Hi, I see a lot of under-replicated blocks in the Ambari UI:
Block Errors: 0 corrupt replica / 0 missing / 26463 under replicated
What should I do in this case? Can someone help me fix this? Thanks.
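For orientation, a hedged sketch: the NameNode normally re-replicates these on its own over time, and hdfs fsck lists the affected paths; forcing the issue usually means re-running setrep on them. The parsing below runs on one fabricated fsck output line (format approximated from Apache's fsck output), with the real cluster pipeline shown as a comment:

```shell
# Fabricated sample of an fsck line for an under-replicated file:
sample='/apps/hive/warehouse/t1/000000_0:  Under replicated BP-1:blk_1_1. Target Replicas is 3 but found 2 replica(s).'
path=$(echo "$sample" | awk -F: '{print $1}')   # path is everything before the first colon
echo "would fix: $path"
# On the cluster (not run here):
#   hdfs fsck / | grep 'Under replicated' | awk -F: '{print $1}' \
#     | xargs -n 50 hdfs dfs -setrep 3
```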
Labels:
- Apache Hadoop
10-22-2018
08:24 PM
@Aditya Sirna and @Jay Kumar SenSharma Can you please help me with this?
10-19-2018
06:05 PM
Hi, I have installed Spark2 on the server. I see some jobs stuck in running status in the ResourceManager that never complete. Their user and name are:
hive | Thrift JDBC/ODBC Server
hive | org.apache.spark.sql.hive.thriftserver.HiveThriftServer2
What are these jobs, and why do they never complete? After installing Spark2 services such as Spark2 History Server, Livy for Spark2 Server, and Spark2 Thrift Server, is it expected that these jobs will run? Kindly help. Thanks.
Labels:
- Apache Spark
- Apache YARN
10-17-2018
06:53 PM
Hi @Aditya Sirna, I tried to follow these steps; however, Spark2 History Server is not available to add on another node. I could add Livy for Spark2 and Spark2 Thrift Server. Can you please tell me how I should move the clients? Thank you.
10-17-2018
06:45 PM
There is no Move button for Spark2 in Ambari. Why? If I have to move Spark2, do I need to add the service to the new node and then restart it? And what happens to the existing spark2-client on the node if I add more Spark2 components?
Labels:
- Apache Spark
10-15-2018
02:39 PM
@Jay Kumar SenSharma Can you please help me with this one?
10-14-2018
07:24 PM
Hi all, I have Spark components installed on one host: Livy for Spark2 Server, Spark2 Thrift Server, and Spark2 History Server. I want to move these components to another host, because I have to stop the host currently running them. Can you please let me know how to move these components, and whether I need to change anything on the application side? Can I do this through Ambari? Also, is any service restart required? I appreciate your help! Thanks.
Tags:
- ambari-server
- Data Science & Advanced Analytics
- hostname
- spark2
- spark2-history-server
- spark2-thrift-server
Labels:
- Apache Ambari
- Apache Spark
10-09-2018
04:46 AM
Hi @Jay Kumar SenSharma, I already have the oozie user, and from the command line I am able to access the DB as that user. I am not sure why the Oozie jobs suddenly stopped working.