Member since: 05-12-2017
Posts: 23
Kudos Received: 0
Solutions: 0
04-01-2020
06:43 AM
@jsensharma I have the same issue as above. I exported the path and the script ran fine. I can see the alert "[Custom] Host Mount Point Usage" added on the Ambari alerts page, but I'm not seeing any alerts, even though I followed all the steps.
02-11-2020
02:54 AM
Hi, I see this is mentioned in the link above: "AMS data would be stored in 'hbase.rootdir' identified above. Backup and remove the AMS data. If the Metrics Service operation mode is 'embedded', then the data is stored in OS files. Use regular OS commands to backup and remove the files in hbase.rootdir." So do we need to remove the directory structure, or only the files inside the folders? Please let me know.
06-11-2018
05:16 AM
Reviewing the Druid indexing tasks, we find that they take a long time to finish. The tasks wait for a handoff from the coordinator, and this process depends on metadata queries that run against the PostgreSQL instance on the master1 node. Our postgresql.conf has max_connections = 1000 and shared_buffers = 256MB. We believe this is related to the problem with the ingestion tasks; please help us review this case.
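For reference, this is the kind of quick check we are running against the metadata database (a rough diagnostic sketch only; pg_stat_activity is the standard PostgreSQL activity view, and the 80-character truncation of the query text is just for readability):

-- Are we approaching max_connections?
SELECT state, count(*) AS sessions
FROM pg_stat_activity
GROUP BY state
ORDER BY sessions DESC;

-- Which active queries have been running the longest (candidates for the slow metadata lookups)?
SELECT pid, now() - query_start AS runtime, left(query, 80) AS query_text
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC
LIMIT 10;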
05-07-2018
05:53 PM
We couldn't create a new processor or load a template. It says something like "transaction still in progress", but it never ends.
05-07-2018
04:26 PM
But we still see the same issue.
05-07-2018
04:25 PM
@Matt Clarke, I did not find "HTTP requests" or "Request Counts Per URI" in nifi-app.log. I have increased nifi.cluster.node.protocol.threads to 50 and nifi.web.jetty.threads to 400. My cluster has 16 nodes, 3 of which are NiFi nodes. Thanks, Srinivas
05-07-2018
02:58 PM
@Davide Vergari, I made the above changes but still see the same issue, even after increasing both to 60 seconds.
05-07-2018
02:56 PM
@Matt Clarke, I made the above changes but still see the same issue, even after increasing both to 60 seconds.
05-07-2018
02:55 PM
I made the above changes but still see the same issue, even after increasing both to 60 seconds.
09-28-2017
05:00 PM
@Rajkumar Singh, I accepted all dependent configurations (default values) and restarted Hive and any other related services (stale configs). Then I created a test table:

CREATE TABLE resource.hello_acid (key int, value int)
PARTITIONED BY (load_date date)
CLUSTERED BY (key) INTO 3 BUCKETS
STORED AS ORC TBLPROPERTIES ('transactional'='true');

and inserted a few rows:

INSERT INTO hello_acid partition (load_date='2016-03-03') VALUES (1, 1);
INSERT INTO hello_acid partition (load_date='2016-03-03') VALUES (2, 2);
INSERT INTO hello_acid partition (load_date='2016-03-03') VALUES (3, 3);
Everything looks great at this point: I've been able to add rows and query the table as usual. The HDFS directory of the table partition (/apps/hive/warehouse/resource.db/hello_acid/load_date=2016-03-03/) contains 3 delta directories (1 per insert transaction). In Hive, I then issued a minor compaction command. If I understood it right, this should have merged all the delta directories into one. It didn't work:

ALTER TABLE hello_acid partition (load_date='2016-03-03') COMPACT 'minor';

Next, I issued a major compaction command. This should have deleted all the delta files and created a base file with all the data. It didn't work either. Finally, I ran SHOW COMPACTIONS; and got:

Database | Table      | Partition            | Type  | State  | Worker                         | Start Time
resource | hello_acid | load_date=2016-03-03 | MINOR | failed | hadoop-master1.claro.com.co-52 | 1506440161747
resource | hello_acid | load_date=2016-03-03 | MAJOR | failed | hadoop-master2.claro.com.co-46 | 1506440185353
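For completeness, these are the compaction-related settings I am double-checking next (a minimal sketch; the property names are the standard Hive ACID settings, not values confirmed in this thread, and SET <property>; simply prints the current value):

SET hive.support.concurrency;        -- expected: true
SET hive.txn.manager;                -- expected: org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
SET hive.compactor.initiator.on;     -- must be true on the Metastore for compactions to be scheduled
SET hive.compactor.worker.threads;   -- must be greater than 0 for compaction workers to run

SHOW COMPACTIONS;                    -- re-check the state after correcting any of the above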
07-20-2017
05:47 PM
@kalai selvan, all 3 datanodes are going down frequently. They go down one after the other, and seemingly one of the nodes gets hit harder than the rest.
07-18-2017
12:45 PM
@jsensharma, @nkumar, we have a cluster running HDP 2.5 with 3 worker nodes and around 9.1 million blocks with an average block size of 0.5 MB. Could this be the reason for the frequent JVM pauses?
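For scale, a rough back-of-the-envelope calculation (using the default 128 MB HDFS block size purely as a comparison point; that value is an assumption, not something measured on this cluster):

\[
9.1\times 10^{6}\ \text{blocks}\times 0.5\ \text{MB}\approx 4.55\ \text{TB},
\qquad
\frac{4.55\ \text{TB}}{128\ \text{MB/block}}\approx 3.6\times 10^{4}\ \text{blocks}
\]

So the same data stored at the default block size would need roughly 250 times fewer blocks; a replica map and block reports inflated by that factor are a plausible contributor to the long GC pauses.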
07-17-2017
07:31 AM
@jsensharma, did you check hdfs-site.xml and core-site.xml? Please have a look and let me know if any changes are needed.
07-16-2017
09:44 AM
@jsensharma, what is the recommendation for the DataNode heap size and new-generation heap size? Right now I have set the DataNode heap size to 24 GB and the new-generation heap size to 10 GB.
07-16-2017
09:36 AM
@jsensharma, what I found curious is that the cached memory grew a lot just before the node stopped sending heartbeats. Do you know why that would be? cache.jpg
07-16-2017
09:27 AM
@jsensharma, I did not see the DataNode process generating any "hs_err_pid" files under /var/log/hadoop/$USER.
07-16-2017
09:17 AM
@jsensharma, I have already added -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly to HADOOP_DATANODE_OPTS for both the if and the else branches.
07-16-2017
08:20 AM
@nkumar, I tried increasing the Java heap memory for the datanodes from 16 GB to 24 GB, but it's still the same issue.
07-16-2017
08:19 AM
Has anyone faced this issue before? Any resolution?
07-15-2017
05:55 PM
@nkumar, I tried increasing the Java heap memory for the datanodes from 16 GB to 24 GB, but it's still the same issue.
07-15-2017
02:32 PM
We have a cluster running HDP 2.5 with 3 worker nodes. Recently two of our datanodes have been going down frequently - usually they both go down at least once a day, and often more than that. While they can be started up without any difficulty, they will usually fail again within 12 hours. There is nothing out of the ordinary in the logs except very long GC wait times before failure; for example, I saw such pauses logged shortly before the failure this morning. I have set the DataNode heap size to 16 GB and the new generation to 8 GB. Please help.
06-14-2017
11:22 AM
Hi Rahul, I'm also getting the same kind of issue. I followed your instructions and made the changes to core-site.xml, but the issue is not resolved.