Member since: 05-22-2017
Posts: 126
Kudos Received: 16
Solutions: 14
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2469 | 02-07-2019 11:03 AM |
| | 6649 | 08-09-2018 05:08 AM |
| | 1254 | 07-06-2018 07:51 AM |
| | 3168 | 06-22-2018 02:28 PM |
| | 3197 | 05-29-2018 01:14 PM |
05-22-2018
07:47 PM
Can you check whether the below classpath parameter contains the hadoop conf directory (/etc/hadoop/conf)? mapreduce.application.classpath If not, append /etc/hadoop/conf to the mapreduce.application.classpath parameter value and restart the services. Then try running the job again.
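If you have command-line access, a quick way to inspect the current value is a sketch like this (the config path assumes a standard /etc/hadoop/conf layout):

grep -A1 'mapreduce.application.classpath' /etc/hadoop/conf/mapred-site.xml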
05-22-2018
07:30 PM
JMX metrics can provide you with compaction-related parameters: http://<region server>:16030/jmx
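For example, to pull just the compaction-related metrics out of the JMX output (a minimal sketch; replace the hostname placeholder with your region server):

curl -s 'http://<region server>:16030/jmx' | grep -i compaction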
05-22-2018
07:08 PM
Exit code 137 generally means containers are killed by the OS due to lack of memory. Check the output of the below command:

cat /var/log/messages | grep 'Kill process'

There is not enough memory available on the NodeManager to run the container. Check your memory parameter settings for the YARN NodeManager and containers to see whether there is a possibility to decrease them.
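To compare what the NodeManager can hand out against what containers request, a sketch like this can help (the config path assumes a standard layout):

grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml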
05-22-2018
07:01 PM
Looks like it is a connection issue between Zookeeper and the region server. Can you provide the region server logs and Zookeeper logs?
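If you want a quick first pass before sharing them, session problems usually stand out in the region server log (a sketch; the log path is illustrative and varies by install):

grep -iE 'session expired|connection refused' /var/log/hbase/hbase-hbase-regionserver-*.log | tail -20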
05-22-2018
06:50 PM
These are the YARN parameters which control the minimum and maximum container sizes that YARN can allocate:

YARN PARAMETERS:
- yarn.scheduler.minimum-allocation-mb - The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this won't take effect, and the specified value will get allocated at minimum.
- yarn.scheduler.maximum-allocation-mb - The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this won't take effect, and will get capped to this value.

MAPREDUCE PARAMETERS (client-side values which the job requests; we can override these, as sketched at the end of this post):
- mapreduce.map.memory.mb - Map container size
- mapreduce.reduce.memory.mb - Reduce container size

Note: If we request more memory than the YARN maximum allocation limit, the job will fail because YARN will report that it cannot allocate that much memory.

Below are a few examples.

Example (the following will fail):
Server side: yarn.scheduler.minimum-allocation-mb=1024, yarn.scheduler.maximum-allocation-mb=8196
Client side: mapreduce.map.memory.mb=10240
The request exceeds the maximum allocation, so the job is rejected.

Another example (the following will work):
Server side: yarn.scheduler.minimum-allocation-mb=1024, yarn.scheduler.maximum-allocation-mb=8196
Client side: mapreduce.map.memory.mb=800
In this case the mapper will get 1024 (the minimum container size).

Another example (the following will work):
Server side: yarn.scheduler.minimum-allocation-mb=1024, yarn.scheduler.maximum-allocation-mb=8196
Client side: mapreduce.map.memory.mb=1800
In this case the mapper will get 2048, since YARN rounds requests up to a multiple of the minimum allocation.

Note: A single job can use one or many containers depending on the size of the input data, the split size, and the nature of the data.
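As an illustration of overriding the client-side values at submission time (a minimal sketch; the example jar name and the input/output paths are placeholders):

hadoop jar hadoop-mapreduce-examples.jar wordcount \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.reduce.memory.mb=4096 \
  /input/path /output/path

With the server-side settings above, both requests fall between the minimum and maximum allocation, so YARN will grant them.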
05-15-2018
08:09 PM
The error shows there are missing blocks:

Caused by: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-267577882-40.133.26.59-1515787116650:blk_1076168453_2430591 file=/user/backupdev/machineID=XEUS/delta_21551841_21551940/bucket_00003
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:995)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:638)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:888)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:945)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.tools.util.ThrottledInputStream.read(ThrottledInputStream.java:77)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.readBytes(RetriableFileCopyCommand.java:285)
... 16 more

Check the Namenode UI to see whether you have missing blocks.
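You can also check from the command line with standard HDFS tooling (a minimal sketch):

hdfs fsck / -list-corruptfileblocks
hdfs dfsadmin -report | grep -i missing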
05-04-2018
11:49 AM
This will try to write to the local filesystem on whichever NodeManager YARN executes the script on:

echo "sample script execution" > /exam/user/example/path/test.txt  # assume this is a local path

Please ensure the /exam/user/example/path directory is present on all NodeManager hosts. To test, you can try the below script:

echo "sample script execution" > /tmp/test.txt
04-27-2018
09:19 AM
The current error shows the query is failing during a scan. Please try increasing the values for the below properties:

hbase.rpc.timeout
phoenix.query.timeoutMs
hbase.client.scanner.timeout.period

If the properties are not present in hbase-site.xml, add them. The default value of these parameters is 60000 ms.
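To check which of these are already set before editing hbase-site.xml, something like this works (a sketch; the config path assumes a standard layout):

grep -B1 -A2 -E 'hbase.rpc.timeout|phoenix.query.timeoutMs|hbase.client.scanner.timeout.period' /etc/hbase/conf/hbase-site.xml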
04-27-2018
08:45 AM
Please share the full stack trace of the error that is produced after increasing the timeout.
04-26-2018
06:28 PM
1 Kudo
Hi @raj pati, Ensure that the HBase master is up and running. Please check the HBase master logs for error messages. -Shubham
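A quick way to verify from the master host (a sketch; the log path is illustrative and varies by install):

ps -ef | grep -i '[h]master'
tail -100 /var/log/hbase/hbase-hbase-master-*.log | grep -iE 'error|exception'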