Member since: 07-18-2016
Posts: 262
Kudos Received: 12
Solutions: 21
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6609 | 09-21-2018 03:16 AM
 | 3137 | 07-25-2018 05:03 AM
 | 4088 | 02-13-2018 02:00 AM
 | 1901 | 01-21-2018 02:47 AM
 | 37784 | 08-08-2017 10:32 AM
07-27-2016
03:59 AM
The issue was that the NameNode was in safe mode:
1) Ran the command to leave safe mode.
2) Stopped the HDFS service.
3) Started the ZooKeeper and HDFS services.
4) After the restart, the NameNode came up and is running. The commands for step 1 are shown below.
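For reference, the safe-mode check and exit commands from step 1 look like this (standard hdfs dfsadmin subcommands; the output lines are the usual responses):

$ hdfs dfsadmin -safemode get
Safe mode is ON
$ # force the NameNode out of safe mode manually
$ hdfs dfsadmin -safemode leave
Safe mode is OFF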
07-27-2016
03:17 AM
select * from database.table_name limit 2;
FAILED: RuntimeException org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAuthzPluginException: Failed to retrieve roles for a1272279: java.net.SocketTimeoutException: Read timed out
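If the role lookup is merely timing out under load, one workaround to try is raising the metastore client socket timeout for the session. A minimal sketch: hive.metastore.client.socket.timeout is a standard Hive property, but the 600s value is an illustrative guess, not a verified fix for this error:

$ # rerun the query with a longer metastore socket timeout
$ hive --hiveconf hive.metastore.client.socket.timeout=600s -e "select * from database.table_name limit 2;"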
Labels:
- Apache Hive
07-27-2016
02:30 AM
1) The AppMaster launches one map task for each input split, and there is at least one split per input file. If an input file is bigger than the block size, two or more splits are associated with the same file.
2) The AppMaster is launched first and then creates a map task for each input split.
3) Correcting a typo in my earlier reply: a 1 GB file with a 256 MB block size has 4 splits. In MR1 each map split asks for its own container, whereas MR2 with Tez can reuse one container across the job. The split arithmetic is checked below.
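A quick sanity check of that arithmetic (plain shell; the 1024 MB and 256 MB figures come from the example above):

$ # ceil(1024 MB / 256 MB) = number of input splits = number of map tasks
$ echo $(( (1024 + 256 - 1) / 256 ))
4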
07-26-2016
07:40 AM
Once the client submits a request, YARN creates the ApplicationMaster; the AppMaster is granted memory and cores from the available capacity, and containers are created.
1) During the map phase, each map task reads one input split (by default via TextInputFormat); for 1 GB of data with a 256 MB block size, 4 splits are created.
2) Input splits are read by LineRecordReader, which pulls records from an FSDataInputStream until its entire split has been consumed by the map task.
3) Once the map tasks have read all their records, the reduce tasks run on the map output. You can confirm the block size and block count of an input file as shown below.
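To see the block size and block count (and hence the default number of splits) of an actual file, something like this works; /path/to/input is a placeholder path, and the values shown assume the 256 MB example above:

$ # %o prints the file's block size in bytes (268435456 = 256 MB)
$ hdfs dfs -stat "%o" /path/to/input
268435456
$ # fsck prints one "blk_" line per block of the file
$ hdfs fsck /path/to/input -files -blocks | grep -c "blk_"
4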
07-26-2016
06:16 AM
2016-07-26 0000 INFO ipc.Server (Server.java:run(2034)) - IPC Server handler 57 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.renewLease from 000.0.0.0 Call#2478456 Retry#3
org.apache.hadoop.ipc.RetriableException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot renew lease for DFSClient_task1468766070206_82805_m_000013_0_1388738965_1. Name node is in safe mode. The reported blocks 68892652 needs additional 1837307 blocks to reach the threshold 1.0000 of total blocks 70729958. The number of live datanodes 12 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1206)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4133)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:767)
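The log says the NameNode will exit safe mode on its own once enough blocks are reported. While block reports catch up, a script can simply wait rather than poll; this uses the standard dfsadmin safemode subcommand:

$ # blocks until the NameNode leaves safe mode (handy in restart scripts)
$ hdfs dfsadmin -safemode wait
Safe mode is OFF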
Labels:
- Apache Hadoop
07-26-2016
04:37 AM
What you have given is correct; also check which database you would like to use for the Hive Metastore. Version compatibility checks: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_HDP_RelNotes/content/ch_relnotes_v240.html
07-22-2016
07:05 AM
You can check this with $ hadoop dfsadmin -report, as below. You do not need the OS root user, but the full report requires an HDFS superuser; a non-privileged user gets the "Access denied" error shown at the end.

:~> hadoop dfsadmin -report
Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 95930 (87.50 TB)
Present Capacity: 95869819 (87.93 TB)
DFS Remaining: 37094235 (33.37 TB)
DFS Used: 587755833 (53.56 TB)
DFS Used%: 61.31%
Under replicated blocks: 0
Blocks with corrupt replicas: 5
Missing blocks: 0
-------------------------------------------------
report: Access denied for user "username". Superuser privilege is required
:~>
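Running the report as the HDFS superuser avoids that permission error; a minimal sketch, assuming the typical HDP setup where that account is named hdfs:

$ # run dfsadmin as the hdfs superuser and show only the summary lines
$ sudo -u hdfs hdfs dfsadmin -report | head -n 8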
07-22-2016
06:15 AM
Hi, once YARN gets a request from a client, it coordinates the execution. A YARN job involves the ResourceManager and an ApplicationMaster. When the ApplicationMaster is created, it asks the NodeManager for the best available memory and cores, using the defaults from the XML configuration. Once that initial memory and core information is known, containers are created; how they are allocated depends on which YARN scheduler you are using. There are three: 1) FIFO, 2) Fair, 3) Capacity Scheduler. On HDP, YARN uses the Capacity Scheduler by default. You can check which one is configured as shown below.
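To confirm which scheduler is active, one can grep the YARN configuration; the file path below assumes a standard HDP client layout:

$ # yarn.resourcemanager.scheduler.class names the configured scheduler
$ grep -A 2 "yarn.resourcemanager.scheduler.class" /etc/hadoop/conf/yarn-site.xml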
07-21-2016
04:15 PM
Thank you, Artem. I am doing this as a personal test. I am now able to install and set up Ambari on another virtual machine. Thanks again for your guidance; it will help me in the future.