Member since: 01-13-2017
Posts: 12
Kudos Received: 1
Solutions: 0
12-18-2020
09:17 AM
@aakulov - Did you get a chance to look at this one? Do you see anything concerning?
12-17-2020
07:45 AM
HBase was found to have multiple inconsistencies, which were fixed yesterday, so I don't see this for now. It could also be due to the following message on the HBase Master (this message wasn't there before the repair was performed): "The Load Balancer is not enabled which will eventually cause performance degradation in HBase as Regions will not be distributed across all RegionServers. The balancer is only expected to be disabled during rolling upgrade scenarios." Question: this seems to be triggered by the balance_switch flag in HBase. What's the best practice for this in a production environment? Is it supposed to be true or false? Kindly advise.
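For what it's worth, the Master's own message says the balancer is only expected to be off during rolling upgrades, so on a steady-state production cluster it would normally be enabled. A sketch of checking and re-enabling it from the HBase shell (HBase 1.x commands, run against your own cluster):

```shell
# Check whether the balancer is currently enabled (expect "true" in production):
echo "balancer_enabled" | hbase shell

# Re-enable it if it was left off after the repair / rolling restart:
echo "balance_switch true" | hbase shell

# Optionally trigger an immediate balancing pass:
echo "balancer" | hbase shell
```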
12-17-2020
07:33 AM
I tried to grep for 'user' or 'host' in all configs under /etc/hadoop/conf.cloudera.hdfs and don't see any reference to that. Any direction here would certainly be helpful. Wondering if it could be due to the following Kerberos property:

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@MOB.NUANCE.COM</value>
</property>
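If it helps: the `_HOST` placeholder is expanded to the node's fully qualified hostname at service startup, so by itself it should not yield a literal user named 'host'. One way to check how a full principal gets mapped to a local user by the `hadoop.security.auth_to_local` rules (the hostname below is just one of this cluster's nodes, used as an example):

```shell
# Prints the local user that this principal maps to under the
# configured auth_to_local rules; a bad rule can produce a
# non-existent user such as 'host':
hadoop org.apache.hadoop.security.HadoopKerberosName \
  hdfs/cdh-dn-28.prod.mcs.az-eastus2.mob.nuance.com@MOB.NUANCE.COM
```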
12-17-2020
06:36 AM
Which config could this be? I can share more logs if you'd like to see them. This started appearing after I added 6 nodes to scale out the cluster.
12-16-2020
03:49 PM
Any response on this would be highly appreciated.
12-16-2020
03:48 PM
Have the following error reported on the Cloudera CM node, in /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE*.log.out. Need help with resolution.

2020-12-16 23:38:19,034 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: unable to return groups for user host
PartialGroupNameException The user name 'host' is not found. id: host: no such user
id: host: no such user
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:212)
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:133)
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:72)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:368)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:309)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:267)
        at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
        at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
        at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
        at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
        at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
        at org.apache.hadoop.security.Groups.getGroups(Groups.java:225)
        at org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1778)
        at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1766)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:66)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getPermissionChecker(FSDirectory.java:3468)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:4079)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4269)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:901)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:528)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:839)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
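The warning means the NameNode tried to resolve Unix groups for a user literally named 'host', which does not exist on the OS. A quick sketch to confirm that on the NameNode host (standard commands; 'host' is the user name from the log above):

```shell
# This is effectively what ShellBasedUnixGroupsMapping shells out to;
# expect "id: host: no such user" if the account is missing:
id -Gn host

# Ask HDFS itself how it resolves groups for that user:
hdfs groups host
```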
12-16-2020
09:53 AM
Added 6 new DataNode (DN) hosts for scaling, but HBase is not assigning any regions to the new DNs. Have errors like this on the HBase Master:

AppDispatcher,\xEB\xFDE\x08\xEB\xDD\x11\xE9\xA4I\x0AX\x0A\xF4\x8Ar,1574112884653.8808c0e1917bf0b4acea2d83d9548463. state=FAILED_CLOSE, ts=Wed Dec 16 16:53:03 UTC 2020 (3008s ago), server=cdh-dn-28.prod.mcs.az-eastus2.mob.nuance.com,60020,1608136647199
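One way to see which regions are stuck and whether assignments can be repaired, assuming the HBase 1.x hbck tool as shipped with CDH 5.x:

```shell
# Report inconsistencies, including regions stuck in FAILED_CLOSE
# or otherwise in transition:
hbase hbck -details

# If hbck reports assignment problems, a targeted repair can be
# attempted (use with care on a production cluster):
hbase hbck -fixAssignments
```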
Labels: Cloudera Data Platform (CDP)
12-11-2020
10:53 AM
Need help here! I keep getting the following messages in the cloudera-scm-agent log, and noticed that it deletes everything from /etc/hadoop/ and puts the content back every minute. This is causing some MapReduce tasks to fail with file-not-found errors.

[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel INFO Ensuring alternatives entries are deactivated for parcel CDH-5.10.2-1.cdh5.10.2.p0.5.
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'sqoop-import', '/usr/bin/sqoop-import', 'bin/sqoop-import', '10', 'False']
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'solr-conf', '/etc/solr/conf', 'etc/solr/conf.dist', '10', 'True']
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'hive-hcatalog-conf', '/etc/hive-hcatalog/conf', 'etc/hive-hcatalog/conf.dist', '10', 'True']
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'llama-conf', '/etc/llama/conf', 'etc/llama/conf.dist', '10', 'True']
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'sqoop-codegen', '/usr/bin/sqoop-codegen', 'bin/sqoop-codegen', '10', 'False']
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:34 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'sqoop-import-all-tables', '/usr/bin/sqoop-import-all-tables', 'bin/sqoop-import-all-tables', '10', 'False']
[11/Dec/2020 18:39:35 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:35 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'sqoop', '/usr/bin/sqoop', 'bin/sqoop', '10', 'False']
[11/Dec/2020 18:39:35 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:35 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'cli_mt', '/usr/bin/cli_mt', 'bin/cli_mt', '10', 'False']
[11/Dec/2020 18:39:35 +0000] 11426 MainThread parcel ERROR Failed to deactivate alternatives for parcel CDH-5.10.2-1.cdh5.10.2.p0.5: 2
[11/Dec/2020 18:39:35 +0000] 11426 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'deactivate', 'impalad', '/usr/bin/impalad', 'bin/impalad', '10', 'False']
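To find out why every deactivate step exits with status 2, one option is to rerun a single command from the log by hand. A sketch, with the arguments copied from the agent log (the `alternatives` command below assumes a RHEL-style system):

```shell
# Trace the failing step; bash -x shows each command the script runs,
# and the echo shows the real exit status:
bash -x /usr/lib64/cmf/service/common/alternatives.sh \
  deactivate sqoop-import /usr/bin/sqoop-import bin/sqoop-import 10 False
echo "exit status: $?"

# Inspect what the alternatives database currently holds for that entry:
alternatives --display sqoop-import
```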
01-17-2017
01:40 PM
I got the output. Thanks very much for all your help.
01-13-2017
02:07 PM
Thanks so much for your prompt reply. I ran the suggested command, but I see the size reported as 0, whereas I know the table has some data. So what does that mean?

hive> describe extended bee_master_20170113_010001;
OK
entity_id       string
account_id      string
bill_cycle      string
entity_type     string
col1    string
col2    string
col3    string
col4    string
col5    string
col6    string
col7    string
col8    string
col9    string
col10   string
col11   string
col12   string
Detailed Table Information      Table(tableName:bee_master_20170113_010001, dbName:default, owner:sagarpa, createTime:1484297904, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:entity_id, type:string, comment:null), FieldSchema(name:account_id, type:string, comment:null), FieldSchema(name:bill_cycle, type:string, comment:null), FieldSchema(name:entity_type, type:string, comment:null), FieldSchema(name:col1, type:string, comment:null), FieldSchema(name:col2, type:string, comment:null), FieldSchema(name:col3, type:string, comment:null), FieldSchema(name:col4, type:string, comment:null), FieldSchema(name:col5, type:string, comment:null), FieldSchema(name:col6, type:string, comment:null), FieldSchema(name:col7, type:string, comment:null), FieldSchema(name:col8, type:string, comment:null), FieldSchema(name:col9, type:string, comment:null), FieldSchema(name:col10, type:string, comment:null), FieldSchema(name:col11, type:string, comment:null), FieldSchema(name:col12, type:string, comment:null)], location:hdfs://cmilcb521.amdocs.com:8020/user/insighte/bee_data/bee_run_20170113_010001, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{field.delim= , serialization.format=
Time taken: 0.328 seconds, Fetched: 18 row(s)

hive> describe formatted bee_master_20170113_010001;
OK
# col_name      data_type       comment
entity_id       string
account_id      string
bill_cycle      string
entity_type     string
col1    string
col2    string
col3    string
col4    string
col5    string
col6    string
col7    string
col8    string
col9    string
col10   string
col11   string
col12   string
# Detailed Table Information
Database:       default
Owner:  sagarpa
CreateTime:     Fri Jan 13 02:58:24 CST 2017
LastAccessTime: UNKNOWN
Protect Mode:   None
Retention:      0
Location:       hdfs://cmilcb521.amdocs.com:8020/user/insighte/bee_data/bee_run_20170113_010001
Table Type:     EXTERNAL_TABLE
Table Parameters:
        COLUMN_STATS_ACCURATE   false
        EXTERNAL        TRUE
        numFiles        0
        numRows -1
        rawDataSize     -1
        totalSize       0
        transient_lastDdlTime   1484297904
# Storage Information
SerDe Library:  org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat:    org.apache.hadoop.mapred.TextInputFormat
OutputFormat:   org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed:     No
Num Buckets:    -1
Bucket Columns: []
Sort Columns:   []
Storage Desc Params:
        field.delim     \t
        serialization.format    \t
Time taken: 0.081 seconds, Fetched: 48 row(s)

hive> describe formatted bee_ppv;
OK
# col_name      data_type       comment
entity_id       string
account_id      string
bill_cycle      string
ref_event       string
amount  double
ppv_category    string
ppv_order_status        string
ppv_order_date  timestamp
# Detailed Table Information
Database:       default
Owner:  sagarpa
CreateTime:     Thu Dec 22 12:56:34 CST 2016
LastAccessTime: UNKNOWN
Protect Mode:   None
Retention:      0
Location:       hdfs://cmilcb521.amdocs.com:8020/user/insighte/bee_data/tables/bee_ppv
Table Type:     EXTERNAL_TABLE
Table Parameters:
        COLUMN_STATS_ACCURATE   true
        EXTERNAL        TRUE
        numFiles        0
        numRows 0
        rawDataSize     0
        totalSize       0
        transient_lastDdlTime   1484340138
# Storage Information
SerDe Library:  org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
InputFormat:    org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
OutputFormat:   org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
Compressed:     No
Num Buckets:    -1
Bucket Columns: []
Sort Columns:   []
Storage Desc Params:
        field.delim     \t
        serialization.format    \t
Time taken: 0.072 seconds, Fetched: 40 row(s)
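For an EXTERNAL table, numFiles/totalSize in `describe formatted` are stored statistics, which can be stale or never computed, so a 0 there does not necessarily mean the directory is empty. A sketch of cross-checking the real size on HDFS and refreshing the stats (table name and location taken from the output above):

```shell
# Check what is actually on HDFS under the table location:
hdfs dfs -du -s -h /user/insighte/bee_data/bee_run_20170113_010001

# Recompute the table statistics so describe formatted reflects reality:
hive -e "ANALYZE TABLE bee_master_20170113_010001 COMPUTE STATISTICS;"
```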