Member since: 04-10-2021
Posts: 18
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
233 | 05-26-2022 03:00 AM |
01-24-2023
09:56 AM
Even after passing the executor memory and driver memory as 1 GB, the maximum heap size is still being rounded to 512 MB:

spark-shell --conf spark.executor.memory=1g --conf spark.driver.memory=1g
Warning: Maximum heap size rounded up to 512 MB

Is anything missing?
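One way to rule out a conflicting default overriding the command-line flags is to pin the same values in spark-defaults.conf instead; a minimal sketch (same settings as the --conf flags above):

```
# conf/spark-defaults.conf
spark.driver.memory    1g
spark.executor.memory  1g
```

It is also worth checking spark-env.sh and any SPARK_DRIVER_MEMORY / SPARK_EXECUTOR_MEMORY environment variables for values that could take precedence.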
Labels:
- Apache Hadoop
- Apache Hive
- Apache Spark
12-12-2022
10:22 PM
Hi All, I am using PostgreSQL for the Hive metastore. How can we configure PostgreSQL HA with Hive, so that we get true HA on the Hive Metastore without changing the connection string manually?
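One option, assuming the metastore uses a recent PostgreSQL JDBC driver (42.x), is the driver's multi-host connection URL: both servers are listed, and the driver connects to whichever currently accepts writes. A hive-site.xml sketch with hypothetical hostnames pg-node1 and pg-node2:

```xml
<!-- hive-site.xml: hypothetical hostnames; substitute the real HA pair -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:postgresql://pg-node1:5432,pg-node2:5432/metastore?targetServerType=primary</value>
</property>
```

With this, the connection string never has to be edited on failover; the driver retries the other host when the primary changes.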
12-08-2022
01:11 AM
Hi All, we have our Hive metastore database on PostgreSQL and want to migrate it to MySQL. Is there any easy way to do it? Thanks
11-27-2022
09:51 PM
I am getting INFO messages like the one below while reading and writing files in HDFS. Can anyone explain the meaning of this message, and will it cause any issues? I have 3 NameNodes (open-source Hadoop) in HA.

INFO retry.RetryInvocationHandler: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
  at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:108)
  at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2094)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1550)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3342)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1208)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1042)
  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
  at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
  at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)
, while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over hostname/ip:8020 after 1 failover attempts. Trying to failover after sleeping for 1195ms.

Thanks
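This INFO line is the HDFS client's normal HA retry behavior: it contacted a NameNode that is currently standby, got a StandbyException, and is failing over to the next configured NameNode. The client-side loop can be sketched roughly in Python (hypothetical names, not Hadoop's actual RetryInvocationHandler code):

```python
import time

class StandbyError(Exception):
    """Stand-in for org.apache.hadoop.ipc.StandbyException."""

def get_file_info(namenodes, path, max_failovers=15, sleep_ms=1000):
    """Try NameNodes round-robin; on StandbyError, sleep and fail over to the next."""
    failovers = 0
    while True:
        rpc = namenodes[failovers % len(namenodes)]
        try:
            return rpc(path)  # the getFileInfo RPC
        except StandbyError:
            failovers += 1
            if failovers > max_failovers:
                raise
            # this is where the client logs
            # "Trying to failover after sleeping for ...ms"
            time.sleep(sleep_ms / 1000.0)

def standby(path):
    raise StandbyError("Operation category READ is not supported in state standby")

def active(path):
    return {"path": path, "state": "active"}

# first NameNode in the list is standby, so exactly one failover happens
info = get_file_info([standby, active], "/user/data", sleep_ms=10)
```

The request ultimately succeeds against the active NameNode, which is why the message is logged at INFO rather than ERROR; it only becomes a problem if the failover attempts are exhausted.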
Labels:
- Apache Hadoop
- HDFS
11-13-2022
10:35 PM
We have a MySQL HA cluster. In case of failover, for the secondary master to take the place of the failed master, the Hive metastore connection string would need to be changed to point to the new master and the services restarted. Is it possible to have true HA on the Hive Metastore without changing the connection string manually?
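MySQL Connector/J supports a multi-host failover URL that may avoid the manual edit: list both masters, and the driver tries them in order when the first becomes unreachable. A hive-site.xml sketch with hypothetical hostnames mysql-primary and mysql-secondary:

```xml
<!-- hive-site.xml: hypothetical hostnames; substitute the real masters -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://mysql-primary:3306,mysql-secondary:3306/metastore?failOverReadOnly=false</value>
</property>
```

failOverReadOnly=false keeps the connection writable after failover, which the metastore needs; a floating VIP or proxy in front of the cluster is the other common approach.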
10-04-2022
05:11 AM
Is there any way other than distcp to migrate the Hortonworks HDP3 cluster to Apache Hadoop 3.3.2? Is it possible to upgrade by updating the jars and pointing the older metadata to the newer Hadoop stack? Thanks
07-17-2022
09:48 PM
Hi everyone. I am getting stale-DataNode alerts every hour: if the last point of contact between a DataNode and the NameNode is more than 30 s ago, we get these alerts. I am not able to find the root cause of this slowness. I have a 32-core system, but when this alert is generated, htop shows high HDFS usage even though not all cores are 100% utilized. DataNode Health Summary: DataNode Health: [Live=5, Stale=1, Dead=0]. Please suggest the changes required to resolve this.
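For context, the 30 s threshold comes from the NameNode's dfs.namenode.stale.datanode.interval setting (default 30000 ms), shown below as a sketch. Raising it only hides the symptom if heartbeats are genuinely delayed; the underlying cause (GC pauses on the DataNode, disk or network saturation) still needs to be found.

```xml
<!-- hdfs-site.xml: 30000 ms is the default stale threshold -->
<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>
</property>
```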
Labels:
- Apache Ambari
- HDFS
05-31-2022
03:04 AM
Hi @ChethanYM, thanks for the help. Gracefully shutting down the region server triggers the HBase Master to perform a bulk assignment of all regions hosted by that region server. Regards, KPG1
05-26-2022
03:20 AM
Can we directly decommission the HBase region server after adding the new region server to HBase? Will there be any data loss?
05-26-2022
03:00 AM
1 Kudo
I solved this by setting the value * for the hadoop.proxyuser.&lt;user&gt;.hosts property in core-site.xml.
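For reference, the proxy-user properties live in core-site.xml and are set per impersonating user; a sketch with a hypothetical service user "hive" (substitute the actual user from the error message):

```xml
<!-- core-site.xml: "hive" is a hypothetical proxy user -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```

Note that * allows impersonation from any host and for any group; in a secured cluster it is safer to list specific hosts and groups instead.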
05-26-2022
02:57 AM
I have 5 region servers in my cluster and have added an additional 5 region servers. Can I directly decommission the older 5 region servers with the help of the Ambari server? Will it cause any data loss in HBase?
Labels:
- Apache Hadoop
- Apache HBase
12-29-2021
09:54 PM
I am facing an issue while starting the ResourceManager using Ambari, but I am able to start it using the CLI. When starting via Ambari, no logs are generated in the /var/log/hadoop-yarn/ location. Please find the Ambari logs below.

Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 995, in restart
    self.status(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/resourcemanager.py", line 150, in status
    check_process_status(status_params.resourcemanager_pid_file)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 43, in check_process_status
    raise ComponentIsNotRunning()
ComponentIsNotRunning

The above exception was the cause of the following exception:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/resourcemanager.py", line 261, in <module>
    Resourcemanager().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1006, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/resourcemanager.py", line 135, in start
    self.configure(env) # FOR SECURITY
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/resourcemanager.py", line 62, in configure
    yarn(name='resourcemanager')
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/yarn.py", line 369, in yarn
    setup_atsv2_backend(name,config_dir)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/yarn.py", line 637, in setup_atsv2_backend
    setup_system_services(config_dir)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/yarn.py", line 769, in setup_system_services
    group=params.user_group
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 677, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 674, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 373, in action_delayed
    self.action_delayed_for_nameservice(None, action_name, main_resource)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 403, in action_delayed_for_nameservice
    self._create_resource()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 419, in _create_resource
    self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 534, in _create_file
    self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 214, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 282, in _run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/get_user_call_output.py", line 62, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml -H 'Content-Type: application/octet-stream' 'http://masternode:50070/webhdfs/v1/user/yarn-ats/3.1.0.0-78/core-site.xml?op=CREATE&user.name=hdfs&overwrite=True' 1>/tmp/tmpTdj1DW 2>/tmp/tmpJ4DHHk' returned 52.
curl: (52) Empty reply from server 100

Thanks
Labels:
- Apache Ambari
- Apache Hadoop
- Apache YARN
08-31-2021
10:33 PM
Is it safe to delete older Ranger audit logs (e.g. /ranger/audit/hdfs/20200901/hdfs_ranger_audit_xyz.log)? My nodes are running out of space.
Labels:
- Apache Ranger
07-01-2021
11:36 AM
I resolved this by using the embedded ZooKeeper; I was getting the same error with the external one. I started the distributed NiFi service on 3 nodes with embedded ZooKeeper.
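For anyone hitting the same error, the embedded-ZooKeeper setup comes down to a few nifi.properties entries on each node; a sketch with hypothetical hostnames nifi-node1..3:

```
# nifi.properties: hypothetical hostnames; substitute the real nodes
nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=nifi-node1:2181,nifi-node2:2181,nifi-node3:2181
nifi.cluster.is.node=true
```

Each node also needs an entry in the embedded zookeeper.properties server list and a matching myid file, per the NiFi clustering guide.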
06-14-2021
10:53 AM
I am also facing the same issue. Did you manage to resolve it?
06-08-2021
09:07 PM
I am also facing the same issue. After using an external ZooKeeper for distributed NiFi, I am getting this suspended-and-connected error. Please let me know if you find any solution.