Member since: 02-02-2021
Posts: 115
Kudos Received: 2
Solutions: 5

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 287 | 08-13-2021 09:44 AM |
|  | 1419 | 04-27-2021 04:23 PM |
|  | 614 | 04-26-2021 10:47 AM |
|  | 402 | 03-29-2021 06:01 PM |
|  | 1227 | 03-17-2021 04:53 PM |
10-21-2022
04:20 PM
Hi experts, I am trying to use the NiFi GetHDFS processor against my CDP cluster in Azure, and then a PutFile processor to download the file to my local filesystem. My NiFi is a standalone server, separate from the CDP cluster. Currently I am seeing this error in nifi-app.log:

2022-10-21 18:18:29,631 ERROR [Timer-Driven Process Thread-5] o.apache.nifi.processors.hadoop.GetHDFS GetHDFS[id=fab61e35-0183-1000-2eb0-4d511c15db51] Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to Unable to load custom token provider class. org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException: Unable to load custom token provider class.

Any help is much appreciated. Thanks,
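The TokenAccessProviderException means the class named in fs.azure.account.oauth.provider.type could not be loaded on the NiFi side. Since this NiFi is standalone, the core-site.xml that GetHDFS points at must carry the ABFS OAuth settings itself, and the hadoop-azure jar providing the token-provider class must be visible to the processor. A minimal sketch of the relevant properties, assuming service-principal (client-credentials) authentication; all YOUR_* values are placeholders, not taken from the post:

```xml
<!-- Sketch only: assumes OAuth client-credentials auth against ADLS Gen2. -->
<property>
  <name>fs.azure.account.auth.type</name>
  <value>OAuth</value>
</property>
<property>
  <name>fs.azure.account.oauth.provider.type</name>
  <value>org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider</value>
</property>
<property>
  <name>fs.azure.account.oauth2.client.id</name>
  <value>YOUR_CLIENT_ID</value>
</property>
<property>
  <name>fs.azure.account.oauth2.client.secret</name>
  <value>YOUR_CLIENT_SECRET</value>
</property>
<property>
  <name>fs.azure.account.oauth2.client.endpoint</name>
  <value>https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token</value>
</property>
```

If these properties are already correct, the usual remaining cause is that the hadoop-azure jar is not on the processor's classpath; the GetHDFS "Additional Classpath Resources" property can point at it.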
06-23-2022
12:54 PM
Hi experts, I was wondering what the best way is to troubleshoot an application or job that is taking longer than usual, e.g. a 5-minute job that is now taking an hour or more to complete. What are some things I should look at first? Or can someone walk me through the process? Thanks,
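A common first pass is to establish where the time goes: is the application still queued, is it short of containers, or are individual tasks slow? These standard YARN CLI commands cover that triage; the application ID is a placeholder, and they assume shell access to a cluster gateway node:

```shell
APP_ID=application_1234567890123_0001   # placeholder: the slow job's ID

# 1. Is the job RUNNING, or still ACCEPTED (i.e. queued, waiting for resources)?
yarn application -status "$APP_ID"

# 2. Is the queue or cluster saturated by other jobs?
yarn top

# 3. Pull the aggregated logs and look for long gaps between timestamps,
#    repeated task attempts, or GC/heap messages.
yarn logs -applicationId "$APP_ID" | less
```

Comparing counters and the timeline against a previous "normal" run of the same job (in the ResourceManager or job history UI) usually narrows the cause to scheduling delay, data skew, or a slow node.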
Tags:
- YARN
Labels:
- Apache YARN
06-23-2022
12:53 PM
Hi experts, I was wondering whether it is possible to tell, from the application logs alone, that an application is waiting for available resources from the cluster, assuming other running jobs are currently using the cluster's resources. If so, what would that look like in the application logs, or what should I look for? Thanks,
Labels:
- Apache YARN
01-12-2022
01:45 PM
Hi experts, our Hadoop cluster has an old version of log4j and we were wondering how to properly upgrade it. Can we just replace the log4j jar file with an upgraded version? Currently this is one of the log4j files in our Hadoop cluster: /usr/hdp/2.6.1.0-129/hadoop/client/log4j-1.2.17.jar Any help is much appreciated. Thanks,
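Simply dropping a newer jar in place is risky: log4j 1.x and 2.x have incompatible APIs, and an HDP stack ships many copies of the jar, often behind symlinks. A reasonable first step is an inventory of every log4j jar. The sketch below exercises the find command against a throwaway directory tree so it can run anywhere; on a real node you would point STACK_ROOT at /usr/hdp instead:

```shell
# Stand-in tree for /usr/hdp so the command can be demonstrated anywhere;
# on a cluster node, set STACK_ROOT=/usr/hdp instead of the mktemp fixture.
STACK_ROOT="$(mktemp -d)"
mkdir -p "$STACK_ROOT/hadoop/client" "$STACK_ROOT/zookeeper/lib"
touch "$STACK_ROOT/hadoop/client/log4j-1.2.17.jar" \
      "$STACK_ROOT/zookeeper/lib/log4j-1.2.17.jar"

# -L follows symlinks, which HDP uses heavily under /usr/hdp/current.
find -L "$STACK_ROOT" -name 'log4j*.jar' | sort
```

Note that services are compiled against the bundled log4j API, so the supported route on HDP is generally a stack upgrade rather than per-jar surgery.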
Tags:
- log4j
Labels:
- Apache Hadoop
12-16-2021
12:40 PM
Hi @willx , is there a way to tell whether a Hadoop path is a volume or a directory?
12-15-2021
04:47 PM
Hi experts, can someone please explain the difference between volumes and folders in Hadoop? Thanks,
Labels:
- Apache Hadoop
11-18-2021
08:28 AM
Is there any way to have Ambari skip the "distro-select" yum installation? Thanks,
11-17-2021
03:01 PM
Hi experts, I am having issues installing Hadoop 3.x with Ambari. Currently Ambari displays the error below:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 33, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
retry_count=params.agent_stack_retry_count)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
self._pkg_manager.install_package(package_name, self.__create_context())
File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
shell.repository_manager_executor(cmd, self.properties, context)
File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install distro-select', exited with code '1', message: 'Error: Nothing to do
'

Any help is much appreciated. Thanks,
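"Error: Nothing to do" from yum generally means that no enabled repository on that host offers a package named distro-select. These standard yum commands, run on the failing agent host, help confirm which case applies:

```shell
# Which repos are enabled on this agent host? The Ambari-managed stack repo
# should appear here; if it does not, the repo file was never laid down.
yum repolist enabled

# Does any enabled repo actually provide the package?
yum list available distro-select

# Clear stale metadata if the repo was recently added or changed.
yum clean all
```

The selector tool is normally named after the stack (e.g. "hdp-select" on HDP), so "distro-select" suggests a custom stack definition whose repository may simply not contain that package.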
Labels:
- Apache Ambari
- Apache Hadoop
10-16-2021
09:00 AM
1 Kudo
@Faizan_Ali Thanks for the explanation, that makes sense. So while an application is running, it writes container logs to a local directory, "${yarn.nodemanager.log-dirs}/application_${appid}", and then after the application completes, it aggregates the logs into yarn.nodemanager.remote-app-log-dir.
10-15-2021
06:05 PM
Hi experts, when trying to register a new version to upgrade my current cluster, I can see the newly registered version under Manage Ambari > Versions. When I click "Install on..." next to the version I want to install, it brings me to the Admin tab > Stack and Versions > Versions. However, I only see the version that is currently installed; the version I want to install does not appear on that page. Can someone please help or direct me to documentation to resolve this? Thanks,
Labels:
- Apache Ambari
- Apache Hadoop
10-15-2021
05:59 PM
Thanks @Faizan_Ali for the explanation. So in other words, once the job completes, these logs are stored in HDFS? Or where are the logs stored after the application completes? Are the yarn local and log dirs only for temporary use while a job is running on the Hadoop cluster? Thanks,
10-13-2021
06:15 AM
Hi experts, I just wanted to confirm my understanding of the yarn local and log dirs, or get help understanding them better. My understanding is that YARN downloads data to a local filesystem so that it is more easily accessible when a job runs, along with the logs for that particular application. I believe these are temporary files, as they are stored in HDFS after the job completes.

[root@test01 log]# ll /hadoop/yarn/
total 0
drwxr-xr-x. 6 yarn hadoop 78 Oct 13 08:09 local
drwxrwxr-x. 8 yarn hadoop 239 Oct 13 08:09 log

Can someone please confirm my understanding or help me understand this concept better? Also, what is best practice for mounting these directories: on the local filesystem, on a separate hard drive, or can they share a drive with one of the DataNode directories? Any help is much appreciated. Thanks,
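That matches how NodeManager storage works: yarn.nodemanager.local-dirs holds localized job resources and intermediate data, yarn.nodemanager.log-dirs holds container logs while the application runs, and with log aggregation enabled the logs are moved into HDFS when the application finishes. A sketch of the relevant yarn-site.xml properties; the values are illustrative, not prescriptive:

```xml
<!-- Illustrative values; a comma-separated list of one directory per
     physical disk spreads I/O and is the usual recommendation. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/hadoop/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/hadoop/yarn/log</value>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
</property>
```

On disk layout: sharing spindles with DataNode directories is common on small clusters, but where disks allow, one directory per physical disk in local-dirs and log-dirs avoids contention with HDFS I/O.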
Tags:
- YARN
Labels:
- Apache Hadoop
- Apache YARN
10-07-2021
08:52 AM
Hi @pvishnu Thanks for the response. This is a new cluster, and Ranger had previously been installed on the node, so I modified some of the data in the Ambari Postgres DB to make it think Ranger is already installed. Is there any documentation on what I should do to make sure everything is synced up? Thanks,
09-30-2021
09:51 AM
Hi experts, I am currently trying to start/stop Ranger via Ambari and got the error below.

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 231, in <module>
RangerAdmin().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 89, in start
import params
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/params.py", line 138, in <module>
cred_validator_file = format('{usersync_home}/native/credValidator.uexe')
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py", line 95, in format
return ConfigurationFormatter().format(format_string, args, **result)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py", line 59, in format
result_protected = self.vformat(format_string, args, all_params)
File "/usr/lib64/python2.7/string.py", line 549, in vformat
result = self._vformat(format_string, args, kwargs, used_args, 2)
File "/usr/lib64/python2.7/string.py", line 571, in _vformat
obj, arg_used = self.get_field(field_name, args, kwargs)
File "/usr/lib64/python2.7/string.py", line 632, in get_field
obj = self.get_value(first, args, kwargs)
File "/usr/lib64/python2.7/string.py", line 591, in get_value
return kwargs[key]
File "/usr/lib/python2.6/site-packages/resource_management/core/utils.py", line 63, in __getitem__
return self._convert_value(self._dict[name])
KeyError: 'usersync_home'

Any help is much appreciated. Thanks,
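The KeyError means that when params.py evaluated format('{usersync_home}/native/credValidator.uexe'), no variable named usersync_home had been defined for this stack, which typically points at a mismatch between the Ranger service scripts and the stack definition, or at missing Ranger configuration types after a partial install. One hedged way to see what configuration Ambari actually holds is its REST API; AMBARI_HOST, CLUSTER, and the tag value are placeholders:

```shell
# Which config types/tags does the cluster currently carry (ranger-env,
# ranger-ugsync-site, etc. should appear if Ranger is fully configured)?
curl -s -u admin "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER?fields=Clusters/desired_configs"

# Dump one config type/tag to check whether the usersync properties exist.
curl -s -u admin "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER/configurations?type=ranger-env&tag=TAG_FROM_PREVIOUS_CALL"
```

Comparing params.py against a stack where Ranger starts cleanly can also show where usersync_home is normally assigned.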
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Ranger
09-23-2021
05:56 AM
Ambari version 2.6.1. I was not adding HBase; I was trying to add an mpack, so I recreated the tar.gz file and installed the mpack via the command line. I may have been moving too fast and accidentally deleted something, I'm not sure, but I don't think I touched HBase.
09-22-2021
07:18 PM
Hi experts, I was doing some experimenting on this sandbox cluster, installing new components, and now my ambari-server process won't start. Below is the error message I see in ambari-server.log when I try to start the ambari-server process.

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0 Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 Sep 22, 2021 9:16:02 PM com.google.inject.internal.ProxyFactory <init> WARNING: Method [public void org.apache.ambari.server.orm.dao.HostVersionDAO.create(java.lang.Object)] is synthetic and is being intercepted by [org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor@44841b43]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all. Sep 22, 2021 9:16:02 PM com.google.inject.internal.ProxyFactory <init> WARNING: Method [public void org.apache.ambari.server.orm.dao.RepositoryVersionDAO.create(java.lang.Object)] is synthetic and is being intercepted by [org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor@44841b43]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all. Sep 22, 2021 9:16:03 PM com.google.inject.internal.ProxyFactory <init> WARNING: Method [public java.lang.Object org.apache.ambari.server.topology.tasks.ConfigureClusterTask.call() throws java.lang.Exception] is synthetic and is being intercepted by [org.apache.ambari.server.security.authorization.internal.InternalAuthenticationInterceptor@25d2f66]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all. An unexpected error occured during starting Ambari Server.
com.google.inject.ProvisionException: Guice provision errors: 1) Error injecting method, java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=test, serviceName=HBASE, componentName=HBASE_REST_SERVER, stackInfo=BGTP-1.0 at org.apache.ambari.server.state.cluster.ClustersImpl.loadClustersAndHosts(ClustersImpl.java:173) at org.apache.ambari.server.state.cluster.ClustersImpl.class(ClustersImpl.java:95) while locating org.apache.ambari.server.state.cluster.ClustersImpl while locating org.apache.ambari.server.state.Clusters for parameter 0 at org.apache.ambari.server.agent.HeartBeatHandler.<init>(HeartBeatHandler.java:115) at org.apache.ambari.server.agent.HeartBeatHandler.class(HeartBeatHandler.java:79) while locating org.apache.ambari.server.agent.HeartBeatHandler 1 error at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987) at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013) at org.apache.ambari.server.controller.AmbariServer.performStaticInjection(AmbariServer.java:899) at org.apache.ambari.server.controller.AmbariServer.run(AmbariServer.java:307) at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1068) Caused by: java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=test, serviceName=HBASE, componentName=HBASE_REST_SERVER, stackInfo=BGTP-1.0 at org.apache.ambari.server.state.ServiceComponentImpl.updateComponentInfo(ServiceComponentImpl.java:146) at org.apache.ambari.server.state.ServiceComponentImpl.<init>(ServiceComponentImpl.java:170) at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40) at com.google.inject.internal.ProxyFactory$ProxyConstructor.newInstance(ProxyFactory.java:260) at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85) at 
com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254) at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978) at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031) at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974) at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632) at com.sun.proxy.$Proxy19.createExisting(Unknown Source) at org.apache.ambari.server.state.ServiceImpl.<init>(ServiceImpl.java:163) at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40) at com.google.inject.internal.ProxyFactory$ProxyConstructor.newInstance(ProxyFactory.java:260) at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85) at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254) at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978) at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031) at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974) at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632) at com.sun.proxy.$Proxy15.createExisting(Unknown Source) at org.apache.ambari.server.state.cluster.ClusterImpl.loadServices(ClusterImpl.java:428) at org.apache.ambari.server.state.cluster.ClusterImpl.<init>(ClusterImpl.java:318) at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40) at com.google.inject.internal.ProxyFactory$ProxyConstructor.newInstance(ProxyFactory.java:260) at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85) at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254) at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978) at 
com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031) at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974) at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632) at com.sun.proxy.$Proxy11.create(Unknown Source) at org.apache.ambari.server.state.cluster.ClustersImpl.loadClustersAndHosts(ClustersImpl.java:181) at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:128) at org.apache.ambari.server.state.cluster.ClustersImpl$$FastClassByGuice$$7d58855f.invoke(<generated>) at com.google.inject.internal.cglib.reflect.$FastMethod.invoke(FastMethod.java:53) at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:56) at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90) at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110) at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94) at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254) at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46) at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031) at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) at com.google.inject.Scopes$1$1.get(Scopes.java:65) at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40) at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54) at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38) at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62) at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84) at 
com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254) at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46) at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031) at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) at com.google.inject.Scopes$1$1.get(Scopes.java:65) at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40) at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978) at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024) at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974) ... 4 more Exception in thread "main" org.apache.ambari.server.AmbariException: Error stopping the server at org.apache.ambari.server.controller.AmbariServer.stop(AmbariServer.java:880) at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1076) Any help is much appreciated. Thanks,
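The root cause in the trace is explicit: the Ambari database records an HBASE_REST_SERVER component for cluster "test", but the BGTP-1.0 stack definition has no such component, so ClustersImpl fails while loading cluster state and the server aborts. Because the server cannot start, the orphaned rows generally have to be removed from the Ambari database directly. A hedged sketch, assuming a PostgreSQL backend with the default "ambari" database and user; the table names are from the Ambari schema, and the database should be backed up first:

```shell
# Back up the Ambari DB before touching anything.
pg_dump -U ambari ambari > ambari-db-backup.sql

# Remove the component the stack does not recognize (host-level rows first).
psql -U ambari ambari <<'SQL'
DELETE FROM hostcomponentstate           WHERE component_name = 'HBASE_REST_SERVER';
DELETE FROM hostcomponentdesiredstate    WHERE component_name = 'HBASE_REST_SERVER';
DELETE FROM servicecomponentdesiredstate WHERE component_name = 'HBASE_REST_SERVER';
SQL
```

After the cleanup, try `ambari-server start` again; if it comes up, the same removal could have been done through the REST API instead.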
Labels:
- Apache Ambari
- Apache Hadoop
- Apache HBase
09-09-2021
09:25 AM
Hi experts,
When trying to test Tez, I get the following error.
[root@test01 ~]# hadoop --config /etc/hadoop/conf jar /usr/lib/tez/tez-examples*.jar orderedwordcount /tmp/tezsmokeinput/sample-tez-test /tmp/tezsmokeoutput/
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
21/09/09 11:23:56 INFO shim.HadoopShimsLoader: Trying to locate HadoopShimProvider for hadoopVersion=2.10.1, majorVersion=2, minorVersion=10
21/09/09 11:23:56 INFO shim.HadoopShimsLoader: Picked HadoopShim org.apache.tez.hadoop.shim.HadoopShim28, providerName=org.apache.tez.hadoop.shim.HadoopShim28Provider, overrideProviderViaConfig=null, hadoopVersion=2.10.1, majorVersion=2, minorVersion=10
21/09/09 11:23:57 INFO client.TezClient: Tez Client Version: [ component=tez-api, version=0.9.2, revision=81ad7b000cec0503b9a1d5521fdaf0129443b536, SCM-URL=scm:git:https://gitbox.apache.org/repos/asf/tez.git, buildTime=2020-11-28T13:10:15Z ]
21/09/09 11:23:57 INFO client.RMProxy: Connecting to ResourceManager at test01.com/10.49.4.11:8050
21/09/09 11:23:57 INFO client.AHSProxy: Connecting to Application History server at test02.com/10.49.4.12:10200
21/09/09 11:23:57 INFO examples.OrderedWordCount: Running OrderedWordCount
21/09/09 11:23:57 INFO client.TezClient: Submitting DAG application with id: application_1629805664278_0020
21/09/09 11:23:57 INFO client.TezClientUtils: Using tez.lib.uris value from configuration: /bgtp/apps/1.0/tez/tez.tar.gz
21/09/09 11:23:57 INFO client.TezClientUtils: Using tez.lib.uris.classpath value from configuration: null
21/09/09 11:23:58 INFO client.TezClient: Tez system stage directory hdfs://test/tmp/root/staging/.tez/application_1629805664278_0020 doesn't exist and is created
21/09/09 11:23:58 INFO conf.Configuration: resource-types.xml not found
21/09/09 11:23:58 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
21/09/09 11:23:58 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
21/09/09 11:23:58 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
21/09/09 11:23:58 INFO client.TezClient: Submitting DAG to YARN, applicationId=application_1629805664278_0020, dagName=OrderedWordCount, callerContext={ context=TezExamples, callerType=null, callerId=null }
21/09/09 11:23:58 INFO impl.YarnClientImpl: Submitted application application_1629805664278_0020
21/09/09 11:23:58 INFO client.TezClient: The url to track the Tez AM: http://test01.com:8088/proxy/application_1629805664278_0020/
21/09/09 11:23:59 INFO client.TezClient: App did not succeed. Diagnostics: Application application_1629805664278_0020 failed 2 times due to AM Container for appattempt_1629805664278_0020_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2021-09-09 11:23:59.041]Exception from container-launch.
Container id: container_1629805664278_0020_02_000001
Exit code: 1
[2021-09-09 11:23:59.042]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
[2021-09-09 11:23:59.042]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
For more detailed output, check the application tracking page: http://test01.com:8088/cluster/app/application_1629805664278_0020 Then click on links to logs of each attempt.
. Failing the application.
21/09/09 11:23:59 INFO client.DAGClientImpl: DAG completed. FinalState=FAILED
21/09/09 11:23:59 INFO examples.OrderedWordCount: DAG diagnostics: [Application application_1629805664278_0020 failed 2 times due to AM Container for appattempt_1629805664278_0020_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2021-09-09 11:23:59.041]Exception from container-launch.
Container id: container_1629805664278_0020_02_000001
Exit code: 1
[2021-09-09 11:23:59.042]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
[2021-09-09 11:23:59.042]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
For more detailed output, check the application tracking page: http://test01.com:8088/cluster/app/application_1629805664278_0020 Then click on links to logs of each attempt.
. Failing the application.]
[root@test01 ~]#
Any help is much appreciated.
Thanks,
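"Could not find or load main class org.apache.tez.dag.app.DAGAppMaster", combined with tez.lib.uris=/bgtp/apps/1.0/tez/tez.tar.gz from the log above, usually means the tarball at that HDFS path is missing, truncated, or from a build that does not include the Tez runtime jars. A hedged check-and-fix sketch; the local tarball path is an assumption about this installation's layout:

```shell
# Does the tarball referenced by tez.lib.uris exist in HDFS, with a sane size?
hdfs dfs -ls /bgtp/apps/1.0/tez/tez.tar.gz

# If missing or stale, re-upload a tarball matching the installed Tez 0.9.2.
# The local source path below is an assumption, not taken from the post.
hdfs dfs -mkdir -p /bgtp/apps/1.0/tez
hdfs dfs -put -f /usr/lib/tez/lib/tez.tar.gz /bgtp/apps/1.0/tez/
```

It is also worth confirming the tarball actually contains the tez-dag jar (which provides DAGAppMaster), e.g. `tar -tzf tez.tar.gz | grep tez-dag` on the local copy.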
Labels:
- Apache Ambari
- Apache Tez
09-09-2021
08:54 AM
Yes, I have copied tables from another cluster to this current cluster.
09-08-2021
09:12 PM
Also here is the application logs. [hive@sunnymaster01 ~]$ cat application_1629805664278_0004.log End of LogType:prelaunch.err ****************************************************************************** Container: container_1629805664278_0004_02_000001 on sunnydn01.dmicorp.com_45454 LogAggregationType: AGGREGATED ================================================================================ LogType:prelaunch.out LogLastModifiedTime:Wed Sep 08 22:52:35 -0500 2021 LogLength:70 LogContents: Setting up env variables Setting up job resources Launching container End of LogType:prelaunch.out ****************************************************************************** Container: container_1629805664278_0004_02_000001 on sunnydn01.dmicorp.com_45454 LogAggregationType: AGGREGATED ================================================================================ LogType:stderr LogLastModifiedTime:Wed Sep 08 22:52:35 -0500 2021 LogLength:77 LogContents: Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster End of LogType:stderr *********************************************************************** Container: container_1629805664278_0004_02_000001 on sunnydn01.dmicorp.com_45454 LogAggregationType: AGGREGATED ================================================================================ LogType:stdout LogLastModifiedTime:Wed Sep 08 22:52:35 -0500 2021 LogLength:723 LogContents: Heap PSYoungGen total 149504K, used 5140K [0x00000000e6700000, 0x00000000f0d80000, 0x0000000100000000) eden space 128512K, 4% used [0x00000000e6700000,0x00000000e6c05208,0x00000000ee480000) from space 20992K, 0% used [0x00000000ef900000,0x00000000ef900000,0x00000000f0d80000) to space 20992K, 0% used [0x00000000ee480000,0x00000000ee480000,0x00000000ef900000) ParOldGen total 341504K, used 0K [0x00000000b3400000, 0x00000000c8180000, 0x00000000e6700000) object space 341504K, 0% used [0x00000000b3400000,0x00000000b3400000,0x00000000c8180000) Metaspace used 2971K, capacity 4550K, 
committed 4864K, reserved 1056768K class space used 316K, capacity 386K, committed 512K, reserved 1048576K End of LogType:stdout *********************************************************************** End of LogType:prelaunch.err ****************************************************************************** Container: container_1629805664278_0004_01_000001 on sunnydn05.dmicorp.com_45454 LogAggregationType: AGGREGATED ================================================================================ LogType:prelaunch.out LogLastModifiedTime:Wed Sep 08 22:52:35 -0500 2021 LogLength:70 LogContents: Setting up env variables Setting up job resources Launching container End of LogType:prelaunch.out ****************************************************************************** Container: container_1629805664278_0004_01_000001 on sunnydn05.dmicorp.com_45454 LogAggregationType: AGGREGATED ================================================================================ LogType:stderr LogLastModifiedTime:Wed Sep 08 22:52:35 -0500 2021 LogLength:77 LogContents: Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster End of LogType:stderr *********************************************************************** Container: container_1629805664278_0004_01_000001 on sunnydn05.dmicorp.com_45454 LogAggregationType: AGGREGATED ================================================================================ LogType:stdout LogLastModifiedTime:Wed Sep 08 22:52:35 -0500 2021 LogLength:723 LogContents: Heap PSYoungGen total 149504K, used 5140K [0x00000000e6700000, 0x00000000f0d80000, 0x0000000100000000) eden space 128512K, 4% used [0x00000000e6700000,0x00000000e6c05208,0x00000000ee480000) from space 20992K, 0% used [0x00000000ef900000,0x00000000ef900000,0x00000000f0d80000) to space 20992K, 0% used [0x00000000ee480000,0x00000000ee480000,0x00000000ef900000) ParOldGen total 341504K, used 0K [0x00000000b3400000, 0x00000000c8180000, 0x00000000e6700000) object space 341504K, 0% 
used [0x00000000b3400000,0x00000000b3400000,0x00000000c8180000) Metaspace used 2973K, capacity 4550K, committed 4864K, reserved 1056768K class space used 316K, capacity 386K, committed 512K, reserved 1048576K End of LogType:stdout *********************************************************************** Any help is much appreciated. Thanks,
09-08-2021
03:10 PM
Hi experts, I ran a hive query using tez via beeline to Join tables and got the below error. 2021-09-08T17:07:55,932 INFO [HiveServer2-Background-Pool: Thread-140] hooks.ATSHook: Created ATS Hook 2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Query ID = hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527 2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Total jobs = 1 2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Launching Job 1 out of 1 2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Starting task [Stage-1:MAPRED] in serial mode 2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionPoolManager: QueueName: null nonDefaultUser: false defaultQueuePool: null hasInitialSessions: false 2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionPoolManager: Created new tez session for queue: null with session id: 1b689cf2-9a2e-4afc-96a7-bdeef34ed887 2021-09-08T17:07:55,946 INFO [HiveServer2-Background-Pool: Thread-140] ql.Context: New scratch dir is hdfs://sunny/tmp/hive/hive/334e90cf-525e-47f2-bf12-b227417647c2/hive_2021-09-08_17-07-55_686_3502860413990358095-7 2021-09-08T17:07:55,949 INFO [HiveServer2-Background-Pool: Thread-140] exec.Task: Tez session hasn't been created yet. 
Opening session

2021-09-08T17:07:55,949 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionState: User of session id 1b689cf2-9a2e-4afc-96a7-bdeef34ed887 is hive
2021-09-08T17:07:55,952 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Localizing resource because it does not exist: file:/usr/bgtp/current/ext/hive to dest: hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive
2021-09-08T17:07:55,952 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Looks like another thread or process is writing the same file
2021-09-08T17:07:55,953 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Waiting for the file hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive (5 attempts, with 5000ms interval)
2021-09-08T17:07:55,978 INFO [ATS Logger 0] hooks.ATSHook: ATS domain created:hive_334e90cf-525e-47f2-bf12-b227417647c2(anonymous,hive,anonymous,hive)
2021-09-08T17:07:55,980 INFO [ATS Logger 0] hooks.ATSHook: Received pre-hook notification for :hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:08:20,967 ERROR [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Could not find the jar that was being uploaded
2021-09-08T17:08:20,967 ERROR [HiveServer2-Background-Pool: Thread-140] exec.Task: Failed to execute tez graph.
java.io.IOException: Previous writer likely failed to write hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive. Failing because I am unlikely to write too.
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1028) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:471) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:247) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:703) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:303) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:168) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) ~[hive-service-2.3.6.jar:2.3.6]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926) ~[hadoop-common-2.10.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) ~[hive-service-2.3.6.jar:2.3.6]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2021-09-08T17:08:20,968 INFO [HiveServer2-Background-Pool: Thread-140] hooks.ATSHook: Created ATS Hook
2021-09-08T17:08:20,969 INFO [ATS Logger 0] hooks.ATSHook: Received post-hook notification for :hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:08:20,969 ERROR [HiveServer2-Background-Pool: Thread-140] ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
2021-09-08T17:08:20,969 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Completed executing command(queryId=hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527); Time taken: 25.04 seconds
2021-09-08T17:08:20,984 ERROR [HiveServer2-Background-Pool: Thread-140] operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) ~[hive-service-2.3.6.jar:2.3.6]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926) ~[hadoop-common-2.10.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) ~[hive-service-2.3.6.jar:2.3.6]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2021-09-08T17:08:26,452 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,452 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,476 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,476 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,477 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,477 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,480 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,481 INFO [c5f4fd3b-f20e-4fcb-bcd6-245bb07a3c58 HiveServer2-Handler-Pool: Thread-63] operation.OperationManager: Closing operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=3ebe86bb-7347-4350-950e-0e202a1b6f9b]
2021-09-08T17:08:26,481 INFO [c5f4fd3b-f20e-4fcb-bcd6-245bb07a3c58 HiveServer2-Handler-Pool: Thread-63] exec.ListSinkOperator: Closing operator LIST_SINK[35]
2021-09-08T17:08:26,508 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63

Any help is much appreciated. Thanks,
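P.S. In case it helps, this is how I have been reconstructing the session file path that the log is waiting on (the NameNode URI, paths, and session id are copied from the log above; the commented hdfs commands are just what I was planning to try against a possibly-stale copy, not a confirmed fix):

```shell
# Build the Tez session file path from the session id in the log.
SESSION_ID="1b689cf2-9a2e-4afc-96a7-bdeef34ed887"
SESSION_FILE="/tmp/hive/hive/_tez_session_dir/${SESSION_ID}/hive"
# On the cluster I would then inspect/remove the possibly-stale copy, e.g.:
#   hdfs dfs -ls "hdfs://sunny${SESSION_FILE}"
#   hdfs dfs -rm -r "hdfs://sunny/tmp/hive/hive/_tez_session_dir/${SESSION_ID}"
echo "${SESSION_FILE}"
```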
... View more
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
08-31-2021
08:27 AM
Hi experts, In my current cluster some datanodes have only 2 disks while others have 3. Is it OK for datanodes to have different numbers of disks even though the datanode configuration lists 3 data directories? Also, is it OK to mix disk sizes, with some disks at 2 TB and some at 3 TB? Any advice is greatly appreciated. Thanks,
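To make the question concrete, this is roughly what I have in mind for hdfs-site.xml (the /grid/N mount points are made-up examples; my assumption is that dfs.datanode.failed.volumes.tolerated would let the 2-disk nodes keep running when the third directory is absent, but please correct me if that's wrong):

```xml
<!-- Hypothetical hdfs-site.xml fragment; /grid/0..2 are example mount points -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data</value>
</property>
<property>
  <!-- assumption: lets a datanode tolerate one missing/failed volume -->
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
```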
... View more
Labels:
- Apache Ambari
- Apache Hadoop
- HDFS
08-13-2021
09:44 AM
OK, never mind: it was a firewall issue. Everything is working now. Thanks,
... View more
08-13-2021
09:11 AM
Hi experts, We recently changed the IP address of our Ambari server in our dev environment. The cluster seems to be up and running properly; however, Ambari is not recognizing which NameNode is active and which is standby. Also, some users are unable to access the Ambari Hive view. This is the error message when trying to access the Hive view via Ambari:

USER HOME Check
Message: test01.dmicorp.com:50070: No route to host (Host unreachable)

Any help is much appreciated. Thanks,
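For reference, this is the quick check I am running from the Ambari host ("No route to host" made me suspect routing/firewall rather than DNS; the /dev/tcp probe is a bash-ism and only a sketch):

```shell
# Probe the NameNode HTTP port named in the error message.
HOST="test01.dmicorp.com"
PORT=50070
# On the Ambari host I would run (commented out, needs the real network):
#   timeout 5 bash -c "</dev/tcp/${HOST}/${PORT}" && echo "open" || echo "blocked"
echo "probing ${HOST}:${PORT}"
```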
... View more
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
08-12-2021
06:23 AM
Thanks, it worked.
... View more
08-11-2021
10:15 AM
Hi experts, As the root user, I am trying to delete a directory in HDFS that was created by root. However, when I try to delete it, I get "Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x". Why does it say permission denied on "/user" when I am trying to delete the directory "/tmp/root/testdirectory"? The error output is below.

[root@test02 ~]# hdfs dfs -ls /tmp/root/
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Found 2 items
drwxrwxrwx - root hdfs 0 2021-08-09 20:35 /tmp/root/testdirectory
-rw-r--r-- 3 root hdfs 0 2021-08-10 13:54 /tmp/root/test
[root@test02 ~]# hdfs dfs -rmr /tmp/root/testdirectory
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
rmr: DEPRECATED: Please use '-rm -r' instead.
21/08/11 12:08:30 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://test/user/root/.Trash/Current/tmp/root
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2498)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2471)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1243)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1240)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1257)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1232)
    at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:147)
    at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
    at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
    at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:153)
    at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:118)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:327)
    at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:299)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:281)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:265)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:317)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:380)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
    at org.apache.hadoop.ipc.Client.call(Client.java:1495)
    at org.apache.hadoop.ipc.Client.call(Client.java:1394)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:587)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2496)
    ... 21 more
rmr: Failed to move to trash: hdfs://test/tmp/root/testdirectory: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
[root@test02 ~]#

Any help is much appreciated. Thanks,
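P.S. After staring at the trace some more, I think the mkdirs call explains it: the delete goes through trash, the trash move wants to create a directory under /user/root, and since /user is owned by hdfs:hdfs with mode 755, root cannot create it there. A small sketch of the path being built (the workarounds in the comments are my guesses, not verified):

```shell
# HDFS trash moves a deleted path to /user/<user>/.Trash/Current/<original path>,
# so deleting /tmp/root/testdirectory as root needs write access under /user.
USER_NAME="root"
DELETED_PATH="/tmp/root/testdirectory"
TRASH_TARGET="/user/${USER_NAME}/.Trash/Current${DELETED_PATH}"
# Possible workarounds (hypothetical, untested):
#   hdfs dfs -rm -r -skipTrash /tmp/root/testdirectory
#   sudo -u hdfs hdfs dfs -mkdir -p /user/root
#   sudo -u hdfs hdfs dfs -chown root:root /user/root
echo "${TRASH_TARGET}"
```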
... View more
Labels:
- Apache Hadoop
- HDFS
08-09-2021
10:19 AM
Hi experts, We are trying to copy Hive tables from one cluster to another to do some testing. What is the proper way of doing this? Is it possible to distcp the table data at the HDFS level first, and then run a Hive query on the destination so that those tables are recognized by Hive? Any help is much appreciated. Thanks,
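To make the question concrete, here is the rough sequence I had in mind (the NameNode URIs, warehouse paths, and database/table names are placeholders; I'm also not sure whether MSCK REPAIR is the right final step, which is really what I'm asking):

```shell
# Hypothetical distcp-then-register sketch for one table.
SRC="hdfs://src-nn:8020/apps/hive/warehouse/mydb.db/mytable"
DST="hdfs://dst-nn:8020/apps/hive/warehouse/mydb.db/mytable"
# Copy the data files, then on the destination cluster recreate the table DDL
# and (for a partitioned table) pick up the copied partitions:
#   hadoop distcp "${SRC}" "${DST}"
#   beeline -u "jdbc:hive2://dst-hs2:10000" -e "MSCK REPAIR TABLE mydb.mytable;"
echo "distcp ${SRC} -> ${DST}"
```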
... View more
Labels:
- Apache Hadoop
- Apache Hive
- HDFS
06-28-2021
11:31 AM
@Scharan Thanks for the response. So I added this to the metainfo.xml:

<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      ...
      <quickLinksConfigurations-dir>quicklinks</quickLinksConfigurations-dir>
      <quickLinksConfigurations>
        <quickLinksConfiguration>
          <fileName>quicklinks.json</fileName>
          <default>true</default>
        </quickLinksConfiguration>
      </quickLinksConfigurations>
    </service>
  </services>
</metainfo>

And this is the quicklinks.json file:

{
  "name": "default",
  "description": "default quick links configuration",
  "configuration": {
    "protocol": {
      "type": "https",
      "checks": [
        {
          "property": "dfs.http.policy",
          "desired": "HTTPS_ONLY",
          "site": "hdfs-site"
        }
      ]
    },
    "links": [
      {
        "name": "namenode_ui",
        "label": "NameNode UI",
        "url": "%@://%@:%@",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_logs",
        "label": "NameNode Logs",
        "url": "%@://%@:%@/logs",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_jmx",
        "label": "NameNode JMX",
        "url": "%@://%@:%@/jmx",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "Thread Stacks",
        "label": "Thread Stacks",
        "url": "%@://%@:%@/stacks",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      }
    ]
  }
}

I have restarted ambari-server but still do not see the quicklinks in the Ambari UI. Any help is much appreciated. Thanks,
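One thing I did check: whether the quicklinks.json is even valid JSON, since I gather a malformed file fails silently (the path below is my assumption about where the stack definition lives on the Ambari server, so adjust as needed):

```shell
# Hypothetical validation step before restarting ambari-server:
QL="/var/lib/ambari-server/resources/stacks/HDP/2.6/services/HDFS/quicklinks/quicklinks.json"
#   python -m json.tool "${QL}" > /dev/null && echo "valid JSON"
echo "validate ${QL}"
```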
... View more
06-25-2021
04:33 PM
Hi experts, I have deployed a new cluster and our dev and prod clusters currently have quicklinks for HDFS. How do I add a quicklink to HDFS in ambari? Which metainfo.xml file do I modify to add the quicklinks in HDFS? Can someone give me the location of the metainfo.xml file? Thanks,
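In case it helps frame the question, this is how I have been searching for candidate files (the /var/lib/ambari-server/resources path is my assumption of where Ambari keeps stack definitions):

```shell
# Hypothetical search for the HDFS service's metainfo.xml on the Ambari server:
STACKS_DIR="/var/lib/ambari-server/resources/stacks"
#   find "${STACKS_DIR}" -path '*HDFS*' -name metainfo.xml
echo "find ${STACKS_DIR} -path '*HDFS*' -name metainfo.xml"
```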
... View more
Labels:
- Apache Ambari
- Apache Hadoop
06-03-2021
03:54 PM
Also, Squirrel does seem to connect to the dev cluster; it just times out when running a query such as "show databases". I noticed that if Squirrel stays connected long enough, the query eventually returns results instead of timing out. Per Cloudera's documentation (https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hive_metastore_configure.html#concept_jsw_bnc_rp), a minimum of 4 dedicated cores is recommended for HS2 and another 4 for the Hive metastore. The server that hosts HS2 and the metastore has only 8 cores in total. Could this be a reason for the performance issue? Any help on this is much appreciated. Thanks,
... View more
05-31-2021
09:05 PM
Yeah, we currently have 2 HS2 instances. For some reason production works fine with Squirrel; it is our dev cluster that times out after running simple queries such as "show databases". Beeline works fine on the dev cluster. The only difference I can think of is that our dev cluster uses an external MySQL server, whereas on the production cluster the MySQL server is installed on one of the nodes. Am I missing some Squirrel drivers or something? I'm wondering why only Squirrel seems to have issues running queries against our dev HiveServer2. Any help is much appreciated. Thanks,
... View more