Member since: 07-25-2016 | Posts: 28 | Kudos Received: 74 | Solutions: 0
09-29-2017
09:46 PM
2 Kudos
PROBLEM: A Sqoop export fails with the following error:
17/08/16 17:17:43 WARN conf.HiveConf: HiveConf of name hive.semantic.analyzer.factory.impl does not exist
17/08/16 17:17:43 INFO hive.metastore: Trying to connect to metastore with URI thrift://XYZ:9083
17/08/16 17:17:47 WARN hive.metastore: set_ugi() not successful, Likely cause: new client talking to old server.
Continuing without it.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:3692)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:3678)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:442)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:236)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:181)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.(HiveClientCache.java:330)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1536)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:89)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:135)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:121)
at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230)
at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
at org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:343)
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:421)
at org.apache.sqoop.manager.OracleManager.exportTable(OracleManager.java:455)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 45 more
17/08/16 17:17:47 INFO hive.metastore: Connected to metastore.
17/08/16 17:17:47 ERROR hive.log: Got exception: org.apache.thrift.transport.TTransportException java.net.SocketException: Broken pipe
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_databases(ThriftHiveMetastore.java:712)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:704)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1026)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:367)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:155)
at com.sun.proxy.$Proxy5.isOpen(Unknown Source)
at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:205)
at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:343)
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:421)
at org.apache.sqoop.manager.OracleManager.exportTable(OracleManager.java:455)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
at org.apache.sqoop.Sqoop.main(Sqoop.java:244)

ROOT CAUSE: This issue occurs in the following scenarios:
1. An incorrect HADOOP_CLASSPATH is set in the shell script that runs the Sqoop export command.
2. The hive-site.xml in the user's home directory is not up to date.

RESOLUTION: To resolve this issue, do the following:
1. Comment out the HADOOP_CLASSPATH setting in the shell script.
2. Update the hive-site.xml in the user's home directory on the edge node.
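For illustration only, a corrected wrapper script might look like the following sketch; the connection string, table, and HCatalog names are placeholders, not taken from the original job. The key point is that HADOOP_CLASSPATH is no longer exported, so Sqoop assembles its own classpath and does not pick up mismatched Hive/Thrift client jars:

#!/bin/bash
# export HADOOP_CLASSPATH=/opt/custom/jars/*    # <-- commented out; this was pulling in incompatible jars
sqoop export \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott \
  --password-file /user/scott/.oracle.pw \
  --table TARGET_TABLE \
  --hcatalog-database default \
  --hcatalog-table source_table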
09-29-2017
09:38 PM
2 Kudos
PROBLEM: During a rolling HDP upgrade, package installation fails with the following error:
2017-08-10 22:29:01,914 - Package Manager failed to install packages. Error: Execution of
'/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-client' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY:
Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 376, in install_packages
retry_count=agent_stack_retry_count
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
line 58, in action_upgrade
self.upgrade_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
line 56, in upgrade_package
return self.install_package(name, use_repos, skip_repos, is_upgrade)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
line 51, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
line 86, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
line 98, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-client' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error,
run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
2017-08-10 22:29:02,293 - Could not install packages. Error: Execution of
'/usr/bin/yum -d 0 -e 0 check dependencies' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 166, in actionexecute
ret_code = self.install_packages(package_list)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 400, in install_packages
if not verifyDependencies():
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/packages_analyzer.py",
line 311, in verifyDependencies
code, out = rmf_shell.checked_call(cmd, sudo=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 check dependencies' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error,
run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 469, in
InstallPackages().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 179, in actionexecute
raise Fail("Failed to distribute repositories/install packages")
resource_management.core.exceptions.Fail: Failed to distribute repositories/install packages

ROOT CAUSE: This issue occurs because of a corrupt RPM database.

RESOLUTION: To resolve this issue, execute the following commands:
# Command 1: remove the stale Berkeley DB environment files
rm -f /var/lib/rpm/__db*
# Command 2: rebuild the RPM database
rpm --rebuilddb
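After the rebuild, it can help to confirm that the RPM database is readable and that the same dependency check the Ambari agent runs now succeeds, before retrying the upgrade step:

rpm -qa > /dev/null && echo "rpmdb readable"
/usr/bin/yum -d 0 -e 0 check dependencies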
09-29-2017
09:36 PM
2 Kudos
PROBLEM: When installing RPMs on a few nodes during an HDP upgrade, the installation fails with the following error:
rpmdb: /var/lib/rpm/Packages: unexpected file format or type
error: cannot open Packages index using db3 - Invalid argument (22)

ROOT CAUSE: This issue occurs when the RPM database is corrupt.

RESOLUTION: To resolve this issue, do the following:
1. Take a backup of the existing RPM database:
mkdir /root/rpmdb.bak
cp -rp /var/lib/rpm/__db* /root/rpmdb.bak/
2. Remove the corrupted RPM database files:
cd /var/lib/rpm
rm -rf __db*
3. Rebuild the RPM database:
rpm --rebuilddb
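Because the failure was seen on a few nodes, the same recovery can be scripted from a management host; this is only a sketch, assuming passwordless SSH as root and placeholder host names:

for host in node01 node02 node03; do    # placeholder host list
  ssh "$host" 'mkdir -p /root/rpmdb.bak && cp -rp /var/lib/rpm/__db* /root/rpmdb.bak/ && rm -f /var/lib/rpm/__db* && rpm --rebuilddb'
done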
09-29-2017
09:34 PM
2 Kudos
PROBLEM: With a requirement to disable auto-start services before an upgrade, the auto-start services option is not visible in the Ambari Server UI.

ROOT CAUSE: This issue occurs when cascading style sheets and JavaScript files from the previous version still exist under the webapp location on the Ambari server.

RESOLUTION: To resolve this issue, remove the *.gz files from /usr/lib/ambari-server/web/javascripts on the Ambari server.
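A minimal sketch of that cleanup; the backup directory is an assumption, not from the original article, and a forced browser refresh afterwards ensures the UI does not keep serving cached assets:

mkdir -p /root/ambari-web-gz.bak                              # backup location is an assumption
mv /usr/lib/ambari-server/web/javascripts/*.gz /root/ambari-web-gz.bak/
# then hard-refresh the Ambari web UI in the browser (for example Ctrl+Shift+R)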
09-29-2017
09:29 PM
2 Kudos
PROBLEM: When running a DistCp command with HTTPFS, it fails with the following error:
Error: java.io.IOException: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException:
java.io.FileNotFoundException: File does not exist: /<some-location>/<file-name>/<same-file-name>
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:223)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException:
java.io.FileNotFoundException: File does not exist: /tmp/distcp_test/passwd_v1/passwd_v1
... 10 more
Caused by: java.io.FileNotFoundException: File does not exist: /tmp/distcp_test/passwd_v1/passwd_v1
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance
(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:398)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:98)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:683)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:649)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:471)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:501)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:497)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:869)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:884)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:218)
... 9 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException):
File does not exist: /tmp/distcp_test/passwd_v1/passwd_v1
at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:165)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:367)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:98)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:622)
... 18 more

ROOT CAUSE: The HTTPFS LISTSTATUS response returns an incorrect pathSuffix for the file path.

RESOLUTION: This is a known issue, tracked as HDFS-12139. To get a hotfix, contact Hortonworks Support.
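For reference, a hedged sketch of the kind of invocation that hits this code path; the host names, ports, and target path are placeholders (14000 is the default HTTPFS port), while the source path matches the one in the error above. DistCp reads the source listing through the HTTPFS/WebHDFS REST endpoint, which is where the bad pathSuffix comes from:

hadoop distcp webhdfs://httpfs-host:14000/tmp/distcp_test/passwd_v1 hdfs://namenode-host:8020/tmp/distcp_target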
09-29-2017
09:47 AM
2 Kudos
PROBLEM: Oozie jobs get stuck in PREP mode.

ROOT CAUSE: The following are possible reasons:
1. A wrong NameNode host/port in job.properties.
2. A wrong ResourceManager host/port in the configurations.

RESOLUTION: If a large number of jobs are stuck in the PREP state, clean them up as follows:
1. Stop the Oozie server from Ambari.
2. Back up the Oozie database if the cluster is in production.
3. Remove the entries for the stuck jobs from the following tables: WF_JOBS, COORD_JOBS, WF_ACTIONS.
4. Start the Oozie server.
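Before editing the database, the stuck jobs can be listed from the Oozie CLI, and when only a handful are affected they can often be killed there instead of removing rows directly; a sketch with a placeholder Oozie URL:

oozie jobs -oozie http://oozie-host:11000/oozie -filter status=PREP -len 100   # list jobs stuck in PREP
oozie job -oozie http://oozie-host:11000/oozie -kill <workflow-or-coordinator-job-id>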
09-29-2017
09:44 AM
1 Kudo
PROBLEM: ATS (Application Timeline Server) crashes silently, with no errors or exceptions in the logs.

ROOT CAUSE: The operating system's OOM killer terminates ATS because it is using the most memory on the host (more memory than its assigned heap).

RESOLUTION: Add the following properties in yarn-site.xml:
yarn.timeline-service.ttl-ms=604800000
yarn.timeline-service.rolling-period=daily
yarn.timeline-service.leveldb-timeline-store.read-cache-size=4194304
yarn.timeline-service.leveldb-timeline-store.write-buffer-size=4194304
yarn.timeline-service.leveldb-timeline-store.max-open-files=500
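To confirm that the kernel OOM killer, rather than a Java failure, terminated the Application Timeline Server, the kernel log usually records the kill; a quick check on the ATS host (log file paths vary by OS):

dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/messages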
09-29-2017
09:41 AM
2 Kudos
ROOT CAUSE: The Oozie service check runs a MapReduce action; when the YARN cluster is short on available resources, the check script times out.

RESOLUTION: Increase the service check script timeout as follows:
1. SSH to the Ambari server host.
2. Navigate to /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts.
3. Back up service_check.py to another location (for example, /root/).
4. Edit service_check.py and increase the sleep/retries values.
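A sketch of steps 3 and 4, using the script path from above; the grep only locates the sleep/retries values that need to be raised:

cd /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts
cp service_check.py /root/service_check.py.bak     # step 3: backup
grep -n -E 'sleep|retr' service_check.py           # step 4: find the values to increase, then edit them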
09-29-2017
09:01 AM
2 Kudos
SYMPTOM:
FATAL Services - SERVER[deadpool.lab.local] E0113: class not found [org.apache.oozie.extensions.OozieELExtensions]
org.apache.oozie.service.ServiceException: E0113: class not found [org.apache.oozie.extensions.OozieELExtensions]
at org.apache.oozie.service.ELService.findMethod(ELService.java:226)
at org.apache.oozie.service.ELService.extractFunctions(ELService.java:104)
at org.apache.oozie.service.ELService.init(ELService.java:135)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4210)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4709)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:802)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:676)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:602)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:503)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1068)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1060)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:759)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

ROOT CAUSE: This happens when Falcon is uninstalled from the cluster.

WORKAROUND: Remove the properties listed in the following document from oozie-site.xml and restart the Oozie server via Ambari:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_command-line-installation/content/configuring_oozie_for_falcon.html
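To see which leftover Falcon EL extension entries are present before removing them through Ambari, the class name from the error can be searched for in the active configuration; the path below assumes a standard HDP layout:

grep -n 'OozieELExtensions' /etc/oozie/conf/oozie-site.xml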
09-29-2017
08:33 AM
2 Kudos
PROBLEM: The Hive metastore statistics are not updated for a table after an insert into the table. The issue shows up as an incorrect 'numRows' value in the output of describe formatted <tb_name>.

RESOLUTION: Hive stats are auto-gathered correctly until an 'analyze table [tablename] compute statistics for columns' is run; after that, the stats are not auto-updated until the command is run again. This is due to a known issue (https://issues.apache.org/jira/browse/HIVE-12661) and has been fixed in HDP-2.5.0.
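A hedged illustration of how the symptom can be observed from the command line; the table name and JDBC URL are placeholders. The numRows shown by the first command stays stale after new inserts until the analyze statement is run again:

beeline -u "$HIVE_JDBC_URL" -e "DESCRIBE FORMATTED my_table"                            # inspect Table Parameters: numRows
beeline -u "$HIVE_JDBC_URL" -e "ANALYZE TABLE my_table COMPUTE STATISTICS FOR COLUMNS"  # refreshes the stats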