Member since 07-25-2016 · 28 Posts · 74 Kudos Received · 0 Solutions
09-29-2017
09:46 PM
2 Kudos
PROBLEM: A Sqoop export fails with the following error: 17/08/16 17:17:43 WARN conf.HiveConf: HiveConf of name hive.semantic.analyzer.factory.impl does not exist
17/08/16 17:17:43 INFO hive.metastore: Trying to connect to metastore with URI thrift://XYZ:9083
17/08/16 17:17:47 WARN hive.metastore: set_ugi() not successful, Likely cause: new client talking to old server.
Continuing without it.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:3692)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:3678)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:442)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:181)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:330)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1536)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:89)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:135)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:121)
at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230)
at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
at org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:343)
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:421)
at org.apache.sqoop.manager.OracleManager.exportTable(OracleManager.java:455)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 45 more
17/08/16 17:17:47 INFO hive.metastore: Connected to metastore.
17/08/16 17:17:47 ERROR hive.log: Got exception: org.apache.thrift.transport.TTransportException java.net.SocketException: Broken pipe
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_databases(ThriftHiveMetastore.java:712)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:704)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1026)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:367)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:155)
at com.sun.proxy.$Proxy5.isOpen(Unknown Source)
at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:205)
at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:343)
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:421)
at org.apache.sqoop.manager.OracleManager.exportTable(OracleManager.java:455)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
ROOT CAUSE: This issue occurs in the following scenarios:
1. An incorrect HADOOP_CLASSPATH is set in the shell script that wraps the Sqoop export command.
2. The hive-site.xml in the user's home directory is not updated.
RESOLUTION: To resolve this issue, do the following (see the sketch below):
1. Comment out the HADOOP_CLASSPATH setting in the shell script.
2. Update the hive-site.xml under the user's home directory on the edge node.
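As a rough illustration (the script name and classpath value are hypothetical, not from the article), commenting out the override in the wrapper script looks like this:
# run_sqoop_export.sh -- hypothetical wrapper around the sqoop export command
# Comment out the manual classpath override so Sqoop builds its own classpath:
# export HADOOP_CLASSPATH=/opt/custom/jars/*
sqoop export --connect "<jdbc-url>" --username "<user>" --table "<TABLE>" --hcatalog-table "<HCAT_TABLE>"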
09-29-2017
09:38 PM
2 Kudos
PROBLEM: During rolling HDP upgrade, the package installation fails with the following error: 2017-08-10 22:29:01,914 - Package Manager failed to install packages. Error: Execution of
'/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-client' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY:
Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 376, in install_packages
retry_count=agent_stack_retry_count
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
line 58, in action_upgrade
self.upgrade_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
line 56, in upgrade_package
return self.install_package(name, use_repos, skip_repos, is_upgrade)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
line 51, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
line 86, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
line 98, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-client' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error,
run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
2017-08-10 22:29:02,293 - Could not install packages. Error: Execution of
'/usr/bin/yum -d 0 -e 0 check dependencies' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 166, in actionexecute
ret_code = self.install_packages(package_list)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 400, in install_packages
if not verifyDependencies():
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/packages_analyzer.py",
line 311, in verifyDependencies
code, out = rmf_shell.checked_call(cmd, sudo=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 check dependencies' returned 1.
error: rpmdb: BDB0113 Thread/process 9731/140223883491136 failed: BDB2052 locker has write locks
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error,
run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 469, in
InstallPackages().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py",
line 179, in actionexecute
raise Fail("Failed to distribute repositories/install packages")
resource_management.core.exceptions.Fail: Failed to distribute repositories/install packages
ROOT CAUSE: This issue occurs because of a corrupt RPM database.
RESOLUTION: To resolve this issue, execute the following commands on the affected node:
# Command 1: remove the stale Berkeley DB environment/lock files
rm -f /var/lib/rpm/__db*
# Command 2: rebuild the RPM database
rpm --rebuilddb
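After the rebuild, a quick sanity check (my addition, not part of the original steps) is to re-run the same dependency check that Ambari issues; it should now complete without the BDB errors:
# Should exit cleanly once the rpmdb has been rebuilt
/usr/bin/yum -d 0 -e 0 check dependencies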
09-29-2017
09:36 PM
2 Kudos
PROBLEM: When installing RPMs during an HDP upgrade, the installation fails on a few nodes with the following error:
rpmdb: /var/lib/rpm/Packages: unexpected file format or type
error: cannot open Packages index using db3 - Invalid argument (22)
ROOT CAUSE: This issue occurs when the RPM database is corrupt.
RESOLUTION: To resolve this issue, do the following:
1. Take a backup of the existing RPM database:
mkdir /root/rpmdb.bak
cp -rp /var/lib/rpm/__db* /root/rpmdb.bak/
2. Remove the corrupted RPM database files:
cd /var/lib/rpm
rm -rf __db*
3. Rebuild the RPM database:
rpm --rebuilddb
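To confirm the rebuilt database is readable again (a sanity check, not part of the documented steps), query it once:
# Should print the package count without "cannot open Packages" errors
rpm -qa | wc -l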
09-29-2017
09:34 PM
2 Kudos
PROBLEM: With a requirement to disable auto-start services before an upgrade, the auto-start services option is not visible in the Ambari server UI.
ROOT CAUSE: This issue occurs when cached CSS and JavaScript files from the previous version still exist under the webapp location on the Ambari server.
RESOLUTION: To resolve this issue, remove the *.gz files from /usr/lib/ambari-server/web/javascripts on the Ambari server (see the sketch below).
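A minimal sketch of the cleanup, assuming the default Ambari web directory shown above; backing the files up first is an extra precaution not mentioned in the article:
# Back up and then remove the pre-compressed web assets left over from the previous version
mkdir -p /root/ambari-web-gz.bak
cp -p /usr/lib/ambari-server/web/javascripts/*.gz /root/ambari-web-gz.bak/
rm -f /usr/lib/ambari-server/web/javascripts/*.gz
# Afterwards, force-reload the Ambari UI in the browser so the fresh assets are served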
09-29-2017
09:29 PM
2 Kudos
PROBLEM: When running a Distcp command with HTTPFS, it fails with the following error: Error: java.io.IOException: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException:
java.io.FileNotFoundException: File does not exist: /<some-location>/<file-name>/<same-file-name>
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:223)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException:
java.io.FileNotFoundException: File does not exist: /tmp/distcp_test/passwd_v1/passwd_v1
... 10 more
Caused by: java.io.FileNotFoundException: File does not exist: /tmp/distcp_test/passwd_v1/passwd_v1
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance
(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:398)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:98)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:683)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:649)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:471)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:501)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:497)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:869)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:884)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:218)
... 9 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException):
File does not exist: /tmp/distcp_test/passwd_v1/passwd_v1
at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:165)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:367)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:98)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:622)
... 18 more
ROOT CAUSE: The HTTPFS LISTSTATUS operation returns an incorrect pathSuffix for the path of the file.
RESOLUTION: This is a known issue; a bug (HDFS-12139) has been filed for it. To get a hotfix, contact Hortonworks Support.
09-29-2017
09:47 AM
2 Kudos
PROBLEM: Oozie jobs get stuck in PREP mode.
ROOT CAUSE: The following are possible reasons:
1. Wrong NameNode host/port in job.properties.
2. Wrong ResourceManager host/port in the configurations.
RESOLUTION: If a large number of jobs are stuck in PREP state, clean them up as follows (a hedged SQL sketch is shown below):
1. Stop the Oozie server from Ambari.
2. Back up the Oozie database if the cluster is in production.
3. Remove the entries for the stuck jobs from the following tables: WF_JOBS, COORD_JOBS, WF_ACTIONS.
4. Start the Oozie server.
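A hedged sketch of step 3, assuming a MySQL-backed Oozie database named oozie; the table and column names (status, id, wf_id) can differ between Oozie versions, so verify them against your schema and keep the backup from step 2 before deleting anything:
mysql -u oozie -p oozie <<'SQL'
-- Remove actions belonging to workflows stuck in PREP, then the workflow and coordinator rows
DELETE FROM WF_ACTIONS WHERE wf_id IN (SELECT id FROM WF_JOBS WHERE status = 'PREP');
DELETE FROM WF_JOBS WHERE status = 'PREP';
DELETE FROM COORD_JOBS WHERE status = 'PREP';
SQL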
09-29-2017
09:44 AM
1 Kudo
PROBLEM: ATS (Application Timeline Server) crashes silently, with no errors or exceptions in the logs.
ROOT CAUSE: The operating system's OOM killer terminates ATS because it consumes more memory than its assigned heap, making it the highest-memory process on the host.
RESOLUTION: Add the following properties to yarn-site.xml to fix this:
yarn.timeline-service.ttl-ms=604800000
yarn.timeline-service.rolling-period=daily
yarn.timeline-service.leveldb-timeline-store.read-cache-size=4194304
yarn.timeline-service.leveldb-timeline-store.write-buffer-size=4194304
yarn.timeline-service.leveldb-timeline-store.max-open-files=500
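To confirm that the OOM killer is what terminated the Timeline Server (a diagnostic step, not from the article), check the kernel log on the ATS host:
# Look for OOM-killer activity around the time ATS disappeared
dmesg | grep -i -E "out of memory|killed process"
grep -i oom /var/log/messages   # log path assumes a RHEL/CentOS-style host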
09-29-2017
09:41 AM
2 Kudos
PROBLEM: The Oozie service check in Ambari times out.
ROOT CAUSE: The Oozie service check runs a MapReduce action; when the YARN cluster is short on available resources, the check script times out.
RESOLUTION: Increase the service check script timeout as follows (a sketch of steps 2-3 is shown below):
1. SSH to the Ambari server host.
2. Navigate to /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts
3. Back up service_check.py to another location (e.g. /root/).
4. Edit service_check.py and increase the sleep/retries values.
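A small sketch of steps 2-3, plus a grep to locate the values to raise; the exact variable names inside service_check.py vary by Ambari version, so treat this as illustrative:
cd /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts
cp -p service_check.py /root/service_check.py.bak
# Find the retry/sleep settings to increase, then edit them in place
grep -n -E "retries|sleep" service_check.py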
09-29-2017
09:01 AM
2 Kudos
SYMPTOM: The Oozie server fails to start with the following error:
FATAL Services - SERVER[deadpool.lab.local] E0113: class not found [org.apache.oozie.extensions.OozieELExtensions]
org.apache.oozie.service.ServiceException: E0113: class not found [org.apache.oozie.extensions.OozieELExtensions]
at org.apache.oozie.service.ELService.findMethod(ELService.java:226)
at org.apache.oozie.service.ELService.extractFunctions(ELService.java:104)
at org.apache.oozie.service.ELService.init(ELService.java:135)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4210)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4709)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:802)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:676)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:602)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:503)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1068)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1060)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:759)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
ROOT CAUSE: This happens when Falcon is uninstalled from the cluster.
WORKAROUND: Remove the properties listed in the document below from oozie-site.xml and restart the Oozie server via Ambari (a quick grep to locate them is shown below):
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_command-line-installation/content/configuring_oozie_for_falcon.html
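A quick grep (my addition; /etc/oozie/conf is the usual HDP configuration directory) to locate the leftover Falcon EL-extension entries before editing:
# Any hits are the properties that need to be removed from oozie-site.xml
grep -n "OozieELExtensions" /etc/oozie/conf/oozie-site.xml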
09-29-2017
08:33 AM
2 Kudos
PROBLEM: Hive metastore statistics are not updated for a table after rows are inserted into it. The issue shows up as an incorrect 'numRows' value in the output of DESCRIBE FORMATTED <tb_name>. RESOLUTION: Hive auto-gathers statistics correctly until an 'analyze table <tablename> compute statistics for columns' is run; after that, the statistics are not auto-updated until the command is run again. This is due to a known issue (https://issues.apache.org/jira/browse/HIVE-12661) and has been fixed in HDP 2.5.0.
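Until the fixed version is in place, the statistics can be refreshed manually by re-running the analyze command mentioned above; the JDBC URL and table name below are placeholders:
# Re-collect column statistics so numRows is up to date after large inserts
beeline -u "jdbc:hive2://<hs2-host>:10000/default" \
  -e "ANALYZE TABLE <tablename> COMPUTE STATISTICS FOR COLUMNS;"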
09-29-2017
08:18 AM
2 Kudos
PROBLEM: While trying to execute a simple pyspark script that selects data from a Hive transactional table stored in ORC format, the following exception is thrown:
java.lang.RuntimeException: serious problem
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:311)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2378)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2780)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2377)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2384)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2120)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2119)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2810)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2119)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2334)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.NumberFormatException: For input string: "0000045_0000"
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:998)
... 50 more
Caused by: java.lang.NumberFormatException: For input string: "0000045_0000"
ROOT CAUSE: Reading Hive transactional (ACID) tables from Spark is not supported. See the Apache JIRA: https://issues.apache.org/jira/browse/SPARK-15348
RESOLUTION: Currently this can be addressed by using Hive LLAP via the Spark-LLAP connector. That feature, however, is still in Technical Preview and is not GA. There is no roadmap available for this issue yet from Hortonworks.
09-29-2017
08:13 AM
3 Kudos
PROBLEM:
In Hive with an Oracle-backed metastore, the following error is observed during table creation: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
ROOT CAUSE:
The issue is caused by the large number of columns in the table.
RESOLUTION:
Currently, Hive stores column statistics in one table while the statistics accuracy is stored in another table (table properties). The proper fix is to store both the column statistics and their accuracy in the same table, which requires a schema modification. As a workaround, the column type can be changed to CLOB (see the hedged sketch below).
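A hedged sketch of the workaround. The article does not name the exact column; the table properties it refers to are typically stored in TABLE_PARAMS.PARAM_VALUE in the metastore schema, and Oracle does not allow changing VARCHAR2 to CLOB with a direct MODIFY, so the usual add/copy/drop/rename sequence is shown. Stop the metastore/HiveServer2 and back up the metastore database before attempting this:
sqlplus <metastore_user>/<password>@<metastore_db> <<'SQL'
-- Convert the table-properties value column to CLOB
ALTER TABLE TABLE_PARAMS ADD (PARAM_VALUE_CLOB CLOB);
UPDATE TABLE_PARAMS SET PARAM_VALUE_CLOB = PARAM_VALUE;
COMMIT;
ALTER TABLE TABLE_PARAMS DROP COLUMN PARAM_VALUE;
ALTER TABLE TABLE_PARAMS RENAME COLUMN PARAM_VALUE_CLOB TO PARAM_VALUE;
SQL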
09-29-2017
08:07 AM
2 Kudos
While trying to import/export data from/to MS Parallel Data Warehouse (PDW), the following error is observed.
Sqoop Command Output:
sqoop export --connect "jdbc:sqlserver://<DB_URL>:<PORT>;database=<DB_NAME>;" --driver com.microsoft.sqlserver.jdbc.SQLServerDriver --username <USERNAME> --password *** --table "<TB_NAME>" --input-fields-terminated-by ',' --export-dir <EXPORT_DIR> -m 1
/usr/hdp/2.5.3.0-37//sqoop/conf/sqoop-env.sh: line 23: HADOOP_CLASSPATH=${hcat -classpath}: bad substitution
Warning: /usr/hdp/2.5.3.0-37/accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-37/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-37/hive/lib/phoenix-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/09/12 13:14:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.5.3.0-37
17/09/12 13:14:07 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/09/12 13:14:07 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
17/09/12 13:14:07 INFO manager.SqlManager: Using default fetchSize of 1000
17/09/12 13:14:07 INFO tool.CodeGenTool: Beginning code generation
17/09/12 13:14:08 ERROR manager.SqlManager: Error executing statement: com.microsoft.sqlserver.jdbc.SQLServerException: Setting IsolationLevel to ReadCommitted is not supported.
ROOT CAUSE:
This happens because databases like PDW do not accept the READ COMMITTED isolation level, so relaxed isolation fails in this case. Sqoop did not make the transaction isolation level used for its metadata queries configurable. This issue has been fixed in the following Apache JIRA:
https://issues.apache.org/jira/browse/SQOOP-2349
RESOLUTION:
This issue is fixed by upgrading to HDP 2.5.5 or later.
09-29-2017
08:01 AM
2 Kudos
This is unsupported, and the concept has not been fully explored yet. There is no real modification-time concept in object stores; they only record a creation time, which is the time observed at the far end. If you upload a file to a remote timezone, you may get that as your time. The underlying issue here is not a bug: distcp -update relies on file checksums for comparing HDFS files, and not all stores export their checksum through the Hadoop API (WASB does; s3a does not yet). In addition, because the checksums differ between blobstores and HDFS, a checksum difference cannot be used as a cue that a file has changed. Note that this also occurs when trying to copy between HDFS encryption zones, as the checksums of the encrypted files will differ.
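One commonly used fallback (my suggestion, not stated above) is to pair -update with -skipcrccheck so DistCp compares files by size and name instead of checksum, at the cost of weaker change detection; the paths and bucket are placeholders:
# Skip the checksum comparison that HDFS and the object store cannot agree on
hadoop distcp -update -skipcrccheck hdfs:///data/source s3a://<bucket>/data/target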
07-11-2017
12:55 AM
1 Kudo
Hi @Larry Wallet I hope this helps https://community.hortonworks.com/questions/96530/how-are-udfs-treated-with-hive-llap.html
06-30-2017
09:30 PM
6 Kudos
SYMPTOM: A Hive query with a GROUP BY clause on a large amount of data is stuck in the reducer phase for a very long time.
ROOT CAUSE: This happens when the GROUP BY is not optimized. By default, Hive sends all rows with the same group-by keys to the same reducer. If the distinct values of the group-by columns are skewed, one reducer may receive most of the shuffled data and stay stuck for a very long time.
WORKAROUND: Increasing the Tez container memory will not help in this case. The data skew can be mitigated by setting the following properties before running the query:
set hive.tez.auto.reducer.parallelism=true;
set hive.groupby.skewindata=true;
set hive.optimize.skewjoin=true;
06-30-2017
03:12 PM
6 Kudos
SYMPTOM: A SELECT statement on a view fails for some column orderings but not others.
FAILING QUERIES:
select id, dept, emp, fname from testview order by id, dept;
select id, emp, dept, fname from testview order by id, dept;
select emp, dept, id, fname from testview order by id, dept;
SUCCESSFUL QUERIES:
select emp, fname, id, dept from testview order by id, dept;
select emp, citystate, fname, dept from testview order by id, dept;
select emp, fname, dept, id from testview order by id, dept;
EXCEPTION: Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating VALUE._col1
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:86)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:343)
... 17 more
Caused by: java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at org.apache.hadoop.io.Text.set(Text.java:225)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryHiveVarchar.init(LazyBinaryHiveVarchar.java:47)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.uncheckedGetField(LazyBinaryStruct.java:267)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:204)
at org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:64)
at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator._evaluate(ExprNodeColumnEvaluator.java:98)
at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:65)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:81)
... 18 more
2017-05-30 20:12:32,035 [INFO] [TezChild] |exec.FileSinkOperator|: FS[1]: records written - 0
2017-05-30 20:12:32,035 [INFO] [TezChild] |exec.FileSinkOperator|: RECORDS_OUT_0:0,
ROOT CAUSE: The exception is caused by a mismatch between serialization and deserialization on a Hive table backed by the SequenceFile input/output format. The serialization by LazyBinarySerDe in the previous MapReduce job used a different column order; when the current MapReduce job deserialized the intermediate sequence file, LazyBinaryStruct read corrupted data because it used the wrong column order. The mismatch between serialization and deserialization is caused by the SelectOperator's column pruning (ColumnPrunerSelectProc).
WORKAROUND:
1] Create an ORC table from the sequence table as follows:
create table test_orc stored as orc as select * from testtable;
2] Recreate the view on top of the ORC table.
REFERENCE: https://issues.apache.org/jira/browse/HIVE-14564
06-30-2017
07:40 AM
8 Kudos
SYMPTOM: Data is not read from a Hive external table created as follows:
CREATE EXTERNAL TABLE test(
id STRING,
dept STRING)
row format delimited
fields terminated by ','
location '/user/hdfs/testdata/';
ROOT CAUSE: The files under the location provided while creating the table are stored in subdirectories, as follows:
/user/hdfs/testdata/1/test1
/user/hdfs/testdata/2/test2
/user/hdfs/testdata/3/test3
/user/hdfs/testdata/4/test4
RESOLUTION: To make the subdirectories accessible, set the following two properties before querying the table:
set mapred.input.dir.recursive=true;
set hive.mapred.supports.subdirectories=true;
06-25-2017
02:26 AM
8 Kudos
SYMPTOM:
=> This problem occurs for a partitioned table (without any null partitions) that contains more than roughly 600 columns.
=> The following stack trace is observed in the Hive metastore logs:
Nested Throwables StackTrace:
org.datanucleus.store.rdbms.exceptions.MappedDatastoreException: INSERT INTO "PARTITION_PARAMS" ("PARAM_VALUE","PART_ID","PARAM_KEY") VALUES (?,?,?)
at org.datanucleus.store.rdbms.scostore.JoinMapStore.internalPut(JoinMapStore.java:1056)
at org.datanucleus.store.rdbms.scostore.JoinMapStore.put(JoinMapStore.java:307)
at org.datanucleus.store.types.wrappers.backed.Map.put(Map.java:653)
at org.apache.hadoop.hive.common.StatsSetupConst.setColumnStatsState(StatsSetupConst.java:285)
at org.apache.hadoop.hive.metastore.ObjectStore.updatePartitionColumnStatistics(ObjectStore.java:6237)
at sun.reflect.GeneratedMethodAccessor118.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
at com.sun.proxy.$Proxy10.updatePartitionColumnStatistics(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.updatePartitonColStats(HiveMetaStore.java:4596)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:5953)
at sun.reflect.GeneratedMethodAccessor117.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:139)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:97)
at com.sun.proxy.$Proxy12.set_aggr_stats_for(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$set_aggr_stats_for.getResult(ThriftHiveMetastore.java:11062)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$set_aggr_stats_for.getResult(ThriftHiveMetastore.java:11046)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.postgresql.util.PSQLException: ERROR: value too long for type character varying(4000)
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:363)
at com.jolbox.bonecp.PreparedStatementHandle.executeUpdate(PreparedStatementHandle.java:205)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:393)
at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:431)
at org.datanucleus.store.rdbms.scostore.JoinMapStore.internalPut(JoinMapStore.java:1047)
... 30 more
ROOT CAUSE:
=> An "analyze table" query updates the statistics in the metastore database.
=> The metastore database has a limit of 4000 characters on the value that can be stored in PARTITION_PARAMS.PARAM_VALUE.
=> Hence, a table with too many columns exceeds this limit when its statistics are updated.
WORKAROUND: Try increasing the column width for the PARTITION_PARAMS.PARAM_VALUE column in the metastore database.
STEPS:
1] Stop the metastore/HiveServer2.
2] Back up the database.
3] Increase the column width to a reasonable value. For a Postgres database, use the following command:
ALTER TABLE PARTITION_PARAMS ALTER COLUMN PARAM_VALUE TYPE varchar(64000);
4] Start the metastore/HiveServer2 again.
06-24-2017
10:06 PM
7 Kudos
SYMPTOM: Incorrect status is shown for DAGs in the Tez UI.
ROOT CAUSE: This is a known issue (https://issues.apache.org/jira/browse/TEZ-3656). It only happens for killed applications or when there is a failure writing to the Application Timeline Server. It should not cause any problems other than the wrong status being shown for the DAG in the Tez UI.
RESOLUTION: This is fixed in the HDP 2.6.1 release.
06-24-2017
09:45 PM
7 Kudos
PROBLEM DEFINITION: CREATE TABLE DT(Dérivation string, Pièce_Générique string); throws a ParserException error. ROOT CAUSE/WORKAROUND: Hive database names, table names, and column names cannot contain Unicode (non-ASCII) characters; Hive supports UTF-8/Unicode only in table data and comments. LINKS: https://cwiki.apache.org/confluence/display/Hive/User+FAQ
05-25-2017
11:28 PM
hi @ozac,
As per https://issues.apache.org/jira/browse/HIVE-12331, this property has been removed, so you need not set it from Hive 2.x onward. HDP 2.6 ships with Apache Hive 2.1.0, which is why you are seeing this behavior; the enforcement is now applied automatically.
05-25-2017
11:19 PM
1 Kudo
Hi @darkz yu Unfortunately, this cannot be set for the RM UI. It can only be correlated in the Tez UI through hive.query.name, unlike MapReduce jobs, where mapred.job.name is used for this. So for now you cannot do it in the RM UI, but you can set hive.query.name and check it in the Tez UI. The reason is that the RM UI displays the YARN application name, and since Hive reuses applications heavily, there is no 1-1 relationship between an application and a Tez DAG or query. Please have a look at the following links on this:
https://community.hortonworks.com/questions/5309/how-to-set-tez-job-name.html
https://issues.apache.org/jira/browse/HIVE-12357
03-21-2017
04:47 AM
Does the hive service check pass when metastore is up? Also, can you try to increase the Hive metastore heap size?
03-21-2017
04:44 AM
Are you using s3a or s3n? What HDP version is it?
Let me know if this article helps: http://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.11.1/bk_hdcloud-aws/content/s3-hive/index.html#improving-performance-for-hive-jobs
03-21-2017
04:39 AM
Could you try to increase the HiveServer2 heap size?
03-13-2017
09:43 PM
2 Kudos
Could you try this: https://hortonworks.secure.force.com/articles/en_US/Issue/java-io-IOException-ORC-does-not-support-type-conversion-from-VARCHAR-to-STRING-while-inserting-into-table