Member since: 12-22-2017
Posts: 13
Kudos Received: 0
Solutions: 1

My Accepted Solutions

Title | Views | Posted
--- | --- | ---
| 2784 | 02-14-2018 02:07 AM
12-19-2018
01:08 AM
Thank you for your quick response. The workaround worked for my environment.
12-13-2018
10:36 AM
I'm using an HDP 2.6.1.0-129 cluster with Cloudbreak Deployer 2.4.2. It worked well until Dec 7th, 2018; since then, however, adding node(s) to the cluster using the Cloudbreak Deployer always fails. The following is the relevant error message in cbreak.log of the Cloudbreak Deployer:

cloudbreak_1 | 2018-12-13 06:12:11,102 [reactorDispatcher-34] buildLogContextForReactorHandler:69 INFO c.s.c.l.LogContextAspects - [owner:9e997395-c6d1-498d-bfa2-1a0f508c7b21] [type:CLUSTER] [id:2] [name:nakagawa-test-2] [flow:ec31867c-978c-41ef-8796-896ecabd98ba] [tracking:] A Reactor event handler's 'accept' method has been intercepted: execution(Flow2Handler.accept(..)), MDC logger context is built.
cloudbreak_1 | 2018-12-13 06:12:11,116 [reactorDispatcher-34] execute:87 INFO c.s.c.c.f.AbstractAction - [owner:9e997395-c6d1-498d-bfa2-1a0f508c7b21] [type:STACKVIEW] [id:2] [name:nakagawa-test-2] [flow:ec31867c-978c-41ef-8796-896ecabd98ba] [tracking:] Stack: 2, flow state: UPSCALING_AMBARI_STATE, phase: service, execution time 1024 sec
cloudbreak_1 | 2018-12-13 06:12:11,117 [reactorDispatcher-34] clusterUpscaleFailed:88 ERROR c.s.c.c.f.c.u.ClusterUpscaleFlowService - [owner:9e997395-c6d1-498d-bfa2-1a0f508c7b21] [type:STACKVIEW] [id:2] [name:nakagawa-test-2] [flow:ec31867c-978c-41ef-8796-896ecabd98ba] [tracking:] Error during Cluster upscale flow: com.sequenceiq.cloudbreak.orchestrator.exception.CloudbreakOrchestratorFailedException: Failed: Orchestrator component went failed in 7.500000 mins, message: There are missing nodes from job (jid: 20181213061135612084), target: [ip-10-0-1-203.ap-northeast-1.compute.internal, ip-10-0-1-68.ap-northeast-1.compute.internal]
cloudbreak_1 | Node: ip-10-0-1-203.ap-northeast-1.compute.internal Error(s): An exception occurred in this state: Traceback (most recent call last):
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1843, in call
cloudbreak_1 | **cdata['kwargs'])
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1795, in wrapper
cloudbreak_1 | return f(*args, **kwargs)
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/states/pkg.py", line 1631, in installed
cloudbreak_1 | **kwargs)
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/modules/yumpkg.py", line 1415, in install
cloudbreak_1 | if re.match('kernel(-.+)?', name):
cloudbreak_1 | File "/usr/lib64/python2.7/re.py", line 141, in match
cloudbreak_1 | return _compile(pattern, flags).match(string)
cloudbreak_1 | TypeError: expected string or buffer
cloudbreak_1 | | An exception occurred in this state: Traceback (most recent call last):
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1843, in call
cloudbreak_1 | **cdata['kwargs'])
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1795, in wrapper
cloudbreak_1 | return f(*args, **kwargs)
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/states/pkg.py", line 1631, in installed
cloudbreak_1 | **kwargs)
cloudbreak_1 | File "/usr/lib/python2.7/dist-packages/salt/modules/yumpkg.py", line 1415, in install
cloudbreak_1 | if re.match('kernel(-.+)?', name):
cloudbreak_1 | File "/usr/lib64/python2.7/re.py", line 141, in match
cloudbreak_1 | return _compile(pattern, flags).match(string)
cloudbreak_1 | TypeError: expected string or buffer
I also found the relevant error message in /var/log/salt/minion on the node being added:

2018-12-13 05:36:43,749 [salt.state ][INFO ][8299] Running state [/etc/yum.repos.d/ambari.repo] at time 05:36:43.749534
2018-12-13 05:36:43,749 [salt.state ][INFO ][8299] Executing state file.managed for [/etc/yum.repos.d/ambari.repo]
2018-12-13 05:36:43,756 [salt.fileclient ][DEBUG ][8299] In saltenv 'base', looking at rel_path 'ambari/yum/ambari.repo' to resolve 'salt://ambari/yum/ambari.repo'
2018-12-13 05:36:43,757 [salt.fileclient ][DEBUG ][8299] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/ambari/yum/ambari.repo' to resolve 'salt://ambari/yum/ambari.repo'
2018-12-13 05:36:43,757 [salt.utils.jinja ][DEBUG ][8299] Jinja search path: ['/var/cache/salt/minion/files/base']
2018-12-13 05:36:43,761 [salt.state ][INFO ][8299] File /etc/yum.repos.d/ambari.repo is in the correct state
2018-12-13 05:36:43,761 [salt.state ][INFO ][8299] Completed state [/etc/yum.repos.d/ambari.repo] at time 05:36:43.761816 duration_in_ms=12.282
2018-12-13 05:36:43,768 [salt.utils.lazy ][DEBUG ][8299] Could not LazyLoad pkg.ex_mod_init: 'pkg.ex_mod_init' is not available.
2018-12-13 05:36:43,768 [salt.state ][INFO ][8299] Running state [ambari-agent] at time 05:36:43.768609
2018-12-13 05:36:43,768 [salt.state ][INFO ][8299] Executing state pkg.installed for [ambari-agent]
2018-12-13 05:36:43,769 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['rpm', '-qa', '--queryformat', '%{NAME}_|-%{EPOCH}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-(none)'] in directory '/root'
2018-12-13 05:36:44,277 [salt.utils.lazy ][DEBUG ][8299] Could not LazyLoad pkg.check_db: 'pkg.check_db' is not available.
2018-12-13 05:36:44,284 [salt.utils.lazy ][DEBUG ][8299] Could not LazyLoad pkg.check_extra_requirements: 'pkg.check_extra_requirements' is not available.
2018-12-13 05:36:44,290 [salt.utils.lazy ][DEBUG ][8299] Could not LazyLoad pkg.version_clean: 'pkg.version_clean' is not available.
2018-12-13 05:36:44,290 [salt.loaded.int.module.rpm ][WARNING ][8299] rpmdevtools is not installed, please install it for more accurate version comparisons
2018-12-13 05:36:44,291 [salt.loaded.int.states.pkg ][DEBUG ][8299] Current version (['2.6.2.0-155']) did not match desired version specification (2.6.2.0), adding to installation targets
2018-12-13 05:36:44,291 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'clean', 'expire-cache'] in directory '/root'
2018-12-13 05:36:44,435 [salt.loaded.int.module.cmdmod ][DEBUG ][8299] output:
2018-12-13 05:36:44,436 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'check-update'] in directory '/root'
2018-12-13 05:36:46,398 [salt.loaded.int.module.yumpkg ][DEBUG ][8299] Searching for repos in ['/etc/yum/repos.d', '/etc/yum.repos.d']
2018-12-13 05:36:46,401 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--version'] in directory '/root'
2018-12-13 05:36:46,518 [salt.loaded.int.module.cmdmod ][DEBUG ][8299] output: 3.4.3
Installed: rpm-4.11.3-21.75.amzn1.x86_64 at 2017-11-20 22:11
Built : Amazon.com, Inc. <http://aws.amazon.com> at 2017-03-20 00:58
Committed: Amazon Linux AMI <amazon-linux-ami@amazon.com> at 2016-11-04
Installed: yum-3.4.3-150.70.amzn1.noarch at 2017-11-20 22:11
Built : Amazon.com, Inc. <http://aws.amazon.com> at 2017-08-10 23:50
Committed: Heath Petty <hpetty@amazon.com> at 2017-08-10
2018-12-13 05:36:46,519 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'repository-packages', 'AMBARI.2.6.2.0', 'list', '--showduplicates'] in directory '/root'
2018-12-13 05:36:46,988 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'repository-packages', 'saltstack-amzn-repo', 'list', '--showduplicates'] in directory '/root'
2018-12-13 05:36:47,470 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'repository-packages', 'amzn-main', 'list', '--showduplicates'] in directory '/root'
2018-12-13 05:36:50,893 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'repository-packages', 'HDP-2.6-repo-1', 'list', '--showduplicates'] in directory '/root'
2018-12-13 05:36:51,478 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'repository-packages', 'amzn-updates', 'list', '--showduplicates'] in directory '/root'
2018-12-13 05:36:52,686 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'repository-packages', 'HDP-UTILS', 'list', '--showduplicates'] in directory '/root'
2018-12-13 05:36:52,772 [salt.minion ][INFO ][3466] User root Executing command saltutil.running with jid 20181213053652757085
2018-12-13 05:36:52,772 [salt.minion ][DEBUG ][3466] Command details {'tgt_type': 'glob', 'jid': '20181213053652757085', 'tgt': '*', 'ret': '', 'user': 'root', 'arg': [], 'fun': 'saltutil.running'}
2018-12-13 05:36:52,781 [salt.minion ][INFO ][8478] Starting a new job with PID 8478
2018-12-13 05:36:52,795 [salt.utils.lazy ][DEBUG ][8478] LazyLoaded saltutil.running
2018-12-13 05:36:52,796 [salt.utils.lazy ][DEBUG ][8478] LazyLoaded direct_call.get
2018-12-13 05:36:52,797 [salt.minion ][DEBUG ][8478] Minion return retry timer set to 9 seconds (randomized)
2018-12-13 05:36:52,797 [salt.minion ][INFO ][8478] Returning information for job: 20181213053652757085
2018-12-13 05:36:52,798 [salt.transport.zeromq ][DEBUG ][8478] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'ip-10-0-1-68.ap-northeast-1.compute.internal', 'tcp://10.0.1.203:4506', 'aes')
2018-12-13 05:36:52,798 [salt.crypt ][DEBUG ][8478] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'ip-10-0-1-68.ap-northeast-1.compute.internal', 'tcp://10.0.1.203:4506')
2018-12-13 05:36:52,805 [salt.minion ][DEBUG ][8478] minion return: {'fun_args': [], 'jid': '20181213053652757085', 'return': [{'tgt_type': 'glob', 'jid': '20181213053640897161', 'tgt': '*', 'pid': 8299, 'ret': '', 'user': 'saltuser', 'arg': [], 'fun': 'state.highstate'}], 'retcode': 0, 'success': True, 'fun': 'saltutil.running'}
2018-12-13 05:36:53,152 [salt.loaded.int.module.cmdmod ][INFO ][8299] Executing command ['yum', '--quiet', 'repository-packages', 'HDP-UTILS-1.1.0.21-repo-1', 'list', '--showduplicates'] in directory '/root'
2018-12-13 05:36:53,778 [salt.loaded.int.module.rpm ][WARNING ][8299] rpmdevtools is not installed, please install it for more accurate version comparisons
2018-12-13 05:36:53,953 [salt.state ][ERROR ][8299] An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1843, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1795, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/salt/states/pkg.py", line 1631, in installed
**kwargs)
File "/usr/lib/python2.7/dist-packages/salt/modules/yumpkg.py", line 1415, in install
if re.match('kernel(-.+)?', name):
File "/usr/lib64/python2.7/re.py", line 141, in match
return _compile(pattern, flags).match(string)
TypeError: expected string or buffer
Could you please help me find a solution or workaround? Thank you in advance.
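For what it's worth, the traceback can be reproduced in isolation: `re.match()` raises exactly this TypeError when the package name it receives is not a string. The sketch below mirrors the failing check from salt/modules/yumpkg.py; the `isinstance` guard is a hypothetical hardening for illustration, not Salt's actual fix.

```python
import re

def is_kernel_pkg(name):
    # Sketch of the check at salt/modules/yumpkg.py line 1415 in the
    # traceback; the isinstance guard is hypothetical, added so a
    # non-string package name no longer crashes the state.
    if not isinstance(name, str):
        return False
    return re.match('kernel(-.+)?', name) is not None

# Passing a non-string (e.g. None) reproduces the logged TypeError:
try:
    re.match('kernel(-.+)?', None)
    crashed = False
except TypeError:  # "expected string or buffer" on Python 2
    crashed = True
```

This suggests the real problem is upstream of the regex: something in the upscale flow hands Salt a package target that is not a plain string.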
06-27-2018
12:51 AM
Hi @mmolnar, The error message changed as attached. It indicates the error is due to the unrecognized field "type"; however, I unfortunately have no idea how to fix it. I wonder if it's a bug in v2.7.0, because the error doesn't occur when I do the same thing on v2.4.0. Thank you, Mai Nakagawa
06-26-2018
02:13 PM
Hi @mmolnar, Thank you for your quick response. Attached please find the files from the logs subfolder of my deployment directory. Please note that logs-1.tar.gz contains the files under the logs/autoscale and logs/cloudbreak directories, and logs-2.tar.gz contains the other files under the logs directory. Also note that both files are actually .tar.bz2; this keeps the size under the 512 kB limit. I used the .tar.gz extension only because the website doesn't allow the .tar.bz2 extension for attachments. Cheers, Mai Nakagawa
06-26-2018
03:41 AM
I launched Cloudbreak Deployer v2.7.0 (latest) on AWS following the instructions: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.4.1/content/aws-launch/index.html I created a cluster and tested the autoscaling functionality; however, it always failed with the following error, found via the `cbd logs-tail periscope` command:

periscope_1 | 2018-06-26 03:02:30,040 [getThreadPoolExecutorFactoryBean-36] createAmbariClient:44 INFO c.s.p.s.AmbariClientProvider - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] Create Ambari client to connect to 172.31.47.29:9443
periscope_1 | 2018-06-26 03:02:30,050 [getThreadPoolExecutorFactoryBean-36] run:60 INFO c.s.p.m.e.MetricEvaluator - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] Checking metric based alert: 'disk-usage'
periscope_1 | 2018-06-26 03:02:30,051 [getThreadPoolExecutorFactoryBean-36] getRawResource:84 INFO c.s.a.c.AmbariClientUtils - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] AmbariClient getRawResource, resourceRequestMap: {path=https://172.31.47.29:9443/api/v1/clusters, query={fields=Clusters}}
periscope_1 | 2018-06-26 03:02:30,069 [getThreadPoolExecutorFactoryBean-36] getRawResource:84 INFO c.s.a.c.AmbariClientUtils - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] AmbariClient getRawResource, resourceRequestMap: {path=https://172.31.47.29:9443/api/v1/clusters/aisin-storm/alert_history, query={AlertHistory/definition_name=ambari_agent_disk_usage}}
periscope_1 | 2018-06-26 03:02:30,084 [getThreadPoolExecutorFactoryBean-36] getRawResource:84 INFO c.s.a.c.AmbariClientUtils - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] AmbariClient getRawResource, resourceRequestMap: {path=https://172.31.47.29:9443/api/v1/clusters/aisin-storm/alert_history/28}
periscope_1 | 2018-06-26 03:02:30,097 [getThreadPoolExecutorFactoryBean-36] run:72 INFO c.s.p.m.e.MetricEvaluator - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] Alert: disk-usage is in 'CRITICAL' state since 5.22 min(s)
periscope_1 | 2018-06-26 03:02:30,103 [getThreadPoolExecutorFactoryBean-36] createAmbariClient:44 INFO c.s.p.s.AmbariClientProvider - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] Create Ambari client to connect to 172.31.47.29:9443
periscope_1 | 2018-06-26 03:02:30,107 [getThreadPoolExecutorFactoryBean-36] getRawResource:84 INFO c.s.a.c.AmbariClientUtils - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] AmbariClient getRawResource, resourceRequestMap: {path=https://172.31.47.29:9443/api/v1/clusters, query={fields=Clusters}}
periscope_1 | 2018-06-26 03:02:30,130 [getThreadPoolExecutorFactoryBean-36] getRawResource:84 INFO c.s.a.c.AmbariClientUtils - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] AmbariClient getRawResource, resourceRequestMap: {path=https://172.31.47.29:9443/api/v1/clusters/aisin-storm}
periscope_1 | 2018-06-26 03:02:30,249 [getThreadPoolExecutorFactoryBean-43] scaleUp:76 INFO c.s.p.m.h.ScalingRequest - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] Sending request to add 1 instance(s) into host group 'supervisor', triggered policy 'scale-out'
periscope_1 | 2018-06-26 03:02:30,313 [getThreadPoolExecutorFactoryBean-43] logExceptions:127 WARN o.h.e.j.s.SqlExceptionHelper - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] SQL Error: 0, SQLState: 22001
periscope_1 | 2018-06-26 03:02:30,313 [getThreadPoolExecutorFactoryBean-43] logExceptions:127 WARN o.h.e.j.s.SqlExceptionHelper - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] SQL Error: 0, SQLState: 22001
periscope_1 | 2018-06-26 03:02:30,314 [getThreadPoolExecutorFactoryBean-43] logExceptions:129 ERROR o.h.e.j.s.SqlExceptionHelper - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] ERROR: value too long for type character varying(255)
periscope_1 | 2018-06-26 03:02:30,314 [getThreadPoolExecutorFactoryBean-43] logExceptions:129 ERROR o.h.e.j.s.SqlExceptionHelper - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] ERROR: value too long for type character varying(255)
periscope_1 | 2018-06-26 03:02:30,314 [getThreadPoolExecutorFactoryBean-43] release:193 INFO o.h.e.j.b.i.AbstractBatchImpl - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] HHH000010: On release of batch it still contained JDBC statements
periscope_1 | 2018-06-26 03:02:30,314 [getThreadPoolExecutorFactoryBean-43] release:193 INFO o.h.e.j.b.i.AbstractBatchImpl - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] HHH000010: On release of batch it still contained JDBC statements
periscope_1 | 2018-06-26 03:02:30,315 [getThreadPoolExecutorFactoryBean-43] run:65 ERROR c.s.p.m.h.ScalingRequest - [owner:11d6ae4f-8de0-446e-83d9-1b3492185d6b] [id:3] [cb-stack-id:3] Cannot retrieve an oauth token from the identity server
periscope_1 | org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.DataException: could not execute statement
periscope_1 | at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:282)
periscope_1 | at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:244)
periscope_1 | at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:521)
periscope_1 | at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
periscope_1 | at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
periscope_1 | at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:504)
periscope_1 | at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:292)
periscope_1 | at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
periscope_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
periscope_1 | at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
periscope_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
periscope_1 | at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:133)
periscope_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
periscope_1 | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
periscope_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
periscope_1 | at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:57)
periscope_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
periscope_1 | at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
periscope_1 | at com.sun.proxy.$Proxy188.save(Unknown Source)
periscope_1 | at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
periscope_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
periscope_1 | at java.lang.reflect.Method.invoke(Method.java:498)
periscope_1 | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
periscope_1 | at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
periscope_1 | at com.sun.proxy.$Proxy189.save(Unknown Source)
periscope_1 | at com.sequenceiq.periscope.service.HistoryService.createEntry(HistoryService.java:28)
periscope_1 | at com.sequenceiq.periscope.monitor.handler.ScalingRequest.scaleUp(ScalingRequest.java:88)
periscope_1 | at com.sequenceiq.periscope.monitor.handler.ScalingRequest.run(ScalingRequest.java:60)
periscope_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
periscope_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
periscope_1 | at java.lang.Thread.run(Thread.java:748)
periscope_1 | Caused by: org.hibernate.exception.DataException: could not execute statement
periscope_1 | at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:118)
periscope_1 | at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
periscope_1 | at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
periscope_1 | at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
periscope_1 | at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:207)
periscope_1 | at org.hibernate.engine.jdbc.batch.internal.NonBatchingBatch.addToBatch(NonBatchingBatch.java:45)
periscope_1 | at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2949)
periscope_1 | at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3449)
periscope_1 | at org.hibernate.action.internal.EntityInsertAction.execute(EntityInsertAction.java:89)
periscope_1 | at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:582)
periscope_1 | at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:456)
periscope_1 | at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:337)
periscope_1 | at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:39)
periscope_1 | at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1282)
periscope_1 | at org.hibernate.internal.SessionImpl.managedFlush(SessionImpl.java:465)
periscope_1 | at org.hibernate.internal.SessionImpl.flushBeforeTransactionCompletion(SessionImpl.java:2963)
periscope_1 | at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:2339)
periscope_1 | at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.beforeTransactionCompletion(JdbcCoordinatorImpl.java:485)
periscope_1 | at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.beforeCompletionCallback(JdbcResourceLocalTransactionCoordinatorImpl.java:147)
periscope_1 | at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.access$100(JdbcResourceLocalTransactionCoordinatorImpl.java:38)
periscope_1 | at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.commit(JdbcResourceLocalTransactionCoordinatorImpl.java:231)
periscope_1 | at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:65)
periscope_1 | at org.hibernate.jpa.internal.TransactionImpl.commit(TransactionImpl.java:61)
periscope_1 | at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:517)
periscope_1 | ... 28 common frames omitted
periscope_1 | Caused by: org.postgresql.util.PSQLException: ERROR: value too long for type character varying(255)
periscope_1 | at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
periscope_1 | at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
periscope_1 | at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
periscope_1 | at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
periscope_1 | at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
periscope_1 | at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:168)
periscope_1 | at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:135)
periscope_1 | at com.zaxxer.hikari.proxy.PreparedStatementProxy.executeUpdate(PreparedStatementProxy.java:61)
periscope_1 | at com.zaxxer.hikari.proxy.HikariPreparedStatementProxy.executeUpdate(HikariPreparedStatementProxy.java)
periscope_1 | at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:204)
periscope_1 | ... 47 common frames omitted
periscope_1 | 2018-06-26 03:02:31,236 [Timer-1] collectStackDetails:49 INFO c.s.p.s.StackCollectorService - [owner:spring] [id:] [cb-stack-id:] Evaluate cluster management for stack: aisin-storm (ID:3)
periscope_1 | 2018-06-26 03:02:31,260 [getThreadPoolExecutorFactoryBean-35] createAmbariClient:44 INFO c.s.p.s.AmbariClientProvider - [owner:periscope] [id:] [cb-stack-id:] Create Ambari client to connect to 172.31.47.29:9443

It looks like the Cloudbreak Deployer failed to add or update a record in PostgreSQL because a value is too long for the type character varying(255), according to the error log above. However, I have no idea which value it is or how to make it shorter. Could you please help me resolve this problem? Thank you in advance.
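As a general illustration of the failure mode (this is not Periscope's actual schema or code), the INSERT is rejected because some string column is capped at 255 characters; a hypothetical guard that trims the value before persisting looks like this:

```python
VARCHAR_LIMIT = 255  # matches "character varying(255)" in the error above

def fit_varchar(value, limit=VARCHAR_LIMIT):
    # Hypothetical helper: truncate the value so PostgreSQL no longer
    # rejects the row with "value too long for type character varying(255)".
    return value if len(value) <= limit else value[:limit]

oversized = "x" * 300          # stand-in for a long scaling-history field
stored = fit_varchar(oversized)
```

The stack trace points at HistoryService.createEntry, so the oversized value is most likely one of the fields saved to the scaling history table, though which one cannot be determined from the log alone.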
Labels:
- Hortonworks Cloudbreak
02-14-2018
02:07 AM
I found that we can configure the ZooKeeper Server address for Spark worker/slave nodes by setting spark.hadoop.hive.zookeeper.quorum under `Custom spark2-default`.
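As a sketch only (the host names below are placeholders, not from my cluster), the property amounts to a comma-separated list of ZooKeeper endpoints:

```python
# Hedged sketch: the property from the post expressed as a config mapping.
# The ZooKeeper hosts are hypothetical examples.
custom_spark2_defaults = {
    "spark.hadoop.hive.zookeeper.quorum":
        "zk-1.example.com:2181,zk-2.example.com:2181,zk-3.example.com:2181",
}

quorum = custom_spark2_defaults["spark.hadoop.hive.zookeeper.quorum"]
hosts = quorum.split(",")
```

With this set, the Spark executors know where the ZooKeeper quorum actually is instead of relying on a local server.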
02-14-2018
01:51 AM
@Umair Khan - Thanks for your prompt reply and for trying to help. It would have been more helpful if you had explained how to change the ZooKeeper address, since it's not a bug. I already found how to
02-13-2018
02:10 PM
I set up Hive LLAP and spark-llap on an HDP 2.6.2 cluster as per Row/Column-level Security in SQL for Apache Spark 2.1.1. It seems to work only when a ZooKeeper Server is running on a worker/slave node. Is this by design, or a bug? I set up the HDP 2.6.2 cluster as per the attached simplified diagram. Zeppelin is able to create a Spark session via Livy and run "SHOW DATABASES" queries through HiveServer2 Interactive. However, it stalls when I try to run "SELECT" queries, which need to run on a worker/slave node. I see the following error in /hadoop/yarn/log/<YARN_APPLICATION_ID>/<YARN_CONTAINER_ID>/stderr:

18/02/13 02:24:38 INFO ZooKeeper: Client environment:java.library.path=/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
18/02/13 02:24:38 INFO ZooKeeper: Client environment:java.io.tmpdir=/hadoop/yarn/local/usercache/livy/appcache/application_1518487093485_0008/container_e07_1518487093485_0008_01_000002/tmp
18/02/13 02:24:38 INFO ZooKeeper: Client environment:java.compiler=<NA>
18/02/13 02:24:38 INFO ZooKeeper: Client environment:os.name=Linux
18/02/13 02:24:38 INFO ZooKeeper: Client environment:os.arch=amd64
18/02/13 02:24:38 INFO ZooKeeper: Client environment:os.version=3.10.0-693.11.6.el7.x86_64
18/02/13 02:24:38 INFO ZooKeeper: Client environment:user.name=yarn
18/02/13 02:24:38 INFO ZooKeeper: Client environment:user.home=/home/yarn
18/02/13 02:24:38 INFO ZooKeeper: Client environment:user.dir=/hadoop/yarn/local/usercache/livy/appcache/application_1518487093485_0008/container_e07_1518487093485_0008_01_000002
18/02/13 02:24:38 INFO ZooKeeper: Initiating client connection, connectString= sessionTimeout=1200000 watcher=shadecurator.org.apache.curator.ConnectionState@1620dca0
18/02/13 02:24:38 INFO LlapRegistryService: Using LLAP registry (client) type: Service LlapRegistryService in state LlapRegistryService: STARTED
18/02/13 02:24:38 INFO ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
18/02/13 02:24:38 WARN ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1125)
I confirmed that "SELECT" queries work when I install and run a ZooKeeper Server on the worker/slave node.
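Note the empty value after `connectString=` in the log above. A plausible reading (sketched below; this is an illustration, not the actual ZooKeeper/Curator client code) is that an empty quorum degenerates to localhost:2181, which only succeeds on hosts that happen to run their own ZooKeeper Server, matching the "Opening socket connection to server localhost/127.0.0.1:2181" line and the subsequent Connection refused:

```python
def effective_zk_targets(connect_string):
    # Hypothetical illustration of the observed behaviour: with no quorum
    # configured, the client ends up trying only the local machine.
    if not connect_string:
        return ["localhost:2181"]
    return [h.strip() for h in connect_string.split(",")]
```

This would explain why setting spark.hadoop.hive.zookeeper.quorum (so the connect string is no longer empty) makes the queries work on nodes without a local ZooKeeper.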
Labels:
- Apache Spark
02-13-2018
01:29 AM
Thank you @Dongjoon Hyun! Confirmed it works on HDP 2.6.3 after replacing the jar file with spark-llap-assembly-1.0.0.2.6.3.0-235.jar.
01-16-2018
04:44 AM
Hello, I followed the instructions with HDP 2.6.3.0; however, Spark2 Thrift Server stops right after starting, with the following error in /var/log/spark2/spark-hive-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-<HOSTNAME>.out:

Exception in thread "main" java.lang.IllegalArgumentException: Unable to instantiate SparkSession with LLAP support because LLAP or Hive classes are not found.
at org.apache.spark.sql.SparkSession$.isLLAPEnabled(SparkSession.scala:1104)
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$externalCatalogClassName(SharedState.scala:174)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:95)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:53)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:81)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:782)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

This error seems to indicate that Spark2 Thrift Server fails to load the org.apache.hadoop.hive.conf.HiveConf or org.apache.spark.sql.hive.llap.LlapSessionStateBuilder classes. I found that com.hortonworks.spark_spark-llap_2.11-1.1.3-2.1.jar, which Spark2 Thrift Server is using, does not contain org.apache.hadoop.hive.conf.HiveConf but shadehive.org.apache.hadoop.hive.conf.HiveConf. Can I ask if this is a bug? And is there a workaround? Thank you in advance, Mai Nakagawa
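The mismatch can be sketched abstractly (this mirrors the observed symptom, not Spark's actual isLLAPEnabled implementation): the startup check looks for the unshaded class names, while the bundled jar ships only a shaded variant, so the check fails and the IllegalArgumentException path is taken.

```python
# Class reported present in com.hortonworks.spark_spark-llap_2.11-1.1.3-2.1.jar,
# per the post (shaded name only):
jar_classes = {"shadehive.org.apache.hadoop.hive.conf.HiveConf"}

# What the startup check appears to require (unshaded names):
required = {
    "org.apache.hadoop.hive.conf.HiveConf",
    "org.apache.spark.sql.hive.llap.LlapSessionStateBuilder",
}

llap_enabled = required <= jar_classes  # False -> "Unable to instantiate" error
```

This is consistent with the later reply in this thread: swapping in spark-llap-assembly-1.0.0.2.6.3.0-235.jar (which presumably bundles the unshaded classes) resolves the startup failure.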