Member since: 07-01-2016
Posts: 38
Kudos Received: 11
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1329 | 09-21-2016 12:23 AM
 | 1497 | 09-16-2016 01:10 PM
 | 1506 | 09-04-2016 05:47 PM
 | 2387 | 08-08-2016 01:44 AM
 | 1165 | 07-18-2016 12:09 AM
01-25-2017
05:28 PM
I don't have any data in the cluster, so it was easy for me to remove all bits from the nodes and do a fresh install with 2.4. But if you have data in the cluster, it may be better to proceed with the cluster upgrade steps and verify them, since you have already upgraded Ambari to 2.4. Thanks, Ram
01-25-2017
01:59 PM
Hi, good morning. I ran into several issues and felt it would be easier to do a fresh install of 2.4 after removing 2.2, so I went with a fresh install. Thanks, Ram
09-21-2016
12:23 AM
Hi, I researched further and found that the Kafka plugin in Ranger was not enabled. I enabled the Kafka plugin and restarted the services. Once restarted, the Sqoop import job worked fine, including the Atlas hooks. Thanks, Ram
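For reference, if Ranger is not managing Kafka authorization, the equivalent permission can be granted with Kafka's own ACL tool. This is only a sketch: the principal (mdrxsqoop), the ZooKeeper address, and the HDP install path are examples inferred from the logs, not verified values for this cluster.
# Hypothetical alternative to a Ranger policy: allow the Sqoop job's user to produce to ATLAS_HOOK.
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties zookeeper.connect=server1:2181 \
  --add --allow-principal User:mdrxsqoop \
  --producer --topic ATLAS_HOOK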
09-19-2016
07:27 PM
Hi, I created a Sqoop import job and am trying to execute it using:
sqoop job -exec myjob
The job created a table and loaded the data, but at the end I am seeing the following error:
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration key.deserializer = org.apache.kafka.common.serialization.StringDeserializer was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration value.deserializer = org.apache.kafka.common.serialization.StringDeserializer was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration hook.group.id = atlas was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration partition.assignment.strategy = roundrobin was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.connection.timeout.ms = 200 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.session.timeout.ms = 400 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.connect = server1:2181,server1:2181,server1:2181 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.sync.time.ms = 20 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration auto.offset.reset = smallest was supplied but isn't a known config.
16/09/18 21:21:15 INFO utils.AppInfoParser: Kafka version : 0.10.0.2.5.0.0-1245
16/09/18 21:21:15 INFO utils.AppInfoParser: Kafka commitId : dae559f56f07e2cd
16/09/18 21:21:15 WARN clients.NetworkClient: Error while fetching metadata with correlation id 0 : {ATLAS_HOOK=TOPIC_AUTHORIZATION_FAILED}
16/09/18 21:21:15 ERROR hook.AtlasHook: Failed to send notification - attempt #1; error=java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
16/09/18 21:21:16 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1 : {ATLAS_HOOK=TOPIC_AUTHORIZATION_FAILED}
16/09/18 21:21:16 ERROR hook.AtlasHook: Failed to send notification - attempt #2; error=java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
16/09/18 21:21:17 WARN clients.NetworkClient: Error while fetching metadata with correlation id 2 : {ATLAS_HOOK=TOPIC_AUTHORIZATION_FAILED}
16/09/18 21:21:17 ERROR hook.FailedMessagesLogger: {"version":{"version":"1.0.0"},"message":{"entities":[{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751121","version":0,"typeName":"sqoop_dbdatastore","state":"ACTIVE"},"typeName":"sqoop_dbdatastore","values":{"name":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","source":"EndPointCommunicationDetails","storeUse":"TABLE","description":"","storeUri":"jdbc:sqlserver://10.0.4.4;database\u003dEnrollment","qualifiedName":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","owner":"mdrxsqoop","dbStoreType":"sqlserver"},"traitNames":[],"traits":{}},{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751120","version":0,"typeName":"hive_db","state":"ACTIVE"},"typeName":"hive_db","values":{"qualifiedName":"destDbName@DevCluster01","name":"Enrollment_full","clusterName":"DevCluster01"},"traitNames":[],"traits":{}},{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751119","version":0,"typeName":"hive_table","state":"ACTIVE"},"typeName":"hive_table","values":{"qualifiedName":"destDbName.endpoint@DevCluster01","db":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751120","version":0,"typeName":"hive_db","state":"ACTIVE"},"typeName":"hive_db","values":{"qualifiedName":"destDbName@DevCluster01","name":"Enrollment_full","clusterName":"DevCluster01"},"traitNames":[],"traits":{}},"name":"endpoint"},"traitNames":[],"traits":{}},{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751118","version":0,"typeName":"sqoop_process","state":"ACTIVE"},"typeName":"sqoop_process","values":{"name":"sqoop import --connect jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01","startTime":"2016-09-18T21:19:43.636Z","outputs":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751119","version":0,"typeName":"hive_table","state":"ACTIVE"},"typeName":"hive_table","values":{"qualifiedName":"destDbName.endpoint@DevCluster01","db":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751120","version":0,"typeName":"hive_db","state":"ACTIVE"},"typeName":"hive_db","values":{"qualifiedName":"destDbName@DevCluster01","name":"Enrollment_full","clusterName":"DevCluster01"},"traitNames":[],"traits":{}},"name":"endpoint"},"traitNames":[],"traits":{}},"commandlineOpts":{"map.column.hive.IsRequestResponse":"BOOLEAN","db.clear.staging.table":"false","hcatalog.storage.stanza":"stored as orc tblproperties 
(\"orc.compress\"\u003d\"SNAPPY\")","hive.import":"false","codegen.output.delimiters.enclose":"0","codegen.input.delimiters.field":"0","map.column.hive.CommID":"INT","customtool.options.jsonmap":"{}","hive.compute.stats.table":"false","db.connect.string":"jdbc:sqlserver://10.0.4.4;database\u003dEnrollment","incremental.mode":"None","db.table":"EndPointCommunicationDetails","verbose":"true","codegen.output.delimiters.enclose.required":"false","mapreduce.num.mappers":"4","hdfs.append.dir":"false","map.column.hive.EndPointUserName":"STRING","direct.import":"false","hive.drop.delims":"false","hive.overwrite.table":"false","hbase.bulk.load.enabled":"false","hive.fail.table.exists":"false","relaxed.isolation":"false","db.password.file":"/user/mdrxsqoop/AzureDev_Password.txt","hdfs.delete-target.dir":"false","split.limit":"null","db.username":"hadoopuser","codegen.input.delimiters.enclose.required":"false","codegen.output.dir":".","import.direct.split.size":"0","map.column.hive.Active":"BOOLEAN","reset.onemapper":"false","map.column.hive.Filter":"STRING","codegen.output.delimiters.record":"10","temporary.dirRoot":"_sqoop","hcatalog.create.table":"true","map.column.hive.Protocol":"STRING","db.batch":"false","map.column.hive.TransformType":"STRING","hcatalog.database.name":"Enrollment_full","import.fetch.size":"1000","accumulo.max.latency":"5000","hdfs.file.format":"TextFile","codegen.output.delimiters.field":"44","mainframe.input.dataset.type":"p","hcatalog.table.name":"EndPointCommunicationDetails","codegen.output.delimiters.escape":"0","hcatalog.drop.and.create.table":"false","map.column.hive.AuthenticationSource":"STRING","map.column.hive.EncodingType":"STRING","import.max.inline.lob.size":"16777216","hbase.create.table":"false","codegen.auto.compile.dir":"true","codegen.compile.dir":"/tmp/sqoop-mdrxsqoop/compile/134166a19963465594d21d605c8790ac","codegen.input.delimiters.enclose":"0","export.new.update":"UpdateOnly","enable.compression":"false","map.column.hive.WrapperDocumentNamespace":"STRING","accumulo.batch.size":"10240000","map.column.hive.Uri":"STRING","map.column.hive.EndPointPassword":"STRING","codegen.input.delimiters.record":"0","codegen.input.delimiters.escape":"0","accumulo.create.table":"false"},"endTime":"2016-09-18T21:21:12.560Z","inputs":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751121","version":0,"typeName":"sqoop_dbdatastore","state":"ACTIVE"},"typeName":"sqoop_dbdatastore","values":{"name":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","source":"EndPointCommunicationDetails","storeUse":"TABLE","description":"","storeUri":"jdbc:sqlserver://10.0.4.4;database\u003dEnrollment","qualifiedName":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","owner":"mdrxsqoop","dbStoreType":"sqlserver"},"traitNames":[],"traits":{}},"operation":"import","qualifiedName":"sqoop import --connect jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01","userName":"mdrxsqoop"},"traitNames":[],"traits":{}}],"type":"ENTITY_CREATE","user":"mdrxsqoop"}}
16/09/18 21:21:17 ERROR hook.AtlasHook: Failed to notify atlas for entity [[{Id='(type: sqoop_dbdatastore, id: <unassigned>)', traits=[], values={owner=mdrxsqoop, storeUri=jdbc:sqlserver://10.0.4.4;database=Enrollment, dbStoreType=sqlserver, qualifiedName=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, name=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, description=, source=EndPointCommunicationDetails, storeUse=TABLE}}, {Id='(type: hive_db, id: <unassigned>)', traits=[], values={qualifiedName=destDbName@DevCluster01, clusterName=DevCluster01, name=Enrollment_full}}, {Id='(type: hive_table, id: <unassigned>)', traits=[], values={qualifiedName=destDbName.endpoint@DevCluster01, name=endpoint, db={Id='(type: hive_db, id: <unassigned>)', traits=[], values={qualifiedName=destDbName@DevCluster01, clusterName=DevCluster01, name=Enrollment_full}}}}, {Id='(type: sqoop_process, id: <unassigned>)', traits=[], values={outputs={Id='(type: hive_table, id: <unassigned>)', traits=[], values={qualifiedName=destDbName.endpoint@DevCluster01, name=endpoint, db={Id='(type: hive_db, id: <unassigned>)', traits=[], values={qualifiedName=destDbName@DevCluster01, clusterName=DevCluster01, name=Enrollment_full}}}}, commandlineOpts={reset.onemapper=false, map.column.hive.Filter=STRING, codegen.output.delimiters.enclose=0, codegen.input.delimiters.escape=0, codegen.auto.compile.dir=true, map.column.hive.AuthenticationSource=STRING, map.column.hive.IsRequestResponse=BOOLEAN, accumulo.batch.size=10240000, codegen.input.delimiters.field=0, accumulo.create.table=false, mainframe.input.dataset.type=p, map.column.hive.EncodingType=STRING, enable.compression=false, hive.compute.stats.table=false, map.column.hive.Active=BOOLEAN, accumulo.max.latency=5000, map.column.hive.Uri=STRING, map.column.hive.EndPointUserName=STRING, db.username=hadoopuser, map.column.hive.Protocol=STRING, db.clear.staging.table=false, codegen.input.delimiters.enclose=0, hdfs.append.dir=false, import.direct.split.size=0, map.column.hive.EndPointPassword=STRING, hcatalog.drop.and.create.table=false, codegen.output.delimiters.record=10, codegen.output.delimiters.field=44, hbase.bulk.load.enabled=false, hcatalog.table.name=EndPointCommunicationDetails, mapreduce.num.mappers=4, export.new.update=UpdateOnly, hive.import=false, customtool.options.jsonmap={}, hdfs.delete-target.dir=false, codegen.output.delimiters.enclose.required=false, direct.import=false, codegen.output.dir=., hdfs.file.format=TextFile, hive.drop.delims=false, hcatalog.storage.stanza=stored as orc tblproperties ("orc.compress"="SNAPPY"), codegen.input.delimiters.record=0, db.batch=false, map.column.hive.TransformType=STRING, split.limit=null, hcatalog.create.table=true, hive.fail.table.exists=false, hive.overwrite.table=false, incremental.mode=None, temporary.dirRoot=_sqoop, hcatalog.database.name=Enrollment_full, verbose=true, import.max.inline.lob.size=16777216, import.fetch.size=1000, codegen.input.delimiters.enclose.required=false, relaxed.isolation=false, map.column.hive.WrapperDocumentNamespace=STRING, map.column.hive.CommID=INT, db.table=EndPointCommunicationDetails, hbase.create.table=false, db.password.file=/user/mdrxsqoop/AzureDev_Password.txt, codegen.compile.dir=/tmp/sqoop-mdrxsqoop/compile/134166a19963465594d21d605c8790ac, codegen.output.delimiters.escape=0, db.connect.string=jdbc:sqlserver://10.0.4.4;database=Enrollment}, qualifiedName=sqoop import --connect 
jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01, inputs={Id='(type: sqoop_dbdatastore, id: <unassigned>)', traits=[], values={owner=mdrxsqoop, storeUri=jdbc:sqlserver://10.0.4.4;database=Enrollment, dbStoreType=sqlserver, qualifiedName=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, name=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, description=, source=EndPointCommunicationDetails, storeUse=TABLE}}, name=sqoop import --connect jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01, startTime=Sun Sep 18 21:19:43 UTC 2016, endTime=Sun Sep 18 21:21:12 UTC 2016, userName=mdrxsqoop, operation=import}}]] after 3 retries. Quitting
org.apache.atlas.notification.NotificationException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:249)
at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:222)
at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:84)
at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:129)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:114)
at org.apache.atlas.sqoop.hook.SqoopHook.publish(SqoopHook.java:177)
at org.apache.atlas.sqoop.hook.SqoopHook.publish(SqoopHook.java:51)
at org.apache.sqoop.mapreduce.PublishJobData.publishJobData(PublishJobData.java:52)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:284)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.SQLServerManager.importTable(SQLServerManager.java:163)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:507)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:243)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:298)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:730)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:483)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:430)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:353)
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:232)
... 20 more
Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
16/09/18 21:21:17 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@33c7e1bb
16/09/18 21:21:17 DEBUG hsqldb.HsqldbJobStorage: Flushing current transaction
16/09/18 21:21:17 DEBUG hsqldb.HsqldbJobStorage: Closing connection
Can anyone help? Thanks, Ram
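For troubleshooting, the topic authorization can be tested outside of Sqoop by producing a single message to ATLAS_HOOK as the same user. This is only a sketch; the keytab path, broker address, and client.properties contents are assumptions, not values from this cluster.
# Hypothetical check: if this also fails with TopicAuthorizationException, the problem is the
# Kafka-side authorization (Ranger Kafka policies or ACLs), not Sqoop or Atlas itself.
kinit -kt /etc/security/keytabs/mdrxsqoop.keytab mdrxsqoop@MYSERVER.COM   # example keytab and principal
echo "test" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list server1:6667 \
  --topic ATLAS_HOOK \
  --producer.config /tmp/client.properties   # e.g. security.protocol=SASL_PLAINTEXT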
Labels:
- Apache Atlas
- Apache Kafka
- Apache Sqoop
09-16-2016
01:10 PM
1 Kudo
Hi, good morning. I added the following to core-site.xml and restarted HDFS, YARN, and MapReduce:
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
After that I was able to execute the Sqoop job. Thanks, Ram
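A quick way to confirm the proxyuser setting is in effect after the restart (a minimal sketch, assuming the client reads the same updated core-site.xml):
# Print the effective proxyuser values as seen by the Hadoop client.
hdfs getconf -confKey hadoop.proxyuser.hive.hosts
# Many clusters pair this with a groups entry for the hive user as well.
hdfs getconf -confKey hadoop.proxyuser.hive.groups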
09-16-2016
02:48 AM
1 Kudo
Hi, good evening. I have created a job to import data from SQL Server, and when I try to execute the job using
sqoop job -exec job.my.Account
I am getting the following exception:
16/09/16 01:39:38 INFO hcat.SqoopHCatUtilities: SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/09/16 01:39:47 INFO hcat.SqoopHCatUtilities: FAILED: SemanticException MetaException(message:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
Unauthorized connection for super-user: hive/n02.myserver.com@MYSERVER.COM from IP xx.xx.xx.5)
16/09/16 01:39:48 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@33c7e1bb
16/09/16 01:39:48 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: HCat exited with status 64
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.executeExternalHCatProgram(SqoopHCatUtilities.java:1196)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.launchHCatCli(SqoopHCatUtilities.java:1145)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.createHCatTable(SqoopHCatUtilities.java:679)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:342)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:848)
at org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:102)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:263)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.SQLServerManager.importTable(SQLServerManager.java:163)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:507)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:243)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:298)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
The same job works fine without security turned on (i.e., without Kerberization). I configured the following in core-site.xml:
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>
Can anyone help? Thanks, Ram
Labels:
- Apache Sqoop
09-04-2016
05:47 PM
I was able to identify my mistake. I forgot one step, namely:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Once I executed that command, it started working. The OS is CentOS 7.2. Thank you for your help. Thanks, Ram
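For anyone hitting the same check, the sequence below is a sketch of the fix. It assumes the MySQL Connector/J jar has already been installed at the path shown.
# Confirm the MySQL JDBC driver jar is actually present before registering it.
ls -l /usr/share/java/mysql-connector-java.jar
# Register the driver with Ambari so the database connectivity check can build the JDBC URL.
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
# Restart Ambari and re-run "Test Connection" from the install wizard.
ambari-server restart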
09-01-2016
10:56 PM
Hi, good evening. I started installing Ambari 2.4.0.1-1 and HDP 2.5, but Test Connection fails for both Oozie and the Hive metastore. I am getting the following error:
2016-09-01 22:46:22,247 - There was an unknown error while checking database connectivity: coercing to Unicode: need string or buffer, NoneType found
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 144, in actionexecute
db_connection_check_structured_output = self.execute_db_connection_check(config, tmp_dir)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 285, in execute_db_connection_check
jdbc_url = jdk_location + jdbc_driver_mysql_name
TypeError: coercing to Unicode: need string or buffer, NoneType found
2016-09-01 22:46:22,248 - Check db_connection_check was unsuccessful. Exit code: 1. Message: coercing to Unicode: need string or buffer, NoneType found
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 506, in <module>
CheckHost().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 206, in actionexecute
raise Fail(error_message)
resource_management.core.exceptions.Fail: Check db_connection_check was unsuccessful. Exit code: 1. Message: coercing to Unicode: need string or buffer, NoneType found
Can anyone point me in the right direction to resolve this issue? Thank you, Ram
09-01-2016
03:28 PM
Thank you for the quick response. I thought it might be better to start fresh, so I manually removed all artifacts related to Ambari 2.2 and HDP 2.4 and began installing Ambari 2.4.0.1 as a fresh install. I should have waited a little longer... Thank you, Ram
09-01-2016
12:41 PM
1 Kudo
I was able to start the server using
ambari-server start --skip-database-check
and the following command now reports 2.4.0.1-1:
ambari-server --version
I will continue my upgrade and see how it goes. Thank you for your help. Ram
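Note that --skip-database-check only bypasses the startup consistency check; it does not fix whatever the check reported. A sketch of how to review the findings and then drop the workaround (the log path is the usual Ambari default and may differ):
# Review what the database consistency check reported.
less /var/log/ambari-server/ambari-server-check-database.log
# After addressing the reported issues, restart without the flag so the check runs again.
ambari-server stop
ambari-server start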