Member since: 07-01-2016
Posts: 38
Kudos Received: 11
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 776 | 09-21-2016 12:23 AM |
|  | 855 | 09-16-2016 01:10 PM |
|  | 849 | 09-04-2016 05:47 PM |
|  | 1226 | 08-08-2016 01:44 AM |
|  | 672 | 07-18-2016 12:09 AM |
01-25-2017
05:28 PM
I didn't have any data in the cluster, so it was easy for me to remove all the bits from the nodes and do a fresh install of 2.4. If you do have data in the cluster, it may be better to proceed with the cluster upgrade steps and verify them, since you have already upgraded Ambari to 2.4. Thanks, Ram
01-25-2017
01:59 PM
Hi, good morning. I ran into several issues and felt it would be easier to do a fresh install of 2.4 after removing 2.2, so I went with the fresh install. Thanks, Ram
09-21-2016
12:23 AM
Hi, I researched further and found that the Ranger Kafka plugin was not enabled. I enabled the Kafka plugin in Ranger and restarted the services. After the restart, the Sqoop import job worked fine, including the Atlas hooks. Thanks, Ram
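For clusters that are not using Ranger to authorize Kafka, the rough equivalent of this fix is a topic ACL that lets the job's user produce to ATLAS_HOOK. A minimal sketch, with the ZooKeeper host and principal taken from the log in the question below and used purely as illustrations:

```bash
# Hedged sketch: grant produce rights on ATLAS_HOOK when Kafka's own
# SimpleAclAuthorizer (not Ranger) handles authorization.
# ZooKeeper quorum and principal are illustrative values.
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties zookeeper.connect=server1:2181 \
  --add --allow-principal User:mdrxsqoop \
  --producer --topic ATLAS_HOOK
```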
09-19-2016
07:27 PM
Hi, I created a Sqoop import job and am trying to execute it with sqoop job -exec myjob. The job created the table and loaded the data, but at the end I see the following errors:
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration key.deserializer = org.apache.kafka.common.serialization.StringDeserializer was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration value.deserializer = org.apache.kafka.common.serialization.StringDeserializer was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration hook.group.id = atlas was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration partition.assignment.strategy = roundrobin was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.connection.timeout.ms = 200 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.session.timeout.ms = 400 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.connect = server1:2181,server1:2181,server1:2181 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration zookeeper.sync.time.ms = 20 was supplied but isn't a known config.
16/09/18 21:21:15 WARN producer.ProducerConfig: The configuration auto.offset.reset = smallest was supplied but isn't a known config.
16/09/18 21:21:15 INFO utils.AppInfoParser: Kafka version : 0.10.0.2.5.0.0-1245
16/09/18 21:21:15 INFO utils.AppInfoParser: Kafka commitId : dae559f56f07e2cd
16/09/18 21:21:15 WARN clients.NetworkClient: Error while fetching metadata with correlation id 0 : {ATLAS_HOOK=TOPIC_AUTHORIZATION_FAILED}
16/09/18 21:21:15 ERROR hook.AtlasHook: Failed to send notification - attempt #1; error=java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
16/09/18 21:21:16 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1 : {ATLAS_HOOK=TOPIC_AUTHORIZATION_FAILED}
16/09/18 21:21:16 ERROR hook.AtlasHook: Failed to send notification - attempt #2; error=java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
16/09/18 21:21:17 WARN clients.NetworkClient: Error while fetching metadata with correlation id 2 : {ATLAS_HOOK=TOPIC_AUTHORIZATION_FAILED}
16/09/18 21:21:17 ERROR hook.FailedMessagesLogger: {"version":{"version":"1.0.0"},"message":{"entities":[{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751121","version":0,"typeName":"sqoop_dbdatastore","state":"ACTIVE"},"typeName":"sqoop_dbdatastore","values":{"name":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","source":"EndPointCommunicationDetails","storeUse":"TABLE","description":"","storeUri":"jdbc:sqlserver://10.0.4.4;database\u003dEnrollment","qualifiedName":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","owner":"mdrxsqoop","dbStoreType":"sqlserver"},"traitNames":[],"traits":{}},{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751120","version":0,"typeName":"hive_db","state":"ACTIVE"},"typeName":"hive_db","values":{"qualifiedName":"destDbName@DevCluster01","name":"Enrollment_full","clusterName":"DevCluster01"},"traitNames":[],"traits":{}},{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751119","version":0,"typeName":"hive_table","state":"ACTIVE"},"typeName":"hive_table","values":{"qualifiedName":"destDbName.endpoint@DevCluster01","db":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751120","version":0,"typeName":"hive_db","state":"ACTIVE"},"typeName":"hive_db","values":{"qualifiedName":"destDbName@DevCluster01","name":"Enrollment_full","clusterName":"DevCluster01"},"traitNames":[],"traits":{}},"name":"endpoint"},"traitNames":[],"traits":{}},{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751118","version":0,"typeName":"sqoop_process","state":"ACTIVE"},"typeName":"sqoop_process","values":{"name":"sqoop import --connect jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01","startTime":"2016-09-18T21:19:43.636Z","outputs":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751119","version":0,"typeName":"hive_table","state":"ACTIVE"},"typeName":"hive_table","values":{"qualifiedName":"destDbName.endpoint@DevCluster01","db":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751120","version":0,"typeName":"hive_db","state":"ACTIVE"},"typeName":"hive_db","values":{"qualifiedName":"destDbName@DevCluster01","name":"Enrollment_full","clusterName":"DevCluster01"},"traitNames":[],"traits":{}},"name":"endpoint"},"traitNames":[],"traits":{}},"commandlineOpts":{"map.column.hive.IsRequestResponse":"BOOLEAN","db.clear.staging.table":"false","hcatalog.storage.stanza":"stored as orc tblproperties 
(\"orc.compress\"\u003d\"SNAPPY\")","hive.import":"false","codegen.output.delimiters.enclose":"0","codegen.input.delimiters.field":"0","map.column.hive.CommID":"INT","customtool.options.jsonmap":"{}","hive.compute.stats.table":"false","db.connect.string":"jdbc:sqlserver://10.0.4.4;database\u003dEnrollment","incremental.mode":"None","db.table":"EndPointCommunicationDetails","verbose":"true","codegen.output.delimiters.enclose.required":"false","mapreduce.num.mappers":"4","hdfs.append.dir":"false","map.column.hive.EndPointUserName":"STRING","direct.import":"false","hive.drop.delims":"false","hive.overwrite.table":"false","hbase.bulk.load.enabled":"false","hive.fail.table.exists":"false","relaxed.isolation":"false","db.password.file":"/user/mdrxsqoop/AzureDev_Password.txt","hdfs.delete-target.dir":"false","split.limit":"null","db.username":"hadoopuser","codegen.input.delimiters.enclose.required":"false","codegen.output.dir":".","import.direct.split.size":"0","map.column.hive.Active":"BOOLEAN","reset.onemapper":"false","map.column.hive.Filter":"STRING","codegen.output.delimiters.record":"10","temporary.dirRoot":"_sqoop","hcatalog.create.table":"true","map.column.hive.Protocol":"STRING","db.batch":"false","map.column.hive.TransformType":"STRING","hcatalog.database.name":"Enrollment_full","import.fetch.size":"1000","accumulo.max.latency":"5000","hdfs.file.format":"TextFile","codegen.output.delimiters.field":"44","mainframe.input.dataset.type":"p","hcatalog.table.name":"EndPointCommunicationDetails","codegen.output.delimiters.escape":"0","hcatalog.drop.and.create.table":"false","map.column.hive.AuthenticationSource":"STRING","map.column.hive.EncodingType":"STRING","import.max.inline.lob.size":"16777216","hbase.create.table":"false","codegen.auto.compile.dir":"true","codegen.compile.dir":"/tmp/sqoop-mdrxsqoop/compile/134166a19963465594d21d605c8790ac","codegen.input.delimiters.enclose":"0","export.new.update":"UpdateOnly","enable.compression":"false","map.column.hive.WrapperDocumentNamespace":"STRING","accumulo.batch.size":"10240000","map.column.hive.Uri":"STRING","map.column.hive.EndPointPassword":"STRING","codegen.input.delimiters.record":"0","codegen.input.delimiters.escape":"0","accumulo.create.table":"false"},"endTime":"2016-09-18T21:21:12.560Z","inputs":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-1498950189751121","version":0,"typeName":"sqoop_dbdatastore","state":"ACTIVE"},"typeName":"sqoop_dbdatastore","values":{"name":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","source":"EndPointCommunicationDetails","storeUse":"TABLE","description":"","storeUri":"jdbc:sqlserver://10.0.4.4;database\u003dEnrollment","qualifiedName":"sqlserver --url jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails","owner":"mdrxsqoop","dbStoreType":"sqlserver"},"traitNames":[],"traits":{}},"operation":"import","qualifiedName":"sqoop import --connect jdbc:sqlserver://10.0.4.4;database\u003dEnrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01","userName":"mdrxsqoop"},"traitNames":[],"traits":{}}],"type":"ENTITY_CREATE","user":"mdrxsqoop"}}
16/09/18 21:21:17 ERROR hook.AtlasHook: Failed to notify atlas for entity [[{Id='(type: sqoop_dbdatastore, id: <unassigned>)', traits=[], values={owner=mdrxsqoop, storeUri=jdbc:sqlserver://10.0.4.4;database=Enrollment, dbStoreType=sqlserver, qualifiedName=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, name=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, description=, source=EndPointCommunicationDetails, storeUse=TABLE}}, {Id='(type: hive_db, id: <unassigned>)', traits=[], values={qualifiedName=destDbName@DevCluster01, clusterName=DevCluster01, name=Enrollment_full}}, {Id='(type: hive_table, id: <unassigned>)', traits=[], values={qualifiedName=destDbName.endpoint@DevCluster01, name=endpoint, db={Id='(type: hive_db, id: <unassigned>)', traits=[], values={qualifiedName=destDbName@DevCluster01, clusterName=DevCluster01, name=Enrollment_full}}}}, {Id='(type: sqoop_process, id: <unassigned>)', traits=[], values={outputs={Id='(type: hive_table, id: <unassigned>)', traits=[], values={qualifiedName=destDbName.endpoint@DevCluster01, name=endpoint, db={Id='(type: hive_db, id: <unassigned>)', traits=[], values={qualifiedName=destDbName@DevCluster01, clusterName=DevCluster01, name=Enrollment_full}}}}, commandlineOpts={reset.onemapper=false, map.column.hive.Filter=STRING, codegen.output.delimiters.enclose=0, codegen.input.delimiters.escape=0, codegen.auto.compile.dir=true, map.column.hive.AuthenticationSource=STRING, map.column.hive.IsRequestResponse=BOOLEAN, accumulo.batch.size=10240000, codegen.input.delimiters.field=0, accumulo.create.table=false, mainframe.input.dataset.type=p, map.column.hive.EncodingType=STRING, enable.compression=false, hive.compute.stats.table=false, map.column.hive.Active=BOOLEAN, accumulo.max.latency=5000, map.column.hive.Uri=STRING, map.column.hive.EndPointUserName=STRING, db.username=hadoopuser, map.column.hive.Protocol=STRING, db.clear.staging.table=false, codegen.input.delimiters.enclose=0, hdfs.append.dir=false, import.direct.split.size=0, map.column.hive.EndPointPassword=STRING, hcatalog.drop.and.create.table=false, codegen.output.delimiters.record=10, codegen.output.delimiters.field=44, hbase.bulk.load.enabled=false, hcatalog.table.name=EndPointCommunicationDetails, mapreduce.num.mappers=4, export.new.update=UpdateOnly, hive.import=false, customtool.options.jsonmap={}, hdfs.delete-target.dir=false, codegen.output.delimiters.enclose.required=false, direct.import=false, codegen.output.dir=., hdfs.file.format=TextFile, hive.drop.delims=false, hcatalog.storage.stanza=stored as orc tblproperties ("orc.compress"="SNAPPY"), codegen.input.delimiters.record=0, db.batch=false, map.column.hive.TransformType=STRING, split.limit=null, hcatalog.create.table=true, hive.fail.table.exists=false, hive.overwrite.table=false, incremental.mode=None, temporary.dirRoot=_sqoop, hcatalog.database.name=Enrollment_full, verbose=true, import.max.inline.lob.size=16777216, import.fetch.size=1000, codegen.input.delimiters.enclose.required=false, relaxed.isolation=false, map.column.hive.WrapperDocumentNamespace=STRING, map.column.hive.CommID=INT, db.table=EndPointCommunicationDetails, hbase.create.table=false, db.password.file=/user/mdrxsqoop/AzureDev_Password.txt, codegen.compile.dir=/tmp/sqoop-mdrxsqoop/compile/134166a19963465594d21d605c8790ac, codegen.output.delimiters.escape=0, db.connect.string=jdbc:sqlserver://10.0.4.4;database=Enrollment}, qualifiedName=sqoop import --connect 
jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01, inputs={Id='(type: sqoop_dbdatastore, id: <unassigned>)', traits=[], values={owner=mdrxsqoop, storeUri=jdbc:sqlserver://10.0.4.4;database=Enrollment, dbStoreType=sqlserver, qualifiedName=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, name=sqlserver --url jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails, description=, source=EndPointCommunicationDetails, storeUse=TABLE}}, name=sqoop import --connect jdbc:sqlserver://10.0.4.4;database=Enrollment --table EndPointCommunicationDetails --hive-import --hive-database destDbName --hive-table endpoint --hive-cluster DevCluster01, startTime=Sun Sep 18 21:19:43 UTC 2016, endTime=Sun Sep 18 21:21:12 UTC 2016, userName=mdrxsqoop, operation=import}}]] after 3 retries. Quitting
org.apache.atlas.notification.NotificationException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:249)
at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:222)
at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:84)
at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:129)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:114)
at org.apache.atlas.sqoop.hook.SqoopHook.publish(SqoopHook.java:177)
at org.apache.atlas.sqoop.hook.SqoopHook.publish(SqoopHook.java:51)
at org.apache.sqoop.mapreduce.PublishJobData.publishJobData(PublishJobData.java:52)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:284)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.SQLServerManager.importTable(SQLServerManager.java:163)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:507)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:243)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:298)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:730)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:483)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:430)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:353)
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:232)
... 20 more
Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ATLAS_HOOK]
16/09/18 21:21:17 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@33c7e1bb
16/09/18 21:21:17 DEBUG hsqldb.HsqldbJobStorage: Flushing current transaction
16/09/18 21:21:17 DEBUG hsqldb.HsqldbJobStorage: Closing connection
Can anyone help? Thanks, Ram
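For context, this is roughly how such a saved Sqoop job would be defined and executed; the options below are reconstructed from the configuration values in the log above (connect string, table, username, password file, HCatalog settings) and are illustrative, not the exact job definition:

```bash
# Hedged reconstruction of the saved Sqoop job, based on the log output above.
sqoop job --create myjob -- import \
  --connect "jdbc:sqlserver://10.0.4.4;database=Enrollment" \
  --username hadoopuser \
  --password-file /user/mdrxsqoop/AzureDev_Password.txt \
  --table EndPointCommunicationDetails \
  --hcatalog-database Enrollment_full \
  --hcatalog-table EndPointCommunicationDetails \
  --create-hcatalog-table \
  --hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")'

# Execute the saved job; this is the step whose Atlas hook fails above.
sqoop job -exec myjob
```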
Labels:
- Apache Atlas
- Apache Kafka
- Apache Sqoop
09-16-2016
01:10 PM
1 Kudo
Hi, good morning. I added the following to core-site.xml and restarted HDFS, YARN, and MapReduce: <property> <name>hadoop.proxyuser.hive.hosts</name> <value>*</value> </property> After that I was able to execute the Sqoop job (the snippet is repeated below for readability). Thanks, Ram
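The same property as a core-site.xml snippet, for readability; the hadoop.proxyuser.hive.groups entry is an assumed companion setting that is commonly configured alongside it, not something stated in this post:

```xml
<!-- Allow the hive service user to impersonate end users (proxy user). -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<!-- Assumed companion setting, not from the post above. -->
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```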
09-16-2016
02:48 AM
1 Kudo
Hi, good evening. I have created a job to import data from SQL Server, and when I try to execute it using sqoop job -exec job.my.Account I get the following exception:
16/09/16 01:39:38 INFO hcat.SqoopHCatUtilities: SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/09/16 01:39:47 INFO hcat.SqoopHCatUtilities: FAILED: SemanticException MetaException(message:org.apache.hadoop.ipc.RemoteException(onException):
Unauthorized connection for super-user: hive/n02.myserver.com@MYSERVER.COM from IP xx.xx.xx.5)
16/09/16 01:39:48 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@33c7e1bb
16/09/16 01:39:48 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: HCat exited with status 64
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.executeExternalHCatProgram(SqoopHCatUtilities.java:1196)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.launchHCatCli(SqoopHCatUtilities.java:1145)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.createHCatTable(SqoopHCatUtilities.java:679)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:342)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:848)
at org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:102)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:263)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.SQLServerManager.importTable(SQLServerManager.java:163)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:507)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:243)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:298)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
The same job works fine without security enabled (without Kerberization). I have configured the following in core-site.xml:
<property>
<name>hadoop.proxyuser.hcat.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hcat.groups</name>
<value>*</value>
</property>
Can anyone help? Thanks, Ram
Labels:
- Apache Sqoop
09-04-2016
05:47 PM
I was able to identify my mistake: I had missed one step, ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar. Once I executed that command, it started working. The OS is CentOS 7.2. Thank you for your help. Thanks, Ram
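The missed step as a command sequence; the restart afterwards is my assumption of the usual follow-up, not something stated in the post:

```bash
# Register the MySQL JDBC driver with Ambari (the step that was missed).
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# Assumed follow-up: restart so the driver setting is picked up.
ambari-server restart
```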
09-01-2016
10:56 PM
Hi, good evening. I started installing Ambari 2.4.0.1-1 and HDP 2.5, but the Test Connection check fails for both Oozie and the Hive metastore with the following error:
2016-09-01 22:46:22,247 - There was an unknown error while checking database connectivity: coercing to Unicode: need string or buffer, NoneType found
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 144, in actionexecute
db_connection_check_structured_output = self.execute_db_connection_check(config, tmp_dir)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 285, in execute_db_connection_check
jdbc_url = jdk_location + jdbc_driver_mysql_name
TypeError: coercing to Unicode: need string or buffer, NoneType found
2016-09-01 22:46:22,248 - Check db_connection_check was unsuccessful. Exit code: 1. Message: coercing to Unicode: need string or buffer, NoneType found
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 506, in <module>
CheckHost().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 206, in actionexecute
raise Fail(error_message)
resource_management.core.exceptions.Fail: Check db_connection_check was unsuccessful. Exit code: 1. Message: coercing to Unicode: need string or buffer, NoneType found
Can anyone point me in the right direction to resolve this issue? Thank you, Ram
09-01-2016
03:28 PM
Thank you for the quick response. I thought it would be better to start fresh, so I manually removed all artifacts related to Ambari 2.2 and HDP 2.4 and started a fresh install of Ambari 2.4.0.1. I should have waited a little longer... Thank you, Ram
09-01-2016
12:41 PM
1 Kudo
I was able to start the server using ambari-server start --skip-database-check, and ambari-server --version now returns 2.4.0.1-1 (the commands are repeated below). I will continue my upgrade and see how it goes. Thank you for your help. Ram
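The two commands from this reply, run on the Ambari server host:

```bash
# Start Ambari while skipping the database consistency check.
ambari-server start --skip-database-check

# Confirm the upgraded version; this reported 2.4.0.1-1 here.
ambari-server --version
```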
09-01-2016
02:21 AM
2 Kudos
Hi, I upgraded Ambari 2.2 to 2.4 and everything went well per the documented steps, but the ambari-server service failed to start with the following error:
2016-08-31 21:52:28,082 INFO - ******************************* Check database started *******************************
2016-08-31 21:52:31,647 INFO - Checking for configs not mapped to any cluster
2016-08-31 21:52:31,653 INFO - Checking for configs selected more than once
2016-08-31 21:52:31,655 INFO - Checking for hosts without state
2016-08-31 21:52:31,657 INFO - Checking host component states count equals host component desired states count
2016-08-31 21:52:31,660 INFO - Checking services and their configs
2016-08-31 21:52:33,669 ERROR - Unexpected error, database check failed
java.lang.NullPointerException
at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.checkServiceConfigs(DatabaseConsistencyCheckHelper.java:543)
at org.apache.ambari.server.checks.DatabaseConsistencyChecker.main(DatabaseConsistencyChecker.java:115)
Thank you for your help. Thanks, Ram
Tags:
- Ambari
- Hadoop Core
Labels:
- Apache Ambari
08-08-2016
01:44 AM
Hi all, I would like to post the solution that worked for me. I deleted the data from the following tables in the Ambari database: a) request b) stage c) host_role_command d) execution_command e) requestoperationlevel f) requestresourcefilter (a sketch of the cleanup is below). Thank you for your help. Thanks, Ram
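A hedged sketch of that cleanup, assuming a MySQL-backed Ambari database named ambari and that the server is stopped first; the credentials and the delete order are assumptions, not taken from this thread:

```bash
# Stop Ambari before touching its database.
ambari-server stop

# Hedged sketch: clear the request/stage history tables listed above.
# Database name and credentials are assumptions; the delete order tries to
# remove child tables before the request table and may need adjusting.
mysql -u ambari -p ambari -e "
  DELETE FROM execution_command;
  DELETE FROM host_role_command;
  DELETE FROM stage;
  DELETE FROM requestresourcefilter;
  DELETE FROM requestoperationlevel;
  DELETE FROM request;"

ambari-server start
```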
08-04-2016
09:38 PM
Sharma, thank you for your help. Here is the error from the Ambari server log:
04 Aug 2016 17:20:36,026 ERROR [pool-9-thread-9] BaseProvider:240 - Caught exception getting JMX metrics : Connection refused, skipping same exceptions for next 5 minutes
One of the agent logs has the following error:
ERROR 2016-08-04 16:21:36,953 HostInfo.py:229 - Checking java processes failed
Please let me know if you need more information. Thank you, Ram
08-04-2016
09:35 PM
Thank you for your reply. I tried the above: a) I checked /var/log and grepped for ERROR, and found the following in the ambari-agent logs:
ERROR 2016-08-03 16:19:01,144 Controller.py:350 - Connection to hdp-cent7-01 was lost (details=Request to https://hdp-cent7-01:8441/agent/v1/heartbeat/hdp-cent7-02 failed due to Error occured during connecting to the server: ('The read operation timed out',))
ERROR 2016-08-03 16:20:27,315 Controller.py:350 - Connection to hdp-cent7-01 was lost (details=Request to https://hdp-cent7-01:8441/agent/v1/heartbeat/hdp-cent7-02 failed due to Error occured during connecting to the server: ('The read operation timed out',))
Based on the above, I followed this article: https://community.hortonworks.com/articles/49075/heartbeat-lost-due-to-ambari-agent-error-unable-to.html I uninstalled ambari-agent as well as ambari-server and reinstalled them. However, it is still not working, and I noticed the following error in the Ambari server log:
04 Aug 2016 17:21:07,213 WARN [C3P0PooledConnectionPoolManager[identityToken->2w0zzb9io96x8a18kxg2w|3fc2959f]-HelperThread-#2] BasicResourcePool:223 - com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@1d91d05d -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
at sun.reflect.GeneratedConstructorAccessor174.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1015)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:975)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1112)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2488)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2521)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.GeneratedConstructorAccessor171.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
at com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
b) I tested SSH to all nodes from the Ambari server and did not find any issues. Here is the final error I am seeing:
WARN [qtp-ambari-client-29] ServletHandler:563 - /api/v1/clusters/txhubdevcluster01/hosts/hdp-cent7-03.rd.allscripts.com/host_components/FLUME_HANDLER
javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (Unknown error code)
Thanks, Ram
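Not part of this thread's resolution, but given the "Too many connections" error quoted above, a quick way to inspect the MySQL side looks roughly like this (credentials illustrative; raising max_connections is only a stopgap if connections are actually leaking):

```bash
# Hedged sketch: inspect MySQL's connection limit and current usage.
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"

# Temporary relief until the next restart (tune permanently in my.cnf).
mysql -u root -p -e "SET GLOBAL max_connections = 300;"
```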
08-04-2016
03:29 AM
Hi, I created a cluster using Ambari 2.2.2 on CentOS 7.2. It worked for about four days and I was able to ingest data using Flume. All of a sudden, I am not able to start any service from Ambari. The background-operations progress bar does not appear, and I am seeing the following exception in the Ambari server log:
03 Aug 2016 17:15:53,863 ERROR [pool-9-thread-256] BaseProvider:240 - Caught exception getting JMX metrics : Connection refused, skipping same exceptions for next 5 minutes
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1512)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at org.apache.ambari.server.controller.internal.URLStreamProvider.processURL(URLStreamProvider.java:209)
at org.apache.ambari.server.controller.internal.URLStreamProvider.processURL(URLStreamProvider.java:133)
at org.apache.ambari.server.controller.internal.URLStreamProvider.readFrom(URLStreamProvider.java:107)
at org.apache.ambari.server.controller.internal.URLStreamProvider.readFrom(URLStreamProvider.java:112)
at org.apache.ambari.server.controller.jmx.JMXPropertyProvider.populateResource(JMXPropertyProvider.java:212)
at org.apache.ambari.server.controller.metrics.ThreadPoolEnabledPropertyProvider$1.call(ThreadPoolEnabledPropertyProvider.java:180)
at org.apache.ambari.server.controller.metrics.ThreadPoolEnabledPropertyProvider$1.call(ThreadPoolEnabledPropertyProvider.java:178)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
03 Aug 2016 17:16:09,289 INFO [qtp-ambari-client-445] RequestScheduleR
I did the following: a) removed the contents of /var/lib/ambari-agent/data and restarted all ambari-agents; b) restarted the ambari-server. I really appreciate your help. Thanks, Ram
Labels:
- Apache Ambari
07-22-2016
06:03 PM
Here are the details: a) This is the SHOW CREATE TABLE result for the table created with Spark SQL:
CREATE TABLE `testtabletmp1`(
  `person_key` bigint,
  `pat_last` string,
  `pat_first` string,
  `pat_dob` timestamp,
  `pat_zip` string,
  `pat_gender` string,
  `pat_chksum1` bigint,
  `pat_chksum2` bigint,
  `dimcreatedgmt` timestamp,
  `pat_mi` string,
  `h_keychksum` string,
  `patmd5` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION 'hdfs://hdp-cent7-01:8020/apps/hive/warehouse/datawarehouse.db/testtabledimtmp1'
TBLPROPERTIES (
  'orc.compress'='SNAPPY',
  'transient_lastDdlTime'='1469207216')
b) This is the original table created when we Sqooped the data from SQL Server:
CREATE TABLE `testtabledim`(
  `person_key` bigint,
  `pat_last` varchar(35),
  `pat_first` varchar(35),
  `pat_dob` timestamp,
  `pat_zip` char(5),
  `pat_gender` char(1),
  `pat_chksum1` bigint,
  `pat_chksum2` bigint,
  `dimcreatedgmt` timestamp,
  `pat_mi` char(1),
  `h_keychksum` string,
  `patmd5` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION 'hdfs://hdp-cent7-01:8020/apps/hive/warehouse/datawarehouse.db/testtabledim'
TBLPROPERTIES (
  'COLUMN_STATS_ACCURATE'='false',
  'last_modified_by'='hdfs',
  'last_modified_time'='1469026541',
  'numFiles'='1',
  'numRows'='-1',
  'orc.compress'='SNAPPY',
  'rawDataSize'='-1',
  'totalSize'='11144909',
  'transient_lastDdlTime'='1469026541')
If I use the first script from Spark SQL and store the file as ORC with Snappy compression, it works. If I store the ORC file with Snappy compression and have Hive create the table using script a), it also works fine. But if I take an existing table, alter it to add a new column through the Spark HiveContext, and save as ORC with Snappy compression, I get the error "ORC does not support type conversion from STRING to VARCHAR". If I use the same ORC data but have Hive create the table using script b), I get the same error. I noticed that some columns are defined as VARCHAR(35) or CHAR, and I suspected those columns were the issue. After I changed the VARCHAR and CHAR columns to STRING, it worked fine. I am still investigating the best way to handle VARCHAR/CHAR types through Spark DataFrames. Please let me know if you need more information. Thank you for your help.
07-22-2016
04:29 PM
I executed the above statement and identified that we created the table with:
TBLPROPERTIES (
  'COLUMN_STATS_ACCURATE'='false',
  'last_modified_by'='hdfs',
  'last_modified_time'='1469026541',
  'numFiles'='1',
  'numRows'='-1',
  'orc.compress'='SNAPPY',
  'rawDataSize'='-1',
  'totalSize'='11144909',
  'transient_lastDdlTime'='1469026541')
I noticed that while storing the ORC file I did not provide a compress option; I used option("compression", "snappy") while saving the file, and it appears the compression is not being applied. Can you please help? Thanks, Ram
07-21-2016
09:06 PM
1 Kudo
Hi, thank you for your reply; I will post the results. In the meantime, here are the steps I followed: a) loaded the data from the existing table testtable into a DataFrame using HiveContext; b) added a column to the DataFrame using withColumn; c) created the new table (testtabletmp) with the new column using Spark SQL, stored as ORC; d) saved the DataFrame as ORC with dataframe.write.format("orc").save("testtabletmp") (a rough sketch of these steps follows below). With the above steps I am able to access the table from Hive. I will post the results of SHOW CREATE TABLE testtable tomorrow. Thanks, Ram
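A rough, self-contained sketch of steps a) through d), assuming the Spark 1.6-era HiveContext API; the PatMD5 expression and the use of saveAsTable to combine steps c) and d) are illustrative choices, not the exact code from this thread:

```scala
// Hedged sketch of the steps described above (Spark 1.6-era API).
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.functions.{col, concat_ws, md5}

val sc = new SparkContext(new SparkConf().setAppName("testtable-orc-copy"))
val hiveContext = new HiveContext(sc)

// a) load the existing Hive table into a DataFrame via HiveContext
val df = hiveContext.table("testtable")

// b) add the new column (illustrative expression: an MD5 over all columns)
val withCol = df.withColumn("PatMD5", md5(concat_ws("|", df.columns.map(col): _*)))

// c) + d) create the new ORC-backed table and write the data; the original
// post created the table with Spark SQL and then called
// dataframe.write.format("orc").save("testtabletmp") against a path.
withCol.write.format("orc").saveAsTable("testtabletmp")
```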
07-20-2016
03:24 AM
I Sqooped the data from SQL Server and stored it in Hive as an ORC table named testtable in a data warehouse database. I read the data into a DataFrame with Spark, added a column to the DataFrame using withColumn, and issued an ALTER to add the column, alter table testtable add columns (PatMD5 VARCHAR(50)), via hiveContext.sql; that changed the table. I then saved the DataFrame using dataframe.write.format("orc").mode(SaveMode.Overwrite).save("testtable") and the ORC files are written successfully. But when I try to query the table using Hue or Beeline, I get the following error: ORC does not support type conversion from STRING to VARCHAR. I also tried alter table testtable add columns (PatMD5 STRING); again I can save the ORC files but cannot query the table from Hive. Can anyone help? Thanks in advance, Ram
Labels:
- Apache Hive
- Apache Spark
07-18-2016
12:09 AM
Hi all, I researched this issue further and found an alternative solution. If you define the function as follows:
val parsePatientfun = udf { (thestruct: Row) =>
  thestruct.getAs[String]("City") }
you can get at the fields of the StructType (a fuller sketch is below). Thanks, Ram
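A slightly fuller sketch of that workaround, showing the UDF applied in a select; the DataFrame jsonRxMap and the Patient struct's City field come from the earlier question and are assumed to exist:

```scala
// Hedged sketch: pass the struct column to a UDF as a Row and read its
// fields with getAs, as described above.
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

// Extract a single field from the Patient struct.
val parsePatientfun = udf { (thestruct: Row) =>
  thestruct.getAs[String]("City")
}

// Apply it to the struct column from the original question.
val selected = jsonRxMap.select(parsePatientfun(jsonRxMap("Patient")).as("patient_city"))
```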
07-13-2016
11:52 PM
Hi, I am trying to create a UDF and use it in a DataFrame select, something like: val selected = jsonRxMap.select(parsePatient($"Patient"), parseProvider($"Provider"), parsePharmacy($"Pharmacy")) Here $"Patient" is a StructType column. I searched Google and found SPARK-12823, and I am not sure whether there is any workaround. The goal is to pass the StructType (struct<FirstName:string,Address1:string,Address2:string,AltID:string,Beeper:string,..>) to the parsePatient function, which returns a value unique to that patient, so I can store the patient in a dimension table in Hive in ORC format. Can anyone help? Thanks, Ram
Labels:
- Apache Spark