Member since: 10-19-2016
Posts: 52
Kudos Received: 3
Solutions: 3

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 26383 | 02-26-2017 09:33 PM
 | 3132 | 11-29-2016 04:39 PM
 | 11728 | 11-15-2016 06:55 PM
07-25-2019
10:00 AM
It's a bit late, but I'll post the solution that worked for me. The problem was the hostnames: Impala with Kerberos wants the hostnames in lowercase.
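A quick sketch of the check, run on each Impala host, to see whether its FQDN is already lowercase:

```bash
# Kerberos principal matching is case-sensitive, so the FQDN should be
# entirely lowercase.
hostname -f
# Compare against the lowercased form; any difference is the problem.
hostname -f | tr '[:upper:]' '[:lower:]'
```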
01-18-2018
12:16 AM
Hi, I'm trying to use Sqoop via Hue, but I keep getting this error:

2018-01-18 08:31:52,353 [main] WARN org.kitesdk.data.spi.hive.MetaStoreUtil - Aborting use of local MetaStore. Allow local MetaStore by setting kite.hive.allow-local-metastore=true in HiveConf
2018-01-18 08:31:52,353 [main] ERROR org.apache.sqoop.tool.ImportTool - Import failed: Missing Hive MetaStore connection URI

It's not the same error, but it seems quite similar. The cluster uses HA for the Hive metastore. I tried to set the Hive metastore URI like this:

import -Dhive.metastore.uris=thrift://17.239.167.168:9083 -Dkite.hive.allow-local-metastore=true --connect jdbc:exa:17.239.167.201..205:8563;schema=sks_dp_steuerung --driver com.exasol.jdbc.EXADriver --username sys --password exasol --table dbo_institutspartitionen --hive-import --as-parquetfile --hive-table DBO_INSTITUTSPARTITIONEN --hive-database SKS_DP_STEUERUNG -m 1

but it makes no difference. The exception says to set kite.hive.allow-local-metastore=true, which I tried via -Dkite.hive.allow-local-metastore=true, but I have no idea whether that is the right way to do it. Is there something I might have missed? Or is this really a completely different error?
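For readability, the equivalent shell command with one option per line (the leading `sqoop` is added here for running from a shell; in the Hue/Oozie Sqoop action the command starts at `import`, as posted). Note that Hadoop's generic `-D` options must come directly after the tool name, before any tool-specific arguments, which this invocation already satisfies:

```bash
# Same values as in the post above, reflowed for readability.
sqoop import \
  -Dhive.metastore.uris=thrift://17.239.167.168:9083 \
  -Dkite.hive.allow-local-metastore=true \
  --connect "jdbc:exa:17.239.167.201..205:8563;schema=sks_dp_steuerung" \
  --driver com.exasol.jdbc.EXADriver \
  --username sys \
  --password exasol \
  --table dbo_institutspartitionen \
  --hive-import \
  --as-parquetfile \
  --hive-table DBO_INSTITUTSPARTITIONEN \
  --hive-database SKS_DP_STEUERUNG \
  -m 1
```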
05-11-2017
04:14 PM
Ran into the same problem; resolved it by enabling the 'Hive Service' dependency in the Spark2 service configuration.
04-10-2017
03:33 AM
Hi, we are also getting a similar error:

WARN Auto offset commit failed for group console-consumer-26249: Offset commit failed with a retriable exception. You should retry committing offsets. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)

We have a three-node cluster. If we kill one of the Kafka nodes, the remaining two nodes hang and continually print the above message without consuming any data. If we bring the downed node back up, they all resume consuming data without the warning/exception. We are using Kafka 0.10.1.1 on Linux machines. We tried the consumer properties below, but no luck:

enable.auto.commit = true
auto.commit.interval.ms = 1000

zhuangmz: we can't restart the cluster in production; that is not an acceptable solution in a production environment. Are there any specific properties to resolve this group coordination issue? Thanks in advance. Thanks, Yarra
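For reference, a sketch of passing those settings explicitly to the console consumer, plus one check that is an assumption on my part rather than something confirmed in this thread (broker, ZooKeeper, and topic names are placeholders):

```bash
# Pass the auto-commit settings explicitly to the console consumer.
kafka-console-consumer.sh \
  --bootstrap-server broker1:9092,broker2:9092,broker3:9092 \
  --topic my-topic \
  --consumer-property enable.auto.commit=true \
  --consumer-property auto.commit.interval.ms=1000

# Assumption worth verifying: group coordination can hang like this when
# the __consumer_offsets topic ended up with ReplicationFactor=1, so its
# only replica dies with the killed broker.
kafka-topics.sh --zookeeper zk1:2181 --describe --topic __consumer_offsets
```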
02-09-2017
11:28 AM
1 Kudo
Hello, This can happen if the host was never properly added to the cluster. The agent is stateless by design and relies on the model state provided to it by Cloudera Manager. You cannot manually deploy parcels on hosts that are managed by Cloudera Manager; Cloudera Manager will simply instruct the agent to remove the parcels. When the host comes up, you should review the state of operations in Cloudera Manager. Please be sure to visit the parcel management page and see whether or not the system is attempting to deploy parcels on any host. If this is not happening, you may need to remove the host from the cluster and re-add it. It may also be possible to force parcel distribution through the CM API despite its present status. Please review our API documentation, specifically the parcel commands: http://cloudera.github.io/cm_api/apidocs/v14/index.html

/clusters/{clusterName}/parcels
/clusters/{clusterName}/parcels/products/{product}/versions/{version}
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/activate
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/cancelDistribution
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/cancelDownload
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/deactivate
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/removeDownload
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/startDistribution
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/startDownload
/clusters/{clusterName}/parcels/products/{product}/versions/{version}/commands/startRemovalOfDistribution
/clusters/{clusterName}/parcels/usage
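A hedged sketch of driving those endpoints from the shell (host, credentials, cluster name, product, and version are placeholders):

```bash
# Inspect the parcel state for the cluster.
curl -u admin:admin "http://cm-host:7180/api/v14/clusters/Cluster1/parcels"

# Force distribution of a specific parcel version.
curl -u admin:admin -X POST \
  "http://cm-host:7180/api/v14/clusters/Cluster1/parcels/products/CDH/versions/5.10.0-1.cdh5.10.0.p0.41/commands/startDistribution"
```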
01-03-2017
04:47 PM
Hi, Eric. 1. Using an IP address works from Linux hosts both within and outside the CDH cluster. 2. I tried the FQDN "cdh-121", where the Kerberos principal is "cdh-121@REALM". This works on Linux but fails on Windows. I think hostname, IP, and DNS all work; the trigger for this issue is still unknown.
12-29-2016
02:26 AM
I'm using sbt. Should I use spark-submit every time we need to run a project? `sbt run` is catering to my needs for now, as I'm using it in local mode.
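For comparison, a minimal sketch of the spark-submit route for when the project outgrows `sbt run` (class name, Scala version, and jar name are placeholders):

```bash
# Package the project, then submit the resulting jar.
sbt package
spark-submit \
  --class com.example.Main \
  --master "local[*]" \
  target/scala-2.11/my-project_2.11-1.0.jar
```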
12-13-2016
04:59 PM
Hi Venkat, maybe this will help you: https://community.cloudera.com/t5/Cloudera-Manager-Installation/Disabling-Kerberos/td-p/19654