Member since: 09-04-2018
Posts: 33
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 8811 | 10-15-2018 09:26 AM
| 15673 | 09-15-2018 08:53 PM
10-29-2018
03:16 PM
@Christos Stefanopoulos HDP 3.0 has a different way of integrating Apache Hive with Apache Spark, using the Hive Warehouse Connector. The article below explains the steps: https://community.hortonworks.com/content/kbentry/223626/integrating-apache-hive-with-apache-spark-hive-war.html
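For reference, here is a minimal PySpark sketch of the HWC flow that article describes (the connector jar/zip paths and the table name are illustrative placeholders, not values from this thread):

# Launch pyspark with the connector on the classpath, e.g.:
# pyspark --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar \
#         --py-files /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-<version>.zip
from pyspark_llap import HiveWarehouseSession

# Build an HWC session on top of the existing SparkSession; reads then go
# through HiveServer2 Interactive instead of directly against the metastore.
hive = HiveWarehouseSession.session(spark).build()
hive.showDatabases().show()
hive.executeQuery("SELECT * FROM some_db.some_table").show()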
10-29-2018
03:13 PM
HDP 3.0 has a different way of integrating Apache Hive with Apache Spark, using the Hive Warehouse Connector. The article below explains the steps: https://community.hortonworks.com/content/kbentry/223626/integrating-apache-hive-with-apache-spark-hive-war.html
10-15-2018
09:31 AM
@Tongzhou Zhou I copied hive-site.xml from the Hive conf directory (/etc/hive/conf) to the Spark conf directory (/etc/spark2/conf) and then removed the properties below from /etc/spark2/conf/hive-site.xml. It's working now; I can see the Hive databases in Spark (pyspark, spark-shell, spark-sql, etc.).

hive.tez.cartesian-product.enabled
hive.metastore.warehouse.external.dir
hive.server2.webui.use.ssl
hive.heapsize
hive.server2.webui.port
hive.materializedview.rewriting.incremental
hive.server2.webui.cors.allowed.headers
hive.driver.parallel.compilation
hive.tez.bucket.pruning
hive.hook.proto.base-directory
hive.load.data.owner
hive.execution.mode
hive.service.metrics.codahale.reporter.classes
hive.strict.managed.tables
hive.create.as.insert.only
hive.optimize.dynamic.partition.hashjoin
hive.server2.webui.enable.cors
hive.metastore.db.type
hive.txn.strict.locking.mode
hive.metastore.transactional.event.listeners
hive.tez.input.generate.consistent.splits

Can you please try this and let me know if you still face the issue?
10-15-2018
09:26 AM
1 Kudo
I copied hive-site.xml from the Hive conf directory (/etc/hive/conf) to the Spark conf directory (/etc/spark2/conf) and then removed the properties below from /etc/spark2/conf/hive-site.xml. It's working now; I can see the Hive databases in Spark (pyspark, spark-shell, spark-sql, etc.).

hive.tez.cartesian-product.enabled
hive.metastore.warehouse.external.dir
hive.server2.webui.use.ssl
hive.heapsize
hive.server2.webui.port
hive.materializedview.rewriting.incremental
hive.server2.webui.cors.allowed.headers
hive.driver.parallel.compilation
hive.tez.bucket.pruning
hive.hook.proto.base-directory
hive.load.data.owner
hive.execution.mode
hive.service.metrics.codahale.reporter.classes
hive.strict.managed.tables
hive.create.as.insert.only
hive.optimize.dynamic.partition.hashjoin
hive.server2.webui.enable.cors
hive.metastore.db.type
hive.txn.strict.locking.mode
hive.metastore.transactional.event.listeners
hive.tez.input.generate.consistent.splits

Do you see any consequences of removing these?
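For anyone scripting the cleanup, here is a small sketch (my own illustration, not the exact commands I ran) that copies hive-site.xml and strips the properties listed above; the REMOVE set is abbreviated and should carry the full list:

import shutil
import xml.etree.ElementTree as ET

SRC = "/etc/hive/conf/hive-site.xml"
DST = "/etc/spark2/conf/hive-site.xml"
# Abbreviated; add the remaining properties from the list above.
REMOVE = {
    "hive.tez.cartesian-product.enabled",
    "hive.strict.managed.tables",
    "hive.tez.input.generate.consistent.splits",
}

shutil.copy(SRC, DST)
tree = ET.parse(DST)
root = tree.getroot()
# Drop every <property> element whose <name> is in the removal set.
for prop in list(root.findall("property")):
    if prop.findtext("name") in REMOVE:
        root.remove(prop)
tree.write(DST)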
10-13-2018
05:54 PM
@Felix Albani If I don't copy hive-site.xml from the Hive conf directory for Spark, then I can't see the Hive databases in Spark (pyspark and spark-shell). Could you please explain which properties I should add to Spark's hive-site.xml, and where I should update hive.metastore.uris? If I copy the property below from the Hive conf to the Spark conf, will this work?

Technical stack details: HDP 3.0, Spark 2.3, Hive 3.1

<configuration>
<property>
<name>hive.metastore.uris</name>
<!-- hostname must point to the Hive metastore URI in your cluster -->
<value>thrift://hostname:9083</value>
<description>URI for client to contact metastore server</description>
</property>
</configuration>
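A related sketch (an alternative I have not verified end to end): set the same property on the SparkSession directly, which avoids maintaining a second hive-site.xml. "hostname" is the same placeholder as in the XML above.

from pyspark.sql import SparkSession

# Point Spark at the Hive metastore without copying hive-site.xml.
spark = (SparkSession.builder
         .appName("hive-metastore-test")
         .config("hive.metastore.uris", "thrift://hostname:9083")
         .enableHiveSupport()
         .getOrCreate())
spark.sql("show databases").show()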
10-12-2018
10:32 AM
I tried two options.

1. When trying to create a Parquet table in Hive 3.1 through Spark 2.3, Spark throws the error below.

df.write.format("parquet").mode("overwrite").saveAsTable("database_name.test1") --> This throws the error below

Error: pyspark.sql.utils.AnalysisException: u'org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Table datamart.test1 failed strict managed table checks due to the following reason: Table is marked as a managed table but is not transactional.);'

2. I am successfully able to insert data into an existing Parquet table and retrieve it through Spark.

df.write.format("parquet").mode("overwrite").insertInto("database_name.test2") --> This works fine; the loaded data can be retrieved from Spark but NOT Hive
spark.sql("select * from database_name.test2").show() --> This works fine
spark.read.parquet("/path-to-table-dir/part-00000.snappy.parquet").show() --> This works fine

But when I try to read the same table through Hive, the Hive session gets disconnected and throws the error below.

SELECT * FROM database_name.test2
Error : org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:567)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.FetchResults(TCLIService.java:554)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1572)
at com.sun.proxy.$Proxy22.FetchResults(Unknown Source)
at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373)
at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:56)
at org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50)
at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2250)
at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1026)
at org.apache.hive.beeline.Commands.execute(Commands.java:1201)
at org.apache.hive.beeline.Commands.sql(Commands.java:1130)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1425)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1287)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1071)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:538)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:520)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Unknown HS2 problem when communicating with Thrift server.
Error: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed) (state=08S01,code=0)

After this error the Hive session gets disconnected and I have to reconnect. All other queries work fine; only this query shows the above error and disconnects the session.

Environment details: Hortonworks HDP 3.0, Spark 2.3.1, Hive 3.1
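For option 1, one possible workaround (an untested sketch based on standard Spark behavior, not a confirmed fix from this thread): supplying an explicit path makes saveAsTable create an EXTERNAL table, which is not subject to the strict managed/transactional check. The path below is illustrative.

# Writing with an explicit path registers the table as EXTERNAL, so the
# "managed but not transactional" check should not apply.
(df.write
   .format("parquet")
   .mode("overwrite")
   .option("path", "/warehouse/tablespace/external/hive/database_name.db/test1")
   .saveAsTable("database_name.test1"))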
10-12-2018
07:37 AM
We are using HDP 3.0 in PoC/Dev and do not have Hortonworks support yet. Is there any way I can install this patch directly, without Hortonworks support?
10-11-2018
02:53 PM
What are the steps to install the Hive patch https://issues.apache.org/jira/browse/HIVE-20593? I am using Hortonworks Data Platform HDP 3.0 hosted on Oracle Linux machines. The issue for which the patch is required: https://stackoverflow.com/questions/52761391/table-loaded-through-spark-not-accessible-in-hive
10-11-2018
11:40 AM
df.write.format("orc").mode("overwrite").saveAsTable("database.table-name")

When I create a Hive table through Spark as above, I am able to query the table from Spark, but I have an issue accessing the table data through Hive. I am able to view the table metadata, but reading the data gives the error below.

Error: java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 (state=,code=0)
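As a quick diagnostic (a sketch; the database/table names are the placeholders from above), it may help to check how the metastore registered the table, since this error is commonly discussed in connection with tables registered as managed transactional but written by a non-ACID writer:

# Look for "Type: MANAGED" and "transactional=true" in the output.
spark.sql("DESCRIBE FORMATTED database.`table-name`").show(100, truncate=False)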
10-11-2018
09:09 AM
@Tongzhou Zhou Sorry for the delayed response. After copying hive-site.xml from the hive-conf dir to the spark-conf dir, I am able to access Hive databases from pyspark and spark-shell, but I am still getting the same error while initiating a spark-sql session. Did you find the best way to use Hive databases across all the Spark APIs (spark-sql, pyspark, spark-shell, spark-submit, etc.)?
10-11-2018
08:58 AM
I am facing an issue while initiating a spark-sql session. Initially, when I initiated the Spark session, only the default database was visible (not the default database of Hive, but Spark's own default). In order to view the Hive databases I copied hive-site.xml from the hive-conf dir to the spark-conf dir. After copying hive-site.xml I get the error below.

$ spark-sql
WARN HiveConf: HiveConf of name hive.tez.cartesian-product.enabled does not exist
WARN HiveConf: HiveConf of name hive.metastore.warehouse.external.dir does not exist
WARN HiveConf: HiveConf of name hive.server2.webui.use.ssl does not exist
WARN HiveConf: HiveConf of name hive.heapsize does not exist
WARN HiveConf: HiveConf of name hive.server2.webui.port does not exist
WARN HiveConf: HiveConf of name hive.materializedview.rewriting.incremental does not exist
WARN HiveConf: HiveConf of name hive.server2.webui.cors.allowed.headers does not exist
WARN HiveConf: HiveConf of name hive.driver.parallel.compilation does not exist
WARN HiveConf: HiveConf of name hive.tez.bucket.pruning does not exist
WARN HiveConf: HiveConf of name hive.hook.proto.base-directory does not exist
WARN HiveConf: HiveConf of name hive.load.data.owner does not exist
WARN HiveConf: HiveConf of name hive.execution.mode does not exist
WARN HiveConf: HiveConf of name hive.service.metrics.codahale.reporter.classes does not exist
WARN HiveConf: HiveConf of name hive.strict.managed.tables does not exist
WARN HiveConf: HiveConf of name hive.create.as.insert.only does not exist
WARN HiveConf: HiveConf of name hive.optimize.dynamic.partition.hashjoin does not exist
WARN HiveConf: HiveConf of name hive.server2.webui.enable.cors does not exist
WARN HiveConf: HiveConf of name hive.metastore.db.type does not exist
WARN HiveConf: HiveConf of name hive.txn.strict.locking.mode does not exist
WARN HiveConf: HiveConf of name hive.metastore.transactional.event.listeners does not exist
WARN HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
INFO metastore: Trying to connect to metastore with URI thrift://<host-name>:9083
INFO metastore: Connected to metastore.
INFO SessionState: Created local directory: /tmp/7b9d5455-e71a-4bd5-aa4b-385758b575a8_resources
INFO SessionState: Created HDFS directory: /tmp/hive/spark/7b9d5455-e71a-4bd5-aa4b-385758b575a8
INFO SessionState: Created local directory: /tmp/spark/7b9d5455-e71a-4bd5-aa4b-385758b575a8
INFO SessionState: Created HDFS directory: /tmp/hive/spark/7b9d5455-e71a-4bd5-aa4b-385758b575a8/_tmp_space.db
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/tez/dag/api/SessionNotRunning
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:529)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:133)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:904)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.tez.dag.api.SessionNotRunning
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more
INFO ShutdownHookManager: Shutdown hook called
INFO ShutdownHookManager: Deleting directory /tmp/spark-911cc8f5-f53b-4ae6-add3-0c745581bead
$

I am able to run pyspark and spark-shell sessions successfully, and the Hive databases are visible to me in those sessions. The error is related to Tez, and I confirmed that the Tez services are running fine. I am successfully able to access Hive tables through hive2. I am using HDP 3.0, where the Hive execution engine is Tez (MapReduce has been removed).
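From the stack trace, spark-sql (unlike pyspark) starts a Hive SessionState, which tries to bring up a Tez session when hive.execution.engine=tez. One check worth trying (an assumption on my part, not a confirmed fix) is whether the copied hive-site.xml still carries that setting:

import xml.etree.ElementTree as ET

# Print the execution engine recorded in Spark's copy of hive-site.xml.
root = ET.parse("/etc/spark2/conf/hive-site.xml").getroot()
for prop in root.findall("property"):
    if prop.findtext("name") == "hive.execution.engine":
        print(prop.findtext("value"))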
10-10-2018
11:53 AM
I am getting errors while installing HBase on a 3-node cluster. I am using HDP 3.0 on Oracle Linux machines and have Ranger installed on the cluster.

Error in the Ambari UI installation:

ERROR: KeeperErrorCode = NoNode for /hbase-unsecure/meta-region-server
NotImplementedError: fstat unimplemented unsupported or native support failed to load; see http://wiki.jruby.org/Native-Libraries

Error in the HBase log dir (/var/log/hbase/hbase-hbase-regionserver-<hostname>.out):

INFO [main] internal.NativeLibraryLoader: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_641206763580541674939.so exists but cannot be executed even when execute permissions set; check volume for "noexec" flag; use -Dio.netty.native.workdir=[path] to set native working directory separately.
ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.lang.UnsatisfiedLinkError: failed to load the required native library
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Epoll.ensureAvailability(Epoll.java:81)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.<clinit>(EpollEventLoop.java:55)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:134)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:35)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47)
at org.apache.hbase.thirdparty.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:104)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:91)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:68)
at org.apache.hadoop.hbase.util.NettyEventLoopGroupConfig.<init>(NettyEventLoopGroupConfig.java:61)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupNetty(HRegionServer.java:673)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:532)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2977)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:63)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2995)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_641206763580541674939.so: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_641206763580541674939.so: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:36)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:187)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:207)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.<clinit>(Native.java:65)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:33)
... 23 more
Suppressed: java.lang.UnsatisfiedLinkError: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_641206763580541674939.so: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_641206763580541674939.so: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:263)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:255)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:233)
... 27 more
Suppressed: java.lang.UnsatisfiedLinkError: no org_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64 in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:124)
... 26 more
Suppressed: java.lang.UnsatisfiedLinkError: no org_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64 in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:263)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:255)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:233)
... 27 more
Suppressed: java.lang.UnsatisfiedLinkError: could not load a native library: org_apache_hbase_thirdparty_netty_transport_native_epoll
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:205)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:210)
... 25 more
Caused by: java.io.FileNotFoundException: META-INF/native/liborg_apache_hbase_thirdparty_netty_transport_native_epoll.so
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:161)
... 26 more
Suppressed: java.lang.UnsatisfiedLinkError: no org_apache_hbase_thirdparty_netty_transport_native_epoll in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:124)
... 26 more
Suppressed: java.lang.UnsatisfiedLinkError: no org_apache_hbase_thirdparty_netty_transport_native_epoll in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:263)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:255)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:233)
... 27 more
10-10-2018
10:56 AM
@Geoffrey Shelton Okot Unfortunately the issue still persists, but I got some insight from other comments in this post. I am also getting Kafka-Atlas hook errors, although the job completes successfully. I am still working on that and will keep the group posted. FYI: the schema parameter in sqoop takes a double hyphen twice (-- --schema); it takes the schema name of the RDBMS database.
10-10-2018
10:15 AM
Very useful information.
10-09-2018
01:05 PM
I am running a sqoop import job to load data into a Hive table from a SQL Server database, but after the job completes, the data is always stored in the /user/<user-name>/<table-name> HDFS directory only.
I also tried setting --target-dir to /tmp to store the data temporarily before moving it to the Hive table, but with no success; the data is still moved to the /user/<user-name>/<table-name> HDFS directory.

sqoop import \
--connect "jdbc:sqlserver://<server-name>:<port-no>;database=<database-name>" \
--username <user-name> \
-P \
--table <table-name> \
-- --schema <schema-name> \
--hive-import \
--hive-database <hive-database-name> \
--hive-table <hive-table-name> \
-m 1

Stack details: HDP 3.0, Sqoop 1.4, Hive 3.1

Is there anything I am missing?
09-19-2018
01:16 PM
@Tongzhou Zhou Try this:

1. Ensure hive-site.xml in the hive-conf dir and the spark-conf dir are identical; the command below should not return anything.

diff /etc/hive/conf/hive-site.xml /etc/spark2/conf/hive-site.xml

2. Invoke a REPL Spark session (pyspark or spark-shell).

$ pyspark

3. Show the Hive databases.

spark.sql("show databases").show()

Are you able to access the Hive tables now?
09-19-2018
02:20 AM
I am seeing lots of errors in the ambari-server.log file, and metrics information is not displayed on the Ambari dashboard (showing n/a). I have HDP 3.0 installed on Linux 7 machines. Here are the errors from ambari-server.log:

2018-09-18 17:06:30,361 ERROR [ambari-client-thread-35898] MetricsRequestHelper:112 - Error getting timeline metrics : Server returned HTTP response code: 403 for URL: http://worker2:6188/ws/v1/timeline/metrics?metricNames=cpu_wio&hostname=master&appId=HOST
2018-09-18 17:06:30,380 ERROR [ambari-metrics-retrieval-service-thread-445] URLStreamProvider:245 - Received HTTP 403 response from URL: http://master:50070/jmx
2018-09-18 17:06:30,386 ERROR [ambari-metrics-retrieval-service-thread-439] URLStreamProvider:245 - Received HTTP 403 response from URL: http://master:8088/jmx
2018-09-18 17:06:30,388 ERROR [ambari-metrics-retrieval-service-thread-438] URLStreamProvider:245 - Received HTTP 403 response from URL: http://master:50070/jmx?get=Hadoop:service=NameNode,name=FSNamesystem::tag.HAState
2018-09-18 17:06:30,393 ERROR [ambari-metrics-retrieval-service-thread-441] URLStreamProvider:245 - Received HTTP 403 response from URL: http://worker1:19888/jmx
2018-09-18 17:06:30,406 ERROR [ambari-client-thread-35898] URLStreamProvider:245 - Received HTTP 403 response from URL: http://master:8088/ws/v1/cluster/info
2018-09-18 17:06:34,959 ERROR [ambari-client-thread-35892] URLStreamProvider:245 - Received HTTP 403 response from URL: http://worker2:6188/ws/v1/timeline/metrics?metricNames=cpu_wio&hostname=master&appId=HOST
2018-09-18 17:06:34,960 ERROR [ambari-client-thread-35892] MetricsRequestHelper:112 - Error getting timeline metrics : Server returned HTTP response code: 403 for URL: http://worker2:6188/ws/v1/timeline/metrics?metricNames=cpu_wio&hostname=master&appId=HOST
2018-09-18 17:06:34,980 ERROR [ambari-metrics-retrieval-service-thread-444] URLStreamProvider:245 - Received HTTP 403 response from URL: http://master:50070/jmx?get=Hadoop:service=NameNode,name=FSNamesystem::tag.HAState

Here is the output of my troubleshooting steps:

1. I am able to open the URLs below in a browser and download them through wget.

wget http://master:8088/ws/v1/cluster/info
wget http://worker2:6188/ws/v1/timeline/metrics?metricNames=cpu_wio&hostname=master&appId=HOST

2. The AMS info in the config file /etc/hadoop/conf/hadoop-metrics2.properties:

$ grep 'timeline.collector.host' /etc/hadoop/conf/hadoop-metrics2.properties | cut -d"=" -f2 | sort -n | uniq
datanode.sink.timeline.collector.hosts=worker2
namenode.sink.timeline.collector.hosts=worker2
resourcemanager.sink.timeline.collector.hosts=worker2
nodemanager.sink.timeline.collector.hosts=worker2
jobhistoryserver.sink.timeline.collector.hosts=worker2
journalnode.sink.timeline.collector.hosts=worker2
maptask.sink.timeline.collector.hosts=worker2
reducetask.sink.timeline.collector.hosts=worker2
applicationhistoryserver.sink.timeline.collector.hosts=worker2
$

3. The sink JAR is loaded on those nodes/components.

$ lsof -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` | grep sink
java 6967 hdfs mem REG 249,0 5716944 1842669 /usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink-with-common-2.7.0.0.897.jar
java 6967 hdfs 322r REG 249,0 5716944 1842669 /usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink-with-common-2.7.0.0.897.jar
$

4. Ambari version:

$ rpm -qa | grep ambari
ambari-metrics-grafana-2.7.0.0-897.x86_64
ambari-server-2.7.0.0-897.x86_64
ambari-metrics-hadoop-sink-2.7.0.0-897.x86_64
ambari-agent-2.7.0.0-897.x86_64
ambari-metrics-monitor-2.7.0.0-897.x86_64
$

5. ACTIVITY ANALYZER goes down a few minutes after a restart.

$ grep -i "ACTIVITY_ANALYZER" ambari-server.log
2018-09-18 17:45:51,148 INFO [agent-report-processor-1] HeartbeatProcessor:581 - State of service component ACTIVITY_ANALYZER of service SMARTSENSE of cluster 104 has changed from STARTED to INSTALLED at host master according to STATUS_COMMAND report
$
09-17-2018
06:41 PM
1 Kudo
After commenting out the properties below, it works for me as well.

## To be commented out when not using [user] block / plaintext
passwordMatcher = org.apache.shiro.authc.credential.PasswordMatcher
iniRealm.credentialsMatcher = $passwordMatcher
09-17-2018
02:56 PM
After copying hive-site.xml from the hive-conf dir to the spark-conf dir, I restarted the Spark services, which reverted those changes; I copied hive-site.xml again and it's working now.

cp /etc/hive/conf/hive-site.xml /etc/spark2/conf
09-15-2018
08:53 PM
The default database it was showing was the default database from Spark, which has the location '/apps/spark/warehouse', not the default database of Hive. I was able to resolve this by copying hive-site.xml from the hive-conf dir to the spark-conf dir.

cp /etc/hive/conf/hive-site.xml /etc/spark2/conf

Try running this query in your metastore database; in my case it is MySQL.

mysql> SELECT NAME, DB_LOCATION_URI FROM hive.DBS;

You will see two default databases there, one pointing to 'spark.sql.warehouse.dir' and the other to 'hive.metastore.warehouse.dir'. The locations will depend on the values you have for these configuration properties.
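As a cross-check from the Spark side (a sketch; the values will be whatever your cluster sets), you can print the warehouse Spark is using and where its default database actually lives, then compare with the metastore query above:

# Compare Spark's warehouse setting against the default database location.
print(spark.conf.get("spark.sql.warehouse.dir"))
spark.sql("DESCRIBE DATABASE default").show(truncate=False)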
09-15-2018
08:53 PM
I have installed Hortonworks HDP 3.0 and configured Zeppelin as well. When I run Spark or SQL, Zeppelin only shows me the default database (this is the default database from Spark, which has the location '/apps/spark/warehouse', not the default database of Hive). This is probably because the hive.metastore.warehouse.dir property is not set from hive-site.xml, and Zeppelin is picking it up from the Spark config (spark.sql.warehouse.dir). I had a similar issue with Spark as well, due to the hive-site.xml file in the spark-conf dir, and I was able to resolve it by copying hive-site.xml from the hive-conf dir to the spark-conf dir. I did the same for Zeppelin: I copied hive-site.xml into the Zeppelin dir (where it has zeppelin-site.xml) and also into the zeppelin-external-dependency-conf dir. But this did not resolve the issue.

*** Edit #1 - adding some additional information ***

I have created the Spark session with Hive support enabled through enableHiveSupport(), and I even tried setting the spark.sql.warehouse.dir config property, but this did not help.

import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder.appName("Test Zeppelin").config("spark.sql.warehouse.dir", "/apps/hive/db").enableHiveSupport().getOrCreate()

Through some online help, I learnt that Zeppelin uses only Spark's hive-site.xml file, but I can view all Hive databases through Spark; it is only in Zeppelin (through spark2) that I am not able to access the Hive databases. Additionally, Zeppelin is not letting me choose the programming language; by default it creates the session with Scala. I would prefer a Zeppelin session with pyspark. Any help on this will be highly appreciated.
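On the last point, Zeppelin selects the language per paragraph via the interpreter prefix, so a pyspark session does not need a separate setting; a sketch of a paragraph body (the prefix may be %pyspark or %spark2.pyspark depending on the interpreter binding):

# Paragraph prefixed with %pyspark (or %spark2.pyspark on HDP); the prefix
# routes the body to the Python interpreter instead of the default Scala one.
spark.sql("show databases").show()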
09-14-2018
08:04 PM
I have installed HDP 3.0 and am using Spark 2.3 and Hive 3.1. When I try to access Hive tables through Spark (pyspark/spark-shell), I get the error below.

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/current/spark2-client/python/pyspark/sql/session.py", line 716, in sql
return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py", line 71, in deco
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"Database 'test' not found;"

Only the default Hive database is visible in Spark.

>>> spark.sql("show databases").show()
+------------+
|databaseName|
+------------+
| default|
+------------+
>>>

The content of hive-site.xml is not exactly the same in the spark/conf and hive/conf dirs:

-rw-r--r-- 1 hive hadoop 23600 Sep 14 09:21 /usr/hdp/current/hive-client/conf/hive-site.xml
-rw-r--r-- 1 spark spark 1011 Sep 14 12:02 /etc/spark2/3.0.0.0-1634/0/hive-site.xml

I even tried initiating the Spark session with hive/conf/hive-site.xml, but even this did not help.

pyspark --files /usr/hdp/current/hive-client/conf/hive-site.xml

Should I copy the hive-site.xml file from the hive-conf dir to the spark-conf dir (or anywhere else as well)? Or will changing a property in the Ambari UI work?
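A sketch for narrowing this down from inside pyspark (uses the py4j-backed Hadoop configuration object; an empty result suggests the session never saw the metastore URI and fell back to Spark's own local warehouse):

# Inspect what the running session actually resolved.
hconf = spark.sparkContext._jsc.hadoopConfiguration()
print(hconf.get("hive.metastore.uris"))
print(spark.conf.get("spark.sql.warehouse.dir"))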
Labels: Apache Hive, Apache Spark
09-14-2018
02:30 PM
I tried starting HS2 after setting JAVA_HOME, but it did not help.

2018-09-14 08:29:31: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = d410dd44-b3ed-4c73-a391-f414da52f946
Hive Session ID = c7b15435-16b1-4f74-ac9a-a5f5fb09af35
Hive Session ID = 2bbef091-f52a-44a6-b1af-d1f78b30fb88
Hive Session ID = 4b405953-cf8d-4d72-a10c-49ccf873e03b
Hive Session ID = cc1d904c-b920-42da-8787-c9be4afabc2b

This is the error I see when I invoke Hive:

Error: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper (state=,code=0)
09-14-2018
04:56 AM
Thanks @Jay Kumar SenSharma for your prompt responses! I have manually validated hive-site.xml; it has the correct entry. I then started the hiveserver2 services manually, as suggested. Here is what I see in the nohup.out file.

+======================================================================+
| Error: JAVA_HOME is not set |
+----------------------------------------------------------------------+
| Please download the latest Sun JDK from the Sun Java web site |
| > http://www.oracle.com/technetwork/java/javase/downloads |
| |
| HBase requires Java 1.8 or later. |
+======================================================================+
2018-09-13 23:25:07: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = a7aa8a99-1a27-42b9-977a-d545742d1747
Hive Session ID = 41c8d3d5-e7ca-42df-acd5-2b8d1f23220c
Hive Session ID = 84951be6-015e-4634-a4de-6a6f18d51bb8
Hive Session ID = 1964a526-53da-4b50-90a9-9bc90142e2ff
Hive Session ID = f6188f9b-f3a9-43ec-a9ef-6a8a87a01498

I see the 'JAVA_HOME is not set' error in the nohup.out file, so I ran a command to set JAVA_HOME and reran the command to start hiveserver2 manually. This time I did not get 'Error: JAVA_HOME is not set', but I am still facing the same issue. Do you suggest any other solution for this?

My Ambari dashboard looks like this (image attached). This cluster has been running for 36 hours, but no job has run because Hive is not working. Are the 'n/a' and 'no data available' indicators fine? All the links, e.g. Resource Manager, NameNode web interface, and DataNode information, are available and show correct information.

FYI: initially I was installing the Hive metastore (MySQL) and the HiveServer2 service on the worker2 node, but I had an issue while testing the connection to the metastore (a DBConnectionVerification.jar file issue), so I moved the metastore and service to the master node, and that issue got resolved.
09-14-2018
03:08 AM
I have validated that hive.server2.support.dynamic.service.discovery is true (the check-box is selected), and I am still facing the same issue.
09-14-2018
01:51 AM
I am also facing the same issue. After logging into the zk command line, I could only see the content below; I can't see HS2.

[zk: localhost:2181(CONNECTED) 10] ls /
[registry, ambari-metrics-cluster, zookeeper, zk_smoketest, rmstore]
[zk: localhost:2181(CONNECTED) 11]

Could you please help me with this? When I was starting HiveServer2 for the first time after installation, I got the error below. I tried multiple times, but HS2 was never able to start.

Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/decorator.py", line 54, in wrapper
    return function(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 189, in wait_for_znode
    raise Fail(format("ZooKeeper node /{hive_server2_zookeeper_namespace} is not ready yet"))
Fail: ZooKeeper node /hiveserver2 is not ready yet

The above exception was the cause of the following exception:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py", line 137, in <module>
    HiveServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 993, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py", line 53, in start
    hive_service('hiveserver2', action = 'start', upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 101, in hive_service
    wait_for_znode()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/decorator.py", line 62, in wrapper
    return function(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 189, in wait_for_znode
    raise Fail(format("ZooKeeper node /{hive_server2_zookeeper_namespace} is not ready yet"))
resource_management.core.exceptions.Fail: ZooKeeper node /hiveserver2 is not ready yet
09-14-2018
01:51 AM
I am getting the error below while starting HiveServer2 after installation.
Node does not exist: /hiveserver2
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/decorator.py", line 54, in wrapper
return function(*args, **kwargs)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 189, in wait_for_znode
raise Fail(format("ZooKeeper node /{hive_server2_zookeeper_namespace} is not ready yet"))
Fail: ZooKeeper node /hiveserver2 is not ready yet

Here are some additional details around the error:

2018-09-13 16:25:26,594 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive-server.pid 1>/tmp/tmpIldfIy 2>/tmp/tmpDcXaXg''] {'quiet': False}
2018-09-13 16:25:26,638 - call returned (1, '')
2018-09-13 16:25:26,638 - Execution of 'cat /var/run/hive/hive-server.pid 1>/tmp/tmpIldfIy 2>/tmp/tmpDcXaXg' returned 1. cat: /var/run/hive/hive-server.pid: No such file or directory
2018-09-13 16:25:26,638 - get_user_call_output returned (1, u'', u'cat: /var/run/hive/hive-server.pid: No such file or directory')
2018-09-13 16:25:26,639 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/hdp/current/hive-server2/conf/ --service metatool -listFSRoot' 2>/dev/null | grep hdfs:// | cut -f1,2,3 -d '/' | grep -v 'hdfs://seidevdsmastervm01.tsudev.seic.com:8020' | head -1'] {}
2018-09-13 16:25:33,149 - call returned (0, '')
2018-09-13 16:25:33,149 - Execute['/var/lib/ambari-agent/tmp/start_hiveserver2_script /var/log/hive/hive-server2.out /var/log/hive/hive-server2.err /var/run/hive/hive-server.pid /usr/hdp/current/hive-server2/conf/ /etc/tez/conf'] {'environment': {'HIVE_BIN': 'hive', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112', 'HADOOP_HOME': u'/usr/hdp/current/hadoop-client'}, 'not_if': 'ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1', 'user': 'hive', 'path': [u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-server2/bin:/usr/hdp/3.0.0.0-1634/hadoop/bin']}
2018-09-13 16:25:33,196 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/hive-server2/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://seidevdsmastervm01.tsudev.seic.com/hive?createDatabaseIfNotExist=true' hive [PROTECTED] com.mysql.jdbc.Driver'] {'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2018-09-13 16:25:33,476 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server worker1_node:2181,worker2_node.com:2181,master_node.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2018-09-13 16:25:34,068 - call returned (1, 'Node does not exist: /hiveserver2')

Error message: 2018-09-13 16:25:34,068 - call returned (1, 'Node does not exist: /hiveserver2')

I have checked the zk command line on the master node where HS2 is configured and could not find HS2 there. Here is the output:

[zk: localhost:2181(CONNECTED) 1] ls /
[registry, ambari-metrics-cluster, zookeeper, zk_smoketest, rmstore]
[zk: localhost:2181(CONNECTED) 2]

I am using HDP 3.0. This is a 3-node cluster (1 master + 2 workers). The Hive metastore (MySQL 5.7) is installed on the master node, and HiveServer2 is also configured on the master node. All machines run Oracle Linux 7.
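For watching the znode while retrying, here is a small sketch using the kazoo client (an assumption: kazoo is installed separately via pip, and the host is a placeholder), mirroring the zkCli check above:

from kazoo.client import KazooClient

# Poll the znode that Ambari's wait_for_znode check looks for.
zk = KazooClient(hosts="master_node:2181")
zk.start()
print(zk.exists("/hiveserver2"))  # None until HS2 registers its znode
print(zk.get_children("/"))
zk.stop()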
09-13-2018
12:47 AM
Thanks for your response @Jay Kumar SenSharma. I have validated this, and I don't have the mariadb lib installed on the host where I am trying to install the MySQL db for the Hive metastore. The command below does not return anything.

rpm -qa | grep -i maria

Is there some version out-of-sync issue? I suspect so, looking at the line below in the stack traceback.

Removing mariadb-libs.x86_64 1:5.5.60-1.el7_5 - u due to obsoletes from installed mysql-community-libs-5.7.16-1.el7.x86_64
09-12-2018
11:29 PM
I am installing HDP 3.0.0 (through Ambari), and the installation of the Hive metastore (MySQL) failed while installing the mariadb dependency. Trace:

Loaded plugins: langpacks
Resolving Dependencies
--> Running transaction check
---> Package mariadb-server.x86_64 1:5.5.60-1.el7_5 will be installed
--> Processing Dependency: mariadb-libs(x86-64) = 1:5.5.60-1.el7_5 for package: 1:mariadb-server-5.5.60-1.el7_5.x86_64
--> Processing Dependency: mariadb(x86-64) = 1:5.5.60-1.el7_5 for package: 1:mariadb-server-5.5.60-1.el7_5.x86_64
--> Processing Dependency: perl-DBD-MySQL for package: 1:mariadb-server-5.5.60-1.el7_5.x86_64
--> Running transaction check
---> Package mariadb.x86_64 1:5.5.60-1.el7_5 will be installed
---> Package mariadb-libs.x86_64 1:5.5.60-1.el7_5 will be installed
---> Package perl-DBD-MySQL.x86_64 0:4.023-6.0.1.el7 will be installed
Removing mariadb-libs.x86_64 1:5.5.60-1.el7_5 - u due to obsoletes from installed mysql-community-libs-5.7.16-1.el7.x86_64
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.60-1.el7_5 will be installed
--> Processing Dependency: mariadb-libs(x86-64) = 1:5.5.60-1.el7_5 for package: 1:mariadb-server-5.5.60-1.el7_5.x86_64
--> Processing Dependency: mariadb-libs(x86-64) = 1:5.5.60-1.el7_5 for package: 1:mariadb-5.5.60-1.el7_5.x86_64
--> Finished Dependency Resolution
Error: Package: 1:mariadb-server-5.5.60-1.el7_5.x86_64 (ol7_latest)
  Requires: mariadb-libs(x86-64) = 1:5.5.60-1.el7_5
  Available: 1:mariadb-libs-5.5.52-1.el7.x86_64 (ol7_latest)
    mariadb-libs(x86-64) = 1:5.5.52-1.el7
  Available: 1:mariadb-libs-5.5.56-2.el7.x86_64 (ol7_latest)
    mariadb-libs(x86-64) = 1:5.5.56-2.el7
  Available: 1:mariadb-libs-5.5.60-1.el7_5.i686 (ol7_latest)
    ~mariadb-libs(x86-32) = 1:5.5.60-1.el7_5
Error: Package: 1:mariadb-5.5.60-1.el7_5.x86_64 (ol7_latest)
  Requires: mariadb-libs(x86-64) = 1:5.5.60-1.el7_5
  Available: 1:mariadb-libs-5.5.52-1.el7.x86_64 (ol7_latest)
    mariadb-libs(x86-64) = 1:5.5.52-1.el7
  Available: 1:mariadb-libs-5.5.56-2.el7.x86_64 (ol7_latest)
    mariadb-libs(x86-64) = 1:5.5.56-2.el7
  Available: 1:mariadb-libs-5.5.60-1.el7_5.i686 (ol7_latest)
    ~mariadb-libs(x86-32) = 1:5.5.60-1.el7_5
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Can I try installing mariadb-server through rpm and then resume the installation?

/usr/bin/yum -y install mariadb-server

Do I need to perform some manual steps? Can someone please help.