Support Questions


The setting of hive.metastore.client.socket.timeout is not reflected

New Contributor

Team,

The value set for hive.metastore.client.socket.timeout and the value used by the actually running process are different. I entered 3600 for hive.metastore.client.socket.timeout in Ambari, but the actual process shows 14:

1001     24344 24324  3 16:55 ?        00:00:00 bash /usr/hdp/2.3.2.0-2950/hive/bin/hive.distro --hiveconf hive.metastore.uris=thrift://dev-m3:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e show databases;
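
As far as I understand, a --hiveconf flag passed on the command line overrides hive-site.xml, so a session launched this way would report 14 no matter what Ambari writes to the file. A minimal way to confirm that override behavior:

hive --hiveconf hive.metastore.client.socket.timeout=14 -e "set hive.metastore.client.socket.timeout;"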

This is the hive-site.xml:

  <configuration>
    
    <property>
      <name>ambari.hive.db.schema.name</name>
      <value>hive</value>
    </property>
    
    <property>
      <name>datanucleus.autoCreateSchema</name>
      <value>false</value>
    </property>
    
    <property>
      <name>datanucleus.cache.level2.type</name>
      <value>none</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask.size</name>
      <value>1073741824</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join.to.mapjoin</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.cbo.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.cli.print.header</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.class</name>
      <value>org.apache.hadoop.hive.thrift.ZooKeeperTokenStore</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
      <value>dev-m2:2181,dev-m3:2181,dev-m1:2181</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.znode</name>
      <value>/hive/cluster/delegation</value>
    </property>
    
    <property>
      <name>hive.compactor.abortedtxn.threshold</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.compactor.check.interval</name>
      <value>300L</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.num.threshold</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.pct.threshold</name>
      <value>0.1f</value>
    </property>
    
    <property>
      <name>hive.compactor.initiator.on</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.threads</name>
      <value>0</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.timeout</name>
      <value>86400L</value>
    </property>
    
    <property>
      <name>hive.compute.query.using.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.conf.restricted.list</name>
      <value>hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role</value>
    </property>
    
    <property>
      <name>hive.convert.join.bucket.mapjoin.tez</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.default.fileformat</name>
      <value>TextFile</value>
    </property>
    
    <property>
      <name>hive.default.fileformat.managed</name>
      <value>TextFile</value>
    </property>
    
    <property>
      <name>hive.enforce.bucketing</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.enforce.sorting</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.enforce.sortmergebucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.compress.intermediate</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.compress.output</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition.mode</name>
      <value>nonstrict</value>
    </property>
    
    <property>
      <name>hive.exec.failure.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.max.created.files</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions</name>
      <value>5000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions.pernode</name>
      <value>2000</value>
    </property>
    
    <property>
      <name>hive.exec.orc.compression.strategy</name>
      <value>SPEED</value>
    </property>
    
    <property>
      <name>hive.exec.orc.default.compress</name>
      <value>ZLIB</value>
    </property>
    
    <property>
      <name>hive.exec.orc.default.stripe.size</name>
      <value>67108864</value>
    </property>
    
    <property>
      <name>hive.exec.orc.encoding.strategy</name>
      <value>SPEED</value>
    </property>
    
    <property>
      <name>hive.exec.parallel</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.parallel.thread.number</name>
      <value>8</value>
    </property>
    
    <property>
      <name>hive.exec.post.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.pre.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.bytes.per.reducer</name>
      <value>67108864</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.max</name>
      <value>1009</value>
    </property>
    
    <property>
      <name>hive.exec.scratchdir</name>
      <value>/tmp/hive</value>
    </property>
    
    <property>
      <name>hive.exec.submit.local.task.via.child</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.submitviachild</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.execution.engine</name>
      <value>tez</value>
    </property>
    
    <property>
      <name>hive.fetch.task.aggr</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion</name>
      <value>more</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion.threshold</name>
      <value>1073741824</value>
    </property>
    
    <property>
      <name>hive.limit.optimize.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.limit.pushdown.memory.usage</name>
      <value>0.04</value>
    </property>
    
    <property>
      <name>hive.map.aggr</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.force.flush.memory.threshold</name>
      <value>0.9</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.min.reduction</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.percentmemory</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.mapjoin.bucket.cache.size</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.mapjoin.optimized.hashtable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.mapred.reduce.tasks.speculative.execution</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.mapfiles</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.mapredfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.orcfile.stripe.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.rcfile.block.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.size.per.task</name>
      <value>256000000</value>
    </property>
    
    <property>
      <name>hive.merge.smallfiles.avgsize</name>
      <value>16000000</value>
    </property>
    
    <property>
      <name>hive.merge.tezfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.authorization.storage.checks</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.cache.pinobjtypes</name>
      <value>Table,Database,Type,FieldSchema,Order</value>
    </property>
    
    <property>
      <name>hive.metastore.client.connect.retry.delay</name>
      <value>5</value>
    </property>
    
    <property>
      <name>hive.metastore.client.socket.timeout</name>
      <value>3600</value>
    </property>
    
    <property>
      <name>hive.metastore.connect.retries</name>
      <value>24</value>
    </property>
    
    <property>
      <name>hive.metastore.execute.setugi</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.metastore.failure.retries</name>
      <value>24</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.keytab.file</name>
      <value>/etc/security/keytabs/hive.service.keytab</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.principal</name>
      <value>hive/_HOST@EXAMPLE.COM</value>
    </property>
    
    <property>
      <name>hive.metastore.pre.event.listeners</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
    </property>
    
    <property>
      <name>hive.metastore.sasl.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.server.max.threads</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://dev-m3:9083</value>
    </property>
    
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/apps/hive/warehouse</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin.sortedmerge</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.optimize.constant.propagation</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.index.filter</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.metadataonly</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.null.scan</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication.min.reducer</name>
      <value>4</value>
    </property>
    
    <property>
      <name>hive.optimize.sort.dynamic.partition</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.orc.compute.splits.num.threads</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.orc.splits.include.file.footer</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.numcontainers</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.security.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value>
    </property>
    
    <property>
      <name>hive.security.authorization.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.security.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.auth.reads</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider,org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly</value>
    </property>

    <property>
      <name>hive.server2.allow.user.substitution</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.server2.authentication</name>
      <value>NONE</value>
    </property>

    <property>
      <name>hive.server2.authentication.spnego.keytab</name>
      <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>

    <property>
      <name>hive.server2.authentication.spnego.principal</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

    <property>
      <name>hive.server2.enable.doAs</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.server2.logging.operation.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.server2.logging.operation.log.location</name>
      <value>${system:java.io.tmpdir}/${system:user.name}/operation_logs</value>
    </property>

    <property>
      <name>hive.server2.support.dynamic.service.discovery</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.server2.table.type.mapping</name>
      <value>CLASSIC</value>
    </property>

    <property>
      <name>hive.server2.tez.default.queues</name>
      <value>default</value>
    </property>

    <property>
      <name>hive.server2.tez.initialize.default.sessions</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.server2.tez.sessions.per.default.queue</name>
      <value>6</value>
    </property>

    <property>
      <name>hive.server2.thrift.http.path</name>
      <value>cliservice</value>
    </property>

    <property>
      <name>hive.server2.thrift.http.port</name>
      <value>10001</value>
    </property>

    <property>
      <name>hive.server2.thrift.max.worker.threads</name>
      <value>500</value>
    </property>

    <property>
      <name>hive.server2.thrift.port</name>
      <value>10010</value>
    </property>

    <property>
      <name>hive.server2.thrift.sasl.qop</name>
      <value>auth</value>
    </property>

    <property>
      <name>hive.server2.transport.mode</name>
      <value>http</value>
    </property>

    <property>
      <name>hive.server2.use.SSL</name>
      <value>false</value>
    </property>

    <property>
      <name>hive.server2.zookeeper.namespace</name>
      <value>hiveserver2</value>
    </property>

    <property>
      <name>hive.smbjoin.cache.rows</name>
      <value>10000</value>
    </property>

    <property>
      <name>hive.stats.autogather</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.stats.dbclass</name>
      <value>fs</value>
    </property>

    <property>
      <name>hive.stats.fetch.column.stats</name>
      <value>false</value>
    </property>

    <property>
      <name>hive.stats.fetch.partition.stats</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.support.concurrency</name>
      <value>false</value>
    </property>

    <property>
      <name>hive.tez.auto.reducer.parallelism</name>
      <value>false</value>
    </property>

    <property>
      <name>hive.tez.container.size</name>
      <value>3072</value>
    </property>

    <property>
      <name>hive.tez.cpu.vcores</name>
      <value>-1</value>
    </property>

    <property>
      <name>hive.tez.dynamic.partition.pruning</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.tez.dynamic.partition.pruning.max.data.size</name>
      <value>104857600</value>
    </property>

    <property>
      <name>hive.tez.dynamic.partition.pruning.max.event.size</name>
      <value>1048576</value>
    </property>

    <property>
      <name>hive.tez.input.format</name>
      <value>org.apache.hadoop.hive.ql.io.HiveInputFormat</value>
    </property>

    <property>
      <name>hive.tez.java.opts</name>
      <value>-server -Xmx2048m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps</value>
    </property>

    <property>
      <name>hive.tez.log.level</name>
      <value>WARN</value>
    </property>

    <property>
      <name>hive.tez.max.partition.factor</name>
      <value>2.0</value>
    </property>

    <property>
      <name>hive.tez.min.partition.factor</name>
      <value>0.25</value>
    </property>

    <property>
      <name>hive.tez.smb.number.waves</name>
      <value>0.5</value>
    </property>

    <property>
      <name>hive.txn.manager</name>
      <value>org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager</value>
    </property>

    <property>
      <name>hive.txn.max.open.batch</name>
      <value>1000</value>
    </property>

    <property>
      <name>hive.txn.timeout</name>
      <value>300</value>
    </property>

    <property>
      <name>hive.user.install.directory</name>
      <value>/user/</value>
    </property>

    <property>
      <name>hive.vectorized.execution.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.vectorized.execution.reduce.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.vectorized.groupby.checkinterval</name>
      <value>4096</value>
    </property>

    <property>
      <name>hive.vectorized.groupby.flush.percent</name>
      <value>0.1</value>
    </property>

    <property>
      <name>hive.vectorized.groupby.maxentries</name>
      <value>100000</value>
    </property>

    <property>
      <name>hive.zookeeper.client.port</name>
      <value>2181</value>
    </property>

    <property>
      <name>hive.zookeeper.namespace</name>
      <value>hive_zookeeper_namespace</value>
    </property>

    <property>
      <name>hive.zookeeper.quorum</name>
      <value>dev-m2:2181,dev-m3:2181,dev-m1:2181</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>hive</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://dev-m3/hive?createDatabaseIfNotExist=true</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
    </property>

  </configuration>

Please let me know if there are any steps needed for this setting to take effect.

Thanks

Tsuyoshi

3 REPLIES


Hi @Tsuyoshi Sanda!
Could you check the output of the following command?

[hive@node1 ~]$ hive -e "set;" | grep -i hive.metastore.client.socket.timeout
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.4.0-91/0/hive-log4j.properties
hive.metastore.client.socket.timeout=1800s
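
If you want to check from the HiveServer2 side as well, the same query should work over beeline. The URL below is assembled from the hive-site.xml you posted (http transport, port 10001, path cliservice), so adjust it if your setup differs:

beeline -u 'jdbc:hive2://dev-m3:10001/;transportMode=http;httpPath=cliservice' -e 'set hive.metastore.client.socket.timeout;'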

And one thing that's puzzling me: who is the 1001 user? Do you have a hive user owning this process?

Hope this helps!

New Contributor

Hi @Vinicius Higa Murakami!

Thank you for answering my question!

Here are the results:

[hive@dev-m3 ~]$ hive -e "set;" | grep -i hive.metastore.client.socket.timeout
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: Using the ParNew young collector with the Serial old collector is deprecated and will likely be removed in a future release
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/spark/lib/spark-assembly-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARNING: Use "yarn jar" to launch YARN applications.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: Using the ParNew young collector with the Serial old collector is deprecated and will likely be removed in a future release
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/spark/lib/spark-assembly-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,file:/usr/hdp/2.3.2.0-2950/hadoop/lib/hadoop-lzo-0.6.0.2.3.2.0-2950-sources.jar!/ivysettings.xml will be used
Logging initialized using configuration in file:/etc/hive/2.3.2.0-2950/0/hive-log4j.properties
hive.metastore.client.socket.timeout=1800

So hive.metastore.client.socket.timeout is indeed set to the value from Ambari.

However, the process that Hive Server generates when running a Hive job uses a different value:

1001     14260 14111  0 12:51 ?        00:00:00 bash /usr/hdp/2.3.2.0-2950/hive/bin/hive.distro --hiveconf hive.metastore.uris=thrift://dev-m3:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e show databases;

UID 1001 seems to belong to processes spawned from root:

[root@dev-m3 ~]# ps -ef | grep hive
hive      2611     1  0 Jun15 ?        00:03:54 /usr/jdk64/jdk1.8.0_40/bin/java -Xmx1024m -Dhdp.version=2.3.2.0-2950 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.2.0-2950 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.2.0-2950/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.3.2.0-2950/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -XX:MaxPermSize=512m -Xmx4096m -Xloggc:/var/log/hive/gc.log-metastore-201806151647 -XX:ErrorFile=/var/log/hive/hive-metastore-error.log-201806151647 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx4096m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.2.0-2950/hive/lib/hive-service-1.2.1.2.3.2.0-2950.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive
root      9000  8982  0 12:43 pts/4    00:00:00 su - hive
hive      9001  9000  0 12:43 pts/4    00:00:00 -bash
root     15832  2767  0 12:54 ?        00:00:00 /bin/bash /var/lib/ambari-agent/ambari-sudo.sh su ambari-qa -l -s /bin/bash -c export  PATH='/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/sbin/:/usr/hdp/current/hive-metastore/bin' ; export HIVE_CONF_DIR='/usr/hdp/current/hive-metastore/conf/conf.server' ; hive --hiveconf hive.metastore.uris=thrift://dev-m3:9083                 --hiveconf hive.metastore.client.connect.retry.delay=1                 --hiveconf hive.metastore.failure.retries=1                 --hiveconf hive.metastore.connect.retries=1                 --hiveconf hive.metastore.client.socket.timeout=14                 --hiveconf hive.execution.engine=mr -e 'show databases;'
root     15838 15832  0 12:54 ?        00:00:00 su ambari-qa -l -s /bin/bash -c export  PATH='/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/sbin/:/usr/hdp/current/hive-metastore/bin' ; export HIVE_CONF_DIR='/usr/hdp/current/hive-metastore/conf/conf.server' ; hive --hiveconf hive.metastore.uris=thrift://dev-m3:9083                 --hiveconf hive.metastore.client.connect.retry.delay=1                 --hiveconf hive.metastore.failure.retries=1                 --hiveconf hive.metastore.connect.retries=1                 --hiveconf hive.metastore.client.socket.timeout=14                 --hiveconf hive.execution.engine=mr -e 'show databases;'
1001     15841 15838  0 12:54 ?        00:00:00 -bash -c export  PATH='/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/sbin/:/usr/hdp/current/hive-metastore/bin' ; export HIVE_CONF_DIR='/usr/hdp/current/hive-metastore/conf/conf.server' ; hive --hiveconf hive.metastore.uris=thrift://dev-m3:9083                 --hiveconf hive.metastore.client.connect.retry.delay=1                 --hiveconf hive.metastore.failure.retries=1                 --hiveconf hive.metastore.connect.retries=1                 --hiveconf hive.metastore.client.socket.timeout=14                 --hiveconf hive.execution.engine=mr -e 'show databases;'
root     15848  2767  0 12:54 ?        00:00:00 /bin/bash /var/lib/ambari-agent/ambari-sudo.sh su ambari-qa -l -s /bin/bash -c export  PATH='/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/lib/hive/bin/:/usr/sbin/' ; ! beeline -u 'jdbc:hive2://dev-m3:10001/;transportMode=http;httpPath=cliservice' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL'
1001     15860 15841  0 12:54 ?        00:00:00 bash /usr/hdp/2.3.2.0-2950/hive/bin/hive.distro --hiveconf hive.metastore.uris=thrift://dev-m3:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e show databases;
root     15865 15848  0 12:54 ?        00:00:00 su ambari-qa -l -s /bin/bash -c export  PATH='/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/lib/hive/bin/:/usr/sbin/' ; ! beeline -u 'jdbc:hive2://dev-m3:10001/;transportMode=http;httpPath=cliservice' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL'
1001     15879 15865  0 12:54 ?        00:00:00 -bash -c export  PATH='/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/lib/hive/bin/:/usr/sbin/' ; ! beeline -u 'jdbc:hive2://dev-m3:10001/;transportMode=http;httpPath=cliservice' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL'
1001     15896 15879  0 12:54 ?        00:00:00 bash /usr/hdp/2.3.2.0-2950/hive/bin/hive.distro --service beeline -u jdbc:hive2://dev-m3:10001/;transportMode=http;httpPath=cliservice -e
1001     15919 15860  0 12:54 ?        00:00:00 bash /usr/hdp/2.3.2.0-2950/hive/bin/hive.distro --hiveconf hive.metastore.uris=thrift://dev-m3:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e show databases;
1001     15920 15919  0 12:54 ?        00:00:00 /usr/jdk64/jdk1.8.0_40/bin/java -Xmx1024m -Dhdp.version=2.3.2.0-2950 -Djava.net.preferIPv4Stack=true -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit -Dhdp.version=2.3.2.0-2950 -Dhadoop.log.dir=/var/log/hadoop/ambari-qa -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.2.0-2950/hadoop -Dhadoop.id.str=ambari-qa -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.2.0-2950/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.2.0-2950/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -XX:MaxPermSize=512m -Xmx4096m -Xloggc:/var/log/hive/gc.log-cli-201806181254 -XX:ErrorFile=/var/log/hive/hive-metastore-error.log-201806181254 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx4096m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.VersionInfo
1001     15970 15896  0 12:54 ?        00:00:00 bash /usr/hdp/2.3.2.0-2950/hive/bin/hive.distro --service beeline -u jdbc:hive2://dev-m3:10001/;transportMode=http;httpPath=cliservice -e
1001     15971 15970  0 12:54 ?        00:00:00 /usr/jdk64/jdk1.8.0_40/bin/java -Xmx1024m -Dhdp.version=2.3.2.0-2950 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.2.0-2950 -Dhadoop.log.dir=/var/log/hadoop/ambari-qa -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.2.0-2950/hadoop -Dhadoop.id.str=ambari-qa -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.2.0-2950/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.2.0-2950/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -XX:MaxPermSize=512m -Xmx1536m -Xloggc:/var/log/hive/gc.log-beeline-201806181254 -XX:ErrorFile=/var/log/hive/hive-metastore-error.log-201806181254 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx1536m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.VersionInfo
root     16023 14010  0 12:54 pts/5    00:00:00 grep hive
hive     16865     1  1 12:01 ?        00:00:37 /usr/jdk64/jdk1.8.0_40/bin/java -Xmx1024m -Dhdp.version=2.3.2.0-2950 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.2.0-2950 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.2.0-2950/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.3.2.0-2950/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -XX:MaxPermSize=512m -Xmx4096m -Xloggc:/var/log/hive/gc.log-hiveserver2-201806181201 -XX:ErrorFile=/var/log/hive/hive-metastore-error.log-201806181201 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx4096m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.2.0-2950/hive/lib/hive-service-1.2.1.2.3.2.0-2950.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris=  -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive
hcat     30632     1  0 Jun15 ?        00:05:27 /usr/jdk64/jdk1.8.0_40/bin/java -Xmx1024m -Dhdp.version=2.3.2.0-2950 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.3.2.0-2950/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.3.2.0-2950 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.2.0-2950/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.3.2.0-2950/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -XX:MaxPermSize=512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.2.0-2950/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1.2.3.2.0-2950.jar org.apache.hive.hcatalog.templeton.Main
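
For reference, the UID can be mapped back to a user name with standard commands:

getent passwd 1001
id -nu 1001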

I look forward to an answer.

Thanks


Hey @Tsuyoshi Sanda!
It looks like this 1001 user is setting these hiveconf values manually (including hive.metastore.client.socket.timeout). If your concern is that a low hive.metastore.client.socket.timeout value will impact your Hive sessions, don't worry about it: the command hive -e "set;" | grep -i hive.metastore.client.socket.timeout shows that Ambari is able to set the correct value for your Hive sessions.
To dig further into what's happening, I'd log in as the 1001 user and run pstree on those strange PIDs. In the meantime, what seems weird is that the root/1001 users end up running the same command (hive + show databases) and logging in as ambari-qa. Try grepping PID 2767 and, if you are able to, please share the output with us.
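
Something along these lines (generic commands; the example PIDs come from your ps listing, so substitute whatever you see on your side):

# Show the process tree under the suspected parent, with PIDs and arguments
pstree -ap 2767

# Walk up the parent chain of one of the 1001 processes
ps -o pid,ppid,user,args -p 15841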

Hope this helps!