Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4067 | 08-20-2018 08:26 PM |
| | 1963 | 08-15-2018 01:59 PM |
| | 2390 | 08-13-2018 02:20 PM |
| | 4140 | 07-23-2018 04:37 PM |
| | 5046 | 07-19-2018 12:52 PM |
12-19-2016
09:04 PM
2 Kudos
You can use set hive.execution.engine=tez; while in a session, or beeline --hiveconf hive.execution.engine=tez should work. What version of Hive are you using? Are you setting any other value with --hiveconf? Another option is to put set hive.execution.engine=tez; inside your Hive script, or pass --hivevar as a parameter to your Hive script to set the execution engine:
beeline -u jdbc:hive2://hostname:10000 -n xxxx -p xxxx -f /home/hdfs/scripts/hive/myscript.hql --hivevar engine=tez
Then use the ${engine} variable inside your script.
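For example, a minimal myscript.hql might look like this (a sketch; the table name sample_table is a hypothetical placeholder):
-- pick up the value passed on the command line with --hivevar engine=tez
set hive.execution.engine=${engine};
select count(*) from sample_table;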
12-19-2016
07:59 PM
Here is documentation on enabling CPU scheduling: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_yarn_resource_mgt/content/ch_cpu_scheduling.html
12-19-2016
03:35 PM
@Artem Ervits @Timothy Spann I have confirmed this is a known bug; it is fixed in the next patch.
12-19-2016
05:27 AM
Did you give the hive user access to the local file system directory /tmp?
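If you are not sure, a quick way to check (and, if needed, restore the conventional permissions, where 1777 means world-writable with the sticky bit set) is:
ls -ld /tmp
chmod 1777 /tmp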
12-19-2016
05:23 AM
You will need to enable CPU scheduling. By default, YARN only accounts for RAM. If you enable CPU scheduling, each YARN app will ask for both RAM and CPU, and YARN will provide containers based on what is available within your RAM and CPU limits. If CPU scheduling is not enabled, it is very difficult to limit CPU usage.
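As a rough sketch (these are normally set through Ambari, and the vcore count below is only an example for an 8-core node), the two key settings are:
In capacity-scheduler.xml:
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
In yarn-site.xml:
yarn.nodemanager.resource.cpu-vcores=8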
12-17-2016
05:33 AM
I found the answer to this. ListenSMTP can sit behind a load balancer, with the processor running on all nodes, if a single DNS name is to be exposed. You can also have the processor run on only a single node (primary). Another option, not tested or fully vetted: send the SMTP payload to a NiFi REST processor and from there push to a process group containing the ListenSMTP processor; this will load balance automatically. @Artem Ervits I provided this answer after some research and got no response from the other party; please accept if this answer is good.
12-17-2016
05:04 AM
1 Kudo
I am trying to access the Ambari Hive LLAP view and encounter this error: Service 'userhome' check failed: Operation category READ is not supported in state standby. Any ideas? I do not get this error for the Hive view. Log: Service 'userhome' check failed:
org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:509)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:113)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:801)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:767)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:582)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:612)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:608)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:987)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1003)
at org.apache.ambari.view.utils.hdfs.HdfsApi$3.run(HdfsApi.java:127)
at org.apache.ambari.view.utils.hdfs.HdfsApi$3.run(HdfsApi.java:125)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.ambari.view.utils.hdfs.HdfsApi.execute(HdfsApi.java:397)
Labels:
- Apache Ambari
12-17-2016
04:46 AM
1 Kudo
@shakir mulani HBase does the load balancing for you when a regionserver goes down. When this happens, the HBase master will reassign all the regions from the failed regionserver to the other available regionservers.
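If you want to watch the reassignment happen, you can check the region distribution from the HBase shell (a quick check; the output format varies by version):
echo "status 'detailed'" | hbase shell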
12-17-2016
02:32 AM
The Solr Ambari mpack is failing. Can you take a look at this post: https://community.hortonworks.com/questions/65919/solr-install-fails-on-hdp-25-sandbox.html
12-17-2016
02:21 AM
You import into an HBase table one column family at a time: with two column families you need two Sqoop runs, and with three column families it takes three Sqoop runs. Here are good examples:
$ sqoop import \
--connect jdbc:mysql://localhost/serviceorderdb \
--username root -P \
--table customercontactinfo \
--columns "customernum,customername" \
--hbase-table customercontactinfo \
--column-family CustomerName \
--hbase-row-key customernum -m 1
Enter password:
...
13/08/17 16:53:01 INFO mapreduce.ImportJobBase: Retrieved 5 records.
$ sqoop import \
--connect jdbc:mysql://localhost/serviceorderdb \
--username root -P \
--table customercontactinfo \
--columns "customernum,contactinfo" \
--hbase-table customercontactinfo \
--column-family ContactInfo \
--hbase-row-key customernum -m 1
Enter password:
...
13/08/17 17:00:59 INFO mapreduce.ImportJobBase: Retrieved 5 records.
$ sqoop import \
--connect jdbc:mysql://localhost/serviceorderdb \
--username root -P \
--table customercontactinfo \
--columns "customernum,productnums" \
--hbase-table customercontactinfo \
--column-family ProductNums \
--hbase-row-key customernum -m 1
Enter password:
...
13/08/17 17:05:54 INFO mapreduce.ImportJobBase: Retrieved 5 records.
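As a quick sanity check (not part of the original example), you can scan the table afterward from the HBase shell and confirm that all three column families are populated:
echo "scan 'customercontactinfo'" | hbase shell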