Member since: 01-21-2016
Posts: 290
Kudos Received: 76
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3216 | 01-09-2017 11:00 AM
 | 1288 | 12-15-2016 09:48 AM
 | 5569 | 02-03-2016 07:00 AM
09-12-2016
06:09 AM
I am planning to run a flexi cluster through some degree of automation. For that I need some help with stopping all the components on a node and then deleting the node. Can anyone provide the Ambari REST API calls to stop all services on a particular host and delete that host from the cluster?
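One common approach uses Ambari's v1 REST routes: a PUT on the host's `host_components` collection moves every component to the INSTALLED (stopped) state, and a DELETE on the host removes it from the cluster. The sketch below only builds the requests (the Ambari URL, cluster name, and hostname are placeholders); in practice you would send them with admin credentials and an `X-Requested-By` header, and wait for the stop request to finish before deleting.

```python
import json

# Placeholders -- replace with your Ambari server, cluster name, and target node.
AMBARI = "http://ambari.example.com:8080"
CLUSTER = "MyCluster"
HOST = "worker1.example.com"

def stop_all_request(ambari, cluster, host):
    """PUT that moves every host_component on `host` to INSTALLED (i.e. stopped)."""
    url = f"{ambari}/api/v1/clusters/{cluster}/hosts/{host}/host_components"
    body = {
        "RequestInfo": {"context": "Stop All Components"},
        "Body": {"HostRoles": {"state": "INSTALLED"}},
    }
    return "PUT", url, json.dumps(body)

def delete_host_request(ambari, cluster, host):
    """DELETE that removes the (already stopped) host from the cluster."""
    url = f"{ambari}/api/v1/clusters/{cluster}/hosts/{host}"
    return "DELETE", url, None

for method, url, body in (
    stop_all_request(AMBARI, CLUSTER, HOST),
    delete_host_request(AMBARI, CLUSTER, HOST),
):
    print(method, url, body or "")
```

The two-step order matters: Ambari rejects deleting a host whose components are still running, so poll the request resource returned by the PUT until it completes before issuing the DELETE.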
Labels:
- Apache Ambari
09-02-2016
05:21 AM
I have a 5-node HDP cluster and I want to add HBase to it. On how many nodes should I install the HMaster? By default one node is enough, I guess — any thoughts on it? Also, do I need to install the Phoenix Query Server, and if so, on how many nodes?
Labels:
- Apache HBase
08-31-2016
02:16 AM
1 Kudo
I changed hbase.client.scanner.timeout.period at both the client and server level. After that I ran a query through Phoenix, but I still see the error. It seems that even after changing the required parameters, the defaults get applied. Any suggestions on how to fix this?

Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 1864561ms passed since the last invocation, timeout is currently set to 60000

Sometimes I also see the exception below:

java.net.SocketTimeoutException: callTimeout=60000, callDuration=60307

I have set the following properties:

<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>1800000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>1800000</value>
</property>
<property>
  <name>phoenix.query.keepAliveMs</name>
  <value>1800000</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>1800000</value>
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>1800000</value>
</property>
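When the defaults still apply, a frequent culprit is that the client JVM is loading a different hbase-site.xml than the one that was edited. A minimal sanity check (a sketch — the sample XML here stands in for whichever config file is actually on the client's classpath) is to parse that file and confirm the overrides are really in it:

```python
import xml.etree.ElementTree as ET

def read_props(xml_text, keys):
    """Extract selected <property> name -> value pairs from hbase-site.xml content."""
    root = ET.fromstring(xml_text)
    return {
        p.findtext("name"): p.findtext("value")
        for p in root.iter("property")
        if p.findtext("name") in keys
    }

# Inline sample standing in for the client-side hbase-site.xml.
sample = """<configuration>
  <property><name>hbase.client.scanner.timeout.period</name><value>1800000</value></property>
  <property><name>hbase.rpc.timeout</name><value>1800000</value></property>
</configuration>"""

print(read_props(sample, {"hbase.client.scanner.timeout.period",
                          "hbase.rpc.timeout"}))
```

If the file the client loads lacks the overrides, the 60000 ms defaults from the exception messages are exactly what you would expect to see.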
Labels:
- Apache HBase
- Apache Phoenix
08-24-2016
12:26 PM
When I create an HBase table using Phoenix and don't specify the TTL option, what will the TTL value be? Also, if I want to ensure that the data in an HBase table created through Phoenix lives forever, what setting needs to be made?
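For reference: when no TTL is given, the HBase column-family default applies, which is FOREVER (internally Integer.MAX_VALUE seconds), so rows never expire and no extra setting is needed. Phoenix passes table options through to HBase, so a TTL can also be set explicitly in the DDL. A sketch with made-up table and column names:

```sql
-- No TTL option: the HBase default (FOREVER) applies and rows never expire.
CREATE TABLE my_table (id VARCHAR PRIMARY KEY, val VARCHAR);

-- Explicit TTL in seconds (here one day), if expiry is ever wanted;
-- Phoenix forwards this to the underlying HBase column family.
CREATE TABLE my_expiring_table (id VARCHAR PRIMARY KEY, val VARCHAR) TTL=86400;
```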
Labels:
- Apache HBase
- Apache Phoenix
08-05-2016
06:12 AM
@Bernhard Walter, my requirement is similar. I have a file with data like:

Col1|Col2|Col3|Col4
"AB"|"CD"|"DE"| "EF"

and after loading it into a DataFrame I need output like:

Col1|Col2|Col3|Col4
AB|CD|DE|EF

I don't see your suggestion working. How will escaping with ":" escape the double quotes?
08-04-2016
04:50 AM
I am reading a CSV file into a Spark DataFrame. Some of the fields are wrapped in double quotes ("") and I want to escape them. Can anyone let me know how I can do this? Since the double quote is also used in the parameter list of the option method, I don't know how to escape the double quotes in the data: val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("delimiter", "|"). option("escape", -----
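In Scala the double quote only needs ordinary string escaping inside the option value, e.g. `.option("quote", "\"").option("escape", "\"")` (spark-csv's quote character already defaults to the double quote). The quote-stripping behavior itself can be illustrated with plain Python's csv module — a standalone sketch of the same idea, not spark-csv:

```python
import csv
import io

# Pipe-delimited input where the data fields are wrapped in double quotes.
raw = 'Col1|Col2|Col3|Col4\n"AB"|"CD"|"DE"|"EF"\n'

# With quotechar='"' the reader strips the surrounding quotes, mirroring what
# spark-csv does when its quote/escape characters are set to the double quote.
rows = list(csv.reader(io.StringIO(raw), delimiter="|", quotechar='"'))
print(rows[1])  # ['AB', 'CD', 'DE', 'EF']
```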
Labels:
- Apache Spark
08-03-2016
12:28 PM
We have a 5-node cluster (2 master and 3 slave nodes) and we are running MR jobs, but we always see that only 2 of the slave nodes get loaded and utilized while the other node remains idle. What could be the reasons for this? All 3 slave nodes are in the same rack.
Labels:
- Apache Hadoop
- Apache YARN