Member since
08-05-2016
52
Posts
1
Kudos Received
1
Solution
01-17-2021
12:41 PM
Hi @vjain ,

To configure the BucketCache, the description mentions two JVM properties: HBASE_OPTS and HBASE_REGIONSERVER_OPTS. Which one should be used, please?

"In the hbase-env.sh file for each RegionServer, or in the hbase-env.sh file supplied to Ambari, set the -XX:MaxDirectMemorySize argument for HBASE_REGIONSERVER_OPTS to the amount of direct memory you wish to allocate to HBase. In the configuration for the example discussed above, the value would be 241664m. (-XX:MaxDirectMemorySize accepts a number followed by a unit indicator; m indicates megabytes.)

HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=241664m""

Thanks,
Helmi KHALIFA
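A minimal sketch of how the two variants differ in hbase-env.sh (the 241664m value comes from the example quoted above; the exact set of env variables honored can vary by HBase version and distribution, so treat this as an assumption to verify against your release):

```shell
# hbase-env.sh -- sketch, assuming both variables are supported by your HBase build.

# HBASE_OPTS is applied to every HBase JVM (Master, RegionServer, shell, ...),
# so the direct-memory limit would be raised everywhere.
export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=241664m"

# HBASE_REGIONSERVER_OPTS is applied only to RegionServer processes, which is
# usually what you want, since the BucketCache lives in the RegionServer.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=241664m"
```

Since only the RegionServer needs the off-heap BucketCache memory, HBASE_REGIONSERVER_OPTS is the narrower and safer place for the flag.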
11-14-2019
02:42 AM
Hi @avengers , If it works for you, would you be kind enough to accept the answer, please? Best, Helmi KHALIFA
11-08-2019
08:42 AM
Hi @avengers ,

You would need to share variables between two Zeppelin interpreters, and I don't think that can be done between spark and sparkSQL. I found an easier way, by using a sqlContext inside the same %spark interpreter:

%spark
val df = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/somefile.csv")

df.createOrReplaceTempView("csvTable")

val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
val resultat = sqlContext.sql("select * from csvTable lt join hiveTable rt on lt.col = rt.col")
resultat.show()

I tried it and it works!

Best, Helmi KHALIFA
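As a side note, on Spark 2.x the same pattern can be written without the deprecated HiveContext, since the SparkSession `spark` predefined in Zeppelin's %spark interpreter already resolves both temp views and Hive tables through spark.sql(). A sketch, reusing the placeholder path and table/column names from the post above:

```scala
// Sketch, assuming Spark 2.x in Zeppelin's %spark interpreter, where the
// SparkSession `spark` is predefined. The CSV path and the hiveTable/col
// names are placeholders carried over from the example above.
val df = spark.read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/somefile.csv")

// Register the DataFrame so SQL can see it in this session.
df.createOrReplaceTempView("csvTable")

// Join the temp view with a Hive table in a single query; no HiveContext needed.
val resultat = spark.sql(
  "select * from csvTable lt join hiveTable rt on lt.col = rt.col")
resultat.show()
```

The behavior is the same as the HiveContext version; it just avoids constructing a second SQL context in the notebook.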
11-06-2019
02:08 AM
Hi @av , Here are the links to the Hive and Spark interpreter docs: https://zeppelin.apache.org/docs/0.8.2/interpreter/hive.html https://zeppelin.apache.org/docs/0.8.2/interpreter/spark.html Best, Helmi KHALIFA
12-20-2018
01:36 PM
Hi Muji, Great job 🙂 You are just missing a ',' after: B_df("_c1").cast(StringType).as("S_STORE_ID")

// Assign column names to the Store dataframe
val storeDF = B_df.select(
  B_df("_c0").cast(IntegerType).as("S_STORE_SK"),
  B_df("_c1").cast(StringType).as("S_STORE_ID"),
  B_df("_c5").cast(StringType).as("S_STORE_NAME")
)
08-20-2018
09:20 PM
Hi Neeraj, Allowing read and write access to the Phoenix SYSTEM tables for all users is not really secure. Is there any solution to avoid it? Thanks, Helmi