Member since
08-05-2016
52
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1024 | 07-21-2017 12:22 PM |
01-17-2021
12:41 PM
Hi @vjain,

To configure the BucketCache, the description mentions two JVM properties. Which one should be used, please: HBASE_OPTS or HBASE_REGIONSERVER_OPTS?

In the hbase-env.sh file for each RegionServer, or in the hbase-env.sh file supplied to Ambari, set the -XX:MaxDirectMemorySize argument in HBASE_REGIONSERVER_OPTS to the amount of direct memory you wish to allocate to HBase. In the example configuration discussed above, the value would be 241664m. (-XX:MaxDirectMemorySize accepts a number followed by a unit indicator; m indicates megabytes.)

HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=241664m"

Thanks,
Helmi KHALIFA
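For reference, a minimal hbase-env.sh sketch using the 241664m figure from the example above. HBASE_REGIONSERVER_OPTS is the natural choice here because it applies only to RegionServers, which is where the off-heap BucketCache lives, while HBASE_OPTS would apply to every HBase daemon:

```shell
# hbase-env.sh on each RegionServer host (or the copy managed by Ambari).
# Append the direct-memory limit to the RegionServer-specific options so
# other daemons (Master, shell) are not affected.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=241664m"
```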
02-22-2020
11:35 AM
Hi, I am facing the same problem. Did you find a solution to your problem? Best, Helmi Khalifa
11-26-2019
10:46 AM
Hi,
I have an HBase table with one million rows, and when we query the table using a non-existent rowkey value the query takes more than 50 seconds. Example:
table : test
rowkey 1 : AB1234
query 1 : get 'test', 'AB12345'
rowkey 2 : DF1234
query 2 : get 'test', 'DF12345'
rowkey 3 : BC1234
query 3 : get 'test', 'BC12345'
Queries 1, 2 and 3 each take more than 50 seconds.
Any idea, please?
best,
Helmi KHALIFA
11-14-2019
05:07 AM
Hi!
I have some problems managing the HBase major compaction.
I configured the major compaction window between 1 and 4 am, but we still see major compactions executed at any hour.
Here are the two configurations I tried:
First configuration:
hbase.hregion.majorcompaction=7 Days 0 Hours
hbase.offpeak.start.hour=1
hbase.offpeak.end.hour=4
Second configuration:
hbase.hregion.majorcompaction=0 Days 0 Hours
hbase.offpeak.start.hour=1
hbase.offpeak.end.hour=4
Did I miss something, please?
Thank you for your answer.
Best,
Helmi KHALIFA
11-14-2019
02:42 AM
Hi @avengers, If it works for you, would you be kind enough to accept the answer, please? Best, Helmi KHALIFA
11-08-2019
08:42 AM
Hi @avengers,

You would need to share variables between two Zeppelin interpreters, and I don't think we can do that between spark and sparkSQL. I found an easier way, using sqlContext inside the same %spark interpreter:

%spark
val df = spark.read.format("csv").option("header", "true")
  .option("inferSchema", "true").load("/somefile.csv")
df.createOrReplaceTempView("csvTable")
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
val resultat = sqlContext.sql("select * from csvTable lt join hiveTable rt on lt.col = rt.col")
resultat.show()

I tried it and it works!

Best, Helmi KHALIFA
11-06-2019
02:08 AM
Hi @av, Here are the links for the Hive and Spark interpreter docs: https://zeppelin.apache.org/docs/0.8.2/interpreter/hive.html https://zeppelin.apache.org/docs/0.8.2/interpreter/spark.html Best, Helmi KHALIFA
11-05-2019
01:21 AM
Hi @Rak,

Here is the script:

CREATE EXTERNAL TABLE IF NOT EXISTS sample_date (
  sc_code string,
  ddate timestamp,
  co_code DECIMAL,
  high DECIMAL,
  low DECIMAL,
  open DECIMAL,
  close DECIMAL,
  volume DECIMAL,
  no_trades DECIMAL,
  net_turnov DECIMAL,
  dmcap DECIMAL,
  return DECIMAL,
  factor DECIMAL,
  ttmpe DECIMAL,
  yepe DECIMAL,
  flag string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ' '
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/lab/itim/ccbd/helmi/sampleDate'
tblproperties('skip.header.line.count'='1');

ALTER TABLE sample_date SET SERDEPROPERTIES ("timestamp.formats"="MM/dd/yyyy");

(Note that timestamp.formats takes a SimpleDateFormat pattern, so lowercase dd and yyyy are required.)

Could you accept the answer, please?

Best, Helmi KHALIFA
- Tags:
- Hive
11-04-2019
03:03 AM
Hi @Ra,

You have to change the names and column types as you can see below:

sc_code string
ddate date
co_code double
high double
low double
open double
close double
volume double
no_trades double
net_turnov double
dmcap double
return double
factor double
ttmpe double
yepe double
flag string

I tried it and it works well for me.

Best, Helmi KHALIFA
- Tags:
- Hive
10-31-2019
06:17 AM
Hi @Rak, Can you show us a sample, say the first 5 rows of your CSV file, please? Best, Helmi KHALIFA
- Tags:
- Hive
10-31-2019
06:14 AM
Hi @saivenkatg55, More details, please? Screenshots of the YARN UI? Best, Helmi KHALIFA
10-25-2019
01:18 AM
1 Kudo
Hi @RNN,

The best solution is to convert the month names to integers, like:

-Oct- => -10-
-Dec- => -12-

So that is what I tested, as you can see in my file below:

$ hdfs dfs -cat /lab/helmi/test_timestamp_MM.txt
1,2019-10-14 20:00:01.027898
2,2019-12-10 21:00:01.023
3,2019-11-25 20:00:01.03
4,2019-01-06 20:00:01.123

Create a Hive table:

hive> CREATE EXTERNAL TABLE ttime(id int, t string)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      STORED AS TEXTFILE LOCATION '/lab/helmi/';

hive> select * from ttime;
OK
1	2019-10-14 20:00:01.027898
2	2019-12-10 21:00:01.023
3	2019-11-25 20:00:01.03
4	2019-01-06 20:00:01.123
Time taken: 0.566 seconds, Fetched: 4 row(s)

Finally, I created another table with the right format:

hive> create table mytime as select id, from_utc_timestamp(date_format(t,'yyyy-MM-dd HH:mm:ss.SSSSSS'),'UTC') as datetime from ttime;

Best, Helmi KHALIFA
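The month-name-to-number rewrite described above can be sketched as a small shell filter. This is a hypothetical helper, not from the original thread; it only swaps "-Oct-"-style tokens for their two-digit month numbers and leaves the rest of each line untouched:

```shell
#!/bin/sh
# Replace '-Jan-' .. '-Dec-' month tokens with '-01-' .. '-12-' on stdin.
to_numeric_months() {
  sed -e 's/-Jan-/-01-/g' -e 's/-Feb-/-02-/g' -e 's/-Mar-/-03-/g' \
      -e 's/-Apr-/-04-/g' -e 's/-May-/-05-/g' -e 's/-Jun-/-06-/g' \
      -e 's/-Jul-/-07-/g' -e 's/-Aug-/-08-/g' -e 's/-Sep-/-09-/g' \
      -e 's/-Oct-/-10-/g' -e 's/-Nov-/-11-/g' -e 's/-Dec-/-12-/g'
}

# Example:
#   echo '2,10-Dec-2019 21:00:01.023' | to_numeric_months
# gives '2,10-12-2019 21:00:01.023'
```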
10-24-2019
07:34 AM
Could you try it on another table? It looks like you have an encoding problem with these characters. Best, Helmi KHALIFA
- Tags:
- Hive
10-24-2019
07:04 AM
I just tried it and it works for me, as you can see in the screenshot below. Are you sure that the table is not empty? Best, Helmi
- Tags:
- Hive
10-24-2019
06:55 AM
I am not sure that you can't use it, as your version of HBase is 2.1.0 > 2.0.3, right? Could you try it and share the result of the RITs? Best, Helmi
- Tags:
- HBase
10-24-2019
06:49 AM
Hi @Algrach,

Could you try this and tell me if it works?

create view if not exists tdv.test_rus as select * from tdv.test_t_rus where c1 = '<the value>';

Best, Helmi KHALIFA
10-24-2019
06:29 AM
Hi @sachith,

Use a QueryRecord processor, configure/enable the Reader/Writer controller services, and add the custom SQL query as a new property on the processor.

QueryRecord config:

select id, age, CASE WHEN name='' THEN 'abc' ELSE name END AS name from FLOWFILE

(The ELSE branch is needed so non-empty names pass through unchanged.)

The output flowfile from the QueryRecord processor will have your desired result:

id,name,age
1,sachith,29
2,abc,17

Best, Helmi KHALIFA
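Outside NiFi, the effect of that CASE expression can be sanity-checked with a plain shell filter. This is only a rough awk equivalent of what QueryRecord does here, not a NiFi API, and the column positions are assumed from the sample CSV:

```shell
#!/bin/sh
# For a CSV with columns id,name,age: replace an empty name with 'abc',
# mirroring CASE WHEN name='' THEN 'abc' ELSE name END.
fill_empty_names() {
  awk -F',' 'BEGIN { OFS="," }
    NR == 1 { print; next }             # keep the header line as-is
    { if ($2 == "") $2 = "abc"; print }'
}

# Example:
#   printf 'id,name,age\n1,sachith,29\n2,,17\n' | fill_empty_names
```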
- Tags:
- NiFi
10-24-2019
05:55 AM
Hi @Li,

Did you try hbck1 before? It is deprecated, and if you did try it, it seems that it complicates the situation with HBase 2.x:

https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2

Best, Helmi KHALIFA
- Tags:
- HBase
09-24-2019
02:18 AM
Hi @hadoopguy, Yes, there is an impact: you will have longer processing times and the operations will be queued. You have to handle the timeouts in your jobs carefully. Best, @helmi_khalifa
09-24-2019
02:11 AM
Hi Suresh, There is no command, but you can easily find the information in the HBase Web UI: http://host:16010/master-status#baseStats Best, Helmi KHALIFA
06-20-2019
09:12 AM
Hi,

Rename the file gateway.jks:

mv /var/lib/knox/data-2.6.4.0-91/security/keystores/gateway.jks /var/lib/knox/data-2.6.4.0-91/security/keystores/gateway.jks.bck

When you start the Knox instance, it will create a new certificate.

Best, Helmi KHALIFA
05-27-2019
09:56 AM
Hi, Because of too-frequent HBase major compactions, I am trying to run major compaction manually on all tables using a script. Is there an easier way of doing this? Best, Helmi KHALIFA
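The manual approach described above can be sketched as follows. This is a sketch rather than a tested procedure; it just turns a list of table names into major_compact commands for the HBase shell, and the tables.txt file name is an assumption:

```shell
#!/bin/sh
# Read table names (one per line) on stdin and emit one major_compact
# command per table, suitable for piping into `hbase shell -n`.
list_compact_commands() {
  while IFS= read -r table; do
    printf "major_compact '%s'\n" "$table"
  done
}

# Typical use on a live cluster, once the table names are in a file:
#   list_compact_commands < tables.txt | hbase shell -n
```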
05-19-2019
10:57 AM
Hi, I am encountering slow-operation problems on an HBase cluster. Some queries take more than one minute. When I checked the RegionServer logs, I found too-frequent major compactions: every day, on the same column family and the same regions, although we have the HBase default parameters (major compaction once a week). Any idea how to solve this? Thanks. Helmi KHALIFA
05-17-2019
09:56 AM
Hi Sanket, Did you solve this problem? I am encountering the same problem: many major compactions on the same day, in the same column family and on the same server. Thanks, Helmi KHALIFA
03-07-2019
02:37 PM
Hi, I installed:

zeppelin 0.8.0
HDP-3.1.0.0 (3.1.0.0-78)

Then I configured zeppelin.server.port=8080. The problem now is that it works randomly. When it shows green, everything is OK and I see my notebooks; but when I log in and it still shows red with the message "WebSocket Disconnected", my notebooks disappear and I can't work on or create anything! Any help, please? Thanks. Best, Helmi KHALIFA
02-25-2019
10:49 PM
Hi Geoffrey, Thank you for your answer. I checked the PDF, but there are many differences between yours and mine, as it is HDP 2.6.5 vs. HDP 3.1.0.0-78. Thanks, Helmi
02-25-2019
03:09 PM
Hi, I installed a cluster HDP-3.1.0.0 (3.1.0.0-78) with HDFS 3.1.1, and I can't even execute an hdfs dfs -ls /. Did anyone encounter the same problem, please? Here are the errors:

/usr/hdp/3.1.0.0-78//hadoop-hdfs/bin/hdfs.distro: line 239: hadoop_abs: command not found
/usr/hdp/3.1.0.0-78//hadoop-hdfs/bin/hdfs.distro: line 248: hadoop_need_reexec: command not found
/usr/hdp/3.1.0.0-78//hadoop-hdfs/bin/hdfs.distro: line 256: hadoop_verify_user_perm: command not found
/usr/hdp/3.1.0.0-78//hadoop-hdfs/bin/hdfs.distro: line 267: hadoop_add_client_opts: command not found
/usr/hdp/3.1.0.0-78//hadoop-hdfs/bin/hdfs.distro: line 274: hadoop_subcommand_opts: command not found
/usr/hdp/3.1.0.0-78//hadoop-hdfs/bin/hdfs.distro: line 277: hadoop_generic_java_subcmd_handler: command not found

Best, Helmi KHALIFA
12-20-2018
01:36 PM
Hi Muji,

Great job 🙂 You are just missing a ',' after B_df("_c1").cast(StringType).as("S_STORE_ID"):

// Assign column names to the Region dataframe
val storeDF = B_df.select(
  B_df("_c0").cast(IntegerType).as("S_STORE_SK"),
  B_df("_c1").cast(StringType).as("S_STORE_ID"),
  B_df("_c5").cast(StringType).as("S_STORE_NAME")
)
08-20-2018
09:20 PM
Hi Neeraj, Allowing read and write access for all users to the Phoenix SYSTEM tables is not really secure. Is there any solution to avoid it? Thanks, Helmi