Member since: 08-05-2016
Posts: 52
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3783 | 07-21-2017 12:22 PM
11-10-2021
12:54 AM
Hi, it should. But when you need to use certs signed by your own organisation:

1. Convert the .p12 to .pfx (you will also need the .pem file):

openssl pkcs12 -export -out YOUROWNNAME.pfx -inkey YOUR_KEYS.pem -in YOUR_KEYS.pem -certfile YOUR_KEYS.pem

2. Once you have the .pfx file, import it into a JKS keystore:

keytool -importkeystore -srckeystore gateway.pfx -srcstoretype pkcs12 -srcalias [ALIAS_SRC] -destkeystore [MY_KEYSTORE.jks] -deststoretype jks -deststorepass [PASSWORD_JKS] -destalias gateway-identity

[ALIAS_SRC] can be read from the .pfx file with:

keytool -v -list -storetype pkcs12 -keystore YOUROWNNAME.pfx

3. Finally, move the keystore into place:

mv gateway.jks /var/lib/knox/data-2.6.4.0-91/security/keystores/
03-24-2021
01:47 PM
-- shows shifted values when the hour is 02
select '2021-03-14 02:01:03.118', cast('2021-03-14 02:01:03.118' as timestamp)

OUTPUT:
_c0 | _c1
2021-03-14 02:01:03.118 | 2021-03-14 03:01:03.118
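For context, 2021-03-14 02:01 falls in the DST spring-forward gap in US timezones, so the wall-clock time never exists and gets normalized forward by an hour. A minimal Python sketch of the same normalization (assuming the America/New_York zone for illustration; the server's local zone may differ):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
# 2021-03-14 02:01:03 is inside the spring-forward gap: clocks jump
# from 02:00 straight to 03:00, so this wall-clock time never exists.
dt = datetime(2021, 3, 14, 2, 1, 3, fold=0, tzinfo=tz)

# Round-tripping through UTC shows where the nonexistent time lands.
utc = dt.astimezone(ZoneInfo("UTC"))
back = utc.astimezone(tz)
print(back)  # 2021-03-14 03:01:03-04:00
```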
01-17-2021
12:41 PM
Hi @vjain, To configure the BucketCache, the description mentions two JVM properties. Which one should be used, please: HBASE_OPTS or HBASE_REGIONSERVER_OPTS?

"In the hbase-env.sh file for each RegionServer, or in the hbase-env.sh file supplied to Ambari, set the -XX:MaxDirectMemorySize argument for HBASE_REGIONSERVER_OPTS to the amount of direct memory you wish to allocate to HBase. In the configuration for the example discussed above, the value would be 241664m. (-XX:MaxDirectMemorySize accepts a number followed by a unit indicator; m indicates megabytes.)

HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=241664m""

Thanks, Helmi KHALIFA
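For reference, a minimal hbase-env.sh fragment using the RegionServer-specific variable named in the quoted prose (the 241664m value is just the one from the sizing example above; size it for your own cluster):

```shell
# hbase-env.sh on each RegionServer (or in the hbase-env template in Ambari).
# 241664m is the value from the sizing example quoted above; adjust to your cluster.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=241664m"
```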
02-24-2020
06:14 PM
I tried spark.streaming.backpressure.pid.minRate, which works as expected. My configuration:
spark.shuffle.service.enabled: "true"
spark.streaming.kafka.maxRatePerPartition: "600"
spark.streaming.backpressure.enabled: "true"
spark.streaming.concurrentJobs: "1"
spark.executor.extraJavaOptions: "-XX:+UseConcMarkSweepGC"
batch.duration: 5
spark.streaming.backpressure.pid.minRate: 2000

By this configuration it starts with 15 (total number of partitions) x 600 (maxRatePerPartition) x 5 (batch duration) = 45,000 records per batch, but it isn't able to process that many records in 5 seconds, so it drops to ~10,000 = 2000 (pid.minRate) x 5 (batch duration). So spark.streaming.backpressure.pid.minRate is in total records per second. Just set spark.streaming.backpressure.pid.minRate and leave the following config at its defaults:
spark.streaming.backpressure.pid.integral
spark.streaming.backpressure.pid.proportional
spark.streaming.backpressure.pid.derived
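The arithmetic above can be checked directly (a minimal Python sketch; the partition count and rates are the ones quoted in this post):

```python
# Rates quoted in the post above.
partitions = 15               # total Kafka partitions
max_rate_per_partition = 600  # spark.streaming.kafka.maxRatePerPartition (records/sec/partition)
batch_duration_s = 5          # batch duration in seconds
pid_min_rate = 2000           # spark.streaming.backpressure.pid.minRate (records/sec, total)

# First batch: capped by maxRatePerPartition across all partitions.
initial_batch = partitions * max_rate_per_partition * batch_duration_s
# Floor once the PID backpressure controller kicks in.
floor_batch = pid_min_rate * batch_duration_s

print(initial_batch)  # 45000
print(floor_batch)    # 10000
```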
11-14-2019
02:54 AM
Hey @avengers, Just thought this could add some more value to the question here. Spark SQL uses a Hive metastore to manage the metadata of persistent relational entities (e.g. databases, tables, columns, partitions) in a relational database (for fast access) [1]. Also, I don't think the metastore would crash if we use it along with Hive on Spark. [1] https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-hive-metastore.html
11-05-2019
01:21 AM
Hi @Rak, here is the script:

CREATE EXTERNAL TABLE IF NOT EXISTS sample_date (
  sc_code string, ddate timestamp, co_code DECIMAL, high DECIMAL, low DECIMAL,
  open DECIMAL, close DECIMAL, volume DECIMAL, no_trades DECIMAL,
  net_turnov DECIMAL, dmcap DECIMAL, return DECIMAL, factor DECIMAL,
  ttmpe DECIMAL, yepe DECIMAL, flag string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ' '
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/lab/itim/ccbd/helmi/sampleDate'
tblproperties('skip.header.line.count'='1');

ALTER TABLE sample_date SET SERDEPROPERTIES ("timestamp.formats"="MM/dd/yyyy");

(Note: timestamp.formats takes a Java-style date pattern, so month/day/year should be written "MM/dd/yyyy"; lowercase dd is day-of-month and yyyy the year, whereas "DD" would mean day-of-year.)

Could you accept the answer please?

Best, Helmi KHALIFA
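As a quick sanity check of the month/day/year pattern, here is a Python analogue (strptime's %m/%d/%Y corresponds to the MM/dd/yyyy pattern; the sample value is hypothetical):

```python
from datetime import datetime

# %m/%d/%Y is the strptime analogue of the MM/dd/yyyy pattern above.
raw = "07/21/2017"  # hypothetical sample value in the file's format
ts = datetime.strptime(raw, "%m/%d/%Y")
print(ts)  # 2017-07-21 00:00:00
```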
09-24-2019
02:18 AM
Hi @hadoopguy, Yes, there is an impact: processing times will be longer and operations will be queued. You have to handle the timeouts in your jobs carefully. Best, @helmi_khalifa
09-24-2019
02:11 AM
Hi Suresh, There is no command for it, but you can easily find the information in the HBase Web UI: http://host:16010/master-status#baseStats Best, Helmi KHALIFA
03-18-2019
09:08 PM
@Josh Elser Can you please guide me on how and where to set the Java heap space on the client? I have Windows machines where my app runs, and the Phoenix queries are triggered from these Windows systems. I see no logs on the server side; I believe the query is failing on the client side itself.
06-07-2018
10:26 AM
Here are some hints: http://hbase.apache.org/0.94/book/secondary.indexes.html In most cases you'll have to create a second index table.
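To make the "second index table" idea concrete, here is a minimal Python sketch of the dual-write pattern behind a manual secondary index (plain dicts stand in for the primary and index tables; all names and data are hypothetical):

```python
# Dicts stand in for two HBase tables: a primary table and its index table.
users = {}        # primary table: user_id -> row
email_index = {}  # index table: email -> user_id (the primary row key)

def put_user(user_id, email, name):
    # Dual write: every put to the primary table also updates the index table.
    users[user_id] = {"email": email, "name": name}
    email_index[email] = user_id

def get_by_email(email):
    # Query path: look up the index, then do a point get on the primary table.
    user_id = email_index.get(email)
    return users.get(user_id) if user_id is not None else None

put_user("u1", "a@example.com", "Alice")
print(get_by_email("a@example.com"))  # {'email': 'a@example.com', 'name': 'Alice'}
```

In real HBase the two writes are not atomic, which is exactly why keeping index and primary tables consistent is the hard part of this pattern.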