Member since
12-06-2016
136
Posts
12
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 646 | 01-18-2018 12:56 PM
04-04-2019
04:03 PM
Hi all! I am also getting this error.
03-07-2019
01:22 PM
Hi! I get an error after adding parameters to the file /etc/hadoop/conf/core-site.xml. Hadoop is hosted on EC2 instances.
[ec2-user@ip ~]$ hadoop fs -ls s3a://hive-tables/
19/03/07 11:19:10 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
19/03/07 11:19:10 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
19/03/07 11:19:10 INFO impl.MetricsSystemImpl: s3a-file-system metrics system started
ls: From option fs.s3a.aws.credentials.provider java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.AssumedRoleCredentialProvider not found
grep -A 1 -i fs.s3a. /etc/hadoop/conf/core-site.xml
<name>fs.s3a.access.key</name>
<value>1111111111111</value>
--
<name>fs.s3a.assumed.role.arn</name>
<value>arn:aws:iam::111111111111:role/hive-tables</value>
--
<name>fs.s3a.assumed.role.credentials.provider</name>
<value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
<final>true</final>
--
<name>fs.s3a.assumed.role.session.duration</name>
<value>30m</value>
--
<name>fs.s3a.aws.credentials.provider</name>
<value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,com.amazonaws.auth.EnvironmentVariableCredentialsProvider, com.amazonaws.auth.InstanceProfileCredentialsProvider,org.apache.hadoop.fs.s3a.AssumedRoleCredentialProvider,org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</value>
<final>true</final>
--
<name>fs.s3a.fast.upload</name>
<value>true</value>
--
<name>fs.s3a.fast.upload.buffer</name>
<value>disk</value>
--
<name>fs.s3a.impl</name>
<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
<final>true</final>
--
<name>fs.s3a.multipart.size</name>
<value>67108864</value>
--
<name>fs.s3a.secret.key</name>
<value>111111</value>
--
<name>fs.s3a.user.agent.prefix</name>
<value>User-Agent: APN/1.0 Hortonworks/1.0 HDP/3.1.0.0-78</value>

[hdfs@ip- ~]$ hadoop version
Hadoop 3.1.1.3.1.0.0-78
Source code repository git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64
Compiled by jenkins on 2018-12-06T12:26Z
Compiled with protoc 2.5.0
From source with checksum eab9fa2a6aa38c6362c66d8df75774
This command was run using /usr/hdp/3.1.0.0-78/hadoop/hadoop-common-3.1.1.3.1.0.0-78.jar
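For what it's worth, the exception itself points at a likely fix: in Hadoop 3.1 the assumed-role provider lives in the org.apache.hadoop.fs.s3a.auth package, so the entry without the .auth segment in fs.s3a.aws.credentials.provider can never be loaded (note also the stray space after one of the commas). A sketch of the corrected property, keeping the other providers unchanged:

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <!-- only classes that exist in Hadoop 3.1; the unqualified AssumedRoleCredentialProvider entry is dropped -->
  <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,com.amazonaws.auth.EnvironmentVariableCredentialsProvider,com.amazonaws.auth.InstanceProfileCredentialsProvider,org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</value>
</property>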
Labels:
- Hadoop Core
- S3
10-16-2018
06:17 AM
At the moment HiveServer2 shuts down, I see a lot of GC warnings:
[root@serv02 hive]$ cat hiveserver2.log | grep GC | grep WARN | tail -20
2018-10-16 05:31:21,969 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 25250ms
2018-10-16 05:31:51,089 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 28618ms
2018-10-16 05:32:21,611 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 30019ms
2018-10-16 05:33:01,439 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 39326ms
2018-10-16 05:33:27,008 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 25067ms
2018-10-16 05:33:49,902 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 22391ms
2018-10-16 05:34:17,273 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 24213ms
2018-10-16 05:35:57,881 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 25266ms
2018-10-16 06:44:20,963 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 11909ms
2018-10-16 06:46:13,689 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 46503ms
2018-10-16 06:46:42,536 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 28346ms
2018-10-16 06:47:17,036 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 33999ms
2018-10-16 06:47:32,386 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 14844ms
2018-10-16 06:49:50,693 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 14302ms
2018-10-16 06:50:19,092 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 15315ms
2018-10-16 06:52:04,384 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 14102ms
2018-10-16 06:52:22,888 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 15489ms
2018-10-16 07:35:50,690 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 17582ms
2018-10-16 07:36:10,431 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 19238ms
2018-10-16 07:54:04,241 WARN [org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@cae2a97]: common.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Detected pause in JVM or host machine (eg GC): pause of approximately 22707ms
hive-env template:
export HADOOP_USER_CLASSPATH_FIRST=true #this prevents old metrics libs from mapreduce lib from bringing in old jar deps overriding HIVE_LIB
if [ "$SERVICE" = "cli" ]; then
if [ -z "$DEBUG" ]; then
export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseNUMA -XX:+UseParallelGC -XX:-UseGCOverheadLimit"
else
export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
fi
fi
# The heap size of the jvm started by hive shell script can be controlled via:
if [ "$SERVICE" = "metastore" ] || [ "$SERVICE" = "hiveserver2" ]; then
export HADOOP_HEAPSIZE=53960
fi
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Xmx${HADOOP_HEAPSIZE}m"
# Larger heap size may be required when running queries over large number of files or partitions.
# By default hive shell scripts use a heap size of 256 (MB). Larger heap size would also be
# appropriate for hive server (hwi etc).
# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}
export HIVE_HOME=${HIVE_HOME:-{{hive_home_dir}}}
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=${HIVE_CONF_DIR:-{{hive_config_dir}}}
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
if [ "${HIVE_AUX_JARS_PATH}" != "" ]; then
if [ -f "${HIVE_AUX_JARS_PATH}" ]; then
export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}
elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
fi
elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
fi
export METASTORE_PORT={{hive_metastore_port}}
{% if sqla_db_used or lib_dir_available %}
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:{{jdbc_libs_dir}}"
export JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:{{jdbc_libs_dir}}"
{% endif %}
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS{{heap_dump_opts}}"
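A side note on the pauses logged above: HADOOP_HEAPSIZE=53960 gives HiveServer2 a roughly 53 GB heap, and full collections on a heap that size under the default parallel collector can plausibly pause for the tens of seconds the JvmPauseMonitor reports. A minimal sketch of a G1-based alternative for this block of the template (the pause target is an illustrative assumption, not a tested value):

if [ "$SERVICE" = "metastore" ] || [ "$SERVICE" = "hiveserver2" ]; then
  export HADOOP_HEAPSIZE=53960
  # Sketch: G1GC with a soft pause-time goal, plus GC logging to verify the effect.
  export HADOOP_OPTS="$HADOOP_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
fi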
10-12-2018
01:39 PM
Hi all!
We are running PySpark code via the NiFi ExecuteSparkInteractive processor, but in YARN we see a huge number of generated Livy sessions, about 103024.
Why does this happen?
10-04-2018
06:44 AM
Hi! This is an express upgrade with "Skip all Service Check failures" enabled. Documentation: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/ambari-managed-hdf-upgrade/content/upgrade-hdf.html
10-03-2018
04:40 PM
Hi! Thank you, I will check and come back with an answer.
10-03-2018
03:08 PM
Hi! I have a problem during the NiFi upgrade process:
[root@serv12 ~]# grep -i Ranger /var/lib/ambari-agent/data/output-869.txt
2018-10-03 17:49:28,216 - Ranger admin not installed
10-03-2018
02:34 PM
Hi! I have the same problem, but Ranger is not installed:
[root@serv12 ~]# grep -i Ranger /var/lib/ambari-agent/data/output-869.txt
2018-10-03 17:49:28,216 - Ranger admin not installed
09-25-2018
11:32 AM
How should I respond to messages related to GC failure notifications? Logs for container_e152_1537807360391_1057_01_000081:
2018-09-25 12:22:15 Starting to run new task attempt: attempt_1537807360391_1057_2_04_000182_0
1.148: [GC (Allocation Failure) [PSYoungGen: 134807K->24906K(204288K)] 134807K->24994K(2068480K), 0.0255463 secs] [Times: user=0.21 sys=0.02, real=0.03 secs]
1.411: [GC (Metadata GC Threshold) [PSYoungGen: 74769K->22331K(379904K)] 74857K->22427K(2244096K), 0.0233332 secs] [Times: user=0.12 sys=0.03, real=0.02 secs]
1.435: [Full GC (Metadata GC Threshold) [PSYoungGen: 22331K->0K(379904K)] [ParOldGen: 96K->21509K(264192K)] 22427K->21509K(644096K), [Metaspace: 20929K->20929K(1069056K)], 0.0289288 secs] [Times: user=0.32 sys=0.03, real=0.03 secs]
6.755: [GC (Allocation Failure) [PSYoungGen: 237172K->28660K(379904K)] 258681K->95006K(644096K), 0.0204362 secs] [Times: user=0.08 sys=0.24, real=0.02 secs]
15.663: [GC (Allocation Failure) [PSYoungGen: 213057K->28659K(547328K)] 279403K->193109K(811520K), 0.0390040 secs] [Times: user=0.14 sys=0.50, real=0.04 secs]
15.702: [Full GC (Ergonomics) [PSYoungGen: 28659K->0K(547328K)] [ParOldGen: 164449K->172137K(616960K)] 193109K->172137K(1164288K), [Metaspace: 32575K->32575K(1079296K)], 0.0628115 secs] [Times: user=0.96 sys=0.16, real=0.06 secs]
33.575: [GC (Allocation Failure) [PSYoungGen: 280808K->28661K(569344K)] 5743954K->5559068K(6478848K), 0.0278063 secs] [Times: user=0.21 sys=0.32, real=0.03 secs]
34.210: [GC (Allocation Failure) [PSYoungGen: 490957K->82416K(791552K)] 6021364K->5612831K(6701056K), 0.0320689 secs] [Times: user=0.57 sys=0.16, real=0.03 secs]
34.243: [Full GC (Ergonomics) [PSYoungGen: 82416K->0K(791552K)] [ParOldGen: 5530414K->5603149K(6443008K)] 5612831K->5603149K(7234560K), [Metaspace: 33968K->33968K(1079296K)], 0.9965374 secs] [Times: user=26.00 sys=0.29, real=0.99 secs]
36.154: [GC (Allocation Failure) [PSYoungGen: 459509K->36576K(683520K)] 6062659K->5639733K(7126528K), 0.0232837 secs] [Times: user=0.69 sys=0.02, real=0.02 secs]
36.746: [GC (Allocation Failure) [PSYoungGen: 466467K->13984K(783360K)] 6069625K->5617149K(7226368K), 0.0202414 secs] [Times: user=0.54 sys=0.01, real=0.02 secs]
37.123: [GC (Allocation Failure) [PSYoungGen: 321932K->14048K(784896K)] 5925097K->5617213K(7227904K), 0.0258958 secs] [Times: user=0.69 sys=0.01, real=0.02 secs]
37.723: [GC (Allocation Failure) [PSYoungGen: 566423K->15086K(787968K)] 6169588K->5618252K(7230976K), 0.0238336 secs] [Times: user=0.58 sys=0.00, real=0.02 secs]
38.036: [GC (Allocation Failure) [PSYoungGen: 259899K->14478K(790016K)] 5863064K->5617644K(7233024K), 0.0251177 secs] [Times: user=0.55 sys=0.01, real=0.03 secs]
38.293: [GC (Allocation Failure) [PSYoungGen: 222330K->1806K(792576K)] 5825496K->5617860K(7235584K), 0.0262013 secs] [Times: user=0.57 sys=0.02, real=0.02 secs]
38.624: [GC (Allocation Failure) [PSYoungGen: 270228K->1248K(793088K)] 5886282K->5617856K(7236096K), 0.0277400 secs] [Times: user=0.56 sys=0.00, real=0.03 secs]
09-05-2018
01:51 PM
A hang occurs when a SELECT uses many partitions:
--------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED
--------------------------------------------------------------------------------
Map 1 llap INITIALIZING -1 0 0 -1 0
--------------------------------------------------------------------------------
VERTICES: 00/01 [>>--------------------------] 0% ELAPSED TIME: 153.51 s
--------------------------------------------------------------------------------
09-04-2018
06:43 AM
After running the GenerateFlowFile processor with "Primary Node only" execution (Run Schedule 6000 sec), I see 2 flow files. The files are generated from 2 nodes (see screenshots).
07-31-2018
10:39 AM
Hi! Try one of these changes:

Zeppelin | NiFi ExecuteSparkInteractive
---|---
preds = spark.sql('select * from sandbox.CHURN_PRP_D_PREDS_S') | preds = spark.sql("select * from sandbox.CHURN_PRP_D_PREDS_S")
dir = '/user/CHURN_PRP_D' | dir = "hdfs:/user/CHURN_PRP_D"
date_df = spark.sql("select to_date('{}') as score_date".format(date_calc)) | ttt = spark.createDataFrame([(date_calc,)], ["t"]); date_df = ttt.select(to_date(ttt.t))
sqlContext.registerDataFrameAsTable(train, "train") | train.registerTempTable("train")
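Putting the right-hand column together, a minimal self-contained sketch (the table and path names come from the thread; the session setup and the example date value are my assumptions):

from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date

spark = SparkSession.builder.appName("nifi-livy-sketch").enableHiveSupport().getOrCreate()

# Double-quoted SQL strings instead of curly/smart quotes.
preds = spark.sql("select * from sandbox.CHURN_PRP_D_PREDS_S")

# Explicit hdfs: scheme on paths.
out_dir = "hdfs:/user/CHURN_PRP_D"

# Build the score date from a DataFrame instead of string-formatted SQL.
date_calc = "2018-07-31"  # hypothetical example value
ttt = spark.createDataFrame([(date_calc,)], ["t"])
date_df = ttt.select(to_date(ttt.t).alias("score_date"))

# Register the temp view on the DataFrame itself; createOrReplaceTempView is
# the current equivalent of the registerTempTable call in the table above.
train = preds
train.createOrReplaceTempView("train")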
06-21-2018
07:23 AM
Thank you!!!
In my case I need to route "CSV" content, for example:
Id|date_col|name
1|2018:06:21 08:40:00|Ukraine
2|2018:06:21 08:15:00|USA
If date_col is less than 25 minutes before now(), route to rule1.
If date_col is 25 minutes or more before now(), route to rule2.
now() = 2018:06:21 08:50:00
Result:
2018:06:21 08:40:00 to rule1 <<<<<<<<<
2018:06:21 08:15:00 to rule2 <<<<<<<<<
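One possible implementation (a sketch only: RouteText's Satisfies Expression strategy exposes each line as ${line}, but the exact expressions below are untested and assume the timestamp format yyyy:MM:dd HH:mm:ss shown above): add one dynamic property per rule on a RouteText processor, where 1500000 ms is 25 minutes:

rule1 = ${line:getDelimitedField(2, '|'):toDate('yyyy:MM:dd HH:mm:ss'):toNumber():gt(${now():toNumber():minus(1500000)})}
rule2 = ${line:getDelimitedField(2, '|'):toDate('yyyy:MM:dd HH:mm:ss'):toNumber():le(${now():toNumber():minus(1500000)})}

The header line will not parse as a date, so it would need to be handled (or removed) separately.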
06-20-2018
07:41 PM
I mean applying Expression Language to the content: now(), gt, ge.
06-11-2018
01:45 PM
Hi! One more question: how can I control the count of "livy-session-" sessions?
06-11-2018
11:46 AM
The problem was in "sqlContext.registerDataFrameAsTable". How can I investigate similar (HTTP) errors?
06-06-2018
03:48 PM
Do you have an example of calling the Zeppelin REST API?
06-06-2018
12:38 PM
CODE: executesparkinteractive-code.txt
ERROR:
2018-06-06 15:32:13,566 ERROR [Timer-Driven Process Thread-5] o.a.n.p.livy.ExecuteSparkInteractive ExecuteSparkInteractive[id=aeb74038-5333-13d2-0000-00001ea7e32e] ExecuteSparkInteractive[id=aeb74038-5333-13d2-0000-00001ea7e32e] failed to process session due to java.lang.RuntimeException: Failed : HTTP error code : 400 : Bad Request: {}
java.lang.RuntimeException: Failed : HTTP error code : 400 : Bad Request
at org.apache.nifi.processors.livy.ExecuteSparkInteractive.readJSONObjectFromUrlPOST(ExecuteSparkInteractive.java:282)
at org.apache.nifi.processors.livy.ExecuteSparkInteractive.submitAndHandleJob(ExecuteSparkInteractive.java:234)
at org.apache.nifi.processors.livy.ExecuteSparkInteractive.onTrigger(ExecuteSparkInteractive.java:197)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
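When ExecuteSparkInteractive reports only "HTTP error code : 400", reproducing the request against Livy directly usually surfaces the real reason in the response body. A rough sketch with Python's requests library (the host, port, and code payload are illustrative assumptions):

import json
import requests

LIVY = "http://livy-host:8998"  # hypothetical Livy endpoint

# Create a PySpark session, mirroring what ExecuteSparkInteractive does.
resp = requests.post(LIVY + "/sessions",
                     data=json.dumps({"kind": "pyspark"}),
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.text)  # on 400, the body normally explains what was rejected

# If the session was created, Livy returns its URL in the Location header.
session_url = LIVY + resp.headers["Location"]

# Submit a statement and inspect the raw response.
resp = requests.post(session_url + "/statements",
                     data=json.dumps({"code": "print(spark.version)"}),
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.text)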
05-31-2018
08:03 PM
1 Kudo
Thank you! But what should be done with a processor, for example FetchFTP, when after importing a change (version control) from DEV to PROD the Password property shows "No value set"? We can add the password to variables (on the process group), but that is not secure.
05-31-2018
09:02 AM
This is needed to work correctly across two environments, for example dev and prod. NiFi 1.5.
04-23-2018
09:57 AM
ACID Transactions - ON (hive.support.concurrency=true)
Table ODS_C1.CALL_HISTORYSS is not transactional.
Data is being inserted into table ODS_C1.CALL_HISTORYSS, but not into partition hday='2017-01-28'.
Executing command: select count(*) from ODS_C1.CALL_HISTORYSS where hday='2017-01-28'; <<<<<<<<<<<<< problem select
Error: Error while processing statement: FAILED: Error in acquiring locks: Lock acquisition for LockRequest(component:[LockComponent(type:SHARED_READ, level:TABLE, dbname:ods_c1, tablename:call_historyss, operationType:SELECT)], txnid:0, user:hive, hostname:ks-dmp01.kyivstar.ua, agentInfo:hive_20180423093534_533cc06f-dd4d-4bea-987b-af0e9c5ed468) timed out after 5515859ms. LockResponse(lockid:15373078, state:WAITING) (state=42000,code=10)
java.sql.SQLException: Error while processing statement: FAILED: Error in acquiring locks: Lock acquisition for LockRequest(component:[LockComponent(type:SHARED_READ, level:TABLE, dbname:ods_c1, tablename:call_historyss, operationType:SELECT)], txnid:0, user:hive, hostname:ks-dmp01.kyivstar.ua, agentInfo:hive_20180423093534_533cc06f-dd4d-4bea-987b-af0e9c5ed468) timed out after 5515859ms. LockResponse(lockid:15373078, state:WAITING)
at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:354)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:245)
at org.apache.hive.beeline.Commands.execute(Commands.java:859)
at org.apache.hive.beeline.Commands.sql(Commands.java:729)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1000)
at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:730)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:779)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
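One way to investigate (a suggestion on my side, not from the thread): while the query is waiting, list the current locks on the table to see which transaction is holding the SHARED_READ request up. SHOW LOCKS is standard HiveQL once concurrency is enabled:

SHOW LOCKS ods_c1.call_historyss;
-- narrowing to the partition the query touches:
SHOW LOCKS ods_c1.call_historyss PARTITION (hday='2017-01-28');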
03-20-2018
03:56 PM
load-gc.png hang-reducer-task.png
In the container ID log I see many GC notifications.
03-16-2018
04:00 PM
Hi! I found these tables! Thank you!
03-15-2018
02:42 PM
Thank you! But I do not see these tables in the Postgres database:
postgres=# \connect ambari
You are now connected to database "ambari" as user "postgres".
ambari=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+------------------------
ambari | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres +
| | | | | ambari=CTc/postgres
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
ranger | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres +
| | | | | rangerdba=CTc/postgres
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
ambari=# \dt[S+]
List of relations
Schema | Name | Type | Owner | Size | Description
------------+-------------------------+-------+----------+------------+-------------
pg_catalog | pg_aggregate | table | postgres | 40 kB |
pg_catalog | pg_am | table | postgres | 40 kB |
pg_catalog | pg_amop | table | postgres | 64 kB |
pg_catalog | pg_amproc | table | postgres | 56 kB |
pg_catalog | pg_attrdef | table | postgres | 64 kB |
pg_catalog | pg_attribute | table | postgres | 784 kB |
pg_catalog | pg_auth_members | table | postgres | 0 bytes |
pg_catalog | pg_authid | table | postgres | 40 kB |
pg_catalog | pg_cast | table | postgres | 48 kB |
pg_catalog | pg_class | table | postgres | 192 kB |
pg_catalog | pg_collation | table | postgres | 256 kB |
pg_catalog | pg_constraint | table | postgres | 120 kB |
pg_catalog | pg_conversion | table | postgres | 56 kB |
pg_catalog | pg_database | table | postgres | 8192 bytes |
pg_catalog | pg_db_role_setting | table | postgres | 16 kB |
pg_catalog | pg_default_acl | table | postgres | 0 bytes |
pg_catalog | pg_depend | table | postgres | 560 kB |
pg_catalog | pg_description | table | postgres | 280 kB |
pg_catalog | pg_enum | table | postgres | 0 bytes |
pg_catalog | pg_extension | table | postgres | 40 kB |
pg_catalog | pg_foreign_data_wrapper | table | postgres | 0 bytes |
pg_catalog | pg_foreign_server | table | postgres | 0 bytes |
pg_catalog | pg_foreign_table | table | postgres | 0 bytes |
pg_catalog | pg_index | table | postgres | 104 kB |
pg_catalog | pg_inherits | table | postgres | 0 bytes |
pg_catalog | pg_language | table | postgres | 40 kB |
pg_catalog | pg_largeobject | table | postgres | 0 bytes |
pg_catalog | pg_largeobject_metadata | table | postgres | 0 bytes |
pg_catalog | pg_namespace | table | postgres | 40 kB |
pg_catalog | pg_opclass | table | postgres | 48 kB |
pg_catalog | pg_operator | table | postgres | 144 kB |
pg_catalog | pg_opfamily | table | postgres | 48 kB |
pg_catalog | pg_pltemplate | table | postgres | 40 kB |
pg_catalog | pg_proc | table | postgres | 536 kB |
pg_catalog | pg_range | table | postgres | 40 kB |
pg_catalog | pg_rewrite | table | postgres | 496 kB |
pg_catalog | pg_seclabel | table | postgres | 8192 bytes |
pg_catalog | pg_shdepend | table | postgres | 48 kB |
pg_catalog | pg_shdescription | table | postgres | 48 kB |
pg_catalog | pg_shseclabel | table | postgres | 0 bytes |
pg_catalog | pg_statistic | table | postgres | 872 kB |
pg_catalog | pg_tablespace | table | postgres | 40 kB |
pg_catalog | pg_trigger | table | postgres | 160 kB |
pg_catalog | pg_ts_config | table | postgres | 40 kB |
pg_catalog | pg_ts_config_map | table | postgres | 48 kB |
pg_catalog | pg_ts_dict | table | postgres | 40 kB |
pg_catalog | pg_ts_parser | table | postgres | 40 kB |
pg_catalog | pg_ts_template | table | postgres | 40 kB |
pg_catalog | pg_type | table | postgres | 160 kB |
pg_catalog | pg_user_mapping | table | postgres | 0 bytes |
(50 rows)
ambari=#
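Note that \dt[S+] lists only the pg_catalog system tables. A plain information_schema query (standard SQL, nothing Ambari-specific assumed) should reveal whether the Ambari tables live in a non-default schema:

ambari=# SELECT table_schema, table_name
         FROM information_schema.tables
         WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
         ORDER BY table_schema, table_name;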
03-15-2018
11:29 AM
In my case, Ambari is installed on a Postgres database.
03-15-2018
11:26 AM
yarn.scheduler.capacity.root.aggregate.queues=airflow <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
yarn.scheduler.capacity.queue-mappings=u:airflow:aggregate.airflow <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< not working
yarn.scheduler.capacity.queue-mappings-override.enable=true
**********
yarn.scheduler.capacity.root.aggregate.airflow.acl_submit_applications=*
yarn.scheduler.capacity.root.aggregate.airflow.capacity=100
yarn.scheduler.capacity.root.aggregate.airflow.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.aggregate.airflow.maximum-applications=20000
yarn.scheduler.capacity.root.aggregate.airflow.maximum-capacity=100
yarn.scheduler.capacity.root.aggregate.airflow.minimum-user-limit-percent=10
yarn.scheduler.capacity.root.aggregate.airflow.state=RUNNING
yarn.scheduler.capacity.root.aggregate.airflow.user-limit-factor=0.5
*********
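For what it's worth, one possible culprit (my assumption, not verified here): Capacity Scheduler queue mappings reference the leaf queue by its short name rather than a parent.leaf path, so the mapping line may need to be:

yarn.scheduler.capacity.queue-mappings=u:airflow:airflow

with yarn.scheduler.capacity.queue-mappings-override.enable=true left as it is.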